authorCourtney Goeltzenleuchter <courtneygo@google.com>2016-04-05 19:41:54 +0000
committerandroid-build-merger <android-build-merger@google.com>2016-04-05 19:41:54 +0000
commitb904fbeda867dbaf81424d07e33ec2e2a888334b (patch)
treea5fa837dbccf621ffe55c1cb58f4c0fec1a30c7e
parentc0efdffe9e1bfc5bfece9e985a2ef8829ee31681 (diff)
parent72c8726637038cf703c312f6d035e4502f91b07e (diff)
downloadvulkan-validation-layers-b904fbeda867dbaf81424d07e33ec2e2a888334b.tar.gz
Merge remote-tracking branch 'aosp/upstream-android_layers' into mymerge

am: 72c8726

* commit '72c8726637038cf703c312f6d035e4502f91b07e': (247 commits)
  GH171: Use relative path for building Android bits
  GH171: Fix compiler warning
  GH171: Fix for NDK packaging
  layers: remove weird double assignment to pAttachments
  layers: LX450, Tighten up queueFamilyIndex validation, fix crash
  layers: Merge renderpass maps in core_validation
  layers: Merge framebuffer maps in core_validation
  layers: LX448, Prevent descriptorSetCount overflow in core_validation
  layers: Fix AV in core_validaton
  layers: Fix up MTMERGE in CV layer to allow disabling mem_tracker validation
  layers: GH117: Change warning to an error.
  layers: GH117: swapchain layer warns if app uses neither semaphore nor fence.
  demos: cube exit status reflects validation status
  winrtinstaller: sign Config powershell script
  tests: Use header macro for API version rather than make_version
  misc: Update to header version 1.0.6
  Remove device from layer_data_map at destroy in parameter_validation
  layers: Update event tracking to account for sequential cross command buffer and queue tracking.
  layers: Merge of cmd buffer maps in core_validation
  build: Cleanup build_windows_targets.bat
  ...

Change-Id: Ibc2bbaf9023353e84411eca89f542b23a87644ac
-rw-r--r--  .gitignore | 2
-rw-r--r--  BUILD.md | 329
-rwxr-xr-x  CMakeLists.txt | 158
-rw-r--r--  CONTRIBUTING.md | 90
-rw-r--r--  LunarGLASS_revision | 1
-rw-r--r--  LunarGLASS_revision_R32 | 1
-rwxr-xr-x  README.md | 34
-rw-r--r--  buildAndroid/android-generate.bat | 33
-rwxr-xr-x  buildAndroid/android-generate.sh | 12
-rw-r--r--  buildAndroid/jni/Android.mk | 293
-rw-r--r--  buildAndroid/jni/Application.mk | 2
-rw-r--r--  build_windows_targets.bat | 76
-rw-r--r--  demos/CMakeLists.txt | 41
-rw-r--r--  demos/cube.c | 49
-rw-r--r--  demos/smoke/CMakeLists.txt | 78
-rw-r--r--  demos/smoke/Game.h | 133
-rw-r--r--  demos/smoke/Helpers.h | 131
-rw-r--r--  demos/smoke/Main.cpp | 90
-rw-r--r--  demos/smoke/Meshes.cpp | 537
-rw-r--r--  demos/smoke/Meshes.h | 67
-rw-r--r--  demos/smoke/Meshes.teapot.h | 2666
-rw-r--r--  demos/smoke/README.md | 1
-rw-r--r--  demos/smoke/Shell.cpp | 591
-rw-r--r--  demos/smoke/Shell.h | 162
-rw-r--r--  demos/smoke/ShellAndroid.cpp | 227
-rw-r--r--  demos/smoke/ShellAndroid.h | 68
-rw-r--r--  demos/smoke/ShellWin32.cpp | 256
-rw-r--r--  demos/smoke/ShellWin32.h | 63
-rw-r--r--  demos/smoke/ShellXcb.cpp | 344
-rw-r--r--  demos/smoke/ShellXcb.h | 62
-rw-r--r--  demos/smoke/Simulation.cpp | 327
-rw-r--r--  demos/smoke/Simulation.h | 112
-rw-r--r--  demos/smoke/Smoke.cpp | 915
-rw-r--r--  demos/smoke/Smoke.frag | 12
-rw-r--r--  demos/smoke/Smoke.h | 195
-rw-r--r--  demos/smoke/Smoke.push_constant.vert | 27
-rw-r--r--  demos/smoke/Smoke.vert | 27
-rwxr-xr-x  demos/smoke/android/build-and-install | 30
-rw-r--r--  demos/smoke/android/build.gradle | 87
-rw-r--r--  demos/smoke/android/gradle/wrapper/gradle-wrapper.jar | bin 0 -> 53636 bytes
-rw-r--r--  demos/smoke/android/gradle/wrapper/gradle-wrapper.properties | 6
-rwxr-xr-x  demos/smoke/android/gradlew | 160
-rw-r--r--  demos/smoke/android/gradlew.bat | 90
-rw-r--r--  demos/smoke/android/src/main/AndroidManifest.xml | 20
-rw-r--r--  demos/smoke/android/src/main/jni/Smoke.frag.h | 78
-rw-r--r--  demos/smoke/android/src/main/jni/Smoke.push_constant.vert.h | 352
-rw-r--r--  demos/smoke/android/src/main/jni/Smoke.vert.h | 354
-rw-r--r--  demos/smoke/android/src/main/res/values/strings.xml | 4
-rwxr-xr-x  demos/smoke/generate-dispatch-table | 498
-rwxr-xr-x  demos/smoke/glsl-to-spirv | 100
-rw-r--r--  demos/tri.c | 244
-rw-r--r--  demos/vulkaninfo.c | 106
-rw-r--r--  generator.py | 438
-rwxr-xr-x  genvk.py | 2
-rw-r--r--  glslang_revision | 2
-rw-r--r--  include/vulkan/vk_debug_marker_layer.h | 44
-rw-r--r--  include/vulkan/vk_icd.h | 13
-rw-r--r--  include/vulkan/vk_layer.h | 15
-rw-r--r--  include/vulkan/vk_lunarg_debug_marker.h | 98
-rw-r--r--  include/vulkan/vk_platform.h | 8
-rw-r--r--  include/vulkan/vulkan.h | 49
-rw-r--r--  layers/.clang-format | 6
-rw-r--r--  layers/CMakeLists.txt | 39
-rw-r--r--  layers/README.md | 27
-rw-r--r--  layers/core_validation.cpp | 10932
-rw-r--r--  layers/core_validation.h | 896
-rw-r--r--  layers/device_limits.cpp | 628
-rw-r--r--  layers/device_limits.h | 56
-rw-r--r--  layers/draw_state.cpp | 8046
-rwxr-xr-x  layers/draw_state.h | 707
-rw-r--r--  layers/glsl_compiler.c | 0
-rw-r--r--  layers/image.cpp | 1009
-rw-r--r--  layers/image.h | 39
-rw-r--r--  layers/linux/VkLayer_core_validation.json (renamed from layers/linux/VkLayer_mem_tracker.json) | 13
-rw-r--r--  layers/linux/VkLayer_device_limits.json | 4
-rw-r--r--  layers/linux/VkLayer_draw_state.json | 24
-rw-r--r--  layers/linux/VkLayer_image.json | 4
-rw-r--r--  layers/linux/VkLayer_object_tracker.json | 4
-rw-r--r--  layers/linux/VkLayer_parameter_validation.json (renamed from layers/windows/VkLayer_param_checker.json) | 8
-rw-r--r--  layers/linux/VkLayer_swapchain.json | 4
-rw-r--r--  layers/linux/VkLayer_threading.json | 4
-rw-r--r--  layers/linux/VkLayer_unique_objects.json | 2
-rw-r--r--  layers/mem_tracker.cpp | 3598
-rw-r--r--  layers/mem_tracker.h | 221
-rw-r--r--  layers/object_tracker.h | 795
-rw-r--r--  layers/param_checker.cpp | 7759
-rw-r--r--  layers/param_checker_utils.h | 332
-rw-r--r--  layers/parameter_validation.cpp | 5164
-rw-r--r--  layers/parameter_validation_utils.h | 377
-rw-r--r--  layers/swapchain.cpp | 1670
-rw-r--r--  layers/swapchain.h | 229
-rw-r--r--  layers/threading.cpp | 221
-rw-r--r--  layers/threading.h | 145
-rw-r--r--  layers/unique_objects.h | 345
-rw-r--r--  [-rwxr-xr-x] layers/vk_layer_config.cpp | 96
-rw-r--r--  layers/vk_layer_config.h | 2
-rw-r--r--  layers/vk_layer_data.h | 10
-rw-r--r--  layers/vk_layer_debug_marker_table.cpp | 62
-rw-r--r--  layers/vk_layer_debug_marker_table.h | 45
-rw-r--r--  layers/vk_layer_extension_utils.cpp | 16
-rw-r--r--  layers/vk_layer_extension_utils.h | 15
-rw-r--r--  layers/vk_layer_logging.h | 191
-rw-r--r--  layers/vk_layer_settings.txt | 126
-rw-r--r--  layers/vk_layer_table.cpp | 103
-rw-r--r--  layers/vk_layer_table.h | 22
-rw-r--r--  layers/vk_layer_utils.cpp | 510
-rw-r--r--  layers/vk_layer_utils.h | 138
-rw-r--r--  layers/vk_validation_layer_details.md | 226
-rw-r--r--  layers/windows/VkLayer_core_validation.json (renamed from layers/windows/VkLayer_mem_tracker.json) | 8
-rw-r--r--  layers/windows/VkLayer_device_limits.json | 4
-rw-r--r--  layers/windows/VkLayer_draw_state.json | 24
-rw-r--r--  layers/windows/VkLayer_image.json | 4
-rw-r--r--  layers/windows/VkLayer_object_tracker.json | 4
-rw-r--r--  layers/windows/VkLayer_parameter_validation.json (renamed from layers/linux/VkLayer_param_checker.json) | 8
-rw-r--r--  layers/windows/VkLayer_swapchain.json | 4
-rw-r--r--  layers/windows/VkLayer_threading.json | 4
-rw-r--r--  layers/windows/VkLayer_unique_objects.json | 2
-rw-r--r--  libs/vkjson/vkjson_info.cc | 2
-rw-r--r--  loader/CMakeLists.txt | 31
-rw-r--r--  loader/LoaderAndLayerInterface.md | 352
-rw-r--r--  loader/cJSON.c | 3
-rw-r--r--  loader/debug_report.c | 18
-rw-r--r--  loader/debug_report.h | 16
-rw-r--r--  loader/dirent_on_windows.c | 2
-rw-r--r--  loader/loader.c | 1199
-rw-r--r--  loader/loader.h | 248
-rw-r--r--  loader/table_ops.h | 44
-rw-r--r--  loader/trampoline.c | 529
-rwxr-xr-x  loader/vk-loader-generate.py | 19
-rw-r--r--  loader/wsi.c | 551
-rw-r--r--  loader/wsi.h | 121
-rw-r--r--  spirv-tools_revision | 2
-rw-r--r--  tests/CMakeLists.txt | 42
-rw-r--r--  tests/layer_validation_tests.cpp | 417
-rw-r--r--  tests/test_environment.cpp | 2
-rw-r--r--  tests/vk_layer_settings.txt | 30
-rw-r--r--  tests/vkrenderframework.cpp | 119
-rw-r--r--  tests/vkrenderframework.h | 6
-rwxr-xr-x  update_external_sources.bat | 343
-rwxr-xr-x  update_external_sources.sh | 76
-rwxr-xr-x  vk-generate.py | 15
-rwxr-xr-x  vk-layer-generate.py | 548
-rw-r--r--  vk.xml | 4
-rwxr-xr-x  vk_helper.py | 30
-rwxr-xr-x  vk_layer_documentation_generate.py | 15
-rwxr-xr-x  vulkan.py | 121
-rw-r--r--  windowsRuntimeInstaller/ConfigLayersAndVulkanDLL.ps1 | 267
-rw-r--r--  windowsRuntimeInstaller/CreateInstallerRT.sh | 16
-rw-r--r--  windowsRuntimeInstaller/InstallerRT.nsi | 295
-rw-r--r--  windowsRuntimeInstaller/README.txt | 48
150 files changed, 33365 insertions, 28108 deletions
diff --git a/.gitignore b/.gitignore
index 801c88ff7..a1abef1b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,6 +17,8 @@ demos/tri.dir/Debug/*
demos/tri/Debug/*
demos/Win32/Debug/*
demos/xcb_nvidia.dir/*
+demos/smoke/HelpersDispatchTable.cpp
+demos/smoke/HelpersDispatchTable.h
libs/Win32/Debug/*
*.pyc
*.vcproj
diff --git a/BUILD.md b/BUILD.md
index 80b4fc024..ae14b6855 100644
--- a/BUILD.md
+++ b/BUILD.md
@@ -1,279 +1,68 @@
# Build Instructions
-This project fully supports Linux and Windows today.
-Support for Android is TBD.
+This document contains the instructions for building this repository on Linux and Windows.
-## Git the Bits
-
-You should have access to the Khronos GitHub repository at https://github.com/KhronosGroup/. The
-preferred work flow is to clone the repo, create a branch, push branch to GitHub and then
-issue a merge request to integrate that work back into the repo.
-
-Note: If you are doing ICD (driver) development, please make sure to look at documentation in the [ICD Loader](loader/README.md) and the [Sample Driver](icd).
-
-## Linux System Requirements
-Ubuntu 14.04.3 LTS, 14.10, 15.04 and 15.10 have been used with the sample driver.
-
-These packages are used for building and running the samples.
-```
-sudo apt-get install git subversion cmake libgl1-mesa-dev freeglut3-dev libglm-dev qt5-default libpciaccess-dev libpthread-stubs0-dev libudev-dev bison graphviz libpng-dev
-sudo apt-get build-dep mesa
-```
-
-The sample driver uses DRI3 for its window system interface.
-That requires extra configuration of Ubuntu systems.
-
-### Ubuntu 14.04.3 LTS support of DRI 3
-
-Ubuntu 14.04.3 LTS does not ship a xserver-xorg-video-intel package with supported DRI 3 on intel graphics.
-The xserver-xorg-video-intel package can be built from source with DRI 3 enabled.
-Use the following commands to enable DRI3 on ubuntu 14.04.3 LTS.
-
-- Install packages used to build:
-```
-sudo apt-get update
-sudo apt-get dist-upgrade
-sudo apt-get install devscripts
-sudo apt-get build-dep xserver-xorg-video-intel-lts-vivid
-```
-
-- Get the source code for xserver-xorg-video-intel-lts-vivid
-```
-mkdir xserver-xorg-video-intel-lts-vivid_source
-cd xserver-xorg-video-intel-lts-vivid_source
-apt-get source xserver-xorg-video-intel-lts-vivid
-cd xserver-xorg-video-intel-lts-vivid-2.99.917
-debian/rules patch
-quilt new 'enable-DRI3'
-quilt edit configure.ac
-```
-
-- Use the editor to make these changes.
-```
---- a/configure.ac
-+++ b/configure.ac
-@@ -340,9 +340,9 @@
- [DRI2=yes])
- AC_ARG_ENABLE(dri3,
- AS_HELP_STRING([--enable-dri3],
-- [Enable DRI3 support [[default=no]]]),
-+ [Enable DRI3 support [[default=yes]]]),
- [DRI3=$enableval],
-- [DRI3=no])
-+ [DRI3=yes])
- AC_ARG_ENABLE(xvmc, AS_HELP_STRING([--disable-xvmc],
- [Disable XvMC support [[default=yes]]]),
-```
-- Build and install xserver-xorg-video-intel-lts-vivid
-```
-quilt refresh
-debian/rules clean
-debuild -us -uc
-sudo dpkg -i ../xserver-xorg-video-intel-lts-vivid_2.99.917-1~exp1ubuntu2.2~trusty1_amd64.deb
-```
-- Prevent updates from replacing this version of the package.
-```
-sudo bash -c 'echo xserver-xorg-video-intel-lts-vivid hold | dpkg --set-selections'
-```
-- save your work then restart the X server with the next command.
-```
-sudo service lightdm restart
-```
-- After logging in again, check for success with this command and look for DRI3.
-```
-xdpyinfo | grep DRI
-```
-
-### Ubuntu 14.10 support of DRI 3
-
-Warning: Recent versions of 14.10 have **REMOVED** DRI 3.
-Version: 2:2.99.914-1~exp1ubuntu4.1 is known to work.
-To see status of this package:
-```
-dpkg -s xserver-xorg-video-intel
-```
-
-Note:
-Version 2:2.99.914-1~exp1ubuntu4.2 does not work anymore.
-To install the working driver from launchpadlibrarian.net:
-- Remove the current driver:
-```
-sudo apt-get purge xserver-xorg-video-intel
-```
-- Download the old driver:
-```
-wget http://launchpadlibrarian.net/189418339/xserver-xorg-video-intel_2.99.914-1%7Eexp1ubuntu4.1_amd64.deb
-```
-- Install the driver:
-```
-sudo dpkg -i xserver-xorg-video-intel_2.99.914-1~exp1ubuntu4.1_amd64.deb
-```
-- Pin the package to prevent updates
-```
-sudo bash -c "echo $'Package: xserver-xorg-video-intel\nPin: version 2:2.99.914-1~exp1ubuntu4.1\nPin-Priority: 1001' > /etc/apt/preferences.d/xserver-xorg-video-intel"
-```
-
-- Either restart Ubuntu or just X11.
-
-
-### Ubuntu 15.04 support of DRI 3
-
-Ubuntu 15.04 has never shipped a xserver-xorg-video-intel package with supported DRI 3 on intel graphics.
-The xserver-xorg-video-intel package can be built from source with DRI 3 enabled.
-Use the following commands to enable DRI3 on ubuntu 15.04.
-
-- Install packages used to build:
-```
-sudo apt-get update
-sudo apt-get dist-upgrade
-sudo apt-get install devscripts
-sudo apt-get build-dep xserver-xorg-video-intel
-```
+This repository does not contain a Vulkan-capable driver.
+Before proceeding, it is strongly recommended that you obtain a Vulkan driver from your graphics hardware vendor
+and install it.
-- Get the source code for xserver-xorg-video-intel
-```
-mkdir xserver-xorg-video-intel_source
-cd xserver-xorg-video-intel_source
-apt-get source xserver-xorg-video-intel
-cd xserver-xorg-video-intel-2.99.917
-debian/rules patch
-quilt new 'enable-DRI3'
-quilt edit configure.ac
-```
+Note: The sample Vulkan Intel driver for Linux (ICD) is being deprecated in favor of other driver options from Intel.
+This driver has been moved to the [VulkanTools repo](https://github.com/LunarG/VulkanTools).
+Further instructions regarding this ICD are available there.
-- Use the editor to make these changes.
-```
---- a/configure.ac
-+++ b/configure.ac
-@@ -340,9 +340,9 @@
- [DRI2=yes])
- AC_ARG_ENABLE(dri3,
- AS_HELP_STRING([--enable-dri3],
-- [Enable DRI3 support [[default=no]]]),
-+ [Enable DRI3 support [[default=yes]]]),
- [DRI3=$enableval],
-- [DRI3=no])
-+ [DRI3=yes])
- AC_ARG_ENABLE(xvmc, AS_HELP_STRING([--disable-xvmc],
- [Disable XvMC support [[default=yes]]]),
-```
-- Build and install xserver-xorg-video-intel
-```
-quilt refresh
-debian/rules clean
-debuild -us -uc
-sudo dpkg -i ../xserver-xorg-video-intel_2.99.917-1~exp1ubuntu2.2_amd64.deb
-```
-- Prevent updates from replacing this version of the package.
-```
-sudo bash -c 'echo xserver-xorg-video-intel hold | dpkg --set-selections'
-```
-- save your work then restart the X server with the next command.
-```
-sudo service lightdm restart
-```
-- After logging in again, check for success with this command and look for DRI3.
-```
-xdpyinfo | grep DRI
-```
-### Ubuntu 15.10 support of DRI 3
-
-Ubuntu 15.10 has never shipped a xserver-xorg-video-intel package with supported DRI 3 on intel graphics.
-The xserver-xorg-video-intel package can be built from source with DRI 3 enabled.
-Use the following commands to enable DRI3 on ubuntu 15.10.
+## Git the Bits
-- Install packages used to build:
+To create your local git repository:
```
-sudo apt-get update
-sudo apt-get dist-upgrade
-sudo apt-get install devscripts
-sudo apt-get build-dep xserver-xorg-video-intel
+git clone https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers
```
-- Get the source code for xserver-xorg-video-intel
-```
-mkdir xserver-xorg-video-intel_source
-cd xserver-xorg-video-intel_source
-apt-get source xserver-xorg-video-intel
-cd xserver-xorg-video-intel-2.99.917+git20150808
-debian/rules patch
-quilt new 'enable-DRI3'
-quilt edit configure.ac
-```
+If you intend to contribute, the preferred work flow is for you to develop your contribution
+in a fork of this repo in your GitHub account and then submit a pull request.
+Please see the CONTRIBUTING.md file in this repository for more details.
-- Use the editor to make these changes.
-```
-Index: xserver-xorg-video-intel-2.99.917+git20150808/configure.ac
-===================================================================
---- xserver-xorg-video-intel-2.99.917+git20150808.orig/configure.ac
-+++ xserver-xorg-video-intel-2.99.917+git20150808/configure.ac
-@@ -356,7 +356,7 @@ AC_ARG_WITH(default-dri,
- AS_HELP_STRING([--with-default-dri],
- [Select the default maximum DRI level [default 2]]),
- [DRI_DEFAULT=$withval],
-- [DRI_DEFAULT=2])
-+ [DRI_DEFAULT=3])
- if test "x$DRI_DEFAULT" = "x0"; then
- AC_DEFINE(DEFAULT_DRI_LEVEL, 0,[Default DRI level])
- else
-```
-- Build and install xserver-xorg-video-intel
-```
-quilt refresh
-debian/rules clean
-debuild -us -uc
-sudo dpkg -i ../xserver-xorg-video-intel_2.99.917+git20150808-0ubuntu4_amd64.deb
-```
-- Prevent updates from replacing this version of the package.
-```
-sudo bash -c 'echo xserver-xorg-video-intel hold | dpkg --set-selections'
-```
-- save your work then restart the X server with the next command.
-```
-sudo service lightdm restart
-```
-- After logging in again, check for success with this command and look for DRI3.
-```
-xdpyinfo | grep DRI
-```
+## Linux Build
+The build process uses CMake to generate makefiles for this project.
+The build generates the loader, layers, and tests.
-## Clone the repository
+This repo has been built and tested on Ubuntu 14.04.3 LTS, 14.10, 15.04 and 15.10.
+It should be straightforward to use it on other Linux distros.
-To create your local git repository:
+These packages are needed to build this repository:
```
-mkdir YOUR_DEV_DIRECTORY # it's called Vulkan-LoaderAndValidationLayers on Github, but the name doesn't matter
-cd YOUR_DEV_DIRECTORY
-git clone -o khronos https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers .
-# Or substitute the URL from your forked repo for https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers above.
+sudo apt-get install git cmake build-essential bison libxcb1-dev
```
-## Linux Build
-
-The sample driver uses cmake and should work with the usual cmake options and utilities.
-The standard build process builds the icd, the icd loader and all the tests.
-
Example debug build:
```
-cd YOUR_DEV_DIRECTORY # cd to the root of the Vulkan-LoaderAndValidationLayers git repository
-./update_external_sources.sh # Fetches and builds glslang, llvm, LunarGLASS, and spirv-tools
+cd Vulkan-LoaderAndValidationLayers # cd to the root of the cloned git repository
+./update_external_sources.sh # Fetches and builds glslang and spirv-tools
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug
cd dbuild
make
```
-To run Vulkan programs you must tell the icd loader where to find the libraries.
-This is described in a specification in the Khronos documentation Git
-repository. See the file:
-https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/blob/sdk-1.0.3/loader/LoaderAndLayerInterface.md#vulkan-installable-client-driver-interface-with-the-loader
+If you have installed a Vulkan driver obtained from your graphics hardware vendor, the install process should
+have configured the driver so that the Vulkan loader can find and load it.
-This specification describes both how ICDs and layers should be properly
-packaged, and how developers can point to ICDs and layers within their builds.
+If you want to use the loader and layers that you have just built:
+```
+export LD_LIBRARY_PATH=<path to your repository root>/dbuild/loader
+export VK_LAYER_PATH=<path to your repository root>/dbuild/layers
+```
+Note that if you have installed the [LunarG Vulkan SDK](https://vulkan.lunarg.com),
+you will also have the SDK version of the loader and layers installed in your default system libraries.
+You can run the `vulkaninfo` application to see which driver, loader and layers are being used.
+
+The `LoaderAndLayerInterface` document in the `loader` folder in this repository is a specification that
+describes both how ICDs and layers should be properly
+packaged, and how developers can point to ICDs and layers within their builds.
## Validation Test
-The test executables can be found in the dbuild/tests directory. The tests use the Google
-gtest infrastructure. Tests available so far:
+The test executables can be found in the dbuild/tests directory.
+Some of the tests that are available:
- vk_layer_validation_tests: Test Vulkan layers.
There are also a few shell and Python scripts that run test collections (eg,
@@ -281,11 +70,11 @@ There are also a few shell and Python scripts that run test collections (eg,
## Linux Demos
-The demos executables can be found in the dbuild/demos directory. The demos use DRI 3
-to render directly onto window surfaces.
+Some demos that can be found in the dbuild/demos directory are:
- vulkaninfo: report GPU properties
-- tri: a textured triangle
+- tri: a textured triangle (which is animated to demonstrate Z-clipping)
- cube: a textured spinning cube
+- smoke/smoke: A "smoke" test using a more complex Vulkan demo
## Windows System Requirements
@@ -296,7 +85,17 @@ Windows 7+ with additional required software packages:
- Tell the installer to "Add CMake to the system PATH" environment variable.
- Python 3 (from https://www.python.org/downloads). Notes:
- Select to install the optional sub-package to add Python to the system PATH environment variable.
+ - Ensure the pip module is installed (it should be by default)
- Need python3.3 or later to get the Windows py.exe launcher that is used to get python3 rather than python2 if both are installed on Windows
+ - 32 bit python works
+- Python lxml package must be installed
+ - Download the lxml package from
+ http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml
+ 32-bit latest for Python 3.5 is: lxml-3.5.0-cp35-none-win32.whl
+ 64-bit latest for Python 3.5 is: lxml-3.5.0-cp35-none-win_amd64.whl
+ - The package can be installed with pip as follows:
+ pip install lxml-3.5.0-cp35-none-win32.whl
+ If pip is not in your path, you can find it at $PYTHON_HOME\Scripts\pip.exe, where PYTHON_HOME is the folder where you installed Python.
- Git (from http://git-scm.com/download/win).
- Note: If you use Cygwin, you can normally use Cygwin's "git.exe". However, in order to use the "update_external_sources.bat" script, you must have this version.
- Tell the installer to allow it to be used for "Developer Prompt" as well as "Git Bash".
@@ -319,9 +118,9 @@ Optional software packages:
Cygwin is used in order to obtain a local copy of the Git repository, and to run the CMake command that creates Visual Studio files. Visual Studio is used to build the software, and will re-run CMake as appropriate.
-Example debug x64 build (e.g. in a "Developer Command Prompt for VS2013" window):
+To build all Windows targets (e.g. in a "Developer Command Prompt for VS2013" window):
```
-cd YOUR_DEV_DIRECTORY # cd to the root of the Vulkan-LoaderAndValidationLayers git repository
+cd Vulkan-LoaderAndValidationLayers # cd to the root of the cloned git repository
update_external_sources.bat --all
build_windows_targets.bat
```
@@ -331,27 +130,7 @@ At this point, you can use Windows Explorer to launch Visual Studio by double-cl
Vulkan programs must be able to find and use the vulkan-1.dll library. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder that it is located in.
To run Vulkan programs you must tell the icd loader where to find the libraries.
-This is described in a specification in the Khronos documentation Git
-repository. See the file:
-https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/blob/sdk-1.0.3/loader/LoaderAndLayerInterface.md#vulkan-installable-client-driver-interface-with-the-loader
-
+This is described in a `LoaderAndLayerInterface` document in the `loader` folder in this repository.
This specification describes both how ICDs and layers should be properly
packaged, and how developers can point to ICDs and layers within their builds.
-### Windows 64-bit Installation Notes
-If you plan on creating a Windows Install file (done in the windowsRuntimeInstaller sub-directory) you will need to build for both 32-bit and 64-bit Windows since both versions of EXEs and DLLs exist simultaneously on Windows 64.
-
-To do this, simply create and build the release versions of each target:
-```
-cd LoaderAndTools # cd to the root of the Vulkan git repository
-update_external_sources.bat --all
-mkdir build
-cd build
-cmake -G "Visual Studio 12 Win64" ..
-msbuild ALL_BUILD.vcxproj /p:Platform=x64 /p:Configuration=Release
-mkdir build32
-cd build32
-cmake -G "Visual Studio 12" ..
-msbuild ALL_BUILD.vcxproj /p:Platform=x86 /p:Configuration=Release
-```
-
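The new BUILD.md text above replaces the old DRI3 setup with a much simpler run configuration. A minimal sketch of the environment-variable step, assuming the repository was cloned to `$HOME/Vulkan-LoaderAndValidationLayers` and built into `dbuild` as shown (the `REPO` variable is illustrative, adjust it to your checkout):

```shell
# Assumed clone location -- adjust to wherever you checked out the repo.
REPO="$HOME/Vulkan-LoaderAndValidationLayers"

# After `cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug` and `make -C dbuild`,
# point applications at the freshly built loader instead of any system one,
# and tell the loader where the just-built layer manifests live:
export LD_LIBRARY_PATH="$REPO/dbuild/loader${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export VK_LAYER_PATH="$REPO/dbuild/layers"

echo "$LD_LIBRARY_PATH"
echo "$VK_LAYER_PATH"
```

Running `vulkaninfo` afterwards is an easy way to confirm which loader and layers are actually being picked up.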
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 80888e981..75cf7bd98 100755
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -5,12 +5,11 @@ cmake_minimum_required(VERSION 2.8.11)
project (VULKAN)
# set (CMAKE_VERBOSE_MAKEFILE 1)
-
-
# The MAJOR number of the version we're building, used in naming
# vulkan-<major>.dll (and other files).
set(MAJOR "1")
+find_package(PythonInterp 3 REQUIRED)
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
add_definitions(-DVK_USE_PLATFORM_WIN32_KHR -DWIN32_LEAN_AND_MEAN)
@@ -19,29 +18,36 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Android")
add_definitions(-DVK_USE_PLATFORM_ANDROID_KHR)
set(DisplayServer Android)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
- add_definitions(-DVK_USE_PLATFORM_XCB_KHR)
- set(DisplayServer Xcb)
-
-# TODO: Basic support is present for Xlib but is untested.
-# Wayland/Mir support is stubbed in but unimplemented and untested.
-
-# add_definitions(-DVK_USE_PLATFORM_XLIB_KHR)
-# set(DisplayServer Xlib)
+ # TODO: Basic support is present for Xlib but is untested.
+ # Mir support is stubbed in but unimplemented and untested.
+ option(BUILD_WSI_XCB_SUPPORT "Build XCB WSI support" ON)
+ option(BUILD_WSI_XLIB_SUPPORT "Build Xlib WSI support" ON)
+ option(BUILD_WSI_WAYLAND_SUPPORT "Build Wayland WSI support" OFF)
+ option(BUILD_WSI_MIR_SUPPORT "Build Mir WSI support" OFF)
+
+ if (BUILD_WSI_XCB_SUPPORT)
+ add_definitions(-DVK_USE_PLATFORM_XCB_KHR)
+ set(DisplayServer Xcb)
+ endif()
-# add_definitions(-DVK_USE_PLATFORM_MIR_KHR)
-# set(DisplayServer Mir)
+ if (BUILD_WSI_XLIB_SUPPORT)
+ add_definitions(-DVK_USE_PLATFORM_XLIB_KHR)
+ set(DisplayServer Xlib)
+ endif()
-# add_definitions(-DVK_USEPLATFORM_WAYLAND_KHR)
-# set(DisplayServer Wayland)
+ if (BUILD_WSI_WAYLAND_SUPPORT)
+ add_definitions(-DVK_USE_PLATFORM_WAYLAND_KHR)
+ set(DisplayServer Wayland)
+ endif()
+ if (BUILD_WSI_MIR_SUPPORT)
+ add_definitions(-DVK_USE_PLATFORM_MIR_KHR)
+ set(DisplayServer Mir)
+ endif()
else()
message(FATAL_ERROR "Unsupported Platform!")
endif()
-
-
-
-
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake")
# Header file for CMake settings
@@ -64,21 +70,120 @@ if (CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
endif()
endif()
+if(NOT WIN32)
+ find_package(XCB REQUIRED)
+ set (BUILDTGT_DIR build)
+ set (BINDATA_DIR Bin)
+ set (LIBSOURCE_DIR Lib)
+else()
+ # For Windows, since 32-bit and 64-bit items can co-exist, we build each in its own build directory.
+ # 32-bit target data goes in build32, and 64-bit target data goes into build. So, include/link the
+ # appropriate data at build time.
+ if (CMAKE_CL_64)
+ set (BUILDTGT_DIR build)
+ set (BINDATA_DIR Bin)
+ set (LIBSOURCE_DIR Lib)
+ else()
+ set (BUILDTGT_DIR build32)
+ set (BINDATA_DIR Bin32)
+ set (LIBSOURCE_DIR Lib32)
+ endif()
+endif()
+
option(BUILD_LOADER "Build loader" ON)
option(BUILD_TESTS "Build tests" ON)
option(BUILD_LAYERS "Build layers" ON)
option(BUILD_DEMOS "Build demos" ON)
option(BUILD_VKJSON "Build vkjson" ON)
-if (BUILD_TESTS)
- # Hard code our glslang path for now
- get_filename_component(GLSLANG_PREFIX ../glslang ABSOLUTE)
+find_program(GLSLANG_VALIDATOR NAMES glslangValidator
+ HINTS "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/install/bin"
+ "${PROJECT_SOURCE_DIR}/../${BINDATA_DIR}" )
+
+find_path(GLSLANG_SPIRV_INCLUDE_DIR SPIRV/spirv.hpp HINTS "${CMAKE_SOURCE_DIR}/../glslang" DOC "Path to SPIRV/spirv.hpp")
+find_path(SPIRV_TOOLS_INCLUDE_DIR spirv-tools/libspirv.h HINTS "${CMAKE_SOURCE_DIR}/../spirv-tools/include"
+ "${CMAKE_SOURCE_DIR}/../source/spirv-tools/include"
+ "${CMAKE_SOURCE_DIR}/../spirv-tools/external/include"
+ "${CMAKE_SOURCE_DIR}/../source/spirv-tools/external/include"
+ DOC "Path to spirv-tools/libspirv.h")
+
+if (WIN32)
+ set (GLSLANG_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/Release"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Release"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/OGLCompilersDLL/Release"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/SPIRV/Release" )
+ set (SPIRV_TOOLS_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../spirv-tools/${BUILDTGT_DIR}/Release")
+else()
+ set (GLSLANG_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../glslang/build/install/lib" "${CMAKE_SOURCE_DIR}/../x86_64/lib/glslang" )
+ set (SPIRV_TOOLS_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../spirv-tools/build" "${CMAKE_SOURCE_DIR}/../x86_64/lib/spirv-tools" )
+endif()
- if(NOT EXISTS ${GLSLANG_PREFIX})
- message(FATAL_ERROR "Necessary glslang components do not exist: " ${GLSLANG_PREFIX})
- endif()
+find_library(GLSLANG_LIB NAMES glslang
+ HINTS ${GLSLANG_SEARCH_PATH} )
+
+find_library(OGLCompiler_LIB NAMES OGLCompiler
+ HINTS ${GLSLANG_SEARCH_PATH} )
+
+find_library(OSDependent_LIB NAMES OSDependent
+ HINTS ${GLSLANG_SEARCH_PATH} )
+
+find_library(SPIRV_LIB NAMES SPIRV
+ HINTS ${GLSLANG_SEARCH_PATH} )
+
+find_library(SPIRV_TOOLS_LIB NAMES SPIRV-Tools
+ HINTS ${SPIRV_TOOLS_SEARCH_PATH} )
+
+# On Windows, we must pair Debug and Release appropriately
+if (WIN32)
+ set (GLSLANG_DEBUG_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/Debug"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Debug"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/OGLCompilersDLL/Debug"
+ "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/SPIRV/Debug")
+ set (SPIRV_TOOLS_DEBUG_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../spirv-tools/${BUILDTGT_DIR}/Debug")
+
+ add_library(glslang STATIC IMPORTED)
+ add_library(OGLCompiler STATIC IMPORTED)
+ add_library(OSDependent STATIC IMPORTED)
+ add_library(SPIRV STATIC IMPORTED)
+ add_library(Loader STATIC IMPORTED)
+ add_library(SPIRV-Tools STATIC IMPORTED)
+
+ find_library(GLSLANG_DLIB NAMES glslang
+ HINTS ${GLSLANG_DEBUG_SEARCH_PATH} )
+ find_library(OGLCompiler_DLIB NAMES OGLCompiler
+ HINTS ${GLSLANG_DEBUG_SEARCH_PATH} )
+ find_library(OSDependent_DLIB NAMES OSDependent
+ HINTS ${GLSLANG_DEBUG_SEARCH_PATH} )
+ find_library(SPIRV_DLIB NAMES SPIRV
+ HINTS ${GLSLANG_DEBUG_SEARCH_PATH} )
+ find_library(SPIRV_TOOLS_DLIB NAMES SPIRV-Tools
+ HINTS ${SPIRV_TOOLS_DEBUG_SEARCH_PATH} )
+
+ set_target_properties(glslang PROPERTIES
+ IMPORTED_LOCATION "${GLSLANG_LIB}"
+ IMPORTED_LOCATION_DEBUG "${GLSLANG_DLIB}")
+ set_target_properties(OGLCompiler PROPERTIES
+ IMPORTED_LOCATION "${OGLCompiler_LIB}"
+ IMPORTED_LOCATION_DEBUG "${OGLCompiler_DLIB}")
+ set_target_properties(OSDependent PROPERTIES
+ IMPORTED_LOCATION "${OSDependent_LIB}"
+ IMPORTED_LOCATION_DEBUG "${OSDependent_DLIB}")
+ set_target_properties(SPIRV PROPERTIES
+ IMPORTED_LOCATION "${SPIRV_LIB}"
+ IMPORTED_LOCATION_DEBUG "${SPIRV_DLIB}")
+ set_target_properties(SPIRV-Tools PROPERTIES
+ IMPORTED_LOCATION "${SPIRV_TOOLS_LIB}"
+ IMPORTED_LOCATION_DEBUG "${SPIRV_TOOLS_DLIB}")
+
+ set (GLSLANG_LIBRARIES glslang OGLCompiler OSDependent SPIRV)
+ set (SPIRV_TOOLS_LIBRARIES SPIRV-Tools)
+else ()
+ set (GLSLANG_LIBRARIES ${GLSLANG_LIB} ${OGLCompiler_LIB} ${OSDependent_LIB} ${SPIRV_LIB})
+ set (SPIRV_TOOLS_LIBRARIES ${SPIRV_TOOLS_LIB})
endif()
+set (PYTHON_CMD ${PYTHON_EXECUTABLE})
+
if(NOT WIN32)
include(GNUInstallDirs)
add_definitions(-DSYSCONFDIR="${CMAKE_INSTALL_SYSCONFDIR}")
@@ -88,11 +193,6 @@ if(NOT WIN32)
else()
add_definitions(-DLOCALPREFIX="${CMAKE_INSTALL_PREFIX}")
endif()
- if(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
- set(PYTHON_CMD "python3")
- endif()
-else()
- set(PYTHON_CMD "py")
endif()
# loader: Generic VULKAN ICD loader
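The CMakeLists.txt changes above turn the previously commented-out WSI back-ends into `option()` switches, so a back-end can be chosen at configure time rather than by editing the file. A hypothetical configure line for an Xlib-only Linux build (option names are those introduced in this diff; the `dbuild` directory follows the BUILD.md convention):

```
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug \
      -DBUILD_WSI_XCB_SUPPORT=OFF \
      -DBUILD_WSI_XLIB_SUPPORT=ON \
      -DBUILD_WSI_WAYLAND_SUPPORT=OFF \
      -DBUILD_WSI_MIR_SUPPORT=OFF
```

Each enabled option adds the matching `VK_USE_PLATFORM_*_KHR` definition, so only the selected window-system code is compiled.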
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..cdc721482
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,90 @@
+## How to Contribute to Vulkan Source Repositories
+
+### **The Repositories**
+
+The Vulkan source code is distributed across several GitHub repositories.
+The repositories sponsored by Khronos and LunarG are described here.
+In general, the canonical Vulkan Loader and Validation Layers sources are in the Khronos repository,
+while the LunarG repositories host sources for additional tools and sample programs.
+
+* [Khronos Vulkan-LoaderAndValidationLayers](https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers)
+* [LunarG VulkanTools](https://github.com/LunarG/VulkanTools)
+* [LunarG VulkanSamples](https://github.com/LunarG/VulkanSamples)
+
+As a convenience, the contents of the Vulkan-LoaderAndValidationLayers repository are downstreamed into the VulkanTools and VulkanSamples repositories via a branch named `trunk`.
+This makes the VulkanTools and VulkanSamples repositories easier to work with and avoids compatibility issues
+that might arise if the Vulkan-LoaderAndValidationLayers components were obtained from a separate repository.
+
+### **How to Submit Fixes**
+
+* **Ensure that the bug was not already reported or fixed** by searching on GitHub under Issues
+ and Pull Requests.
+* Use the existing GitHub forking and pull request process.
+ This will involve [forking the repository](https://help.github.com/articles/fork-a-repo/),
+ creating a branch with your commits, and then [submitting a pull request](https://help.github.com/articles/using-pull-requests/).
+* Please base your fixes on the master branch. SDK branches are generally not updated except for critical fixes needed to repair an SDK release.
+* Please include the GitHub Issue number near the beginning of the commit text if applicable.
+ * Example: "GitHub 123: Fix missing init"
+* If your changes are restricted only to files from the Vulkan-LoaderAndValidationLayers repository, please direct your pull request to that repository, instead of VulkanTools or VulkanSamples.
+
+
+#### **Coding Conventions and Formatting**
+* Try to follow any existing style in the file. "When in Rome..."
+* Run clang-format on your changes to maintain formatting.
+ * There are `.clang-format` files throughout the repository to define clang-format settings
+ which are found and used automatically by clang-format.
+ * A sample git workflow may look like:
+
+> # Make changes to the source.
+> $ git add .
+> $ clang-format -style=file -i < list of changed code files >
+> # Check to see if clang-format made any changes and if they are OK.
+> $ git add .
+> $ git commit
+
+#### **Testing**
+* Run the existing tests in the repository before and after your changes to check for any regressions.
+ There are some tests that appear in all repositories.
+ These tests can be found in the following folders inside your target build directory
+ (these instructions are for Linux):
+* In the `demos` directory, run:
+
+> cube
+> cube --validate
+> tri
+> tri --validate
+> smoke
+> smoke --validate
+> vulkaninfo
+
+* In the `tests` directory, run:
+
+> run_all_tests.sh
+
+* Note that some tests may fail due to known issues or driver-specific problems.
+ Your changes should not alter the test results unless that was their intent.
+* Run tests that explicitly exercise your changes.
+* Feel free to subject your code changes to other tests as well!
+
+### **Contributor License Agreement (CLA)**
+
+#### **Khronos Repository (Vulkan-LoaderAndValidationLayers)**
+
+The Khronos Group is still finalizing the CLA process and documentation,
+so the details about using or requiring a CLA are not available yet.
+In the meantime, we suggest that you not submit any contributions unless you are comfortable doing so without a CLA.
+
+#### **LunarG Repositories**
+
+You'll be prompted with a "click-through" CLA as part of submitting your pull request in GitHub.
+
+### **License and Copyrights**
+
+All contributions made to the Vulkan-LoaderAndValidationLayers repository are Khronos branded and, as such,
+any new files need to have the Khronos license (MIT style) and copyright included.
+Please see an existing file in this repository for an example.
+
+All contributions made to the LunarG repositories are to be made under the MIT license
+and any new files need to include this license and any applicable copyrights.
+
+You can include your individual copyright after any existing copyrights.
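The demo-testing steps described in the new CONTRIBUTING.md above (run each demo with `--validate`, watch for regressions) can be sketched as a small fail-fast helper. This is purely illustrative and not part of the repository; the demo names come from the list above, and the helper assumes the binaries are on the current path:

```shell
#!/bin/sh
# Run each given demo binary with --validate and stop at the first failure,
# mirroring the manual testing steps in CONTRIBUTING.md.
run_validated() {
    for demo in "$@"; do
        if ! "$demo" --validate; then
            echo "validation failed: $demo" >&2
            return 1
        fi
    done
    echo "all demos passed validation"
    return 0
}

# Hypothetical usage, from the demos build directory:
# run_validated ./cube ./tri ./smoke
```

Because `cube` now propagates validation errors through its exit status (see the `demos/cube.c` hunks later in this diff), checking each command's return code is enough to catch regressions.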
diff --git a/LunarGLASS_revision b/LunarGLASS_revision
deleted file mode 100644
index 6c172bfa4..000000000
--- a/LunarGLASS_revision
+++ /dev/null
@@ -1 +0,0 @@
-502186
diff --git a/LunarGLASS_revision_R32 b/LunarGLASS_revision_R32
deleted file mode 100644
index 22271c852..000000000
--- a/LunarGLASS_revision_R32
+++ /dev/null
@@ -1 +0,0 @@
-32385
diff --git a/README.md b/README.md
index eef7e3508..39b383e61 100755
--- a/README.md
+++ b/README.md
@@ -1,18 +1,22 @@
# Vulkan Ecosystem Components
*Version 1.0, January 25, 2016*
-This project provides loader and validation layers for Vulkan developers on Windows and Linux.
+This project provides the Khronos official ICD loader and validation layers for Vulkan developers on Windows and Linux.
## Introduction
-Vulkan is an Explicit API, enabling direct control over how GPUs actually work. No (or very little) validation or error checking is done inside a VK driver. Applications have full control and responsibility. Any errors in how VK is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.
+Vulkan is an Explicit API, enabling direct control over how GPUs actually work. No (or very little) validation
+or error checking is done inside a Vulkan driver. Applications have full control and responsibility. Any errors in
+how Vulkan is used often result in a crash. This project provides standard validation layers that can be enabled to ease development by
+helping developers verify their applications correctly use the Vulkan API.
-New with Vulkan is an extensible layered architecture that enables validation libraries to be implemented as layers. The loader is essential in supporting multiple drivers and GPUs along with layer library enablement.
+Vulkan supports multiple GPUs and multiple global contexts (VkInstance). The ICD loader is necessary to support multiple GPUs and VkInstance-level Vulkan commands. Additionally, the loader manages inserting Vulkan layer libraries,
+including validation layers, between the application and the ICD.
The following components are available in this repository:
- Vulkan header files
-- [*ICD Loader*](loader) and [*Layer Manager*](layers/README.md, loader/README.md
-- Core [*Validation Layers*](layers/)
+- [*ICD Loader*](loader/)
+- [*Validation Layers*](layers/)
- Demos and tests for the loader and validation layers
@@ -24,18 +28,18 @@ includes directions for building all the components, running the validation test
Information on how to enable the various Validation layers is in
[layers/README.md](layers/README.md).
+Architecture and interface information for the loader is in
+[loader/LoaderAndLayerInterface.md](loader/LoaderAndLayerInterface.md).
## License
-This work is intended to be released as open source under a MIT-style
-license once the Vulkan specification is public. Until that time, this work
-is covered by the Khronos NDA governing the details of the VK API.
+This work is released as open source under an MIT-style license from Khronos, including a Khronos copyright.
+
+See LICENSE.txt for a full list of licenses used in this repository.
## Acknowledgements
-While this project is being developed by LunarG, Inc; there are many other
-companies and individuals making this possible: Valve Software, funding
-project development; Intel Corporation, providing full hardware specifications
-and valuable technical feedback; AMD, providing VK spec editor contributions;
-ARM, contributing a Chairman for this working group within Khronos; Nvidia,
-providing an initial co-editor for the spec; Qualcomm for picking up the
-co-editor's chair; and Khronos, for providing hosting within GitHub.
+While this project has been developed primarily by LunarG, Inc., there are many other
+companies and individuals making this possible: Valve Corporation, funding
+project development; Google, providing significant contributions to the validation layers;
+and Khronos, providing oversight and hosting of the project.
+
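The revised README notes that the loader inserts layer libraries, including the validation layers, between the application and the ICD. One common way to request specific layers at runtime is the loader's `VK_INSTANCE_LAYERS` environment variable, a colon-separated list on Linux (Windows uses semicolons). The helper below merely assembles such a list, using two layer names that appear in this repository; it is an illustration, not repository code:

```shell
#!/bin/sh
# Build a colon-separated layer list for the loader's VK_INSTANCE_LAYERS
# environment variable (colon is the Linux separator).
layer_list() (
    IFS=:
    printf '%s\n' "$*"
)

# Hypothetical usage: enable two layers from this repository for any
# Vulkan application launched from this shell.
VK_INSTANCE_LAYERS="$(layer_list VK_LAYER_GOOGLE_threading VK_LAYER_LUNARG_core_validation)"
export VK_INSTANCE_LAYERS
```

The subshell body `( ... )` keeps the `IFS` change from leaking into the caller's environment.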
diff --git a/buildAndroid/android-generate.bat b/buildAndroid/android-generate.bat
new file mode 100644
index 000000000..bf053528d
--- /dev/null
+++ b/buildAndroid/android-generate.bat
@@ -0,0 +1,33 @@
+@echo off
+REM # Copyright 2015 The Android Open Source Project
+REM # Copyright (C) 2015 Valve Corporation
+REM
+REM # Licensed under the Apache License, Version 2.0 (the "License");
+REM # you may not use this file except in compliance with the License.
+REM # You may obtain a copy of the License at
+REM
+REM # http://www.apache.org/licenses/LICENSE-2.0
+REM
+REM # Unless required by applicable law or agreed to in writing, software
+REM # distributed under the License is distributed on an "AS IS" BASIS,
+REM # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+REM # See the License for the specific language governing permissions and
+REM # limitations under the License.
+
+if exist generated (
+ rmdir /s /q generated
+)
+mkdir generated
+
+python ../vk-generate.py Android dispatch-table-ops layer > generated/vk_dispatch_table_helper.h
+
+python ../vk_helper.py --gen_enum_string_helper ../include/vulkan/vulkan.h --abs_out_dir generated
+python ../vk_helper.py --gen_struct_wrappers ../include/vulkan/vulkan.h --abs_out_dir generated
+
+python ../vk-layer-generate.py Android object_tracker ../include/vulkan/vulkan.h > generated/object_tracker.cpp
+python ../vk-layer-generate.py Android unique_objects ../include/vulkan/vulkan.h > generated/unique_objects.cpp
+
+cd generated
+python ../../genvk.py threading -registry ../../vk.xml thread_check.h
+python ../../genvk.py paramchecker -registry ../../vk.xml parameter_validation.h
+cd ..
diff --git a/buildAndroid/android-generate.sh b/buildAndroid/android-generate.sh
index 5f9806814..928a17621 100755
--- a/buildAndroid/android-generate.sh
+++ b/buildAndroid/android-generate.sh
@@ -15,15 +15,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+dir=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
+cd "$dir"
+
rm -rf generated
mkdir -p generated
-python ../vk-generate.py dispatch-table-ops layer > generated/vk_dispatch_table_helper.h
+python ../vk-generate.py Android dispatch-table-ops layer > generated/vk_dispatch_table_helper.h
python ../vk_helper.py --gen_enum_string_helper ../include/vulkan/vulkan.h --abs_out_dir generated
python ../vk_helper.py --gen_struct_wrappers ../include/vulkan/vulkan.h --abs_out_dir generated
-python ../vk-layer-generate.py object_tracker ../include/vulkan/vulkan.h > generated/object_tracker.cpp
-python ../vk-layer-generate.py unique_objects ../include/vulkan/vulkan.h > generated/unique_objects.cpp
+python ../vk-layer-generate.py Android object_tracker ../include/vulkan/vulkan.h > generated/object_tracker.cpp
+python ../vk-layer-generate.py Android unique_objects ../include/vulkan/vulkan.h > generated/unique_objects.cpp
( cd generated; python ../../genvk.py threading -registry ../../vk.xml thread_check.h )
-( cd generated; python ../../genvk.py paramchecker -registry ../../vk.xml param_check.h )
+( cd generated; python ../../genvk.py paramchecker -registry ../../vk.xml parameter_validation.h )
+exit 0
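The `dir=$(cd -P -- "$(dirname -- "$0")" && pwd -P)` line added to android-generate.sh is a portable idiom for resolving the directory that contains the running script, so its relative paths (e.g. `../vk-generate.py`) work regardless of the caller's working directory. A minimal standalone sketch:

```shell
#!/bin/sh
# Resolve the directory containing this script, resolving symlinks (-P),
# and cd into it so relative paths behave the same from any invocation point.
script_dir=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
cd "$script_dir" || exit 1

# From here, paths like ../vk-generate.py are stable no matter where the
# script was invoked from.
echo "running from: $script_dir"
```

`dirname -- "$0"` handles script names beginning with a dash, and `pwd -P` prints the physical path with symlinks resolved.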
diff --git a/buildAndroid/jni/Android.mk b/buildAndroid/jni/Android.mk
index 9241fa2af..e9dfc36e7 100644
--- a/buildAndroid/jni/Android.mk
+++ b/buildAndroid/jni/Android.mk
@@ -1,161 +1,132 @@
-# Copyright 2015 The Android Open Source Project
-# Copyright (C) 2015 Valve Corporation
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-LOCAL_PATH := $(abspath $(call my-dir))
-MY_PATH := $(LOCAL_PATH)
-SRC_DIR := $(LOCAL_PATH)/../../
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := layer_utils
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_config.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_extension_utils.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_utils.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-include $(BUILD_STATIC_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_draw_state
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/draw_state.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_debug_marker_table.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader \
- $(SRC_DIR)/../glslang/SPIRV
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_mem_tracker
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/mem_tracker.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_device_limits
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/device_limits.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_debug_marker_table.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_image
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/image.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_param_checker
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/param_checker.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_debug_marker_table.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/layers \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_object_tracker
-LOCAL_SRC_FILES += $(SRC_DIR)/buildAndroid/generated/object_tracker.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/layers \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_threading
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/threading.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/layers \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_unique_objects
-LOCAL_SRC_FILES += $(SRC_DIR)/buildAndroid/generated/unique_objects.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/buildAndroid/generated/vk_safe_struct.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/layers \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayer_swapchain
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/swapchain.cpp
-LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/buildAndroid/generated \
- $(SRC_DIR)/loader
-LOCAL_STATIC_LIBRARIES += layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -llog
-include $(BUILD_SHARED_LIBRARY)
-
-include $(CLEAR_VARS)
-LOCAL_MODULE := VkLayerValidationTests
-LOCAL_SRC_FILES += $(SRC_DIR)/tests/layer_validation_tests.cpp \
- $(SRC_DIR)/tests/vktestbinding.cpp \
- $(SRC_DIR)/tests/vktestframeworkandroid.cpp \
- $(SRC_DIR)/tests/vkrenderframework.cpp
-LOCAL_C_INCLUDES += $(SRC_DIR)/include \
- $(SRC_DIR)/layers \
- $(SRC_DIR)/libs \
- $(SRC_DIR)/icd/common
-LOCAL_STATIC_LIBRARIES := googletest_main layer_utils
-LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
-LOCAL_LDLIBS := -lvulkan
-include $(BUILD_EXECUTABLE)
-
-$(call import-module,third_party/googletest)
+# Copyright 2015 The Android Open Source Project
+# Copyright (C) 2015 Valve Corporation
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+
+# http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCAL_PATH := $(abspath $(call my-dir))
+MY_PATH := $(LOCAL_PATH)
+SRC_DIR := $(LOCAL_PATH)/../../
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := layer_utils
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_config.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_extension_utils.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_utils.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(SRC_DIR)/loader
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+include $(BUILD_STATIC_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_core_validation
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/core_validation.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader \
+ $(SRC_DIR)/../glslang
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_device_limits
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/device_limits.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_image
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/image.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_parameter_validation
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/parameter_validation.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/layers \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_object_tracker
+LOCAL_SRC_FILES += $(MY_PATH)/../generated/object_tracker.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(SRC_DIR)/layers \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_threading
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/threading.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(SRC_DIR)/layers \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_unique_objects
+LOCAL_SRC_FILES += $(MY_PATH)/../generated/unique_objects.cpp
+LOCAL_SRC_FILES += $(MY_PATH)/../generated/vk_safe_struct.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(SRC_DIR)/layers \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := VkLayer_swapchain
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/swapchain.cpp
+LOCAL_SRC_FILES += $(SRC_DIR)/layers/vk_layer_table.cpp
+LOCAL_C_INCLUDES += $(SRC_DIR)/include \
+ $(MY_PATH)/../generated \
+ $(SRC_DIR)/loader
+LOCAL_STATIC_LIBRARIES += layer_utils
+LOCAL_CPPFLAGS += -DVK_USE_PLATFORM_ANDROID_KHR
+LOCAL_LDLIBS := -llog
+include $(BUILD_SHARED_LIBRARY)
+
+$(call import-module,third_party/googletest)
diff --git a/buildAndroid/jni/Application.mk b/buildAndroid/jni/Application.mk
index 781edd6f9..13ec240dd 100644
--- a/buildAndroid/jni/Application.mk
+++ b/buildAndroid/jni/Application.mk
@@ -16,6 +16,6 @@
APP_ABI := armeabi-v7a arm64-v8a x86 x86_64 mips mips64
APP_PLATFORM := android-22
APP_STL := gnustl_static
-APP_MODULES := layer_utils VkLayer_draw_state VkLayer_mem_tracker VkLayer_device_limits VkLayer_image VkLayer_param_checker VkLayer_object_tracker VkLayer_threading VkLayer_swapchain VkLayer_unique_objects VkLayerValidationTests
+APP_MODULES := layer_utils VkLayer_core_validation VkLayer_device_limits VkLayer_image VkLayer_parameter_validation VkLayer_object_tracker VkLayer_threading VkLayer_swapchain VkLayer_unique_objects
APP_CPPFLAGS += -std=c++11 -DVK_PROTOTYPES -Wall -Werror -Wno-unused-function -Wno-unused-const-variable
NDK_TOOLCHAIN_VERSION := clang
diff --git a/build_windows_targets.bat b/build_windows_targets.bat
index 01cf11175..79a516468 100644
--- a/build_windows_targets.bat
+++ b/build_windows_targets.bat
@@ -1,8 +1,14 @@
echo off
REM
-REM This batch file builds both 32-bit and 64-bit versions of the loader.
-REM It is assumed that the developer has run the update_external_sources.bat
-REM file prior to running this.
+REM This Windows batch file builds this repository for the following targets:
+REM 64-bit Debug
+REM 64-bit Release
+REM 32-bit Debug
+REM 32-bit Release
+REM It uses CMake to generate the project files and then invokes msbuild
+REM to build them.
+REM The update_external_sources.bat batch file must be executed before running
+REM this batch file.
REM
REM Determine the appropriate CMake strings for the current version of Visual Studio
@@ -10,72 +16,66 @@ echo Determining VS version
python .\determine_vs_version.py > vsversion.tmp
set /p VS_VERSION=< vsversion.tmp
echo Detected Visual Studio Version as %VS_VERSION%
-
-REM Cleanup the file we used to collect the VS version output since it's no longer needed.
del /Q /F vsversion.tmp
rmdir /Q /S build
rmdir /Q /S build32
REM *******************************************
-REM 64-bit LoaderAndTools build
+REM 64-bit build
REM *******************************************
mkdir build
pushd build
-echo Generating 64-bit spirv-tools CMake files for Visual Studio %VS_VERSION%
+echo Generating 64-bit CMake files for Visual Studio %VS_VERSION%
cmake -G "Visual Studio %VS_VERSION% Win64" ..
-echo Building 64-bit Debug LoaderAndTools
+echo Building 64-bit Debug
msbuild ALL_BUILD.vcxproj /p:Platform=x64 /p:Configuration=Debug /verbosity:quiet
-
-REM Check for existence of one DLL, even though we should check for all results
-if not exist .\loader\Debug\vulkan-1.dll (
+if errorlevel 1 (
echo.
- echo LoaderAndTools 64-bit Debug build failed!
- set errorCode=1
-)
+ echo 64-bit Debug build failed!
+ popd
+ exit /B 1
+)
-echo Building 64-bit Release LoaderAndTools
+echo Building 64-bit Release
msbuild ALL_BUILD.vcxproj /p:Platform=x64 /p:Configuration=Release /verbosity:quiet
-
-REM Check for existence of one DLL, even though we should check for all results
-if not exist .\loader\Release\vulkan-1.dll (
+if errorlevel 1 (
echo.
- echo LoaderAndTools 64-bit Release build failed!
- set errorCode=1
-)
+ echo 64-bit Release build failed!
+ popd
+ exit /B 1
+)
popd
REM *******************************************
-REM 32-bit LoaderAndTools build
+REM 32-bit build
REM *******************************************
mkdir build32
pushd build32
-echo Generating 32-bit LoaderAndTools CMake files for Visual Studio %VS_VERSION%
+echo Generating 32-bit CMake files for Visual Studio %VS_VERSION%
cmake -G "Visual Studio %VS_VERSION%" ..
-echo Building 32-bit Debug LoaderAndTools
+echo Building 32-bit Debug
msbuild ALL_BUILD.vcxproj /p:Platform=x86 /p:Configuration=Debug /verbosity:quiet
-
-REM Check for existence of one DLL, even though we should check for all results
-if not exist .\loader\Debug\vulkan-1.dll (
+if errorlevel 1 (
echo.
- echo LoaderAndTools 32-bit Debug build failed!
- set errorCode=1
-)
+ echo 32-bit Debug build failed!
+ popd
+ exit /B 1
+)
-echo Building 32-bit Release LoaderAndTools
+echo Building 32-bit Release
msbuild ALL_BUILD.vcxproj /p:Platform=x86 /p:Configuration=Release /verbosity:quiet
-
-REM Check for existence of one DLL, even though we should check for all results
-if not exist .\loader\Release\vulkan-1.dll (
+if errorlevel 1 (
echo.
- echo LoaderAndTools 32-bit Release build failed!
- set errorCode=1
-)
+ echo 32-bit Release build failed!
+ popd
+ exit /B 1
+)
popd
-
+exit /b 0
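The batch-file change above replaces an indirect success check (does `vulkan-1.dll` exist?) with a direct check of the build command's exit status, and makes the script itself exit non-zero on failure so callers can detect it. The same fail-fast pattern in POSIX shell, purely as an illustration (the `msbuild` invocation in the comment is hypothetical):

```shell
#!/bin/sh
# Fail-fast build step: check the command's exit status directly instead of
# probing for an output artifact afterwards.
build_step() {
    step_name=$1
    shift
    if ! "$@"; then
        echo "$step_name failed!" >&2
        exit 1
    fi
}

# Hypothetical usage mirroring the batch file:
# build_step "64-bit Debug build" msbuild ALL_BUILD.vcxproj /p:Configuration=Debug
build_step "trivial step" true
echo "all steps succeeded"
```

Checking the exit status catches failures that still leave a stale artifact from an earlier build, which the old existence test would have missed.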
diff --git a/demos/CMakeLists.txt b/demos/CMakeLists.txt
index ebc406b76..eec3d8743 100644
--- a/demos/CMakeLists.txt
+++ b/demos/CMakeLists.txt
@@ -28,45 +28,45 @@ if(WIN32)
endif()
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-vert.spv
- COMMAND ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator -s -V ${PROJECT_SOURCE_DIR}/demos/tri.vert
+ COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/tri.vert
COMMAND move vert.spv ${CMAKE_BINARY_DIR}/demos/tri-vert.spv
- DEPENDS tri.vert ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator
+ DEPENDS tri.vert ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-frag.spv
- COMMAND ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator -s -V ${PROJECT_SOURCE_DIR}/demos/tri.frag
+ COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/tri.frag
COMMAND move frag.spv ${CMAKE_BINARY_DIR}/demos/tri-frag.spv
- DEPENDS tri.frag ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator
+ DEPENDS tri.frag ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-vert.spv
- COMMAND ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator -s -V ${PROJECT_SOURCE_DIR}/demos/cube.vert
+ COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/cube.vert
COMMAND move vert.spv ${CMAKE_BINARY_DIR}/demos/cube-vert.spv
- DEPENDS cube.vert ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator
+ DEPENDS cube.vert ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-frag.spv
- COMMAND ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator -s -V ${PROJECT_SOURCE_DIR}/demos/cube.frag
+ COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/cube.frag
COMMAND move frag.spv ${CMAKE_BINARY_DIR}/demos/cube-frag.spv
- DEPENDS cube.frag ${GLSLANG_PREFIX}/${BUILDTGT_DIR}/install/bin/glslangValidator
+ DEPENDS cube.frag ${GLSLANG_VALIDATOR}
)
file(COPY cube.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos)
file(COPY tri.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos)
file(COPY vulkaninfo.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos)
else()
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-vert.spv
- COMMAND ${GLSLANG_PREFIX}/build/install/bin/glslangValidator -s -V -o tri-vert.spv ${PROJECT_SOURCE_DIR}/demos/tri.vert
- DEPENDS tri.vert ${GLSLANG_PREFIX}/build/install/bin/glslangValidator
+ COMMAND ${GLSLANG_VALIDATOR} -s -V -o tri-vert.spv ${PROJECT_SOURCE_DIR}/demos/tri.vert
+ DEPENDS tri.vert ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-frag.spv
- COMMAND ${GLSLANG_PREFIX}/build/install/bin/glslangValidator -s -V -o tri-frag.spv ${PROJECT_SOURCE_DIR}/demos/tri.frag
- DEPENDS tri.frag ${GLSLANG_PREFIX}/build/install/bin/glslangValidator
+ COMMAND ${GLSLANG_VALIDATOR} -s -V -o tri-frag.spv ${PROJECT_SOURCE_DIR}/demos/tri.frag
+ DEPENDS tri.frag ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-vert.spv
- COMMAND ${GLSLANG_PREFIX}/build/install/bin/glslangValidator -s -V -o cube-vert.spv ${PROJECT_SOURCE_DIR}/demos/cube.vert
- DEPENDS cube.vert ${GLSLANG_PREFIX}/build/install/bin/glslangValidator
+ COMMAND ${GLSLANG_VALIDATOR} -s -V -o cube-vert.spv ${PROJECT_SOURCE_DIR}/demos/cube.vert
+ DEPENDS cube.vert ${GLSLANG_VALIDATOR}
)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-frag.spv
- COMMAND ${GLSLANG_PREFIX}/build/install/bin/glslangValidator -s -V -o cube-frag.spv ${PROJECT_SOURCE_DIR}/demos/cube.frag
- DEPENDS cube.frag ${GLSLANG_PREFIX}/build/install/bin/glslangValidator
+ COMMAND ${GLSLANG_VALIDATOR} -s -V -o cube-frag.spv ${PROJECT_SOURCE_DIR}/demos/cube.frag
+ DEPENDS cube.frag ${GLSLANG_VALIDATOR}
)
endif()
@@ -87,11 +87,7 @@ if(WIN32)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_CRT_SECURE_NO_WARNINGS -D_USE_MATH_DEFINES")
endif()
-if(UNIX)
- add_executable(vulkaninfo vulkaninfo.c)
-else()
- add_executable(vulkaninfo WIN32 vulkaninfo.c)
-endif()
+add_executable(vulkaninfo vulkaninfo.c)
target_link_libraries(vulkaninfo ${LIBRARIES})
if(UNIX)
@@ -114,3 +110,6 @@ else()
add_executable(cube WIN32 cube.c ${CMAKE_BINARY_DIR}/demos/cube-vert.spv ${CMAKE_BINARY_DIR}/demos/cube-frag.spv)
target_link_libraries(cube ${LIBRARIES} )
endif()
+
+add_subdirectory(smoke)
+
diff --git a/demos/cube.c b/demos/cube.c
index 9b84cccf3..5477beeef 100644
--- a/demos/cube.c
+++ b/demos/cube.c
@@ -117,6 +117,8 @@ struct texture_object {
static char *tex_files[] = {"lunarg.ppm"};
+static int validation_error = 0;
+
struct vkcube_vs_uniform {
// Must start with MVP
float mvp[4][4];
@@ -268,6 +270,7 @@ dbgFunc(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
sprintf(message, "ERROR: [%s] Code %d : %s", pLayerPrefix, msgCode,
pMsg);
+ validation_error = 1;
} else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) {
// We know that we're submitting queues without fences, ignore this
// warning
@@ -277,7 +280,9 @@ dbgFunc(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
}
sprintf(message, "WARNING: [%s] Code %d : %s", pLayerPrefix, msgCode,
pMsg);
+ validation_error = 1;
} else {
+ validation_error = 1;
return false;
}
@@ -1917,15 +1922,20 @@ static void demo_run(struct demo *demo) {
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
switch (uMsg) {
case WM_CLOSE:
- PostQuitMessage(0);
+ PostQuitMessage(validation_error);
break;
case WM_PAINT:
demo_run(&demo);
break;
case WM_SIZE:
- demo.width = lParam & 0xffff;
- demo.height = lParam & 0xffff0000 >> 16;
- demo_resize(&demo);
+ // Resize the application to the new window size, except when
+ // it was minimized. Vulkan doesn't support images or swapchains
+ // with width=0 and height=0.
+ if (wParam != SIZE_MINIMIZED) {
+ demo.width = lParam & 0xffff;
+                demo.height = (lParam & 0xffff0000) >> 16;
+ demo_resize(&demo);
+ }
break;
default:
break;
@@ -2129,23 +2139,22 @@ static void demo_init_vk(struct demo *demo) {
uint32_t enabled_layer_count = 0;
char *instance_validation_layers[] = {
- "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_mem_tracker",
- "VK_LAYER_LUNARG_object_tracker", "VK_LAYER_LUNARG_draw_state",
- "VK_LAYER_LUNARG_param_checker", "VK_LAYER_LUNARG_swapchain",
- "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_image",
- "VK_LAYER_GOOGLE_unique_objects",
+ "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation",
+ "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker",
+ "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation",
+ "VK_LAYER_LUNARG_swapchain",
+ "VK_LAYER_GOOGLE_unique_objects"
};
demo->device_validation_layers[0] = "VK_LAYER_GOOGLE_threading";
- demo->device_validation_layers[1] = "VK_LAYER_LUNARG_mem_tracker";
- demo->device_validation_layers[2] = "VK_LAYER_LUNARG_object_tracker";
- demo->device_validation_layers[3] = "VK_LAYER_LUNARG_draw_state";
- demo->device_validation_layers[4] = "VK_LAYER_LUNARG_param_checker";
- demo->device_validation_layers[5] = "VK_LAYER_LUNARG_swapchain";
- demo->device_validation_layers[6] = "VK_LAYER_LUNARG_device_limits";
- demo->device_validation_layers[7] = "VK_LAYER_LUNARG_image";
- demo->device_validation_layers[8] = "VK_LAYER_GOOGLE_unique_objects";
- device_validation_layer_count = 9;
+ demo->device_validation_layers[1] = "VK_LAYER_LUNARG_parameter_validation";
+ demo->device_validation_layers[2] = "VK_LAYER_LUNARG_device_limits";
+ demo->device_validation_layers[3] = "VK_LAYER_LUNARG_object_tracker";
+ demo->device_validation_layers[4] = "VK_LAYER_LUNARG_image";
+ demo->device_validation_layers[5] = "VK_LAYER_LUNARG_core_validation";
+ demo->device_validation_layers[6] = "VK_LAYER_LUNARG_swapchain";
+ demo->device_validation_layers[7] = "VK_LAYER_GOOGLE_unique_objects";
+ device_validation_layer_count = 8;
/* Look for validation layers */
VkBool32 validation_found = 0;
@@ -2264,7 +2273,7 @@ static void demo_init_vk(struct demo *demo) {
.applicationVersion = 0,
.pEngineName = APP_SHORT_NAME,
.engineVersion = 0,
- .apiVersion = VK_API_VERSION,
+ .apiVersion = VK_API_VERSION_1_0,
};
VkInstanceCreateInfo inst_info = {
.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
@@ -2799,6 +2808,6 @@ int main(int argc, char **argv) {
demo_cleanup(&demo);
- return 0;
+ return validation_error;
}
#endif // _WIN32
diff --git a/demos/smoke/CMakeLists.txt b/demos/smoke/CMakeLists.txt
new file mode 100644
index 000000000..a1789e922
--- /dev/null
+++ b/demos/smoke/CMakeLists.txt
@@ -0,0 +1,78 @@
+set (GLMINC_PREFIX ${PROJECT_SOURCE_DIR}/libs)
+
+macro(generate_dispatch_table out)
+ add_custom_command(OUTPUT ${CMAKE_CURRENT_SOURCE_DIR}/${out}
+ COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/generate-dispatch-table ${CMAKE_CURRENT_SOURCE_DIR}/${out}
+ DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/generate-dispatch-table
+ )
+endmacro()
+
+macro(glsl_to_spirv src)
+ add_custom_command(OUTPUT ${src}.h
+ COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/glsl-to-spirv ${CMAKE_CURRENT_SOURCE_DIR}/${src} ${src}.h ${GLSLANG_VALIDATOR}
+ DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/glsl-to-spirv ${CMAKE_CURRENT_SOURCE_DIR}/${src} ${GLSLANG_VALIDATOR}
+ )
+endmacro()
+
+generate_dispatch_table(HelpersDispatchTable.h)
+generate_dispatch_table(HelpersDispatchTable.cpp)
+glsl_to_spirv(Smoke.frag)
+glsl_to_spirv(Smoke.vert)
+glsl_to_spirv(Smoke.push_constant.vert)
+
+set(sources
+ Game.h
+ Helpers.h
+ HelpersDispatchTable.cpp
+ HelpersDispatchTable.h
+ Smoke.cpp
+ Smoke.h
+ Smoke.frag.h
+ Smoke.vert.h
+ Smoke.push_constant.vert.h
+ Main.cpp
+ Meshes.cpp
+ Meshes.h
+ Meshes.teapot.h
+ Simulation.cpp
+ Simulation.h
+ Shell.cpp
+ Shell.h
+ )
+
+set(definitions
+ PRIVATE -DVK_NO_PROTOTYPES
+ PRIVATE -DGLM_FORCE_RADIANS)
+
+set(includes
+ PRIVATE ${GLMINC_PREFIX}
+ PRIVATE ${CMAKE_CURRENT_BINARY_DIR})
+
+set(libraries PRIVATE ${CMAKE_THREAD_LIBS_INIT})
+
+if(TARGET vulkan)
+ list(APPEND definitions PRIVATE -DUNINSTALLED_LOADER="$<TARGET_FILE:vulkan>")
+endif()
+
+if(WIN32)
+ list(APPEND definitions PRIVATE -DVK_USE_PLATFORM_WIN32_KHR)
+ list(APPEND definitions PRIVATE -DWIN32_LEAN_AND_MEAN)
+
+ list(APPEND sources ShellWin32.cpp ShellWin32.h)
+else()
+ list(APPEND libraries PRIVATE -ldl)
+
+ find_package(XCB REQUIRED)
+
+ list(APPEND sources ShellXcb.cpp ShellXcb.h)
+ list(APPEND definitions PRIVATE -DVK_USE_PLATFORM_XCB_KHR)
+ list(APPEND includes PRIVATE ${XCB_INCLUDES})
+ list(APPEND libraries PRIVATE ${XCB_LIBRARIES})
+endif()
+
+set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/demos)
+
+add_executable(smoketest ${sources})
+target_compile_definitions(smoketest ${definitions})
+target_include_directories(smoketest ${includes})
+target_link_libraries(smoketest ${libraries})
diff --git a/demos/smoke/Game.h b/demos/smoke/Game.h
new file mode 100644
index 000000000..00bbf3782
--- /dev/null
+++ b/demos/smoke/Game.h
@@ -0,0 +1,133 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef GAME_H
+#define GAME_H
+
+#include <string>
+#include <vector>
+
+class Shell;
+
+class Game {
+public:
+ Game(const Game &game) = delete;
+ Game &operator=(const Game &game) = delete;
+ virtual ~Game() {}
+
+ struct Settings {
+ std::string name;
+ int initial_width;
+ int initial_height;
+ int queue_count;
+ int back_buffer_count;
+ int ticks_per_second;
+ bool vsync;
+ bool animate;
+
+ bool validate;
+ bool validate_verbose;
+
+ bool no_tick;
+ bool no_render;
+ bool no_present;
+ };
+ const Settings &settings() const { return settings_; }
+
+ virtual void attach_shell(Shell &shell) { shell_ = &shell; }
+ virtual void detach_shell() { shell_ = nullptr; }
+
+ virtual void attach_swapchain() {}
+ virtual void detach_swapchain() {}
+
+ enum Key {
+ // virtual keys
+ KEY_SHUTDOWN,
+ // physical keys
+ KEY_UNKNOWN,
+ KEY_ESC,
+ KEY_UP,
+ KEY_DOWN,
+ KEY_SPACE,
+ };
+ virtual void on_key(Key key) {}
+ virtual void on_tick() {}
+
+ virtual void on_frame(float frame_pred) {}
+
+protected:
+ Game(const std::string &name, const std::vector<std::string> &args)
+ : settings_(), shell_(nullptr)
+ {
+ settings_.name = name;
+ settings_.initial_width = 1280;
+ settings_.initial_height = 1024;
+ settings_.queue_count = 1;
+ settings_.back_buffer_count = 1;
+ settings_.ticks_per_second = 30;
+ settings_.vsync = true;
+ settings_.animate = true;
+
+ settings_.validate = false;
+ settings_.validate_verbose = false;
+
+ settings_.no_tick = false;
+ settings_.no_render = false;
+ settings_.no_present = false;
+
+ parse_args(args);
+ }
+
+ Settings settings_;
+ Shell *shell_;
+
+private:
+ void parse_args(const std::vector<std::string> &args)
+ {
+ for (auto it = args.begin(); it != args.end(); ++it) {
+ if (*it == "-b") {
+ settings_.vsync = false;
+            } else if (*it == "-w") {
+                if (++it == args.end()) break;
+                settings_.initial_width = std::stoi(*it);
+            } else if (*it == "-h") {
+                if (++it == args.end()) break;
+                settings_.initial_height = std::stoi(*it);
+ } else if (*it == "-v") {
+ settings_.validate = true;
+ } else if (*it == "--validate") {
+ settings_.validate = true;
+ } else if (*it == "-vv") {
+ settings_.validate = true;
+ settings_.validate_verbose = true;
+ } else if (*it == "-nt") {
+ settings_.no_tick = true;
+ } else if (*it == "-nr") {
+ settings_.no_render = true;
+ } else if (*it == "-np") {
+ settings_.no_present = true;
+ }
+ }
+ }
+};
+
+#endif // GAME_H
diff --git a/demos/smoke/Helpers.h b/demos/smoke/Helpers.h
new file mode 100644
index 000000000..6b889abce
--- /dev/null
+++ b/demos/smoke/Helpers.h
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef HELPERS_H
+#define HELPERS_H
+
+#include <vector>
+#include <sstream>
+#include <stdexcept>
+#include <vulkan/vulkan.h>
+
+#include "HelpersDispatchTable.h"
+
+namespace vk {
+
+inline VkResult assert_success(VkResult res)
+{
+ if (res != VK_SUCCESS) {
+ std::stringstream ss;
+ ss << "VkResult " << res << " returned";
+ throw std::runtime_error(ss.str());
+ }
+
+ return res;
+}
+
+inline VkResult enumerate(const char *layer, std::vector<VkExtensionProperties> &exts)
+{
+ uint32_t count = 0;
+ vk::EnumerateInstanceExtensionProperties(layer, &count, nullptr);
+
+ exts.resize(count);
+ return vk::EnumerateInstanceExtensionProperties(layer, &count, exts.data());
+}
+
+inline VkResult enumerate(VkPhysicalDevice phy, const char *layer, std::vector<VkExtensionProperties> &exts)
+{
+ uint32_t count = 0;
+ vk::EnumerateDeviceExtensionProperties(phy, layer, &count, nullptr);
+
+ exts.resize(count);
+ return vk::EnumerateDeviceExtensionProperties(phy, layer, &count, exts.data());
+}
+
+inline VkResult enumerate(VkInstance instance, std::vector<VkPhysicalDevice> &phys)
+{
+ uint32_t count = 0;
+ vk::EnumeratePhysicalDevices(instance, &count, nullptr);
+
+ phys.resize(count);
+ return vk::EnumeratePhysicalDevices(instance, &count, phys.data());
+}
+
+inline VkResult enumerate(std::vector<VkLayerProperties> &layer_props)
+{
+ uint32_t count = 0;
+ vk::EnumerateInstanceLayerProperties(&count, nullptr);
+
+ layer_props.resize(count);
+ return vk::EnumerateInstanceLayerProperties(&count, layer_props.data());
+}
+
+inline VkResult enumerate(VkPhysicalDevice phy, std::vector<VkLayerProperties> &layer_props)
+{
+ uint32_t count = 0;
+ vk::EnumerateDeviceLayerProperties(phy, &count, nullptr);
+
+ layer_props.resize(count);
+ return vk::EnumerateDeviceLayerProperties(phy, &count, layer_props.data());
+}
+
+inline VkResult get(VkPhysicalDevice phy, std::vector<VkQueueFamilyProperties> &queues)
+{
+ uint32_t count = 0;
+ vk::GetPhysicalDeviceQueueFamilyProperties(phy, &count, nullptr);
+
+ queues.resize(count);
+ vk::GetPhysicalDeviceQueueFamilyProperties(phy, &count, queues.data());
+
+ return VK_SUCCESS;
+}
+
+inline VkResult get(VkPhysicalDevice phy, VkSurfaceKHR surface, std::vector<VkSurfaceFormatKHR> &formats)
+{
+ uint32_t count = 0;
+ vk::GetPhysicalDeviceSurfaceFormatsKHR(phy, surface, &count, nullptr);
+
+ formats.resize(count);
+ return vk::GetPhysicalDeviceSurfaceFormatsKHR(phy, surface, &count, formats.data());
+}
+
+inline VkResult get(VkPhysicalDevice phy, VkSurfaceKHR surface, std::vector<VkPresentModeKHR> &modes)
+{
+ uint32_t count = 0;
+ vk::GetPhysicalDeviceSurfacePresentModesKHR(phy, surface, &count, nullptr);
+
+ modes.resize(count);
+ return vk::GetPhysicalDeviceSurfacePresentModesKHR(phy, surface, &count, modes.data());
+}
+
+inline VkResult get(VkDevice dev, VkSwapchainKHR swapchain, std::vector<VkImage> &images)
+{
+ uint32_t count = 0;
+ vk::GetSwapchainImagesKHR(dev, swapchain, &count, nullptr);
+
+ images.resize(count);
+ return vk::GetSwapchainImagesKHR(dev, swapchain, &count, images.data());
+}
+
+} // namespace vk
+
+#endif // HELPERS_H
diff --git a/demos/smoke/Main.cpp b/demos/smoke/Main.cpp
new file mode 100644
index 000000000..4f24b74b9
--- /dev/null
+++ b/demos/smoke/Main.cpp
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <string>
+#include <vector>
+
+#include "Smoke.h"
+
+namespace {
+
+Game *create_game(int argc, char **argv)
+{
+ std::vector<std::string> args(argv, argv + argc);
+ return new Smoke(args);
+}
+
+} // namespace
+
+#if defined(VK_USE_PLATFORM_XCB_KHR)
+
+#include "ShellXcb.h"
+
+int main(int argc, char **argv)
+{
+ Game *game = create_game(argc, argv);
+ {
+ ShellXcb shell(*game);
+ shell.run();
+ }
+ delete game;
+
+ return 0;
+}
+
+#elif defined(VK_USE_PLATFORM_ANDROID_KHR)
+
+#include <android/log.h>
+#include "ShellAndroid.h"
+
+void android_main(android_app *app)
+{
+ Game *game = create_game(0, nullptr);
+
+ try {
+ ShellAndroid shell(*app, *game);
+ shell.run();
+ } catch (const std::runtime_error &e) {
+ __android_log_print(ANDROID_LOG_ERROR, game->settings().name.c_str(),
+ "%s", e.what());
+ }
+
+ delete game;
+}
+
+#elif defined(VK_USE_PLATFORM_WIN32_KHR)
+
+#include "ShellWin32.h"
+
+int main(int argc, char **argv)
+{
+ Game *game = create_game(argc, argv);
+ {
+ ShellWin32 shell(*game);
+ shell.run();
+ }
+ delete game;
+
+ return 0;
+}
+
+#endif // VK_USE_PLATFORM_XCB_KHR
diff --git a/demos/smoke/Meshes.cpp b/demos/smoke/Meshes.cpp
new file mode 100644
index 000000000..5fdb8fad3
--- /dev/null
+++ b/demos/smoke/Meshes.cpp
@@ -0,0 +1,537 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <cmath>
+#include <cstring>
+#include <array>
+#include <unordered_map>
+
+#include "Helpers.h"
+#include "Meshes.h"
+
+namespace {
+
+class Mesh {
+public:
+ struct Position {
+ float x;
+ float y;
+ float z;
+ };
+
+ struct Normal {
+ float x;
+ float y;
+ float z;
+ };
+
+ struct Face {
+ int v0;
+ int v1;
+ int v2;
+ };
+
+ static uint32_t vertex_stride()
+ {
+ // Position + Normal
+ const int comp_count = 6;
+
+ return sizeof(float) * comp_count;
+ }
+
+ static VkVertexInputBindingDescription vertex_input_binding()
+ {
+ VkVertexInputBindingDescription vi_binding = {};
+ vi_binding.binding = 0;
+ vi_binding.stride = vertex_stride();
+ vi_binding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
+
+ return vi_binding;
+ }
+
+ static std::vector<VkVertexInputAttributeDescription> vertex_input_attributes()
+ {
+ std::vector<VkVertexInputAttributeDescription> vi_attrs(2);
+ // Position
+ vi_attrs[0].location = 0;
+ vi_attrs[0].binding = 0;
+ vi_attrs[0].format = VK_FORMAT_R32G32B32_SFLOAT;
+ vi_attrs[0].offset = 0;
+ // Normal
+ vi_attrs[1].location = 1;
+ vi_attrs[1].binding = 0;
+ vi_attrs[1].format = VK_FORMAT_R32G32B32_SFLOAT;
+ vi_attrs[1].offset = sizeof(float) * 3;
+
+ return vi_attrs;
+ }
+
+ static VkIndexType index_type()
+ {
+ return VK_INDEX_TYPE_UINT32;
+ }
+
+ static VkPipelineInputAssemblyStateCreateInfo input_assembly_state()
+ {
+ VkPipelineInputAssemblyStateCreateInfo ia_info = {};
+ ia_info.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
+ ia_info.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
+ ia_info.primitiveRestartEnable = false;
+ return ia_info;
+ }
+
+ void build(const std::vector<std::array<float, 6>> &vertices, const std::vector<std::array<int, 3>> &faces)
+ {
+ positions_.reserve(vertices.size());
+ normals_.reserve(vertices.size());
+ for (const auto &v : vertices) {
+ positions_.emplace_back(Position{ v[0], v[1], v[2] });
+ normals_.emplace_back(Normal{ v[3], v[4], v[5] });
+ }
+
+ faces_.reserve(faces.size());
+ for (const auto &f : faces)
+ faces_.emplace_back(Face{ f[0], f[1], f[2] });
+ }
+
+ uint32_t vertex_count() const
+ {
+ return positions_.size();
+ }
+
+ VkDeviceSize vertex_buffer_size() const
+ {
+ return vertex_stride() * vertex_count();
+ }
+
+ void vertex_buffer_write(void *data) const
+ {
+ float *dst = reinterpret_cast<float *>(data);
+ for (size_t i = 0; i < positions_.size(); i++) {
+ const Position &pos = positions_[i];
+ const Normal &normal = normals_[i];
+ dst[0] = pos.x;
+ dst[1] = pos.y;
+ dst[2] = pos.z;
+ dst[3] = normal.x;
+ dst[4] = normal.y;
+ dst[5] = normal.z;
+ dst += 6;
+ }
+ }
+
+ uint32_t index_count() const
+ {
+ return faces_.size() * 3;
+ }
+
+ VkDeviceSize index_buffer_size() const
+ {
+ return sizeof(uint32_t) * index_count();
+ }
+
+ void index_buffer_write(void *data) const
+ {
+ uint32_t *dst = reinterpret_cast<uint32_t *>(data);
+ for (const auto &face : faces_) {
+ dst[0] = face.v0;
+ dst[1] = face.v1;
+ dst[2] = face.v2;
+ dst += 3;
+ }
+ }
+
+ std::vector<Position> positions_;
+ std::vector<Normal> normals_;
+ std::vector<Face> faces_;
+};
+
+class BuildPyramid {
+public:
+ BuildPyramid(Mesh &mesh)
+ {
+ const std::vector<std::array<float, 6>> vertices = {
+ // position normal
+ { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f },
+ { -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f },
+ { 1.0f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f },
+ { 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, -1.0f },
+ { -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f },
+ };
+
+ const std::vector<std::array<int, 3>> faces = {
+ { 0, 1, 2 },
+ { 0, 2, 3 },
+ { 0, 3, 4 },
+ { 0, 4, 1 },
+ { 1, 4, 3 },
+ { 1, 3, 2 },
+ };
+
+ mesh.build(vertices, faces);
+ }
+};
+
+class BuildIcosphere {
+public:
+ BuildIcosphere(Mesh &mesh) : mesh_(mesh), radius_(1.0f)
+ {
+ const int tessellate_level = 2;
+
+ build_icosahedron();
+ for (int i = 0; i < tessellate_level; i++)
+ tessellate();
+ }
+
+private:
+ void build_icosahedron()
+ {
+ // https://en.wikipedia.org/wiki/Regular_icosahedron
+ const float l1 = std::sqrt(2.0f / (5.0f + std::sqrt(5.0f))) * radius_;
+ const float l2 = std::sqrt(2.0f / (5.0f - std::sqrt(5.0f))) * radius_;
+ // vertices are from three golden rectangles
+ const std::vector<std::array<float, 6>> icosahedron_vertices = {
+ // position normal
+ { -l1, -l2, 0.0f, -l1, -l2, 0.0f, },
+ { l1, -l2, 0.0f, l1, -l2, 0.0f, },
+ { l1, l2, 0.0f, l1, l2, 0.0f, },
+ { -l1, l2, 0.0f, -l1, l2, 0.0f, },
+
+ { -l2, 0.0f, -l1, -l2, 0.0f, -l1, },
+ { l2, 0.0f, -l1, l2, 0.0f, -l1, },
+ { l2, 0.0f, l1, l2, 0.0f, l1, },
+ { -l2, 0.0f, l1, -l2, 0.0f, l1, },
+
+ { 0.0f, -l1, -l2, 0.0f, -l1, -l2, },
+ { 0.0f, l1, -l2, 0.0f, l1, -l2, },
+ { 0.0f, l1, l2, 0.0f, l1, l2, },
+ { 0.0f, -l1, l2, 0.0f, -l1, l2, },
+ };
+ const std::vector<std::array<int, 3>> icosahedron_faces = {
+ // triangles sharing vertex 0
+ { 0, 1, 11 },
+ { 0, 11, 7 },
+ { 0, 7, 4 },
+ { 0, 4, 8 },
+ { 0, 8, 1 },
+ // adjacent triangles
+ { 11, 1, 6 },
+ { 7, 11, 10 },
+ { 4, 7, 3 },
+ { 8, 4, 9 },
+ { 1, 8, 5 },
+ // triangles sharing vertex 2
+ { 2, 3, 10 },
+ { 2, 10, 6 },
+ { 2, 6, 5 },
+ { 2, 5, 9 },
+ { 2, 9, 3 },
+ // adjacent triangles
+ { 10, 3, 7 },
+ { 6, 10, 11 },
+ { 5, 6, 1 },
+ { 9, 5, 8 },
+ { 3, 9, 4 },
+ };
+
+ mesh_.build(icosahedron_vertices, icosahedron_faces);
+ }
+
+ void tessellate()
+ {
+ size_t middle_point_count = mesh_.faces_.size() * 3 / 2;
+ size_t final_face_count = mesh_.faces_.size() * 4;
+
+ std::vector<Mesh::Face> faces;
+ faces.reserve(final_face_count);
+
+ middle_points_.clear();
+ middle_points_.reserve(middle_point_count);
+
+ mesh_.positions_.reserve(mesh_.vertex_count() + middle_point_count);
+ mesh_.normals_.reserve(mesh_.vertex_count() + middle_point_count);
+
+ for (const auto &f : mesh_.faces_) {
+ int v0 = f.v0;
+ int v1 = f.v1;
+ int v2 = f.v2;
+
+ int v01 = add_middle_point(v0, v1);
+ int v12 = add_middle_point(v1, v2);
+ int v20 = add_middle_point(v2, v0);
+
+ faces.emplace_back(Mesh::Face{ v0, v01, v20 });
+ faces.emplace_back(Mesh::Face{ v1, v12, v01 });
+ faces.emplace_back(Mesh::Face{ v2, v20, v12 });
+ faces.emplace_back(Mesh::Face{ v01, v12, v20 });
+ }
+
+ mesh_.faces_.swap(faces);
+ }
+
+ int add_middle_point(int a, int b)
+ {
+ uint64_t key = (a < b) ? ((uint64_t) a << 32 | b) : ((uint64_t) b << 32 | a);
+ auto it = middle_points_.find(key);
+ if (it != middle_points_.end())
+ return it->second;
+
+ const Mesh::Position &pos_a = mesh_.positions_[a];
+ const Mesh::Position &pos_b = mesh_.positions_[b];
+ Mesh::Position pos_mid = {
+ (pos_a.x + pos_b.x) / 2.0f,
+ (pos_a.y + pos_b.y) / 2.0f,
+ (pos_a.z + pos_b.z) / 2.0f,
+ };
+ float scale = radius_ / std::sqrt(pos_mid.x * pos_mid.x +
+ pos_mid.y * pos_mid.y +
+ pos_mid.z * pos_mid.z);
+ pos_mid.x *= scale;
+ pos_mid.y *= scale;
+ pos_mid.z *= scale;
+
+ Mesh::Normal normal_mid = { pos_mid.x, pos_mid.y, pos_mid.z };
+ normal_mid.x /= radius_;
+ normal_mid.y /= radius_;
+ normal_mid.z /= radius_;
+
+ mesh_.positions_.emplace_back(pos_mid);
+ mesh_.normals_.emplace_back(normal_mid);
+
+ int mid = mesh_.vertex_count() - 1;
+ middle_points_.emplace(std::make_pair(key, mid));
+
+ return mid;
+ }
+
+ Mesh &mesh_;
+ const float radius_;
+ std::unordered_map<uint64_t, uint32_t> middle_points_;
+};
+
+class BuildTeapot {
+public:
+ BuildTeapot(Mesh &mesh)
+ {
+#include "Meshes.teapot.h"
+ const int position_count = sizeof(teapot_positions) / sizeof(teapot_positions[0]);
+ const int index_count = sizeof(teapot_indices) / sizeof(teapot_indices[0]);
+ assert(position_count % 3 == 0 && index_count % 3 == 0);
+
+ Mesh::Position translate;
+ float scale;
+ get_transform(teapot_positions, position_count, translate, scale);
+
+ for (int i = 0; i < position_count; i += 3) {
+ mesh.positions_.emplace_back(Mesh::Position{
+ (teapot_positions[i + 0] + translate.x) * scale,
+ (teapot_positions[i + 1] + translate.y) * scale,
+ (teapot_positions[i + 2] + translate.z) * scale,
+ });
+
+ mesh.normals_.emplace_back(Mesh::Normal{
+ teapot_normals[i + 0],
+ teapot_normals[i + 1],
+ teapot_normals[i + 2],
+ });
+ }
+
+ for (int i = 0; i < index_count; i += 3) {
+ mesh.faces_.emplace_back(Mesh::Face{
+ teapot_indices[i + 0],
+ teapot_indices[i + 1],
+ teapot_indices[i + 2]
+ });
+ }
+ }
+
+ void get_transform(const float *positions, int position_count,
+ Mesh::Position &translate, float &scale)
+ {
+ float min[3] = {
+ positions[0],
+ positions[1],
+ positions[2],
+ };
+ float max[3] = {
+ positions[0],
+ positions[1],
+ positions[2],
+ };
+ for (int i = 3; i < position_count; i += 3) {
+ for (int j = 0; j < 3; j++) {
+ if (min[j] > positions[i + j])
+ min[j] = positions[i + j];
+ if (max[j] < positions[i + j])
+ max[j] = positions[i + j];
+ }
+ }
+
+ translate.x = -(min[0] + max[0]) / 2.0f;
+ translate.y = -(min[1] + max[1]) / 2.0f;
+ translate.z = -(min[2] + max[2]) / 2.0f;
+
+ float extents[3] = {
+ max[0] + translate.x,
+ max[1] + translate.y,
+ max[2] + translate.z,
+ };
+
+ float max_extent = extents[0];
+ if (max_extent < extents[1])
+ max_extent = extents[1];
+ if (max_extent < extents[2])
+ max_extent = extents[2];
+
+ scale = 1.0f / max_extent;
+ }
+};
+
+void build_meshes(std::array<Mesh, Meshes::MESH_COUNT> &meshes)
+{
+ BuildPyramid build_pyramid(meshes[Meshes::MESH_PYRAMID]);
+ BuildIcosphere build_icosphere(meshes[Meshes::MESH_ICOSPHERE]);
+ BuildTeapot build_teapot(meshes[Meshes::MESH_TEAPOT]);
+}
+
+} // namespace
+
+Meshes::Meshes(VkDevice dev, const std::vector<VkMemoryPropertyFlags> &mem_flags)
+ : dev_(dev),
+ vertex_input_binding_(Mesh::vertex_input_binding()),
+ vertex_input_attrs_(Mesh::vertex_input_attributes()),
+ vertex_input_state_(),
+ input_assembly_state_(Mesh::input_assembly_state()),
+ index_type_(Mesh::index_type())
+{
+ vertex_input_state_.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
+ vertex_input_state_.vertexBindingDescriptionCount = 1;
+ vertex_input_state_.pVertexBindingDescriptions = &vertex_input_binding_;
+ vertex_input_state_.vertexAttributeDescriptionCount = static_cast<uint32_t>(vertex_input_attrs_.size());
+ vertex_input_state_.pVertexAttributeDescriptions = vertex_input_attrs_.data();
+
+ std::array<Mesh, MESH_COUNT> meshes;
+ build_meshes(meshes);
+
+ draw_commands_.reserve(meshes.size());
+ uint32_t first_index = 0;
+ int32_t vertex_offset = 0;
+ VkDeviceSize vb_size = 0;
+ VkDeviceSize ib_size = 0;
+ for (const auto &mesh : meshes) {
+ VkDrawIndexedIndirectCommand draw = {};
+ draw.indexCount = mesh.index_count();
+ draw.instanceCount = 1;
+ draw.firstIndex = first_index;
+ draw.vertexOffset = vertex_offset;
+ draw.firstInstance = 0;
+
+ draw_commands_.push_back(draw);
+
+ first_index += mesh.index_count();
+ vertex_offset += mesh.vertex_count();
+ vb_size += mesh.vertex_buffer_size();
+ ib_size += mesh.index_buffer_size();
+ }
+
+ allocate_resources(vb_size, ib_size, mem_flags);
+
+ uint8_t *vb_data, *ib_data;
+ vk::assert_success(vk::MapMemory(dev_, mem_, 0, VK_WHOLE_SIZE,
+ 0, reinterpret_cast<void **>(&vb_data)));
+ ib_data = vb_data + ib_mem_offset_;
+
+ for (const auto &mesh : meshes) {
+ mesh.vertex_buffer_write(vb_data);
+ mesh.index_buffer_write(ib_data);
+ vb_data += mesh.vertex_buffer_size();
+ ib_data += mesh.index_buffer_size();
+ }
+
+ vk::UnmapMemory(dev_, mem_);
+}
+
+Meshes::~Meshes()
+{
+ vk::FreeMemory(dev_, mem_, nullptr);
+ vk::DestroyBuffer(dev_, vb_, nullptr);
+ vk::DestroyBuffer(dev_, ib_, nullptr);
+}
+
+void Meshes::cmd_bind_buffers(VkCommandBuffer cmd) const
+{
+ const VkDeviceSize vb_offset = 0;
+ vk::CmdBindVertexBuffers(cmd, 0, 1, &vb_, &vb_offset);
+
+ vk::CmdBindIndexBuffer(cmd, ib_, 0, index_type_);
+}
+
+void Meshes::cmd_draw(VkCommandBuffer cmd, Type type) const
+{
+ const auto &draw = draw_commands_[type];
+ vk::CmdDrawIndexed(cmd, draw.indexCount, draw.instanceCount,
+ draw.firstIndex, draw.vertexOffset, draw.firstInstance);
+}
+
+void Meshes::allocate_resources(VkDeviceSize vb_size, VkDeviceSize ib_size, const std::vector<VkMemoryPropertyFlags> &mem_flags)
+{
+ VkBufferCreateInfo buf_info = {};
+ buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buf_info.size = vb_size;
+ buf_info.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
+ buf_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
+ vk::CreateBuffer(dev_, &buf_info, nullptr, &vb_);
+
+ buf_info.size = ib_size;
+ buf_info.usage = VK_BUFFER_USAGE_INDEX_BUFFER_BIT;
+ vk::CreateBuffer(dev_, &buf_info, nullptr, &ib_);
+
+ VkMemoryRequirements vb_mem_reqs, ib_mem_reqs;
+ vk::GetBufferMemoryRequirements(dev_, vb_, &vb_mem_reqs);
+ vk::GetBufferMemoryRequirements(dev_, ib_, &ib_mem_reqs);
+
+ // indices follow vertices
+ ib_mem_offset_ = vb_mem_reqs.size +
+ (ib_mem_reqs.alignment - (vb_mem_reqs.size % ib_mem_reqs.alignment));
+
+ VkMemoryAllocateInfo mem_info = {};
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
+ mem_info.allocationSize = ib_mem_offset_ + ib_mem_reqs.size;
+
+ // find any supported and mappable memory type
+ uint32_t mem_types = (vb_mem_reqs.memoryTypeBits & ib_mem_reqs.memoryTypeBits);
+ for (uint32_t idx = 0; idx < mem_flags.size(); idx++) {
+ if ((mem_types & (1 << idx)) &&
+ (mem_flags[idx] & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) &&
+ (mem_flags[idx] & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) {
+ // TODO this may not be reachable
+ mem_info.memoryTypeIndex = idx;
+ break;
+ }
+ }
+
+ vk::AllocateMemory(dev_, &mem_info, nullptr, &mem_);
+
+ vk::BindBufferMemory(dev_, vb_, mem_, 0);
+ vk::BindBufferMemory(dev_, ib_, mem_, ib_mem_offset_);
+}
diff --git a/demos/smoke/Meshes.h b/demos/smoke/Meshes.h
new file mode 100644
index 000000000..2fb9e3fed
--- /dev/null
+++ b/demos/smoke/Meshes.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef MESHES_H
+#define MESHES_H
+
+#include <vulkan/vulkan.h>
+#include <vector>
+
+class Meshes {
+public:
+ Meshes(VkDevice dev, const std::vector<VkMemoryPropertyFlags> &mem_flags);
+ ~Meshes();
+
+ const VkPipelineVertexInputStateCreateInfo &vertex_input_state() const { return vertex_input_state_; }
+ const VkPipelineInputAssemblyStateCreateInfo &input_assembly_state() const { return input_assembly_state_; }
+
+ enum Type {
+ MESH_PYRAMID,
+ MESH_ICOSPHERE,
+ MESH_TEAPOT,
+
+ MESH_COUNT,
+ };
+
+ void cmd_bind_buffers(VkCommandBuffer cmd) const;
+ void cmd_draw(VkCommandBuffer cmd, Type type) const;
+
+private:
+ void allocate_resources(VkDeviceSize vb_size, VkDeviceSize ib_size, const std::vector<VkMemoryPropertyFlags> &mem_flags);
+
+ VkDevice dev_;
+
+ VkVertexInputBindingDescription vertex_input_binding_;
+ std::vector<VkVertexInputAttributeDescription> vertex_input_attrs_;
+ VkPipelineVertexInputStateCreateInfo vertex_input_state_;
+ VkPipelineInputAssemblyStateCreateInfo input_assembly_state_;
+ VkIndexType index_type_;
+
+ std::vector<VkDrawIndexedIndirectCommand> draw_commands_;
+
+ VkBuffer vb_;
+ VkBuffer ib_;
+ VkDeviceMemory mem_;
+ VkDeviceSize ib_mem_offset_;
+};
+
+#endif // MESHES_H
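The header above declares `allocate_resources`, whose memory-type search appears in the `Meshes.cpp` hunk earlier in this patch: it scans the device's memory types for one supported by both buffers (per their combined `memoryTypeBits`) that is also host-visible and host-coherent, so the mesh data can be written through a persistent mapping without explicit flushes. A hedged standalone model of that loop, with `find_memory_type` and the flag constants introduced here for illustration (the values mirror `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` = 0x2 and `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT` = 0x4):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical standalone version of the search in Meshes::allocate_resources:
// `type_bits` is the AND of both buffers' VkMemoryRequirements::memoryTypeBits,
// `flags[idx]` the property flags of memory type idx. Returns the first index
// that is supported and mappable, or -1 if none qualifies.
inline int find_memory_type(uint32_t type_bits,
                            const std::vector<uint32_t> &flags) {
    const uint32_t kHostVisible = 0x2;   // VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
    const uint32_t kHostCoherent = 0x4;  // VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
    for (uint32_t idx = 0; idx < flags.size(); idx++) {
        // type must be allowed by both buffers AND host-visible AND coherent
        if ((type_bits & (1u << idx)) &&
            (flags[idx] & kHostVisible) &&
            (flags[idx] & kHostCoherent))
            return static_cast<int>(idx);
    }
    return -1;
}
```

The same pattern generalizes to any Vulkan allocation: mask the candidate types with `memoryTypeBits`, then filter by the property flags the use case needs.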
diff --git a/demos/smoke/Meshes.teapot.h b/demos/smoke/Meshes.teapot.h
new file mode 100644
index 000000000..68aa29743
--- /dev/null
+++ b/demos/smoke/Meshes.teapot.h
@@ -0,0 +1,2666 @@
+/*
+ * Copyright (c) 2009 The Chromium Authors. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+// Modified from
+//
+// https://raw.githubusercontent.com/KhronosGroup/WebGL/master/sdk/demos/google/shiny-teapot/teapot-streams.js
+
+static const float teapot_positions[] = {
+ 17.83489990234375f, 0.0f, 30.573999404907227f,
+ 16.452699661254883f, -7.000179767608643f, 30.573999404907227f,
+ 16.223100662231445f, -6.902520179748535f, 31.51460075378418f,
+ 17.586000442504883f, 0.0f, 31.51460075378418f,
+ 16.48940086364746f, -7.015810012817383f, 31.828100204467773f,
+ 17.87470054626465f, 0.0f, 31.828100204467773f,
+ 17.031099319458008f, -7.246280193328857f, 31.51460075378418f,
+ 18.46190071105957f, 0.0f, 31.51460075378418f,
+ 17.62779998779297f, -7.500199794769287f, 30.573999404907227f,
+ 19.108800888061523f, 0.0f, 30.573999404907227f,
+ 12.662699699401855f, -12.662699699401855f, 30.573999404907227f,
+ 12.486100196838379f, -12.486100196838379f, 31.51460075378418f,
+ 12.690999984741211f, -12.690999984741211f, 31.828100204467773f,
+ 13.10789966583252f, -13.10789966583252f, 31.51460075378418f,
+ 13.56719970703125f, -13.56719970703125f, 30.573999404907227f,
+ 7.000179767608643f, -16.452699661254883f, 30.573999404907227f,
+ 6.902520179748535f, -16.223100662231445f, 31.51460075378418f,
+ 7.015810012817383f, -16.48940086364746f, 31.828100204467773f,
+ 7.246280193328857f, -17.031099319458008f, 31.51460075378418f,
+ 7.500199794769287f, -17.62779998779297f, 30.573999404907227f,
+ 0.0f, -17.83489990234375f, 30.573999404907227f,
+ 0.0f, -17.586000442504883f, 31.51460075378418f,
+ 0.0f, -17.87470054626465f, 31.828100204467773f,
+ 0.0f, -18.46190071105957f, 31.51460075378418f,
+ 0.0f, -19.108800888061523f, 30.573999404907227f,
+ 0.0f, -17.83489990234375f, 30.573999404907227f,
+ -7.483870029449463f, -16.452699661254883f, 30.573999404907227f,
+ -7.106579780578613f, -16.223100662231445f, 31.51460075378418f,
+ 0.0f, -17.586000442504883f, 31.51460075378418f,
+ -7.07627010345459f, -16.48940086364746f, 31.828100204467773f,
+ 0.0f, -17.87470054626465f, 31.828100204467773f,
+ -7.25383996963501f, -17.031099319458008f, 31.51460075378418f,
+ 0.0f, -18.46190071105957f, 31.51460075378418f,
+ -7.500199794769287f, -17.62779998779297f, 30.573999404907227f,
+ 0.0f, -19.108800888061523f, 30.573999404907227f,
+ -13.092700004577637f, -12.662699699401855f, 30.573999404907227f,
+ -12.667499542236328f, -12.486100196838379f, 31.51460075378418f,
+ -12.744799613952637f, -12.690999984741211f, 31.828100204467773f,
+ -13.11460018157959f, -13.10789966583252f, 31.51460075378418f,
+ -13.56719970703125f, -13.56719970703125f, 30.573999404907227f,
+ -16.61389923095703f, -7.000179767608643f, 30.573999404907227f,
+ -16.291099548339844f, -6.902520179748535f, 31.51460075378418f,
+ -16.50950050354004f, -7.015810012817383f, 31.828100204467773f,
+ -17.033599853515625f, -7.246280193328857f, 31.51460075378418f,
+ -17.62779998779297f, -7.500199794769287f, 30.573999404907227f,
+ -17.83489990234375f, 0.0f, 30.573999404907227f,
+ -17.586000442504883f, 0.0f, 31.51460075378418f,
+ -17.87470054626465f, 0.0f, 31.828100204467773f,
+ -18.46190071105957f, 0.0f, 31.51460075378418f,
+ -19.108800888061523f, 0.0f, 30.573999404907227f,
+ -17.83489990234375f, 0.0f, 30.573999404907227f,
+ -16.452699661254883f, 7.000179767608643f, 30.573999404907227f,
+ -16.223100662231445f, 6.902520179748535f, 31.51460075378418f,
+ -17.586000442504883f, 0.0f, 31.51460075378418f,
+ -16.48940086364746f, 7.015810012817383f, 31.828100204467773f,
+ -17.87470054626465f, 0.0f, 31.828100204467773f,
+ -17.031099319458008f, 7.246280193328857f, 31.51460075378418f,
+ -18.46190071105957f, 0.0f, 31.51460075378418f,
+ -17.62779998779297f, 7.500199794769287f, 30.573999404907227f,
+ -19.108800888061523f, 0.0f, 30.573999404907227f,
+ -12.662699699401855f, 12.662699699401855f, 30.573999404907227f,
+ -12.486100196838379f, 12.486100196838379f, 31.51460075378418f,
+ -12.690999984741211f, 12.690999984741211f, 31.828100204467773f,
+ -13.10789966583252f, 13.10789966583252f, 31.51460075378418f,
+ -13.56719970703125f, 13.56719970703125f, 30.573999404907227f,
+ -7.000179767608643f, 16.452699661254883f, 30.573999404907227f,
+ -6.902520179748535f, 16.223100662231445f, 31.51460075378418f,
+ -7.015810012817383f, 16.48940086364746f, 31.828100204467773f,
+ -7.246280193328857f, 17.031099319458008f, 31.51460075378418f,
+ -7.500199794769287f, 17.62779998779297f, 30.573999404907227f,
+ 0.0f, 17.83489990234375f, 30.573999404907227f,
+ 0.0f, 17.586000442504883f, 31.51460075378418f,
+ 0.0f, 17.87470054626465f, 31.828100204467773f,
+ 0.0f, 18.46190071105957f, 31.51460075378418f,
+ 0.0f, 19.108800888061523f, 30.573999404907227f,
+ 0.0f, 17.83489990234375f, 30.573999404907227f,
+ 7.000179767608643f, 16.452699661254883f, 30.573999404907227f,
+ 6.902520179748535f, 16.223100662231445f, 31.51460075378418f,
+ 0.0f, 17.586000442504883f, 31.51460075378418f,
+ 7.015810012817383f, 16.48940086364746f, 31.828100204467773f,
+ 0.0f, 17.87470054626465f, 31.828100204467773f,
+ 7.246280193328857f, 17.031099319458008f, 31.51460075378418f,
+ 0.0f, 18.46190071105957f, 31.51460075378418f,
+ 7.500199794769287f, 17.62779998779297f, 30.573999404907227f,
+ 0.0f, 19.108800888061523f, 30.573999404907227f,
+ 12.662699699401855f, 12.662699699401855f, 30.573999404907227f,
+ 12.486100196838379f, 12.486100196838379f, 31.51460075378418f,
+ 12.690999984741211f, 12.690999984741211f, 31.828100204467773f,
+ 13.10789966583252f, 13.10789966583252f, 31.51460075378418f,
+ 13.56719970703125f, 13.56719970703125f, 30.573999404907227f,
+ 16.452699661254883f, 7.000179767608643f, 30.573999404907227f,
+ 16.223100662231445f, 6.902520179748535f, 31.51460075378418f,
+ 16.48940086364746f, 7.015810012817383f, 31.828100204467773f,
+ 17.031099319458008f, 7.246280193328857f, 31.51460075378418f,
+ 17.62779998779297f, 7.500199794769287f, 30.573999404907227f,
+ 17.83489990234375f, 0.0f, 30.573999404907227f,
+ 17.586000442504883f, 0.0f, 31.51460075378418f,
+ 17.87470054626465f, 0.0f, 31.828100204467773f,
+ 18.46190071105957f, 0.0f, 31.51460075378418f,
+ 19.108800888061523f, 0.0f, 30.573999404907227f,
+ 19.108800888061523f, 0.0f, 30.573999404907227f,
+ 17.62779998779297f, -7.500199794769287f, 30.573999404907227f,
+ 19.785400390625f, -8.418190002441406f, 25.572900772094727f,
+ 21.447599411010742f, 0.0f, 25.572900772094727f,
+ 21.667600631713867f, -9.218990325927734f, 20.661399841308594f,
+ 23.487899780273438f, 0.0f, 20.661399841308594f,
+ 22.99880027770996f, -9.785409927368164f, 15.928999900817871f,
+ 24.930999755859375f, 0.0f, 15.928999900817871f,
+ 23.503799438476562f, -10.000300407409668f, 11.465299606323242f,
+ 25.4783992767334f, 0.0f, 11.465299606323242f,
+ 13.56719970703125f, -13.56719970703125f, 30.573999404907227f,
+ 15.227800369262695f, -15.227800369262695f, 25.572900772094727f,
+ 16.67639923095703f, -16.67639923095703f, 20.661399841308594f,
+ 17.701000213623047f, -17.701000213623047f, 15.928999900817871f,
+ 18.089599609375f, -18.089599609375f, 11.465299606323242f,
+ 7.500199794769287f, -17.62779998779297f, 30.573999404907227f,
+ 8.418190002441406f, -19.785400390625f, 25.572900772094727f,
+ 9.218990325927734f, -21.667600631713867f, 20.661399841308594f,
+ 9.785409927368164f, -22.99880027770996f, 15.928999900817871f,
+ 10.000300407409668f, -23.503799438476562f, 11.465299606323242f,
+ 0.0f, -19.108800888061523f, 30.573999404907227f,
+ 0.0f, -21.447599411010742f, 25.572900772094727f,
+ 0.0f, -23.487899780273438f, 20.661399841308594f,
+ 0.0f, -24.930999755859375f, 15.928999900817871f,
+ 0.0f, -25.4783992767334f, 11.465299606323242f,
+ 0.0f, -19.108800888061523f, 30.573999404907227f,
+ -7.500199794769287f, -17.62779998779297f, 30.573999404907227f,
+ -8.418190002441406f, -19.785400390625f, 25.572900772094727f,
+ 0.0f, -21.447599411010742f, 25.572900772094727f,
+ -9.218990325927734f, -21.667600631713867f, 20.661399841308594f,
+ 0.0f, -23.487899780273438f, 20.661399841308594f,
+ -9.785409927368164f, -22.99880027770996f, 15.928999900817871f,
+ 0.0f, -24.930999755859375f, 15.928999900817871f,
+ -10.000300407409668f, -23.503799438476562f, 11.465299606323242f,
+ 0.0f, -25.4783992767334f, 11.465299606323242f,
+ -13.56719970703125f, -13.56719970703125f, 30.573999404907227f,
+ -15.227800369262695f, -15.227800369262695f, 25.572900772094727f,
+ -16.67639923095703f, -16.67639923095703f, 20.661399841308594f,
+ -17.701000213623047f, -17.701000213623047f, 15.928999900817871f,
+ -18.089599609375f, -18.089599609375f, 11.465299606323242f,
+ -17.62779998779297f, -7.500199794769287f, 30.573999404907227f,
+ -19.785400390625f, -8.418190002441406f, 25.572900772094727f,
+ -21.667600631713867f, -9.218990325927734f, 20.661399841308594f,
+ -22.99880027770996f, -9.785409927368164f, 15.928999900817871f,
+ -23.503799438476562f, -10.000300407409668f, 11.465299606323242f,
+ -19.108800888061523f, 0.0f, 30.573999404907227f,
+ -21.447599411010742f, 0.0f, 25.572900772094727f,
+ -23.487899780273438f, 0.0f, 20.661399841308594f,
+ -24.930999755859375f, 0.0f, 15.928999900817871f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ -19.108800888061523f, 0.0f, 30.573999404907227f,
+ -17.62779998779297f, 7.500199794769287f, 30.573999404907227f,
+ -19.785400390625f, 8.418190002441406f, 25.572900772094727f,
+ -21.447599411010742f, 0.0f, 25.572900772094727f,
+ -21.667600631713867f, 9.218990325927734f, 20.661399841308594f,
+ -23.487899780273438f, 0.0f, 20.661399841308594f,
+ -22.99880027770996f, 9.785409927368164f, 15.928999900817871f,
+ -24.930999755859375f, 0.0f, 15.928999900817871f,
+ -23.503799438476562f, 10.000300407409668f, 11.465299606323242f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ -13.56719970703125f, 13.56719970703125f, 30.573999404907227f,
+ -15.227800369262695f, 15.227800369262695f, 25.572900772094727f,
+ -16.67639923095703f, 16.67639923095703f, 20.661399841308594f,
+ -17.701000213623047f, 17.701000213623047f, 15.928999900817871f,
+ -18.089599609375f, 18.089599609375f, 11.465299606323242f,
+ -7.500199794769287f, 17.62779998779297f, 30.573999404907227f,
+ -8.418190002441406f, 19.785400390625f, 25.572900772094727f,
+ -9.218990325927734f, 21.667600631713867f, 20.661399841308594f,
+ -9.785409927368164f, 22.99880027770996f, 15.928999900817871f,
+ -10.000300407409668f, 23.503799438476562f, 11.465299606323242f,
+ 0.0f, 19.108800888061523f, 30.573999404907227f,
+ 0.0f, 21.447599411010742f, 25.572900772094727f,
+ 0.0f, 23.487899780273438f, 20.661399841308594f,
+ 0.0f, 24.930999755859375f, 15.928999900817871f,
+ 0.0f, 25.4783992767334f, 11.465299606323242f,
+ 0.0f, 19.108800888061523f, 30.573999404907227f,
+ 7.500199794769287f, 17.62779998779297f, 30.573999404907227f,
+ 8.418190002441406f, 19.785400390625f, 25.572900772094727f,
+ 0.0f, 21.447599411010742f, 25.572900772094727f,
+ 9.218990325927734f, 21.667600631713867f, 20.661399841308594f,
+ 0.0f, 23.487899780273438f, 20.661399841308594f,
+ 9.785409927368164f, 22.99880027770996f, 15.928999900817871f,
+ 0.0f, 24.930999755859375f, 15.928999900817871f,
+ 10.000300407409668f, 23.503799438476562f, 11.465299606323242f,
+ 0.0f, 25.4783992767334f, 11.465299606323242f,
+ 13.56719970703125f, 13.56719970703125f, 30.573999404907227f,
+ 15.227800369262695f, 15.227800369262695f, 25.572900772094727f,
+ 16.67639923095703f, 16.67639923095703f, 20.661399841308594f,
+ 17.701000213623047f, 17.701000213623047f, 15.928999900817871f,
+ 18.089599609375f, 18.089599609375f, 11.465299606323242f,
+ 17.62779998779297f, 7.500199794769287f, 30.573999404907227f,
+ 19.785400390625f, 8.418190002441406f, 25.572900772094727f,
+ 21.667600631713867f, 9.218990325927734f, 20.661399841308594f,
+ 22.99880027770996f, 9.785409927368164f, 15.928999900817871f,
+ 23.503799438476562f, 10.000300407409668f, 11.465299606323242f,
+ 19.108800888061523f, 0.0f, 30.573999404907227f,
+ 21.447599411010742f, 0.0f, 25.572900772094727f,
+ 23.487899780273438f, 0.0f, 20.661399841308594f,
+ 24.930999755859375f, 0.0f, 15.928999900817871f,
+ 25.4783992767334f, 0.0f, 11.465299606323242f,
+ 25.4783992767334f, 0.0f, 11.465299606323242f,
+ 23.503799438476562f, -10.000300407409668f, 11.465299606323242f,
+ 22.5856990814209f, -9.609620094299316f, 7.688300132751465f,
+ 24.48310089111328f, 0.0f, 7.688300132751465f,
+ 20.565799713134766f, -8.750229835510254f, 4.89661979675293f,
+ 22.29360008239746f, 0.0f, 4.89661979675293f,
+ 18.54599952697754f, -7.890830039978027f, 3.0006699562072754f,
+ 20.104000091552734f, 0.0f, 3.0006699562072754f,
+ 17.62779998779297f, -7.500199794769287f, 1.9108799695968628f,
+ 19.108800888061523f, 0.0f, 1.9108799695968628f,
+ 18.089599609375f, -18.089599609375f, 11.465299606323242f,
+ 17.382999420166016f, -17.382999420166016f, 7.688300132751465f,
+ 15.828399658203125f, -15.828399658203125f, 4.89661979675293f,
+ 14.273900032043457f, -14.273900032043457f, 3.0006699562072754f,
+ 13.56719970703125f, -13.56719970703125f, 1.9108799695968628f,
+ 10.000300407409668f, -23.503799438476562f, 11.465299606323242f,
+ 9.609620094299316f, -22.5856990814209f, 7.688300132751465f,
+ 8.750229835510254f, -20.565799713134766f, 4.89661979675293f,
+ 7.890830039978027f, -18.54599952697754f, 3.0006699562072754f,
+ 7.500199794769287f, -17.62779998779297f, 1.9108799695968628f,
+ 0.0f, -25.4783992767334f, 11.465299606323242f,
+ 0.0f, -24.48310089111328f, 7.688300132751465f,
+ 0.0f, -22.29360008239746f, 4.89661979675293f,
+ 0.0f, -20.104000091552734f, 3.0006699562072754f,
+ 0.0f, -19.108800888061523f, 1.9108799695968628f,
+ 0.0f, -25.4783992767334f, 11.465299606323242f,
+ -10.000300407409668f, -23.503799438476562f, 11.465299606323242f,
+ -9.609620094299316f, -22.5856990814209f, 7.688300132751465f,
+ 0.0f, -24.48310089111328f, 7.688300132751465f,
+ -8.750229835510254f, -20.565799713134766f, 4.89661979675293f,
+ 0.0f, -22.29360008239746f, 4.89661979675293f,
+ -7.890830039978027f, -18.54599952697754f, 3.0006699562072754f,
+ 0.0f, -20.104000091552734f, 3.0006699562072754f,
+ -7.500199794769287f, -17.62779998779297f, 1.9108799695968628f,
+ 0.0f, -19.108800888061523f, 1.9108799695968628f,
+ -18.089599609375f, -18.089599609375f, 11.465299606323242f,
+ -17.382999420166016f, -17.382999420166016f, 7.688300132751465f,
+ -15.828399658203125f, -15.828399658203125f, 4.89661979675293f,
+ -14.273900032043457f, -14.273900032043457f, 3.0006699562072754f,
+ -13.56719970703125f, -13.56719970703125f, 1.9108799695968628f,
+ -23.503799438476562f, -10.000300407409668f, 11.465299606323242f,
+ -22.5856990814209f, -9.609620094299316f, 7.688300132751465f,
+ -20.565799713134766f, -8.750229835510254f, 4.89661979675293f,
+ -18.54599952697754f, -7.890830039978027f, 3.0006699562072754f,
+ -17.62779998779297f, -7.500199794769287f, 1.9108799695968628f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ -24.48310089111328f, 0.0f, 7.688300132751465f,
+ -22.29360008239746f, 0.0f, 4.89661979675293f,
+ -20.104000091552734f, 0.0f, 3.0006699562072754f,
+ -19.108800888061523f, 0.0f, 1.9108799695968628f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ -23.503799438476562f, 10.000300407409668f, 11.465299606323242f,
+ -22.5856990814209f, 9.609620094299316f, 7.688300132751465f,
+ -24.48310089111328f, 0.0f, 7.688300132751465f,
+ -20.565799713134766f, 8.750229835510254f, 4.89661979675293f,
+ -22.29360008239746f, 0.0f, 4.89661979675293f,
+ -18.54599952697754f, 7.890830039978027f, 3.0006699562072754f,
+ -20.104000091552734f, 0.0f, 3.0006699562072754f,
+ -17.62779998779297f, 7.500199794769287f, 1.9108799695968628f,
+ -19.108800888061523f, 0.0f, 1.9108799695968628f,
+ -18.089599609375f, 18.089599609375f, 11.465299606323242f,
+ -17.382999420166016f, 17.382999420166016f, 7.688300132751465f,
+ -15.828399658203125f, 15.828399658203125f, 4.89661979675293f,
+ -14.273900032043457f, 14.273900032043457f, 3.0006699562072754f,
+ -13.56719970703125f, 13.56719970703125f, 1.9108799695968628f,
+ -10.000300407409668f, 23.503799438476562f, 11.465299606323242f,
+ -9.609620094299316f, 22.5856990814209f, 7.688300132751465f,
+ -8.750229835510254f, 20.565799713134766f, 4.89661979675293f,
+ -7.890830039978027f, 18.54599952697754f, 3.0006699562072754f,
+ -7.500199794769287f, 17.62779998779297f, 1.9108799695968628f,
+ 0.0f, 25.4783992767334f, 11.465299606323242f,
+ 0.0f, 24.48310089111328f, 7.688300132751465f,
+ 0.0f, 22.29360008239746f, 4.89661979675293f,
+ 0.0f, 20.104000091552734f, 3.0006699562072754f,
+ 0.0f, 19.108800888061523f, 1.9108799695968628f,
+ 0.0f, 25.4783992767334f, 11.465299606323242f,
+ 10.000300407409668f, 23.503799438476562f, 11.465299606323242f,
+ 9.609620094299316f, 22.5856990814209f, 7.688300132751465f,
+ 0.0f, 24.48310089111328f, 7.688300132751465f,
+ 8.750229835510254f, 20.565799713134766f, 4.89661979675293f,
+ 0.0f, 22.29360008239746f, 4.89661979675293f,
+ 7.890830039978027f, 18.54599952697754f, 3.0006699562072754f,
+ 0.0f, 20.104000091552734f, 3.0006699562072754f,
+ 7.500199794769287f, 17.62779998779297f, 1.9108799695968628f,
+ 0.0f, 19.108800888061523f, 1.9108799695968628f,
+ 18.089599609375f, 18.089599609375f, 11.465299606323242f,
+ 17.382999420166016f, 17.382999420166016f, 7.688300132751465f,
+ 15.828399658203125f, 15.828399658203125f, 4.89661979675293f,
+ 14.273900032043457f, 14.273900032043457f, 3.0006699562072754f,
+ 13.56719970703125f, 13.56719970703125f, 1.9108799695968628f,
+ 23.503799438476562f, 10.000300407409668f, 11.465299606323242f,
+ 22.5856990814209f, 9.609620094299316f, 7.688300132751465f,
+ 20.565799713134766f, 8.750229835510254f, 4.89661979675293f,
+ 18.54599952697754f, 7.890830039978027f, 3.0006699562072754f,
+ 17.62779998779297f, 7.500199794769287f, 1.9108799695968628f,
+ 25.4783992767334f, 0.0f, 11.465299606323242f,
+ 24.48310089111328f, 0.0f, 7.688300132751465f,
+ 22.29360008239746f, 0.0f, 4.89661979675293f,
+ 20.104000091552734f, 0.0f, 3.0006699562072754f,
+ 19.108800888061523f, 0.0f, 1.9108799695968628f,
+ 19.108800888061523f, 0.0f, 1.9108799695968628f,
+ 17.62779998779297f, -7.500199794769287f, 1.9108799695968628f,
+ 17.228500366210938f, -7.330269813537598f, 1.2092299461364746f,
+ 18.675800323486328f, 0.0f, 1.2092299461364746f,
+ 15.093799591064453f, -6.422039985656738f, 0.5971490144729614f,
+ 16.361900329589844f, 0.0f, 0.5971490144729614f,
+ 9.819259643554688f, -4.177840232849121f, 0.16421599686145782f,
+ 10.644200325012207f, 0.0f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 0.0f, 0.0f,
+ 13.56719970703125f, -13.56719970703125f, 1.9108799695968628f,
+ 13.25979995727539f, -13.25979995727539f, 1.2092299461364746f,
+ 11.616900444030762f, -11.616900444030762f, 0.5971490144729614f,
+ 7.557370185852051f, -7.557370185852051f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 7.500199794769287f, -17.62779998779297f, 1.9108799695968628f,
+ 7.330269813537598f, -17.228500366210938f, 1.2092299461364746f,
+ 6.422039985656738f, -15.093799591064453f, 0.5971490144729614f,
+ 4.177840232849121f, -9.819259643554688f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, -19.108800888061523f, 1.9108799695968628f,
+ 0.0f, -18.675800323486328f, 1.2092299461364746f,
+ 0.0f, -16.361900329589844f, 0.5971490144729614f,
+ 0.0f, -10.644200325012207f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, -19.108800888061523f, 1.9108799695968628f,
+ -7.500199794769287f, -17.62779998779297f, 1.9108799695968628f,
+ -7.330269813537598f, -17.228500366210938f, 1.2092299461364746f,
+ 0.0f, -18.675800323486328f, 1.2092299461364746f,
+ -6.422039985656738f, -15.093799591064453f, 0.5971490144729614f,
+ 0.0f, -16.361900329589844f, 0.5971490144729614f,
+ -4.177840232849121f, -9.819259643554688f, 0.16421599686145782f,
+ 0.0f, -10.644200325012207f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 0.0f, 0.0f,
+ -13.56719970703125f, -13.56719970703125f, 1.9108799695968628f,
+ -13.25979995727539f, -13.25979995727539f, 1.2092299461364746f,
+ -11.616900444030762f, -11.616900444030762f, 0.5971490144729614f,
+ -7.557370185852051f, -7.557370185852051f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ -17.62779998779297f, -7.500199794769287f, 1.9108799695968628f,
+ -17.228500366210938f, -7.330269813537598f, 1.2092299461364746f,
+ -15.093799591064453f, -6.422039985656738f, 0.5971490144729614f,
+ -9.819259643554688f, -4.177840232849121f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ -19.108800888061523f, 0.0f, 1.9108799695968628f,
+ -18.675800323486328f, 0.0f, 1.2092299461364746f,
+ -16.361900329589844f, 0.0f, 0.5971490144729614f,
+ -10.644200325012207f, 0.0f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ -19.108800888061523f, 0.0f, 1.9108799695968628f,
+ -17.62779998779297f, 7.500199794769287f, 1.9108799695968628f,
+ -17.228500366210938f, 7.330269813537598f, 1.2092299461364746f,
+ -18.675800323486328f, 0.0f, 1.2092299461364746f,
+ -15.093799591064453f, 6.422039985656738f, 0.5971490144729614f,
+ -16.361900329589844f, 0.0f, 0.5971490144729614f,
+ -9.819259643554688f, 4.177840232849121f, 0.16421599686145782f,
+ -10.644200325012207f, 0.0f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 0.0f, 0.0f,
+ -13.56719970703125f, 13.56719970703125f, 1.9108799695968628f,
+ -13.25979995727539f, 13.25979995727539f, 1.2092299461364746f,
+ -11.616900444030762f, 11.616900444030762f, 0.5971490144729614f,
+ -7.557370185852051f, 7.557370185852051f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ -7.500199794769287f, 17.62779998779297f, 1.9108799695968628f,
+ -7.330269813537598f, 17.228500366210938f, 1.2092299461364746f,
+ -6.422039985656738f, 15.093799591064453f, 0.5971490144729614f,
+ -4.177840232849121f, 9.819259643554688f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 19.108800888061523f, 1.9108799695968628f,
+ 0.0f, 18.675800323486328f, 1.2092299461364746f,
+ 0.0f, 16.361900329589844f, 0.5971490144729614f,
+ 0.0f, 10.644200325012207f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 19.108800888061523f, 1.9108799695968628f,
+ 7.500199794769287f, 17.62779998779297f, 1.9108799695968628f,
+ 7.330269813537598f, 17.228500366210938f, 1.2092299461364746f,
+ 0.0f, 18.675800323486328f, 1.2092299461364746f,
+ 6.422039985656738f, 15.093799591064453f, 0.5971490144729614f,
+ 0.0f, 16.361900329589844f, 0.5971490144729614f,
+ 4.177840232849121f, 9.819259643554688f, 0.16421599686145782f,
+ 0.0f, 10.644200325012207f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 0.0f, 0.0f, 0.0f,
+ 13.56719970703125f, 13.56719970703125f, 1.9108799695968628f,
+ 13.25979995727539f, 13.25979995727539f, 1.2092299461364746f,
+ 11.616900444030762f, 11.616900444030762f, 0.5971490144729614f,
+ 7.557370185852051f, 7.557370185852051f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 17.62779998779297f, 7.500199794769287f, 1.9108799695968628f,
+ 17.228500366210938f, 7.330269813537598f, 1.2092299461364746f,
+ 15.093799591064453f, 6.422039985656738f, 0.5971490144729614f,
+ 9.819259643554688f, 4.177840232849121f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ 19.108800888061523f, 0.0f, 1.9108799695968628f,
+ 18.675800323486328f, 0.0f, 1.2092299461364746f,
+ 16.361900329589844f, 0.0f, 0.5971490144729614f,
+ 10.644200325012207f, 0.0f, 0.16421599686145782f,
+ 0.0f, 0.0f, 0.0f,
+ -20.382699966430664f, 0.0f, 25.796899795532227f,
+ -20.1835994720459f, -2.149739980697632f, 26.244699478149414f,
+ -26.511600494384766f, -2.149739980697632f, 26.192899703979492f,
+ -26.334299087524414f, 0.0f, 25.752099990844727f,
+ -31.156299591064453f, -2.149739980697632f, 25.830400466918945f,
+ -30.733299255371094f, 0.0f, 25.438600540161133f,
+ -34.016998291015625f, -2.149739980697632f, 24.846500396728516f,
+ -33.46030044555664f, 0.0f, 24.587600708007812f,
+ -34.99290084838867f, -2.149739980697632f, 22.930500030517578f,
+ -34.39580154418945f, 0.0f, 22.930500030517578f,
+ -19.74570083618164f, -2.8663198947906494f, 27.229999542236328f,
+ -26.901599884033203f, -2.8663198947906494f, 27.162799835205078f,
+ -32.08679962158203f, -2.8663198947906494f, 26.69260025024414f,
+ -35.241798400878906f, -2.8663198947906494f, 25.416200637817383f,
+ -36.30670166015625f, -2.8663198947906494f, 22.930500030517578f,
+ -19.30780029296875f, -2.149739980697632f, 28.215299606323242f,
+ -27.29159927368164f, -2.149739980697632f, 28.132699966430664f,
+ -33.017398834228516f, -2.149739980697632f, 27.55470085144043f,
+ -36.46649932861328f, -2.149739980697632f, 25.98579978942871f,
+ -37.620399475097656f, -2.149739980697632f, 22.930500030517578f,
+ -19.108800888061523f, 0.0f, 28.66320037841797f,
+ -27.468900680541992f, 0.0f, 28.57360076904297f,
+ -33.440399169921875f, 0.0f, 27.94659996032715f,
+ -37.02330017089844f, 0.0f, 26.244699478149414f,
+ -38.21760177612305f, 0.0f, 22.930500030517578f,
+ -19.108800888061523f, 0.0f, 28.66320037841797f,
+ -19.30780029296875f, 2.149739980697632f, 28.215299606323242f,
+ -27.29159927368164f, 2.149739980697632f, 28.132699966430664f,
+ -27.468900680541992f, 0.0f, 28.57360076904297f,
+ -33.017398834228516f, 2.149739980697632f, 27.55470085144043f,
+ -33.440399169921875f, 0.0f, 27.94659996032715f,
+ -36.46649932861328f, 2.149739980697632f, 25.98579978942871f,
+ -37.02330017089844f, 0.0f, 26.244699478149414f,
+ -37.620399475097656f, 2.149739980697632f, 22.930500030517578f,
+ -38.21760177612305f, 0.0f, 22.930500030517578f,
+ -19.74570083618164f, 2.8663198947906494f, 27.229999542236328f,
+ -26.901599884033203f, 2.8663198947906494f, 27.162799835205078f,
+ -32.08679962158203f, 2.8663198947906494f, 26.69260025024414f,
+ -35.241798400878906f, 2.8663198947906494f, 25.416200637817383f,
+ -36.30670166015625f, 2.8663198947906494f, 22.930500030517578f,
+ -20.1835994720459f, 2.149739980697632f, 26.244699478149414f,
+ -26.511600494384766f, 2.149739980697632f, 26.192899703979492f,
+ -31.156299591064453f, 2.149739980697632f, 25.830400466918945f,
+ -34.016998291015625f, 2.149739980697632f, 24.846500396728516f,
+ -34.99290084838867f, 2.149739980697632f, 22.930500030517578f,
+ -20.382699966430664f, 0.0f, 25.796899795532227f,
+ -26.334299087524414f, 0.0f, 25.752099990844727f,
+ -30.733299255371094f, 0.0f, 25.438600540161133f,
+ -33.46030044555664f, 0.0f, 24.587600708007812f,
+ -34.39580154418945f, 0.0f, 22.930500030517578f,
+ -34.39580154418945f, 0.0f, 22.930500030517578f,
+ -34.99290084838867f, -2.149739980697632f, 22.930500030517578f,
+ -34.44089889526367f, -2.149739980697632f, 20.082199096679688f,
+ -33.89820098876953f, 0.0f, 20.33289909362793f,
+ -32.711299896240234f, -2.149739980697632f, 16.81529998779297f,
+ -32.32569885253906f, 0.0f, 17.197900772094727f,
+ -29.69420051574707f, -2.149739980697632f, 13.590499877929688f,
+ -29.558900833129883f, 0.0f, 14.062899589538574f,
+ -25.279300689697266f, -2.149739980697632f, 10.8681001663208f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ -36.30670166015625f, -2.8663198947906494f, 22.930500030517578f,
+ -35.6348991394043f, -2.8663198947906494f, 19.530500411987305f,
+ -33.55979919433594f, -2.8663198947906494f, 15.973699569702148f,
+ -29.99180030822754f, -2.8663198947906494f, 12.551300048828125f,
+ -24.841400146484375f, -2.8663198947906494f, 9.554389953613281f,
+ -37.620399475097656f, -2.149739980697632f, 22.930500030517578f,
+ -36.82889938354492f, -2.149739980697632f, 18.97879981994629f,
+ -34.408199310302734f, -2.149739980697632f, 15.132100105285645f,
+ -30.289499282836914f, -2.149739980697632f, 11.512200355529785f,
+ -24.403499603271484f, -2.149739980697632f, 8.240659713745117f,
+ -38.21760177612305f, 0.0f, 22.930500030517578f,
+ -37.37160110473633f, 0.0f, 18.728099822998047f,
+ -34.79389953613281f, 0.0f, 14.749600410461426f,
+ -30.424800872802734f, 0.0f, 11.039799690246582f,
+ -24.204500198364258f, 0.0f, 7.643509864807129f,
+ -38.21760177612305f, 0.0f, 22.930500030517578f,
+ -37.620399475097656f, 2.149739980697632f, 22.930500030517578f,
+ -36.82889938354492f, 2.149739980697632f, 18.97879981994629f,
+ -37.37160110473633f, 0.0f, 18.728099822998047f,
+ -34.408199310302734f, 2.149739980697632f, 15.132100105285645f,
+ -34.79389953613281f, 0.0f, 14.749600410461426f,
+ -30.289499282836914f, 2.149739980697632f, 11.512200355529785f,
+ -30.424800872802734f, 0.0f, 11.039799690246582f,
+ -24.403499603271484f, 2.149739980697632f, 8.240659713745117f,
+ -24.204500198364258f, 0.0f, 7.643509864807129f,
+ -36.30670166015625f, 2.8663198947906494f, 22.930500030517578f,
+ -35.6348991394043f, 2.8663198947906494f, 19.530500411987305f,
+ -33.55979919433594f, 2.8663198947906494f, 15.973699569702148f,
+ -29.99180030822754f, 2.8663198947906494f, 12.551300048828125f,
+ -24.841400146484375f, 2.8663198947906494f, 9.554389953613281f,
+ -34.99290084838867f, 2.149739980697632f, 22.930500030517578f,
+ -34.44089889526367f, 2.149739980697632f, 20.082199096679688f,
+ -32.711299896240234f, 2.149739980697632f, 16.81529998779297f,
+ -29.69420051574707f, 2.149739980697632f, 13.590499877929688f,
+ -25.279300689697266f, 2.149739980697632f, 10.8681001663208f,
+ -34.39580154418945f, 0.0f, 22.930500030517578f,
+ -33.89820098876953f, 0.0f, 20.33289909362793f,
+ -32.32569885253906f, 0.0f, 17.197900772094727f,
+ -29.558900833129883f, 0.0f, 14.062899589538574f,
+ -25.4783992767334f, 0.0f, 11.465299606323242f,
+ 21.656600952148438f, 0.0f, 18.15329933166504f,
+ 21.656600952148438f, -4.729420185089111f, 16.511199951171875f,
+ 28.233999252319336f, -4.270359992980957f, 18.339000701904297f,
+ 27.76740074157715f, 0.0f, 19.55660057067871f,
+ 31.011899948120117f, -3.2604401111602783f, 22.221399307250977f,
+ 30.4148006439209f, 0.0f, 22.930500030517578f,
+ 32.59560012817383f, -2.2505099773406982f, 26.764400482177734f,
+ 31.867900848388672f, 0.0f, 27.020999908447266f,
+ 35.5900993347168f, -1.791450023651123f, 30.573999404907227f,
+ 34.39580154418945f, 0.0f, 30.573999404907227f,
+ 21.656600952148438f, -6.3059000968933105f, 12.89840030670166f,
+ 29.260299682617188f, -5.693819999694824f, 15.660200119018555f,
+ 32.32569885253906f, -4.347249984741211f, 20.661399841308594f,
+ 34.19670104980469f, -3.0006699562072754f, 26.199899673461914f,
+ 38.21760177612305f, -2.3886001110076904f, 30.573999404907227f,
+ 21.656600952148438f, -4.729420185089111f, 9.285670280456543f,
+ 30.286699295043945f, -4.270359992980957f, 12.981499671936035f,
+ 33.639400482177734f, -3.2604401111602783f, 19.101299285888672f,
+ 35.79790115356445f, -2.2505099773406982f, 25.635400772094727f,
+ 40.845001220703125f, -1.791450023651123f, 30.573999404907227f,
+ 21.656600952148438f, 0.0f, 7.643509864807129f,
+ 30.75320053100586f, 0.0f, 11.763799667358398f,
+ 34.23659896850586f, 0.0f, 18.392200469970703f,
+ 36.52560043334961f, 0.0f, 25.378799438476562f,
+ 42.03929901123047f, 0.0f, 30.573999404907227f,
+ 21.656600952148438f, 0.0f, 7.643509864807129f,
+ 21.656600952148438f, 4.729420185089111f, 9.285670280456543f,
+ 30.286699295043945f, 4.270359992980957f, 12.981499671936035f,
+ 30.75320053100586f, 0.0f, 11.763799667358398f,
+ 33.639400482177734f, 3.2604401111602783f, 19.101299285888672f,
+ 34.23659896850586f, 0.0f, 18.392200469970703f,
+ 35.79790115356445f, 2.2505099773406982f, 25.635400772094727f,
+ 36.52560043334961f, 0.0f, 25.378799438476562f,
+ 40.845001220703125f, 1.791450023651123f, 30.573999404907227f,
+ 42.03929901123047f, 0.0f, 30.573999404907227f,
+ 21.656600952148438f, 6.3059000968933105f, 12.89840030670166f,
+ 29.260299682617188f, 5.693819999694824f, 15.660200119018555f,
+ 32.32569885253906f, 4.347249984741211f, 20.661399841308594f,
+ 34.19670104980469f, 3.0006699562072754f, 26.199899673461914f,
+ 38.21760177612305f, 2.3886001110076904f, 30.573999404907227f,
+ 21.656600952148438f, 4.729420185089111f, 16.511199951171875f,
+ 28.233999252319336f, 4.270359992980957f, 18.339000701904297f,
+ 31.011899948120117f, 3.2604401111602783f, 22.221399307250977f,
+ 32.59560012817383f, 2.2505099773406982f, 26.764400482177734f,
+ 35.5900993347168f, 1.791450023651123f, 30.573999404907227f,
+ 21.656600952148438f, 0.0f, 18.15329933166504f,
+ 27.76740074157715f, 0.0f, 19.55660057067871f,
+ 30.4148006439209f, 0.0f, 22.930500030517578f,
+ 31.867900848388672f, 0.0f, 27.020999908447266f,
+ 34.39580154418945f, 0.0f, 30.573999404907227f,
+ 34.39580154418945f, 0.0f, 30.573999404907227f,
+ 35.5900993347168f, -1.791450023651123f, 30.573999404907227f,
+ 36.59049987792969f, -1.679479956626892f, 31.137699127197266f,
+ 35.3114013671875f, 0.0f, 31.111499786376953f,
+ 37.18870162963867f, -1.4331599473953247f, 31.332599639892578f,
+ 35.98820114135742f, 0.0f, 31.290599822998047f,
+ 37.206600189208984f, -1.1868300437927246f, 31.1481990814209f,
+ 36.187198638916016f, 0.0f, 31.111499786376953f,
+ 36.46590042114258f, -1.074869990348816f, 30.573999404907227f,
+ 35.669700622558594f, 0.0f, 30.573999404907227f,
+ 38.21760177612305f, -2.3886001110076904f, 30.573999404907227f,
+ 39.40439987182617f, -2.2393100261688232f, 31.195499420166016f,
+ 39.829898834228516f, -1.9108799695968628f, 31.424999237060547f,
+ 39.44919967651367f, -1.582450032234192f, 31.229000091552734f,
+ 38.21760177612305f, -1.4331599473953247f, 30.573999404907227f,
+ 40.845001220703125f, -1.791450023651123f, 30.573999404907227f,
+ 42.218299865722656f, -1.679479956626892f, 31.25320053100586f,
+ 42.47100067138672f, -1.4331599473953247f, 31.51740074157715f,
+ 41.69169998168945f, -1.1868300437927246f, 31.309900283813477f,
+ 39.969200134277344f, -1.074869990348816f, 30.573999404907227f,
+ 42.03929901123047f, 0.0f, 30.573999404907227f,
+ 43.49729919433594f, 0.0f, 31.279399871826172f,
+ 43.67150115966797f, 0.0f, 31.55929946899414f,
+ 42.71110153198242f, 0.0f, 31.346599578857422f,
+ 40.76539993286133f, 0.0f, 30.573999404907227f,
+ 42.03929901123047f, 0.0f, 30.573999404907227f,
+ 40.845001220703125f, 1.791450023651123f, 30.573999404907227f,
+ 42.218299865722656f, 1.679479956626892f, 31.25320053100586f,
+ 43.49729919433594f, 0.0f, 31.279399871826172f,
+ 42.47100067138672f, 1.4331599473953247f, 31.51740074157715f,
+ 43.67150115966797f, 0.0f, 31.55929946899414f,
+ 41.69169998168945f, 1.1868300437927246f, 31.309900283813477f,
+ 42.71110153198242f, 0.0f, 31.346599578857422f,
+ 39.969200134277344f, 1.074869990348816f, 30.573999404907227f,
+ 40.76539993286133f, 0.0f, 30.573999404907227f,
+ 38.21760177612305f, 2.3886001110076904f, 30.573999404907227f,
+ 39.40439987182617f, 2.2393100261688232f, 31.195499420166016f,
+ 39.829898834228516f, 1.9108799695968628f, 31.424999237060547f,
+ 39.44919967651367f, 1.582450032234192f, 31.229000091552734f,
+ 38.21760177612305f, 1.4331599473953247f, 30.573999404907227f,
+ 35.5900993347168f, 1.791450023651123f, 30.573999404907227f,
+ 36.59049987792969f, 1.679479956626892f, 31.137699127197266f,
+ 37.18870162963867f, 1.4331599473953247f, 31.332599639892578f,
+ 37.206600189208984f, 1.1868300437927246f, 31.1481990814209f,
+ 36.46590042114258f, 1.074869990348816f, 30.573999404907227f,
+ 34.39580154418945f, 0.0f, 30.573999404907227f,
+ 35.3114013671875f, 0.0f, 31.111499786376953f,
+ 35.98820114135742f, 0.0f, 31.290599822998047f,
+ 36.187198638916016f, 0.0f, 31.111499786376953f,
+ 35.669700622558594f, 0.0f, 30.573999404907227f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 4.004499912261963f, -1.7077000141143799f, 39.501399993896484f,
+ 4.339280128479004f, 0.0f, 39.501399993896484f,
+ 3.8207099437713623f, -1.6290700435638428f, 37.97869873046875f,
+ 4.140230178833008f, 0.0f, 37.97869873046875f,
+ 2.314160108566284f, -0.985912024974823f, 36.09769821166992f,
+ 2.5080299377441406f, 0.0f, 36.09769821166992f,
+ 2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f,
+ 2.547840118408203f, 0.0f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 3.0849199295043945f, -3.0849199295043945f, 39.501399993896484f,
+ 2.943150043487549f, -2.943150043487549f, 37.97869873046875f,
+ 1.782039999961853f, -1.782039999961853f, 36.09769821166992f,
+ 1.8089599609375f, -1.8089599609375f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 1.7077000141143799f, -4.004499912261963f, 39.501399993896484f,
+ 1.6290700435638428f, -3.8207099437713623f, 37.97869873046875f,
+ 0.985912024974823f, -2.314160108566284f, 36.09769821166992f,
+ 1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, -4.339280128479004f, 39.501399993896484f,
+ 0.0f, -4.140230178833008f, 37.97869873046875f,
+ 0.0f, -2.5080299377441406f, 36.09769821166992f,
+ 0.0f, -2.547840118408203f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -1.7077000141143799f, -4.004499912261963f, 39.501399993896484f,
+ 0.0f, -4.339280128479004f, 39.501399993896484f,
+ -1.6290700435638428f, -3.8207099437713623f, 37.97869873046875f,
+ 0.0f, -4.140230178833008f, 37.97869873046875f,
+ -0.985912024974823f, -2.314160108566284f, 36.09769821166992f,
+ 0.0f, -2.5080299377441406f, 36.09769821166992f,
+ -1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f,
+ 0.0f, -2.547840118408203f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -3.0849199295043945f, -3.0849199295043945f, 39.501399993896484f,
+ -2.943150043487549f, -2.943150043487549f, 37.97869873046875f,
+ -1.782039999961853f, -1.782039999961853f, 36.09769821166992f,
+ -1.8089599609375f, -1.8089599609375f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -4.004499912261963f, -1.7077000141143799f, 39.501399993896484f,
+ -3.8207099437713623f, -1.6290700435638428f, 37.97869873046875f,
+ -2.314160108566284f, -0.985912024974823f, 36.09769821166992f,
+ -2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -4.339280128479004f, 0.0f, 39.501399993896484f,
+ -4.140230178833008f, 0.0f, 37.97869873046875f,
+ -2.5080299377441406f, 0.0f, 36.09769821166992f,
+ -2.547840118408203f, 0.0f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -4.004499912261963f, 1.7077000141143799f, 39.501399993896484f,
+ -4.339280128479004f, 0.0f, 39.501399993896484f,
+ -3.8207099437713623f, 1.6290700435638428f, 37.97869873046875f,
+ -4.140230178833008f, 0.0f, 37.97869873046875f,
+ -2.314160108566284f, 0.985912024974823f, 36.09769821166992f,
+ -2.5080299377441406f, 0.0f, 36.09769821166992f,
+ -2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f,
+ -2.547840118408203f, 0.0f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -3.0849199295043945f, 3.0849199295043945f, 39.501399993896484f,
+ -2.943150043487549f, 2.943150043487549f, 37.97869873046875f,
+ -1.782039999961853f, 1.782039999961853f, 36.09769821166992f,
+ -1.8089599609375f, 1.8089599609375f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ -1.7077000141143799f, 4.004499912261963f, 39.501399993896484f,
+ -1.6290700435638428f, 3.8207099437713623f, 37.97869873046875f,
+ -0.985912024974823f, 2.314160108566284f, 36.09769821166992f,
+ -1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, 4.339280128479004f, 39.501399993896484f,
+ 0.0f, 4.140230178833008f, 37.97869873046875f,
+ 0.0f, 2.5080299377441406f, 36.09769821166992f,
+ 0.0f, 2.547840118408203f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 1.7077000141143799f, 4.004499912261963f, 39.501399993896484f,
+ 0.0f, 4.339280128479004f, 39.501399993896484f,
+ 1.6290700435638428f, 3.8207099437713623f, 37.97869873046875f,
+ 0.0f, 4.140230178833008f, 37.97869873046875f,
+ 0.985912024974823f, 2.314160108566284f, 36.09769821166992f,
+ 0.0f, 2.5080299377441406f, 36.09769821166992f,
+ 1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f,
+ 0.0f, 2.547840118408203f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 3.0849199295043945f, 3.0849199295043945f, 39.501399993896484f,
+ 2.943150043487549f, 2.943150043487549f, 37.97869873046875f,
+ 1.782039999961853f, 1.782039999961853f, 36.09769821166992f,
+ 1.8089599609375f, 1.8089599609375f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 4.004499912261963f, 1.7077000141143799f, 39.501399993896484f,
+ 3.8207099437713623f, 1.6290700435638428f, 37.97869873046875f,
+ 2.314160108566284f, 0.985912024974823f, 36.09769821166992f,
+ 2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f,
+ 0.0f, 0.0f, 40.12839889526367f,
+ 4.339280128479004f, 0.0f, 39.501399993896484f,
+ 4.140230178833008f, 0.0f, 37.97869873046875f,
+ 2.5080299377441406f, 0.0f, 36.09769821166992f,
+ 2.547840118408203f, 0.0f, 34.39580154418945f,
+ 2.547840118408203f, 0.0f, 34.39580154418945f,
+ 2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f,
+ 5.361800193786621f, -2.2813100814819336f, 33.261199951171875f,
+ 5.812250137329102f, 0.0f, 33.261199951171875f,
+ 9.695320129394531f, -4.125110149383545f, 32.484901428222656f,
+ 10.50979995727539f, 0.0f, 32.484901428222656f,
+ 13.58810043334961f, -5.781400203704834f, 31.708599090576172f,
+ 14.729700088500977f, 0.0f, 31.708599090576172f,
+ 15.27750015258789f, -6.5001702308654785f, 30.573999404907227f,
+ 16.56089973449707f, 0.0f, 30.573999404907227f,
+ 1.8089599609375f, -1.8089599609375f, 34.39580154418945f,
+ 4.126699924468994f, -4.126699924468994f, 33.261199951171875f,
+ 7.461979866027832f, -7.461979866027832f, 32.484901428222656f,
+ 10.458100318908691f, -10.458100318908691f, 31.708599090576172f,
+ 11.758299827575684f, -11.758299827575684f, 30.573999404907227f,
+ 1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f,
+ 2.2813100814819336f, -5.361800193786621f, 33.261199951171875f,
+ 4.125110149383545f, -9.695320129394531f, 32.484901428222656f,
+ 5.781400203704834f, -13.58810043334961f, 31.708599090576172f,
+ 6.5001702308654785f, -15.27750015258789f, 30.573999404907227f,
+ 0.0f, -2.547840118408203f, 34.39580154418945f,
+ 0.0f, -5.812250137329102f, 33.261199951171875f,
+ 0.0f, -10.50979995727539f, 32.484901428222656f,
+ 0.0f, -14.729700088500977f, 31.708599090576172f,
+ 0.0f, -16.56089973449707f, 30.573999404907227f,
+ 0.0f, -2.547840118408203f, 34.39580154418945f,
+ -1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f,
+ -2.2813100814819336f, -5.361800193786621f, 33.261199951171875f,
+ 0.0f, -5.812250137329102f, 33.261199951171875f,
+ -4.125110149383545f, -9.695320129394531f, 32.484901428222656f,
+ 0.0f, -10.50979995727539f, 32.484901428222656f,
+ -5.781400203704834f, -13.58810043334961f, 31.708599090576172f,
+ 0.0f, -14.729700088500977f, 31.708599090576172f,
+ -6.5001702308654785f, -15.27750015258789f, 30.573999404907227f,
+ 0.0f, -16.56089973449707f, 30.573999404907227f,
+ -1.8089599609375f, -1.8089599609375f, 34.39580154418945f,
+ -4.126699924468994f, -4.126699924468994f, 33.261199951171875f,
+ -7.461979866027832f, -7.461979866027832f, 32.484901428222656f,
+ -10.458100318908691f, -10.458100318908691f, 31.708599090576172f,
+ -11.758299827575684f, -11.758299827575684f, 30.573999404907227f,
+ -2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f,
+ -5.361800193786621f, -2.2813100814819336f, 33.261199951171875f,
+ -9.695320129394531f, -4.125110149383545f, 32.484901428222656f,
+ -13.58810043334961f, -5.781400203704834f, 31.708599090576172f,
+ -15.27750015258789f, -6.5001702308654785f, 30.573999404907227f,
+ -2.547840118408203f, 0.0f, 34.39580154418945f,
+ -5.812250137329102f, 0.0f, 33.261199951171875f,
+ -10.50979995727539f, 0.0f, 32.484901428222656f,
+ -14.729700088500977f, 0.0f, 31.708599090576172f,
+ -16.56089973449707f, 0.0f, 30.573999404907227f,
+ -2.547840118408203f, 0.0f, 34.39580154418945f,
+ -2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f,
+ -5.361800193786621f, 2.2813100814819336f, 33.261199951171875f,
+ -5.812250137329102f, 0.0f, 33.261199951171875f,
+ -9.695320129394531f, 4.125110149383545f, 32.484901428222656f,
+ -10.50979995727539f, 0.0f, 32.484901428222656f,
+ -13.58810043334961f, 5.781400203704834f, 31.708599090576172f,
+ -14.729700088500977f, 0.0f, 31.708599090576172f,
+ -15.27750015258789f, 6.5001702308654785f, 30.573999404907227f,
+ -16.56089973449707f, 0.0f, 30.573999404907227f,
+ -1.8089599609375f, 1.8089599609375f, 34.39580154418945f,
+ -4.126699924468994f, 4.126699924468994f, 33.261199951171875f,
+ -7.461979866027832f, 7.461979866027832f, 32.484901428222656f,
+ -10.458100318908691f, 10.458100318908691f, 31.708599090576172f,
+ -11.758299827575684f, 11.758299827575684f, 30.573999404907227f,
+ -1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f,
+ -2.2813100814819336f, 5.361800193786621f, 33.261199951171875f,
+ -4.125110149383545f, 9.695320129394531f, 32.484901428222656f,
+ -5.781400203704834f, 13.58810043334961f, 31.708599090576172f,
+ -6.5001702308654785f, 15.27750015258789f, 30.573999404907227f,
+ 0.0f, 2.547840118408203f, 34.39580154418945f,
+ 0.0f, 5.812250137329102f, 33.261199951171875f,
+ 0.0f, 10.50979995727539f, 32.484901428222656f,
+ 0.0f, 14.729700088500977f, 31.708599090576172f,
+ 0.0f, 16.56089973449707f, 30.573999404907227f,
+ 0.0f, 2.547840118408203f, 34.39580154418945f,
+ 1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f,
+ 2.2813100814819336f, 5.361800193786621f, 33.261199951171875f,
+ 0.0f, 5.812250137329102f, 33.261199951171875f,
+ 4.125110149383545f, 9.695320129394531f, 32.484901428222656f,
+ 0.0f, 10.50979995727539f, 32.484901428222656f,
+ 5.781400203704834f, 13.58810043334961f, 31.708599090576172f,
+ 0.0f, 14.729700088500977f, 31.708599090576172f,
+ 6.5001702308654785f, 15.27750015258789f, 30.573999404907227f,
+ 0.0f, 16.56089973449707f, 30.573999404907227f,
+ 1.8089599609375f, 1.8089599609375f, 34.39580154418945f,
+ 4.126699924468994f, 4.126699924468994f, 33.261199951171875f,
+ 7.461979866027832f, 7.461979866027832f, 32.484901428222656f,
+ 10.458100318908691f, 10.458100318908691f, 31.708599090576172f,
+ 11.758299827575684f, 11.758299827575684f, 30.573999404907227f,
+ 2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f,
+ 5.361800193786621f, 2.2813100814819336f, 33.261199951171875f,
+ 9.695320129394531f, 4.125110149383545f, 32.484901428222656f,
+ 13.58810043334961f, 5.781400203704834f, 31.708599090576172f,
+ 15.27750015258789f, 6.5001702308654785f, 30.573999404907227f,
+ 2.547840118408203f, 0.0f, 34.39580154418945f,
+ 5.812250137329102f, 0.0f, 33.261199951171875f,
+ 10.50979995727539f, 0.0f, 32.484901428222656f,
+ 14.729700088500977f, 0.0f, 31.708599090576172f,
+ 16.56089973449707f, 0.0f, 30.573999404907227f,
+};
+
+static const float teapot_normals[] = {
+ -0.9667419791221619, 0, -0.25575199723243713,
+ -0.8930140137672424, 0.3698819875717163, -0.2563450038433075,
+ -0.8934370279312134, 0.36910200119018555, 0.2559970021247864,
+ -0.9668239951133728, 0, 0.2554430067539215,
+ -0.0838799998164177, 0.03550700098276138, 0.9958429932594299,
+ -0.09205400198698044, 0, 0.9957540035247803,
+ 0.629721999168396, -0.2604379951953888, 0.7318620085716248,
+ 0.6820489764213562, 0, 0.7313070297241211,
+ 0.803725004196167, -0.3325839936733246, 0.4933690130710602,
+ 0.8703010082244873, 0, 0.4925200045108795,
+ -0.6834070086479187, 0.6834070086479187, -0.2567310035228729,
+ -0.6835309863090515, 0.6835309863090515, 0.25606799125671387,
+ -0.06492599844932556, 0.06492500007152557, 0.9957759976387024,
+ 0.48139700293540955, -0.48139700293540955, 0.7324709892272949,
+ 0.6148040294647217, -0.6148040294647217, 0.4939970076084137,
+ -0.3698819875717163, 0.8930140137672424, -0.2563450038433075,
+ -0.36910200119018555, 0.8934370279312134, 0.2559959888458252,
+ -0.03550700098276138, 0.0838790014386177, 0.9958429932594299,
+ 0.26043900847435, -0.6297230124473572, 0.7318609952926636,
+ 0.3325839936733246, -0.803725004196167, 0.4933690130710602,
+ -0.002848000032827258, 0.9661769866943359, -0.25786298513412476,
+ -0.001921999966725707, 0.9670090079307556, 0.2547360062599182,
+ -0.00026500000967644155, 0.09227199852466583, 0.9957339763641357,
+ 0.00002300000051036477, -0.6820600032806396, 0.7312960028648376,
+ 0, -0.8703010082244873, 0.4925200045108795,
+ -0.002848000032827258, 0.9661769866943359, -0.25786298513412476,
+ 0.37905800342559814, 0.852770984172821, -0.35929998755455017,
+ 0.37711000442504883, 0.9140909910202026, 0.14908500015735626,
+ -0.001921999966725707, 0.9670090079307556, 0.2547360062599182,
+ 0.0275030005723238, 0.12255500257015228, 0.9920809864997864,
+ -0.00026500000967644155, 0.09227199852466583, 0.9957339763641357,
+ -0.26100900769233704, -0.6353650093078613, 0.7267630100250244,
+ 0.00002300000051036477, -0.6820600032806396, 0.7312960028648376,
+ -0.33248499035835266, -0.8042709827423096, 0.4925459921360016,
+ 0, -0.8703010082244873, 0.4925200045108795,
+ 0.6635469794273376, 0.6252639889717102, -0.4107919931411743,
+ 0.712664008140564, 0.6976209878921509, 0.07372400164604187,
+ 0.09972699731588364, 0.12198299914598465, 0.98750901222229,
+ -0.4873189926147461, -0.4885669946670532, 0.7237560153007507,
+ -0.6152420043945312, -0.6154839992523193, 0.4926010072231293,
+ 0.8800280094146729, 0.3387089967727661, -0.3329069912433624,
+ 0.9172769784927368, 0.36149299144744873, 0.16711199283599854,
+ 0.11358699947595596, 0.04806999862194061, 0.9923650026321411,
+ -0.6341490149497986, -0.2618879973888397, 0.7275090217590332,
+ -0.8041260242462158, -0.33270499110221863, 0.49263399839401245,
+ 0.9666900038719177, -0.010453999973833561, -0.2557379901409149,
+ 0.967441976070404, -0.00810300000011921, 0.25296199321746826,
+ 0.0934389978647232, -0.0012799999676644802, 0.9956240057945251,
+ -0.6821659803390503, 0.0003429999924264848, 0.7311969995498657,
+ -0.8703219890594482, 0.00005400000009103678, 0.492482990026474,
+ 0.9666900038719177, -0.010453999973833561, -0.2557379901409149,
+ 0.8930140137672424, -0.3698819875717163, -0.2563450038433075,
+ 0.8934370279312134, -0.36910200119018555, 0.2559970021247864,
+ 0.967441976070404, -0.00810300000011921, 0.25296199321746826,
+ 0.0838799998164177, -0.03550700098276138, 0.9958429932594299,
+ 0.0934389978647232, -0.0012799999676644802, 0.9956240057945251,
+ -0.629721999168396, 0.2604379951953888, 0.7318620085716248,
+ -0.6821659803390503, 0.0003429999924264848, 0.7311969995498657,
+ -0.803725004196167, 0.3325839936733246, 0.4933690130710602,
+ -0.8703219890594482, 0.00005400000009103678, 0.492482990026474,
+ 0.6834070086479187, -0.6834070086479187, -0.2567310035228729,
+ 0.6835309863090515, -0.6835309863090515, 0.25606799125671387,
+ 0.06492599844932556, -0.06492500007152557, 0.9957759976387024,
+ -0.48139700293540955, 0.48139700293540955, 0.7324709892272949,
+ -0.6148040294647217, 0.6148040294647217, 0.4939970076084137,
+ 0.3698819875717163, -0.8930140137672424, -0.2563450038433075,
+ 0.36910200119018555, -0.8934370279312134, 0.2559959888458252,
+ 0.03550700098276138, -0.0838790014386177, 0.9958429932594299,
+ -0.26043900847435, 0.6297230124473572, 0.7318609952926636,
+ -0.3325839936733246, 0.803725004196167, 0.4933690130710602,
+ 0, -0.9667419791221619, -0.25575199723243713,
+ 0, -0.9668239951133728, 0.2554430067539215,
+ 0, -0.09205400198698044, 0.9957540035247803,
+ 0, 0.6820489764213562, 0.7313070297241211,
+ 0, 0.8703010082244873, 0.4925200045108795,
+ 0, -0.9667419791221619, -0.25575199723243713,
+ -0.3698819875717163, -0.8930140137672424, -0.2563450038433075,
+ -0.36910200119018555, -0.8934370279312134, 0.2559970021247864,
+ 0, -0.9668239951133728, 0.2554430067539215,
+ -0.03550700098276138, -0.0838799998164177, 0.9958429932594299,
+ 0, -0.09205400198698044, 0.9957540035247803,
+ 0.2604379951953888, 0.629721999168396, 0.7318620085716248,
+ 0, 0.6820489764213562, 0.7313070297241211,
+ 0.3325839936733246, 0.803725004196167, 0.4933690130710602,
+ 0, 0.8703010082244873, 0.4925200045108795,
+ -0.6834070086479187, -0.6834070086479187, -0.2567310035228729,
+ -0.6835309863090515, -0.6835309863090515, 0.25606799125671387,
+ -0.06492500007152557, -0.06492599844932556, 0.9957759976387024,
+ 0.48139700293540955, 0.48139700293540955, 0.7324709892272949,
+ 0.6148040294647217, 0.6148040294647217, 0.4939970076084137,
+ -0.8930140137672424, -0.3698819875717163, -0.2563450038433075,
+ -0.8934370279312134, -0.36910200119018555, 0.2559959888458252,
+ -0.0838790014386177, -0.03550700098276138, 0.9958429932594299,
+ 0.6297230124473572, 0.26043900847435, 0.7318609952926636,
+ 0.803725004196167, 0.3325839936733246, 0.4933690130710602,
+ -0.9667419791221619, 0, -0.25575199723243713,
+ -0.9668239951133728, 0, 0.2554430067539215,
+ -0.09205400198698044, 0, 0.9957540035247803,
+ 0.6820489764213562, 0, 0.7313070297241211,
+ 0.8703010082244873, 0, 0.4925200045108795,
+ 0.8703010082244873, 0, 0.4925200045108795,
+ 0.803725004196167, -0.3325839936733246, 0.4933690130710602,
+ 0.8454390168190002, -0.34983500838279724, 0.40354499220848083,
+ 0.9153209924697876, 0, 0.4027250111103058,
+ 0.8699960112571716, -0.36004599928855896, 0.33685898780822754,
+ 0.9418079853057861, 0, 0.33615100383758545,
+ 0.9041929841041565, -0.37428000569343567, 0.20579099655151367,
+ 0.9786900281906128, 0, 0.20534199476242065,
+ 0.9218789935112, -0.38175201416015625, -0.06636899709701538,
+ 0.9978039860725403, 0, -0.06623899936676025,
+ 0.6148040294647217, -0.6148040294647217, 0.4939970076084137,
+ 0.6468020081520081, -0.6468020081520081, 0.40409600734710693,
+ 0.6656550168991089, -0.6656550168991089, 0.3373520076274872,
+ 0.6919230222702026, -0.6919230222702026, 0.20611999928951263,
+ 0.7055429816246033, -0.7055429816246033, -0.06647899746894836,
+ 0.3325839936733246, -0.803725004196167, 0.4933690130710602,
+ 0.34983500838279724, -0.8454390168190002, 0.40354499220848083,
+ 0.36004701256752014, -0.8699960112571716, 0.33685800433158875,
+ 0.37428000569343567, -0.9041929841041565, 0.20579099655151367,
+ 0.38175201416015625, -0.9218789935112, -0.06636899709701538,
+ 0, -0.8703010082244873, 0.4925200045108795,
+ 0, -0.9153209924697876, 0.4027250111103058,
+ 0, -0.9418079853057861, 0.33615100383758545,
+ 0, -0.9786900281906128, 0.20534199476242065,
+ 0, -0.9978039860725403, -0.06623899936676025,
+ 0, -0.8703010082244873, 0.4925200045108795,
+ -0.33248499035835266, -0.8042709827423096, 0.4925459921360016,
+ -0.34983500838279724, -0.8454390168190002, 0.40354499220848083,
+ 0, -0.9153209924697876, 0.4027250111103058,
+ -0.36004599928855896, -0.8699960112571716, 0.33685898780822754,
+ 0, -0.9418079853057861, 0.33615100383758545,
+ -0.37428000569343567, -0.9041929841041565, 0.20579099655151367,
+ 0, -0.9786900281906128, 0.20534199476242065,
+ -0.38175201416015625, -0.9218789935112, -0.06636899709701538,
+ 0, -0.9978039860725403, -0.06623899936676025,
+ -0.6152420043945312, -0.6154839992523193, 0.4926010072231293,
+ -0.6468020081520081, -0.6468020081520081, 0.40409600734710693,
+ -0.6656550168991089, -0.6656550168991089, 0.3373520076274872,
+ -0.6919230222702026, -0.6919230222702026, 0.20611999928951263,
+ -0.7055429816246033, -0.7055429816246033, -0.06647899746894836,
+ -0.8041260242462158, -0.33270499110221863, 0.49263399839401245,
+ -0.8454390168190002, -0.34983500838279724, 0.40354499220848083,
+ -0.8699960112571716, -0.36004701256752014, 0.33685800433158875,
+ -0.9041929841041565, -0.37428000569343567, 0.20579099655151367,
+ -0.9218789935112, -0.38175201416015625, -0.06636899709701538,
+ -0.8703219890594482, 0.00005400000009103678, 0.492482990026474,
+ -0.9153209924697876, 0, 0.4027250111103058,
+ -0.9418079853057861, 0, 0.33615100383758545,
+ -0.9786900281906128, 0, 0.20534199476242065,
+ -0.9978039860725403, 0, -0.06623899936676025,
+ -0.8703219890594482, 0.00005400000009103678, 0.492482990026474,
+ -0.803725004196167, 0.3325839936733246, 0.4933690130710602,
+ -0.8454390168190002, 0.34983500838279724, 0.40354499220848083,
+ -0.9153209924697876, 0, 0.4027250111103058,
+ -0.8699960112571716, 0.36004599928855896, 0.33685898780822754,
+ -0.9418079853057861, 0, 0.33615100383758545,
+ -0.9041929841041565, 0.37428000569343567, 0.20579099655151367,
+ -0.9786900281906128, 0, 0.20534199476242065,
+ -0.9218789935112, 0.38175201416015625, -0.06636899709701538,
+ -0.9978039860725403, 0, -0.06623899936676025,
+ -0.6148040294647217, 0.6148040294647217, 0.4939970076084137,
+ -0.6468020081520081, 0.6468020081520081, 0.40409600734710693,
+ -0.6656550168991089, 0.6656550168991089, 0.3373520076274872,
+ -0.6919230222702026, 0.6919230222702026, 0.20611999928951263,
+ -0.7055429816246033, 0.7055429816246033, -0.06647899746894836,
+ -0.3325839936733246, 0.803725004196167, 0.4933690130710602,
+ -0.34983500838279724, 0.8454390168190002, 0.40354499220848083,
+ -0.36004701256752014, 0.8699960112571716, 0.33685800433158875,
+ -0.37428000569343567, 0.9041929841041565, 0.20579099655151367,
+ -0.38175201416015625, 0.9218789935112, -0.06636899709701538,
+ 0, 0.8703010082244873, 0.4925200045108795,
+ 0, 0.9153209924697876, 0.4027250111103058,
+ 0, 0.9418079853057861, 0.33615100383758545,
+ 0, 0.9786900281906128, 0.20534199476242065,
+ 0, 0.9978039860725403, -0.06623899936676025,
+ 0, 0.8703010082244873, 0.4925200045108795,
+ 0.3325839936733246, 0.803725004196167, 0.4933690130710602,
+ 0.34983500838279724, 0.8454390168190002, 0.40354499220848083,
+ 0, 0.9153209924697876, 0.4027250111103058,
+ 0.36004599928855896, 0.8699960112571716, 0.33685898780822754,
+ 0, 0.9418079853057861, 0.33615100383758545,
+ 0.37428000569343567, 0.9041929841041565, 0.20579099655151367,
+ 0, 0.9786900281906128, 0.20534199476242065,
+ 0.38175201416015625, 0.9218789935112, -0.06636899709701538,
+ 0, 0.9978039860725403, -0.06623899936676025,
+ 0.6148040294647217, 0.6148040294647217, 0.4939970076084137,
+ 0.6468020081520081, 0.6468020081520081, 0.40409600734710693,
+ 0.6656550168991089, 0.6656550168991089, 0.3373520076274872,
+ 0.6919230222702026, 0.6919230222702026, 0.20611999928951263,
+ 0.7055429816246033, 0.7055429816246033, -0.06647899746894836,
+ 0.803725004196167, 0.3325839936733246, 0.4933690130710602,
+ 0.8454390168190002, 0.34983500838279724, 0.40354499220848083,
+ 0.8699960112571716, 0.36004701256752014, 0.33685800433158875,
+ 0.9041929841041565, 0.37428000569343567, 0.20579099655151367,
+ 0.9218789935112, 0.38175201416015625, -0.06636899709701538,
+ 0.8703010082244873, 0, 0.4925200045108795,
+ 0.9153209924697876, 0, 0.4027250111103058,
+ 0.9418079853057861, 0, 0.33615100383758545,
+ 0.9786900281906128, 0, 0.20534199476242065,
+ 0.9978039860725403, 0, -0.06623899936676025,
+ 0.9978039860725403, 0, -0.06623899936676025,
+ 0.9218789935112, -0.38175201416015625, -0.06636899709701538,
+ 0.8314369916915894, -0.3441790044307709, -0.4361799955368042,
+ 0.9001820087432861, 0, -0.4355129897594452,
+ 0.6735119819641113, -0.2785939872264862, -0.6846650242805481,
+ 0.7296109795570374, 0, -0.6838629841804504,
+ 0.6403989791870117, -0.26487401127815247, -0.7209240198135376,
+ 0.6939510107040405, 0, -0.7200220227241516,
+ 0.7329490184783936, -0.303166002035141, -0.6089959740638733,
+ 0.7939500212669373, 0, -0.6079840064048767,
+ 0.7055429816246033, -0.7055429816246033, -0.06647899746894836,
+ 0.6360920071601868, -0.6360920071601868, -0.4367780089378357,
+ 0.5149649977684021, -0.5149649977684021, -0.6852890253067017,
+ 0.48965099453926086, -0.48965099453926086, -0.7214459776878357,
+ 0.5605549812316895, -0.5605549812316895, -0.6095539927482605,
+ 0.38175201416015625, -0.9218789935112, -0.06636899709701538,
+ 0.3441790044307709, -0.8314369916915894, -0.4361799955368042,
+ 0.2785939872264862, -0.6735119819641113, -0.6846650242805481,
+ 0.26487401127815247, -0.6403989791870117, -0.7209240198135376,
+ 0.303166002035141, -0.7329490184783936, -0.6089959740638733,
+ 0, -0.9978039860725403, -0.06623899936676025,
+ 0, -0.9001820087432861, -0.4355129897594452,
+ 0, -0.7296109795570374, -0.6838629841804504,
+ 0, -0.6939510107040405, -0.7200220227241516,
+ 0, -0.7939500212669373, -0.6079840064048767,
+ 0, -0.9978039860725403, -0.06623899936676025,
+ -0.38175201416015625, -0.9218789935112, -0.06636899709701538,
+ -0.3441790044307709, -0.8314369916915894, -0.4361799955368042,
+ 0, -0.9001820087432861, -0.4355129897594452,
+ -0.2785939872264862, -0.6735119819641113, -0.6846650242805481,
+ 0, -0.7296109795570374, -0.6838629841804504,
+ -0.26487401127815247, -0.6403989791870117, -0.7209240198135376,
+ 0, -0.6939510107040405, -0.7200220227241516,
+ -0.303166002035141, -0.7329490184783936, -0.6089959740638733,
+ 0, -0.7939500212669373, -0.6079840064048767,
+ -0.7055429816246033, -0.7055429816246033, -0.06647899746894836,
+ -0.6360920071601868, -0.6360920071601868, -0.4367780089378357,
+ -0.5149649977684021, -0.5149649977684021, -0.6852890253067017,
+ -0.48965099453926086, -0.48965099453926086, -0.7214459776878357,
+ -0.5605549812316895, -0.5605549812316895, -0.6095539927482605,
+ -0.9218789935112, -0.38175201416015625, -0.06636899709701538,
+ -0.8314369916915894, -0.3441790044307709, -0.4361799955368042,
+ -0.6735119819641113, -0.2785939872264862, -0.6846650242805481,
+ -0.6403989791870117, -0.26487401127815247, -0.7209240198135376,
+ -0.7329490184783936, -0.303166002035141, -0.6089959740638733,
+ -0.9978039860725403, 0, -0.06623899936676025,
+ -0.9001820087432861, 0, -0.4355129897594452,
+ -0.7296109795570374, 0, -0.6838629841804504,
+ -0.6939510107040405, 0, -0.7200220227241516,
+ -0.7939500212669373, 0, -0.6079840064048767,
+ -0.9978039860725403, 0, -0.06623899936676025,
+ -0.9218789935112, 0.38175201416015625, -0.06636899709701538,
+ -0.8314369916915894, 0.3441790044307709, -0.4361799955368042,
+ -0.9001820087432861, 0, -0.4355129897594452,
+ -0.6735119819641113, 0.2785939872264862, -0.6846650242805481,
+ -0.7296109795570374, 0, -0.6838629841804504,
+ -0.6403989791870117, 0.26487401127815247, -0.7209240198135376,
+ -0.6939510107040405, 0, -0.7200220227241516,
+ -0.7329490184783936, 0.303166002035141, -0.6089959740638733,
+ -0.7939500212669373, 0, -0.6079840064048767,
+ -0.7055429816246033, 0.7055429816246033, -0.06647899746894836,
+ -0.6360920071601868, 0.6360920071601868, -0.4367780089378357,
+ -0.5149649977684021, 0.5149649977684021, -0.6852890253067017,
+ -0.48965099453926086, 0.48965099453926086, -0.7214459776878357,
+ -0.5605549812316895, 0.5605549812316895, -0.6095539927482605,
+ -0.38175201416015625, 0.9218789935112, -0.06636899709701538,
+ -0.3441790044307709, 0.8314369916915894, -0.4361799955368042,
+ -0.2785939872264862, 0.6735119819641113, -0.6846650242805481,
+ -0.26487401127815247, 0.6403989791870117, -0.7209240198135376,
+ -0.303166002035141, 0.7329490184783936, -0.6089959740638733,
+ 0, 0.9978039860725403, -0.06623899936676025,
+ 0, 0.9001820087432861, -0.4355129897594452,
+ 0, 0.7296109795570374, -0.6838629841804504,
+ 0, 0.6939510107040405, -0.7200220227241516,
+ 0, 0.7939500212669373, -0.6079840064048767,
+ 0, 0.9978039860725403, -0.06623899936676025,
+ 0.38175201416015625, 0.9218789935112, -0.06636899709701538,
+ 0.3441790044307709, 0.8314369916915894, -0.4361799955368042,
+ 0, 0.9001820087432861, -0.4355129897594452,
+ 0.2785939872264862, 0.6735119819641113, -0.6846650242805481,
+ 0, 0.7296109795570374, -0.6838629841804504,
+ 0.26487401127815247, 0.6403989791870117, -0.7209240198135376,
+ 0, 0.6939510107040405, -0.7200220227241516,
+ 0.303166002035141, 0.7329490184783936, -0.6089959740638733,
+ 0, 0.7939500212669373, -0.6079840064048767,
+ 0.7055429816246033, 0.7055429816246033, -0.06647899746894836,
+ 0.6360920071601868, 0.6360920071601868, -0.4367780089378357,
+ 0.5149649977684021, 0.5149649977684021, -0.6852890253067017,
+ 0.48965099453926086, 0.48965099453926086, -0.7214459776878357,
+ 0.5605549812316895, 0.5605549812316895, -0.6095539927482605,
+ 0.9218789935112, 0.38175201416015625, -0.06636899709701538,
+ 0.8314369916915894, 0.3441790044307709, -0.4361799955368042,
+ 0.6735119819641113, 0.2785939872264862, -0.6846650242805481,
+ 0.6403989791870117, 0.26487401127815247, -0.7209240198135376,
+ 0.7329490184783936, 0.303166002035141, -0.6089959740638733,
+ 0.9978039860725403, 0, -0.06623899936676025,
+ 0.9001820087432861, 0, -0.4355129897594452,
+ 0.7296109795570374, 0, -0.6838629841804504,
+ 0.6939510107040405, 0, -0.7200220227241516,
+ 0.7939500212669373, 0, -0.6079840064048767,
+ 0.7939500212669373, 0, -0.6079840064048767,
+ 0.7329490184783936, -0.303166002035141, -0.6089959740638733,
+ 0.576229989528656, -0.23821599781513214, -0.7818009853363037,
+ 0.6238600015640259, 0, -0.7815359830856323,
+ 0.16362899541854858, -0.06752700358629227, -0.9842079877853394,
+ 0.17729100584983826, 0, -0.984158992767334,
+ 0.04542100057005882, -0.018735000863671303, -0.9987919926643372,
+ 0.04920699819922447, 0, -0.9987890124320984,
+ 0, 0, -1,
+ 0, 0, -1,
+ 0.5605549812316895, -0.5605549812316895, -0.6095539927482605,
+ 0.44041600823402405, -0.44041600823402405, -0.7823479771614075,
+ 0.12490200251340866, -0.12490200251340866, -0.9842759966850281,
+ 0.034662000834941864, -0.034662000834941864, -0.9987980127334595,
+ 0, 0, -1,
+ 0.303166002035141, -0.7329490184783936, -0.6089959740638733,
+ 0.23821599781513214, -0.576229989528656, -0.7818009853363037,
+ 0.06752700358629227, -0.16362899541854858, -0.9842079877853394,
+ 0.018735000863671303, -0.04542100057005882, -0.9987919926643372,
+ 0, 0, -1,
+ 0, -0.7939500212669373, -0.6079840064048767,
+ 0, -0.6238600015640259, -0.7815359830856323,
+ 0, -0.17729100584983826, -0.984158992767334,
+ 0, -0.04920699819922447, -0.9987890124320984,
+ 0, 0, -1,
+ 0, -0.7939500212669373, -0.6079840064048767,
+ -0.303166002035141, -0.7329490184783936, -0.6089959740638733,
+ -0.23821599781513214, -0.576229989528656, -0.7818009853363037,
+ 0, -0.6238600015640259, -0.7815359830856323,
+ -0.06752700358629227, -0.16362899541854858, -0.9842079877853394,
+ 0, -0.17729100584983826, -0.984158992767334,
+ -0.018735000863671303, -0.04542100057005882, -0.9987919926643372,
+ 0, -0.04920699819922447, -0.9987890124320984,
+ 0, 0, -1,
+ 0, 0, -1,
+ -0.5605549812316895, -0.5605549812316895, -0.6095539927482605,
+ -0.44041600823402405, -0.44041600823402405, -0.7823479771614075,
+ -0.12490200251340866, -0.12490200251340866, -0.9842759966850281,
+ -0.034662000834941864, -0.034662000834941864, -0.9987980127334595,
+ 0, 0, -1,
+ -0.7329490184783936, -0.303166002035141, -0.6089959740638733,
+ -0.576229989528656, -0.23821599781513214, -0.7818009853363037,
+ -0.16362899541854858, -0.06752700358629227, -0.9842079877853394,
+ -0.04542100057005882, -0.018735000863671303, -0.9987919926643372,
+ 0, 0, -1,
+ -0.7939500212669373, 0, -0.6079840064048767,
+ -0.6238600015640259, 0, -0.7815359830856323,
+ -0.17729100584983826, 0, -0.984158992767334,
+ -0.04920699819922447, 0, -0.9987890124320984,
+ 0, 0, -1,
+ -0.7939500212669373, 0, -0.6079840064048767,
+ -0.7329490184783936, 0.303166002035141, -0.6089959740638733,
+ -0.576229989528656, 0.23821599781513214, -0.7818009853363037,
+ -0.6238600015640259, 0, -0.7815359830856323,
+ -0.16362899541854858, 0.06752700358629227, -0.9842079877853394,
+ -0.17729100584983826, 0, -0.984158992767334,
+ -0.04542100057005882, 0.018735000863671303, -0.9987919926643372,
+ -0.04920699819922447, 0, -0.9987890124320984,
+ 0, 0, -1,
+ 0, 0, -1,
+ -0.5605549812316895, 0.5605549812316895, -0.6095539927482605,
+ -0.44041600823402405, 0.44041600823402405, -0.7823479771614075,
+ -0.12490200251340866, 0.12490200251340866, -0.9842759966850281,
+ -0.034662000834941864, 0.034662000834941864, -0.9987980127334595,
+ 0, 0, -1,
+ -0.303166002035141, 0.7329490184783936, -0.6089959740638733,
+ -0.23821599781513214, 0.576229989528656, -0.7818009853363037,
+ -0.06752700358629227, 0.16362899541854858, -0.9842079877853394,
+ -0.018735000863671303, 0.04542100057005882, -0.9987919926643372,
+ 0, 0, -1,
+ 0, 0.7939500212669373, -0.6079840064048767,
+ 0, 0.6238600015640259, -0.7815359830856323,
+ 0, 0.17729100584983826, -0.984158992767334,
+ 0, 0.04920699819922447, -0.9987890124320984,
+ 0, 0, -1,
+ 0, 0.7939500212669373, -0.6079840064048767,
+ 0.303166002035141, 0.7329490184783936, -0.6089959740638733,
+ 0.23821599781513214, 0.576229989528656, -0.7818009853363037,
+ 0, 0.6238600015640259, -0.7815359830856323,
+ 0.06752700358629227, 0.16362899541854858, -0.9842079877853394,
+ 0, 0.17729100584983826, -0.984158992767334,
+ 0.018735000863671303, 0.04542100057005882, -0.9987919926643372,
+ 0, 0.04920699819922447, -0.9987890124320984,
+ 0, 0, -1,
+ 0, 0, -1,
+ 0.5605549812316895, 0.5605549812316895, -0.6095539927482605,
+ 0.44041600823402405, 0.44041600823402405, -0.7823479771614075,
+ 0.12490200251340866, 0.12490200251340866, -0.9842759966850281,
+ 0.034662000834941864, 0.034662000834941864, -0.9987980127334595,
+ 0, 0, -1,
+ 0.7329490184783936, 0.303166002035141, -0.6089959740638733,
+ 0.576229989528656, 0.23821599781513214, -0.7818009853363037,
+ 0.16362899541854858, 0.06752700358629227, -0.9842079877853394,
+ 0.04542100057005882, 0.018735000863671303, -0.9987919926643372,
+ 0, 0, -1,
+ 0.7939500212669373, 0, -0.6079840064048767,
+ 0.6238600015640259, 0, -0.7815359830856323,
+ 0.17729100584983826, 0, -0.984158992767334,
+ 0.04920699819922447, 0, -0.9987890124320984,
+ 0, 0, -1,
+ 0.007784999907016754, 0.00021499999274965376, -0.999970018863678,
+ 0.007038000039756298, -0.5829259753227234, -0.8124949932098389,
+ 0.0361270010471344, -0.5456140041351318, -0.837257981300354,
+ 0.03913800045847893, 0.0009879999561235309, -0.9992330074310303,
+ 0.16184599697589874, -0.5630490183830261, -0.8104209899902344,
+ 0.17951199412345886, 0.0043680001981556416, -0.9837459921836853,
+ 0.4823650121688843, -0.6427459716796875, -0.5951480269432068,
+ 0.6122999787330627, 0.010459000244736671, -0.790556013584137,
+ 0.7387199997901917, -0.6641989946365356, -0.11459299921989441,
+ 0.9861519932746887, 0.006668999791145325, -0.16570700705051422,
+ -0.0019079999765381217, -0.9867690205574036, 0.1621209979057312,
+ 0.002761000068858266, -0.9998499751091003, 0.017105000093579292,
+ 0.010532000102102757, -0.9972469806671143, 0.07339800149202347,
+ -0.06604000180959702, -0.9893029928207397, 0.13006900250911713,
+ -0.09442699700593948, -0.9953929781913757, 0.016594000160694122,
+ -0.009201999753713608, -0.4902929961681366, 0.8715090155601501,
+ -0.04860600084066391, -0.5394579768180847, 0.8406090140342712,
+ -0.22329799830913544, -0.5527390241622925, 0.8028810024261475,
+ -0.5963649749755859, -0.5751349925994873, 0.5599709749221802,
+ -0.8033369779586792, -0.5916029810905457, 0.06823500245809555,
+ -0.01056000031530857, -0.00010299999848939478, 0.9999439716339111,
+ -0.05879800021648407, -0.0007089999853633344, 0.9982699751853943,
+ -0.28071001172065735, -0.0032679999712854624, 0.9597870111465454,
+ -0.7497230172157288, -0.004267000127583742, 0.6617379784584045,
+ -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288,
+ -0.01056000031530857, -0.00010299999848939478, 0.9999439716339111,
+ -0.008791999891400337, 0.49032899737358093, 0.8714929819107056,
+ -0.04649300128221512, 0.5387560129165649, 0.8411779999732971,
+ -0.05879800021648407, -0.0007089999853633344, 0.9982699751853943,
+ -0.21790899336338043, 0.5491610169410706, 0.8068069815635681,
+ -0.28071001172065735, -0.0032679999712854624, 0.9597870111465454,
+ -0.5972909927368164, 0.5741199851036072, 0.560027003288269,
+ -0.7497230172157288, -0.004267000127583742, 0.6617379784584045,
+ -0.8040000200271606, 0.5912910103797913, 0.0629120022058487,
+ -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288,
+ -0.0018050000071525574, 0.986840009689331, 0.16169099509716034,
+ 0.0020310000982135534, 0.999891996383667, 0.014553000219166279,
+ 0.009215000085532665, 0.9981520175933838, 0.060068998485803604,
+ -0.059335000813007355, 0.9917230010032654, 0.11386600136756897,
+ -0.08690100163221359, 0.9961410164833069, 0.01228999998420477,
+ 0.006417000200599432, 0.5830950140953064, -0.812379002571106,
+ 0.03378299996256828, 0.5453730225563049, -0.8375130295753479,
+ 0.1571130007505417, 0.562188982963562, -0.8119469881057739,
+ 0.4844059944152832, 0.6465290188789368, -0.5893650054931641,
+ 0.7388700246810913, 0.6661880016326904, -0.10131999850273132,
+ 0.007784999907016754, 0.00021499999274965376, -0.999970018863678,
+ 0.03913800045847893, 0.0009879999561235309, -0.9992330074310303,
+ 0.17951199412345886, 0.0043680001981556416, -0.9837459921836853,
+ 0.6122999787330627, 0.010459000244736671, -0.790556013584137,
+ 0.9861519932746887, 0.006668999791145325, -0.16570700705051422,
+ 0.9861519932746887, 0.006668999791145325, -0.16570700705051422,
+ 0.7387199997901917, -0.6641989946365356, -0.11459299921989441,
+ 0.7256090044975281, -0.6373609900474548, 0.25935098528862,
+ 0.94651198387146, 0.0033569999504834414, 0.3226499855518341,
+ 0.6459450125694275, -0.6077200174331665, 0.46198800206184387,
+ 0.8258299827575684, 0.007451999932527542, 0.5638700127601624,
+ 0.5316150188446045, -0.5586140155792236, 0.6366599798202515,
+ 0.6500110030174255, 0.006936000194400549, 0.759893000125885,
+ 0.4249640107154846, -0.5955389738082886, 0.6817179918289185,
+ 0.5324289798736572, 0.005243999883532524, 0.8464580178260803,
+ -0.09442699700593948, -0.9953929781913757, 0.016594000160694122,
+ -0.04956100136041641, -0.9985759854316711, -0.01975500024855137,
+ -0.03781700134277344, -0.998649001121521, -0.035624999552965164,
+ -0.0379129983484745, -0.9986140131950378, -0.03651199862360954,
+ -0.1688539981842041, -0.9395300149917603, -0.2979460060596466,
+ -0.8033369779586792, -0.5916029810905457, 0.06823500245809555,
+ -0.7423409819602966, -0.5995240211486816, -0.2991659939289093,
+ -0.6196020245552063, -0.5795029997825623, -0.5294060111045837,
+ -0.483707994222641, -0.5438370108604431, -0.6857600212097168,
+ -0.44529199600219727, -0.4131770133972168, -0.7943549752235413,
+ -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288,
+ -0.9265130162239075, -0.0019950000569224358, -0.3762570023536682,
+ -0.7539200186729431, -0.004317000042647123, -0.6569520235061646,
+ -0.5662239789962769, -0.003461000043898821, -0.8242440223693848,
+ -0.4818040132522583, -0.0018500000005587935, -0.8762770295143127,
+ -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288,
+ -0.8040000200271606, 0.5912910103797913, 0.0629120022058487,
+ -0.7446749806404114, 0.5989770293235779, -0.29442399740219116,
+ -0.9265130162239075, -0.0019950000569224358, -0.3762570023536682,
+ -0.6219490170478821, 0.5781649947166443, -0.5281140208244324,
+ -0.7539200186729431, -0.004317000042647123, -0.6569520235061646,
+ -0.48117101192474365, 0.5428280234336853, -0.6883400082588196,
+ -0.5662239789962769, -0.003461000043898821, -0.8242440223693848,
+ -0.43805500864982605, 0.41574400663375854, -0.7970349788665771,
+ -0.4818040132522583, -0.0018500000005587935, -0.8762770295143127,
+ -0.08690100163221359, 0.9961410164833069, 0.01228999998420477,
+ -0.04433799907565117, 0.9988710284233093, -0.017055999487638474,
+ -0.026177000254392624, 0.9992600083351135, -0.02816700004041195,
+ -0.025293000042438507, 0.9992780089378357, -0.028332000598311424,
+ -0.15748199820518494, 0.9441670179367065, -0.28939300775527954,
+ 0.7388700246810913, 0.6661880016326904, -0.10131999850273132,
+ 0.7282440066337585, 0.63714200258255, 0.25240999460220337,
+ 0.6470540165901184, 0.6082550287246704, 0.4597249925136566,
+ 0.5229939818382263, 0.5621700286865234, 0.6406570076942444,
+ 0.4099780023097992, 0.6046689748764038, 0.6828569769859314,
+ 0.9861519932746887, 0.006668999791145325, -0.16570700705051422,
+ 0.94651198387146, 0.0033569999504834414, 0.3226499855518341,
+ 0.8258299827575684, 0.007451999932527542, 0.5638700127601624,
+ 0.6500110030174255, 0.006936000194400549, 0.759893000125885,
+ 0.5324289798736572, 0.005243999883532524, 0.8464580178260803,
+ -0.230786994099617, 0.006523000076413155, 0.9729819893836975,
+ -0.15287800133228302, -0.7101899981498718, 0.6872109770774841,
+ -0.31672099232673645, -0.7021129727363586, 0.6377500295639038,
+ -0.5489360094070435, 0.0015109999803826213, 0.8358629941940308,
+ -0.6010670065879822, -0.645330011844635, 0.471451997756958,
+ -0.8756710290908813, -0.009891999885439873, 0.4828070104122162,
+ -0.635890007019043, -0.629800021648407, 0.4460900127887726,
+ -0.8775539994239807, -0.01909100078046322, 0.47909700870513916,
+ -0.4357450008392334, -0.670009970664978, 0.6010090112686157,
+ -0.6961889863014221, -0.02449600026011467, 0.7174400091171265,
+ 0.11111299693584442, -0.9901599884033203, -0.08506900072097778,
+ 0.22330999374389648, -0.9747260212898254, 0.006539999973028898,
+ 0.19009700417518616, -0.9694579839706421, 0.15496399998664856,
+ 0.005270000081509352, -0.9818699955940247, 0.18948200345039368,
+ -0.011750999838113785, -0.9690240025520325, 0.24668699502944946,
+ 0.3439059853553772, -0.5994120240211487, -0.7227950096130371,
+ 0.5724899768829346, -0.5916270017623901, -0.5676559805870056,
+ 0.7874360084533691, -0.5605109930038452, -0.2564600110054016,
+ 0.6470969915390015, -0.6981409788131714, -0.3063740134239197,
+ 0.4275279939174652, -0.7535750269889832, -0.49934399127960205,
+ 0.4109260141849518, -0.0012839999981224537, -0.9116680026054382,
+ 0.6715199947357178, 0.0008989999769255519, -0.7409859895706177,
+ 0.9220259785652161, 0.00725199980661273, -0.3870599865913391,
+ 0.8469099998474121, 0.01385399978607893, -0.5315560102462769,
+ 0.5359240174293518, 0.010503999888896942, -0.8442010283470154,
+ 0.4109260141849518, -0.0012839999981224537, -0.9116680026054382,
+ 0.3411880135536194, 0.6009309887886047, -0.7228230237960815,
+ 0.5786640048027039, 0.591838002204895, -0.5611389875411987,
+ 0.6715199947357178, 0.0008989999769255519, -0.7409859895706177,
+ 0.7848690152168274, 0.5665420293807983, -0.25102001428604126,
+ 0.9220259785652161, 0.00725199980661273, -0.3870599865913391,
+ 0.6426810026168823, 0.7039899826049805, -0.3022570013999939,
+ 0.8469099998474121, 0.01385399978607893, -0.5315560102462769,
+ 0.4185889959335327, 0.7581170201301575, -0.5000420212745667,
+ 0.5359240174293518, 0.010503999888896942, -0.8442010283470154,
+ 0.11580599844455719, 0.9901139736175537, -0.07913900166749954,
+ 0.23281100392341614, 0.9724410176277161, 0.012564999982714653,
+ 0.20666299760341644, 0.9662799835205078, 0.15360000729560852,
+ 0.02449899911880493, 0.9865779876708984, 0.16144299507141113,
+ 0.0033809999004006386, 0.9774550199508667, 0.2111150026321411,
+ -0.13491199910640717, 0.7135509848594666, 0.6874909996986389,
+ -0.31953999400138855, 0.7050619721412659, 0.6330729722976685,
+ -0.6039019823074341, 0.6499029994010925, 0.4614419937133789,
+ -0.6318150162696838, 0.6400719881057739, 0.43716898560523987,
+ -0.4243049919605255, 0.6667500138282776, 0.6127070188522339,
+ -0.230786994099617, 0.006523000076413155, 0.9729819893836975,
+ -0.5489360094070435, 0.0015109999803826213, 0.8358629941940308,
+ -0.8756710290908813, -0.009891999885439873, 0.4828070104122162,
+ -0.8775539994239807, -0.01909100078046322, 0.47909700870513916,
+ -0.6961889863014221, -0.02449600026011467, 0.7174400091171265,
+ -0.6961889863014221, -0.02449600026011467, 0.7174400091171265,
+ -0.4357450008392334, -0.670009970664978, 0.6010090112686157,
+ -0.25985801219940186, -0.5525479912757874, 0.7919380068778992,
+ -0.42579901218414307, -0.010804999619722366, 0.9047530293464661,
+ 0.009537000209093094, 0.021669000387191772, 0.9997199773788452,
+ 0.022041000425815582, -0.001623000018298626, 0.9997559785842896,
+ 0.4101540148258209, 0.8490809798240662, 0.3329179883003235,
+ 0.9995980262756348, -0.01155600044876337, 0.02587899938225746,
+ 0.5415220260620117, 0.6370009779930115, -0.5486199855804443,
+ 0.7095860242843628, -0.009670999832451344, -0.7045519948005676,
+ -0.011750999838113785, -0.9690240025520325, 0.24668699502944946,
+ 0.046310000121593475, -0.8891720175743103, 0.45522499084472656,
+ -0.010688000358641148, -0.14889900386333466, 0.9887949824333191,
+ -0.04437499865889549, 0.7291200160980225, 0.6829460263252258,
+ 0.12282499670982361, 0.9923850297927856, 0.009232000447809696,
+ 0.4275279939174652, -0.7535750269889832, -0.49934399127960205,
+ 0.48183900117874146, -0.857479989528656, -0.18044300377368927,
+ 0.45527198910713196, -0.49992498755455017, 0.7367510199546814,
+ -0.22054199874401093, 0.3582780063152313, 0.9071930050849915,
+ -0.23591899871826172, 0.7157959938049316, 0.6572499871253967,
+ 0.5359240174293518, 0.010503999888896942, -0.8442010283470154,
+ 0.7280910015106201, 0.015584999695420265, -0.6853029727935791,
+ 0.8887389898300171, 0.016679000109434128, 0.4581089913845062,
+ -0.26009801030158997, -0.0007999999797903001, 0.965582013130188,
+ -0.37161099910736084, 0.004416999872773886, 0.9283779859542847,
+ 0.5359240174293518, 0.010503999888896942, -0.8442010283470154,
+ 0.4185889959335327, 0.7581170201301575, -0.5000420212745667,
+ 0.4801650047302246, 0.8588529825210571, -0.17836299538612366,
+ 0.7280910015106201, 0.015584999695420265, -0.6853029727935791,
+ 0.4881030023097992, 0.49794700741767883, 0.7168020009994507,
+ 0.8887389898300171, 0.016679000109434128, 0.4581089913845062,
+ -0.2220049947500229, -0.36189401149749756, 0.9053990244865417,
+ -0.26009801030158997, -0.0007999999797903001, 0.965582013130188,
+ -0.23540399968624115, -0.7104769945144653, 0.6631799936294556,
+ -0.37161099910736084, 0.004416999872773886, 0.9283779859542847,
+ 0.0033809999004006386, 0.9774550199508667, 0.2111150026321411,
+ 0.058719001710414886, 0.8971999883651733, 0.437703013420105,
+ 0.0013249999610707164, 0.164000004529953, 0.9864590167999268,
+ -0.04418899863958359, -0.7303190231323242, 0.6816750168800354,
+ 0.13880200684070587, -0.9897300004959106, -0.034189000725746155,
+ -0.4243049919605255, 0.6667500138282776, 0.6127070188522339,
+ -0.25888898968696594, 0.5453789830207825, 0.7972059845924377,
+ 0.012268000282347202, -0.01928500086069107, 0.9997389912605286,
+ 0.3986299932003021, -0.8456630110740662, 0.3548929989337921,
+ 0.5375639796257019, -0.6107370257377625, -0.5813990235328674,
+ -0.6961889863014221, -0.02449600026011467, 0.7174400091171265,
+ -0.42579901218414307, -0.010804999619722366, 0.9047530293464661,
+ 0.022041000425815582, -0.001623000018298626, 0.9997559785842896,
+ 0.9995980262756348, -0.01155600044876337, 0.02587899938225746,
+ 0.7095860242843628, -0.009670999832451344, -0.7045519948005676,
+ 0, 0, 1,
+ 0, 0, 1,
+ 0.7626410126686096, -0.31482499837875366, 0.5650339722633362,
+ 0.8245400190353394, -0.00001700000029813964, 0.5658029913902283,
+ 0.8479819893836975, -0.3500339984893799, -0.39799800515174866,
+ 0.917701005935669, -0.00003300000025774352, -0.397271990776062,
+ 0.8641409873962402, -0.35644200444221497, -0.3552600145339966,
+ 0.9352689981460571, -0.00011200000153621659, -0.3539389967918396,
+ 0.7209920287132263, -0.29793301224708557, 0.6256250143051147,
+ 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058,
+ 0, 0, 1,
+ 0.5833569765090942, -0.5833380222320557, 0.5651649832725525,
+ 0.648485004901886, -0.6484479904174805, -0.3987259864807129,
+ 0.6608719825744629, -0.6607480049133301, -0.35589399933815,
+ 0.5518630146980286, -0.5517799854278564, 0.6252880096435547,
+ 0, 0, 1,
+ 0.31482499837875366, -0.762628972530365, 0.5650510191917419,
+ 0.35004499554634094, -0.8479880094528198, -0.39797601103782654,
+ 0.35647401213645935, -0.8641520142555237, -0.35519900918006897,
+ 0.29798200726509094, -0.7210670113563538, 0.6255149841308594,
+ 0, 0, 1,
+ -0.00001700000029813964, -0.8245400190353394, 0.5658029913902283,
+ -0.00003300000025774352, -0.917701005935669, -0.397271990776062,
+ -0.00011200000153621659, -0.9352689981460571, -0.3539389967918396,
+ -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894,
+ 0, 0, 1,
+ 0, 0, 1,
+ -0.31482499837875366, -0.7626410126686096, 0.5650339722633362,
+ -0.00001700000029813964, -0.8245400190353394, 0.5658029913902283,
+ -0.3500339984893799, -0.8479819893836975, -0.39799800515174866,
+ -0.00003300000025774352, -0.917701005935669, -0.397271990776062,
+ -0.35644200444221497, -0.8641409873962402, -0.3552600145339966,
+ -0.00011200000153621659, -0.9352689981460571, -0.3539389967918396,
+ -0.29793301224708557, -0.7209920287132263, 0.6256250143051147,
+ -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894,
+ 0, 0, 1,
+ -0.5833380222320557, -0.5833569765090942, 0.5651649832725525,
+ -0.6484479904174805, -0.648485004901886, -0.3987259864807129,
+ -0.6607480049133301, -0.6608719825744629, -0.35589399933815,
+ -0.5517799854278564, -0.5518630146980286, 0.6252880096435547,
+ 0, 0, 1,
+ -0.762628972530365, -0.31482499837875366, 0.5650510191917419,
+ -0.8479880094528198, -0.35004499554634094, -0.39797601103782654,
+ -0.8641520142555237, -0.35647401213645935, -0.35519900918006897,
+ -0.7210670113563538, -0.29798200726509094, 0.6255149841308594,
+ 0, 0, 1,
+ -0.8245400190353394, 0.00001700000029813964, 0.5658029913902283,
+ -0.917701005935669, 0.00003300000025774352, -0.397271990776062,
+ -0.9352689981460571, 0.00011200000153621659, -0.3539389967918396,
+ -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894,
+ 0, 0, 1,
+ 0, 0, 1,
+ -0.7626410126686096, 0.31482499837875366, 0.5650339722633362,
+ -0.8245400190353394, 0.00001700000029813964, 0.5658029913902283,
+ -0.8479819893836975, 0.3500339984893799, -0.39799800515174866,
+ -0.917701005935669, 0.00003300000025774352, -0.397271990776062,
+ -0.8641409873962402, 0.35644200444221497, -0.3552600145339966,
+ -0.9352689981460571, 0.00011200000153621659, -0.3539389967918396,
+ -0.7209920287132263, 0.29793301224708557, 0.6256250143051147,
+ -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894,
+ 0, 0, 1,
+ -0.5833569765090942, 0.5833380222320557, 0.5651649832725525,
+ -0.648485004901886, 0.6484479904174805, -0.3987259864807129,
+ -0.6608719825744629, 0.6607480049133301, -0.35589399933815,
+ -0.5518630146980286, 0.5517799854278564, 0.6252880096435547,
+ 0, 0, 1,
+ -0.31482499837875366, 0.762628972530365, 0.5650510191917419,
+ -0.35004499554634094, 0.8479880094528198, -0.39797601103782654,
+ -0.35647401213645935, 0.8641520142555237, -0.35519900918006897,
+ -0.29798200726509094, 0.7210670113563538, 0.6255149841308594,
+ 0, 0, 1,
+ 0.00001700000029813964, 0.8245400190353394, 0.5658029913902283,
+ 0.00003300000025774352, 0.917701005935669, -0.397271990776062,
+ 0.00011200000153621659, 0.9352689981460571, -0.3539389967918396,
+ 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894,
+ 0, 0, 1,
+ 0, 0, 1,
+ 0.31482499837875366, 0.7626410126686096, 0.5650339722633362,
+ 0.00001700000029813964, 0.8245400190353394, 0.5658029913902283,
+ 0.3500339984893799, 0.8479819893836975, -0.39799800515174866,
+ 0.00003300000025774352, 0.917701005935669, -0.397271990776062,
+ 0.35644200444221497, 0.8641409873962402, -0.3552600145339966,
+ 0.00011200000153621659, 0.9352689981460571, -0.3539389967918396,
+ 0.29793301224708557, 0.7209920287132263, 0.6256250143051147,
+ 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894,
+ 0, 0, 1,
+ 0.5833380222320557, 0.5833569765090942, 0.5651649832725525,
+ 0.6484479904174805, 0.648485004901886, -0.3987259864807129,
+ 0.6607480049133301, 0.6608719825744629, -0.35589399933815,
+ 0.5517799854278564, 0.5518630146980286, 0.6252880096435547,
+ 0, 0, 1,
+ 0.762628972530365, 0.31482499837875366, 0.5650510191917419,
+ 0.8479880094528198, 0.35004499554634094, -0.39797601103782654,
+ 0.8641520142555237, 0.35647401213645935, -0.35519900918006897,
+ 0.7210670113563538, 0.29798200726509094, 0.6255149841308594,
+ 0, 0, 1,
+ 0.8245400190353394, -0.00001700000029813964, 0.5658029913902283,
+ 0.917701005935669, -0.00003300000025774352, -0.397271990776062,
+ 0.9352689981460571, -0.00011200000153621659, -0.3539389967918396,
+ 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058,
+ 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058,
+ 0.7209920287132263, -0.29793301224708557, 0.6256250143051147,
+ 0.21797800064086914, -0.0902160033583641, 0.9717749953269958,
+ 0.23658299446105957, 0, 0.9716110229492188,
+ 0.1595889925956726, -0.06596100330352783, 0.9849770069122314,
+ 0.17308400571346283, 0, 0.9849069714546204,
+ 0.3504979908466339, -0.1447400003671646, 0.9253119826316833,
+ 0.37970298528671265, 0, 0.925108015537262,
+ 0.48558899760246277, -0.20147399604320526, 0.8506529927253723,
+ 0.5266720056533813, 0, 0.8500679731369019,
+ 0.5518630146980286, -0.5517799854278564, 0.6252880096435547,
+ 0.16663099825382233, -0.16663099825382233, 0.9718379974365234,
+ 0.12190800160169601, -0.12190800160169601, 0.9850260019302368,
+ 0.2676680088043213, -0.2676680088043213, 0.9255849719047546,
+ 0.37131500244140625, -0.37131500244140625, 0.8510289788246155,
+ 0.29798200726509094, -0.7210670113563538, 0.6255149841308594,
+ 0.0902160033583641, -0.21797800064086914, 0.9717749953269958,
+ 0.06596100330352783, -0.1595889925956726, 0.9849770069122314,
+ 0.1447400003671646, -0.3504979908466339, 0.9253119826316833,
+ 0.20147399604320526, -0.48558899760246277, 0.8506529927253723,
+ -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894,
+ 0, -0.23658299446105957, 0.9716110229492188,
+ 0, -0.17308400571346283, 0.9849069714546204,
+ 0, -0.37970298528671265, 0.925108015537262,
+ 0, -0.5266720056533813, 0.8500679731369019,
+ -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894,
+ -0.29793301224708557, -0.7209920287132263, 0.6256250143051147,
+ -0.0902160033583641, -0.21797800064086914, 0.9717749953269958,
+ 0, -0.23658299446105957, 0.9716110229492188,
+ -0.06596100330352783, -0.1595889925956726, 0.9849770069122314,
+ 0, -0.17308400571346283, 0.9849069714546204,
+ -0.1447400003671646, -0.3504979908466339, 0.9253119826316833,
+ 0, -0.37970298528671265, 0.925108015537262,
+ -0.20147399604320526, -0.48558899760246277, 0.8506529927253723,
+ 0, -0.5266720056533813, 0.8500679731369019,
+ -0.5517799854278564, -0.5518630146980286, 0.6252880096435547,
+ -0.16663099825382233, -0.16663099825382233, 0.9718379974365234,
+ -0.12190800160169601, -0.12190800160169601, 0.9850260019302368,
+ -0.2676680088043213, -0.2676680088043213, 0.9255849719047546,
+ -0.37131500244140625, -0.37131500244140625, 0.8510289788246155,
+ -0.7210670113563538, -0.29798200726509094, 0.6255149841308594,
+ -0.21797800064086914, -0.0902160033583641, 0.9717749953269958,
+ -0.1595889925956726, -0.06596100330352783, 0.9849770069122314,
+ -0.3504979908466339, -0.1447400003671646, 0.9253119826316833,
+ -0.48558899760246277, -0.20147399604320526, 0.8506529927253723,
+ -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894,
+ -0.23658299446105957, 0, 0.9716110229492188,
+ -0.17308400571346283, 0, 0.9849069714546204,
+ -0.37970298528671265, 0, 0.925108015537262,
+ -0.5266720056533813, 0, 0.8500679731369019,
+ -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894,
+ -0.7209920287132263, 0.29793301224708557, 0.6256250143051147,
+ -0.21797800064086914, 0.0902160033583641, 0.9717749953269958,
+ -0.23658299446105957, 0, 0.9716110229492188,
+ -0.1595889925956726, 0.06596100330352783, 0.9849770069122314,
+ -0.17308400571346283, 0, 0.9849069714546204,
+ -0.3504979908466339, 0.1447400003671646, 0.9253119826316833,
+ -0.37970298528671265, 0, 0.925108015537262,
+ -0.48558899760246277, 0.20147399604320526, 0.8506529927253723,
+ -0.5266720056533813, 0, 0.8500679731369019,
+ -0.5518630146980286, 0.5517799854278564, 0.6252880096435547,
+ -0.16663099825382233, 0.16663099825382233, 0.9718379974365234,
+ -0.12190800160169601, 0.12190800160169601, 0.9850260019302368,
+ -0.2676680088043213, 0.2676680088043213, 0.9255849719047546,
+ -0.37131500244140625, 0.37131500244140625, 0.8510289788246155,
+ -0.29798200726509094, 0.7210670113563538, 0.6255149841308594,
+ -0.0902160033583641, 0.21797800064086914, 0.9717749953269958,
+ -0.06596100330352783, 0.1595889925956726, 0.9849770069122314,
+ -0.1447400003671646, 0.3504979908466339, 0.9253119826316833,
+ -0.20147399604320526, 0.48558899760246277, 0.8506529927253723,
+ 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894,
+ 0, 0.23658299446105957, 0.9716110229492188,
+ 0, 0.17308400571346283, 0.9849069714546204,
+ 0, 0.37970298528671265, 0.925108015537262,
+ 0, 0.5266720056533813, 0.8500679731369019,
+ 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894,
+ 0.29793301224708557, 0.7209920287132263, 0.6256250143051147,
+ 0.0902160033583641, 0.21797800064086914, 0.9717749953269958,
+ 0, 0.23658299446105957, 0.9716110229492188,
+ 0.06596100330352783, 0.1595889925956726, 0.9849770069122314,
+ 0, 0.17308400571346283, 0.9849069714546204,
+ 0.1447400003671646, 0.3504979908466339, 0.9253119826316833,
+ 0, 0.37970298528671265, 0.925108015537262,
+ 0.20147399604320526, 0.48558899760246277, 0.8506529927253723,
+ 0, 0.5266720056533813, 0.8500679731369019,
+ 0.5517799854278564, 0.5518630146980286, 0.6252880096435547,
+ 0.16663099825382233, 0.16663099825382233, 0.9718379974365234,
+ 0.12190800160169601, 0.12190800160169601, 0.9850260019302368,
+ 0.2676680088043213, 0.2676680088043213, 0.9255849719047546,
+ 0.37131500244140625, 0.37131500244140625, 0.8510289788246155,
+ 0.7210670113563538, 0.29798200726509094, 0.6255149841308594,
+ 0.21797800064086914, 0.0902160033583641, 0.9717749953269958,
+ 0.1595889925956726, 0.06596100330352783, 0.9849770069122314,
+ 0.3504979908466339, 0.1447400003671646, 0.9253119826316833,
+ 0.48558899760246277, 0.20147399604320526, 0.8506529927253723,
+ 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058,
+ 0.23658299446105957, 0, 0.9716110229492188,
+ 0.17308400571346283, 0, 0.9849069714546204,
+ 0.37970298528671265, 0, 0.925108015537262,
+ 0.5266720056533813, 0, 0.8500679731369019,
+};
+
+static const int teapot_indices[] = {
+ 0, 1, 2,
+ 2, 3, 0,
+ 3, 2, 4,
+ 4, 5, 3,
+ 5, 4, 6,
+ 6, 7, 5,
+ 7, 6, 8,
+ 8, 9, 7,
+ 1, 10, 11,
+ 11, 2, 1,
+ 2, 11, 12,
+ 12, 4, 2,
+ 4, 12, 13,
+ 13, 6, 4,
+ 6, 13, 14,
+ 14, 8, 6,
+ 10, 15, 16,
+ 16, 11, 10,
+ 11, 16, 17,
+ 17, 12, 11,
+ 12, 17, 18,
+ 18, 13, 12,
+ 13, 18, 19,
+ 19, 14, 13,
+ 15, 20, 21,
+ 21, 16, 15,
+ 16, 21, 22,
+ 22, 17, 16,
+ 17, 22, 23,
+ 23, 18, 17,
+ 18, 23, 24,
+ 24, 19, 18,
+ 25, 26, 27,
+ 27, 28, 25,
+ 28, 27, 29,
+ 29, 30, 28,
+ 30, 29, 31,
+ 31, 32, 30,
+ 32, 31, 33,
+ 33, 34, 32,
+ 26, 35, 36,
+ 36, 27, 26,
+ 27, 36, 37,
+ 37, 29, 27,
+ 29, 37, 38,
+ 38, 31, 29,
+ 31, 38, 39,
+ 39, 33, 31,
+ 35, 40, 41,
+ 41, 36, 35,
+ 36, 41, 42,
+ 42, 37, 36,
+ 37, 42, 43,
+ 43, 38, 37,
+ 38, 43, 44,
+ 44, 39, 38,
+ 40, 45, 46,
+ 46, 41, 40,
+ 41, 46, 47,
+ 47, 42, 41,
+ 42, 47, 48,
+ 48, 43, 42,
+ 43, 48, 49,
+ 49, 44, 43,
+ 50, 51, 52,
+ 52, 53, 50,
+ 53, 52, 54,
+ 54, 55, 53,
+ 55, 54, 56,
+ 56, 57, 55,
+ 57, 56, 58,
+ 58, 59, 57,
+ 51, 60, 61,
+ 61, 52, 51,
+ 52, 61, 62,
+ 62, 54, 52,
+ 54, 62, 63,
+ 63, 56, 54,
+ 56, 63, 64,
+ 64, 58, 56,
+ 60, 65, 66,
+ 66, 61, 60,
+ 61, 66, 67,
+ 67, 62, 61,
+ 62, 67, 68,
+ 68, 63, 62,
+ 63, 68, 69,
+ 69, 64, 63,
+ 65, 70, 71,
+ 71, 66, 65,
+ 66, 71, 72,
+ 72, 67, 66,
+ 67, 72, 73,
+ 73, 68, 67,
+ 68, 73, 74,
+ 74, 69, 68,
+ 75, 76, 77,
+ 77, 78, 75,
+ 78, 77, 79,
+ 79, 80, 78,
+ 80, 79, 81,
+ 81, 82, 80,
+ 82, 81, 83,
+ 83, 84, 82,
+ 76, 85, 86,
+ 86, 77, 76,
+ 77, 86, 87,
+ 87, 79, 77,
+ 79, 87, 88,
+ 88, 81, 79,
+ 81, 88, 89,
+ 89, 83, 81,
+ 85, 90, 91,
+ 91, 86, 85,
+ 86, 91, 92,
+ 92, 87, 86,
+ 87, 92, 93,
+ 93, 88, 87,
+ 88, 93, 94,
+ 94, 89, 88,
+ 90, 95, 96,
+ 96, 91, 90,
+ 91, 96, 97,
+ 97, 92, 91,
+ 92, 97, 98,
+ 98, 93, 92,
+ 93, 98, 99,
+ 99, 94, 93,
+ 100, 101, 102,
+ 102, 103, 100,
+ 103, 102, 104,
+ 104, 105, 103,
+ 105, 104, 106,
+ 106, 107, 105,
+ 107, 106, 108,
+ 108, 109, 107,
+ 101, 110, 111,
+ 111, 102, 101,
+ 102, 111, 112,
+ 112, 104, 102,
+ 104, 112, 113,
+ 113, 106, 104,
+ 106, 113, 114,
+ 114, 108, 106,
+ 110, 115, 116,
+ 116, 111, 110,
+ 111, 116, 117,
+ 117, 112, 111,
+ 112, 117, 118,
+ 118, 113, 112,
+ 113, 118, 119,
+ 119, 114, 113,
+ 115, 120, 121,
+ 121, 116, 115,
+ 116, 121, 122,
+ 122, 117, 116,
+ 117, 122, 123,
+ 123, 118, 117,
+ 118, 123, 124,
+ 124, 119, 118,
+ 125, 126, 127,
+ 127, 128, 125,
+ 128, 127, 129,
+ 129, 130, 128,
+ 130, 129, 131,
+ 131, 132, 130,
+ 132, 131, 133,
+ 133, 134, 132,
+ 126, 135, 136,
+ 136, 127, 126,
+ 127, 136, 137,
+ 137, 129, 127,
+ 129, 137, 138,
+ 138, 131, 129,
+ 131, 138, 139,
+ 139, 133, 131,
+ 135, 140, 141,
+ 141, 136, 135,
+ 136, 141, 142,
+ 142, 137, 136,
+ 137, 142, 143,
+ 143, 138, 137,
+ 138, 143, 144,
+ 144, 139, 138,
+ 140, 145, 146,
+ 146, 141, 140,
+ 141, 146, 147,
+ 147, 142, 141,
+ 142, 147, 148,
+ 148, 143, 142,
+ 143, 148, 149,
+ 149, 144, 143,
+ 150, 151, 152,
+ 152, 153, 150,
+ 153, 152, 154,
+ 154, 155, 153,
+ 155, 154, 156,
+ 156, 157, 155,
+ 157, 156, 158,
+ 158, 159, 157,
+ 151, 160, 161,
+ 161, 152, 151,
+ 152, 161, 162,
+ 162, 154, 152,
+ 154, 162, 163,
+ 163, 156, 154,
+ 156, 163, 164,
+ 164, 158, 156,
+ 160, 165, 166,
+ 166, 161, 160,
+ 161, 166, 167,
+ 167, 162, 161,
+ 162, 167, 168,
+ 168, 163, 162,
+ 163, 168, 169,
+ 169, 164, 163,
+ 165, 170, 171,
+ 171, 166, 165,
+ 166, 171, 172,
+ 172, 167, 166,
+ 167, 172, 173,
+ 173, 168, 167,
+ 168, 173, 174,
+ 174, 169, 168,
+ 175, 176, 177,
+ 177, 178, 175,
+ 178, 177, 179,
+ 179, 180, 178,
+ 180, 179, 181,
+ 181, 182, 180,
+ 182, 181, 183,
+ 183, 184, 182,
+ 176, 185, 186,
+ 186, 177, 176,
+ 177, 186, 187,
+ 187, 179, 177,
+ 179, 187, 188,
+ 188, 181, 179,
+ 181, 188, 189,
+ 189, 183, 181,
+ 185, 190, 191,
+ 191, 186, 185,
+ 186, 191, 192,
+ 192, 187, 186,
+ 187, 192, 193,
+ 193, 188, 187,
+ 188, 193, 194,
+ 194, 189, 188,
+ 190, 195, 196,
+ 196, 191, 190,
+ 191, 196, 197,
+ 197, 192, 191,
+ 192, 197, 198,
+ 198, 193, 192,
+ 193, 198, 199,
+ 199, 194, 193,
+ 200, 201, 202,
+ 202, 203, 200,
+ 203, 202, 204,
+ 204, 205, 203,
+ 205, 204, 206,
+ 206, 207, 205,
+ 207, 206, 208,
+ 208, 209, 207,
+ 201, 210, 211,
+ 211, 202, 201,
+ 202, 211, 212,
+ 212, 204, 202,
+ 204, 212, 213,
+ 213, 206, 204,
+ 206, 213, 214,
+ 214, 208, 206,
+ 210, 215, 216,
+ 216, 211, 210,
+ 211, 216, 217,
+ 217, 212, 211,
+ 212, 217, 218,
+ 218, 213, 212,
+ 213, 218, 219,
+ 219, 214, 213,
+ 215, 220, 221,
+ 221, 216, 215,
+ 216, 221, 222,
+ 222, 217, 216,
+ 217, 222, 223,
+ 223, 218, 217,
+ 218, 223, 224,
+ 224, 219, 218,
+ 225, 226, 227,
+ 227, 228, 225,
+ 228, 227, 229,
+ 229, 230, 228,
+ 230, 229, 231,
+ 231, 232, 230,
+ 232, 231, 233,
+ 233, 234, 232,
+ 226, 235, 236,
+ 236, 227, 226,
+ 227, 236, 237,
+ 237, 229, 227,
+ 229, 237, 238,
+ 238, 231, 229,
+ 231, 238, 239,
+ 239, 233, 231,
+ 235, 240, 241,
+ 241, 236, 235,
+ 236, 241, 242,
+ 242, 237, 236,
+ 237, 242, 243,
+ 243, 238, 237,
+ 238, 243, 244,
+ 244, 239, 238,
+ 240, 245, 246,
+ 246, 241, 240,
+ 241, 246, 247,
+ 247, 242, 241,
+ 242, 247, 248,
+ 248, 243, 242,
+ 243, 248, 249,
+ 249, 244, 243,
+ 250, 251, 252,
+ 252, 253, 250,
+ 253, 252, 254,
+ 254, 255, 253,
+ 255, 254, 256,
+ 256, 257, 255,
+ 257, 256, 258,
+ 258, 259, 257,
+ 251, 260, 261,
+ 261, 252, 251,
+ 252, 261, 262,
+ 262, 254, 252,
+ 254, 262, 263,
+ 263, 256, 254,
+ 256, 263, 264,
+ 264, 258, 256,
+ 260, 265, 266,
+ 266, 261, 260,
+ 261, 266, 267,
+ 267, 262, 261,
+ 262, 267, 268,
+ 268, 263, 262,
+ 263, 268, 269,
+ 269, 264, 263,
+ 265, 270, 271,
+ 271, 266, 265,
+ 266, 271, 272,
+ 272, 267, 266,
+ 267, 272, 273,
+ 273, 268, 267,
+ 268, 273, 274,
+ 274, 269, 268,
+ 275, 276, 277,
+ 277, 278, 275,
+ 278, 277, 279,
+ 279, 280, 278,
+ 280, 279, 281,
+ 281, 282, 280,
+ 282, 281, 283,
+ 283, 284, 282,
+ 276, 285, 286,
+ 286, 277, 276,
+ 277, 286, 287,
+ 287, 279, 277,
+ 279, 287, 288,
+ 288, 281, 279,
+ 281, 288, 289,
+ 289, 283, 281,
+ 285, 290, 291,
+ 291, 286, 285,
+ 286, 291, 292,
+ 292, 287, 286,
+ 287, 292, 293,
+ 293, 288, 287,
+ 288, 293, 294,
+ 294, 289, 288,
+ 290, 295, 296,
+ 296, 291, 290,
+ 291, 296, 297,
+ 297, 292, 291,
+ 292, 297, 298,
+ 298, 293, 292,
+ 293, 298, 299,
+ 299, 294, 293,
+ 300, 301, 302,
+ 302, 303, 300,
+ 303, 302, 304,
+ 304, 305, 303,
+ 305, 304, 306,
+ 306, 307, 305,
+ 307, 306, 308,
+ 308, 309, 307,
+ 301, 310, 311,
+ 311, 302, 301,
+ 302, 311, 312,
+ 312, 304, 302,
+ 304, 312, 313,
+ 313, 306, 304,
+ 306, 313, 314,
+ 314, 308, 306,
+ 310, 315, 316,
+ 316, 311, 310,
+ 311, 316, 317,
+ 317, 312, 311,
+ 312, 317, 318,
+ 318, 313, 312,
+ 313, 318, 319,
+ 319, 314, 313,
+ 315, 320, 321,
+ 321, 316, 315,
+ 316, 321, 322,
+ 322, 317, 316,
+ 317, 322, 323,
+ 323, 318, 317,
+ 318, 323, 324,
+ 324, 319, 318,
+ 325, 326, 327,
+ 327, 328, 325,
+ 328, 327, 329,
+ 329, 330, 328,
+ 330, 329, 331,
+ 331, 332, 330,
+ 332, 331, 333,
+ 333, 334, 332,
+ 326, 335, 336,
+ 336, 327, 326,
+ 327, 336, 337,
+ 337, 329, 327,
+ 329, 337, 338,
+ 338, 331, 329,
+ 331, 338, 339,
+ 339, 333, 331,
+ 335, 340, 341,
+ 341, 336, 335,
+ 336, 341, 342,
+ 342, 337, 336,
+ 337, 342, 343,
+ 343, 338, 337,
+ 338, 343, 344,
+ 344, 339, 338,
+ 340, 345, 346,
+ 346, 341, 340,
+ 341, 346, 347,
+ 347, 342, 341,
+ 342, 347, 348,
+ 348, 343, 342,
+ 343, 348, 349,
+ 349, 344, 343,
+ 350, 351, 352,
+ 352, 353, 350,
+ 353, 352, 354,
+ 354, 355, 353,
+ 355, 354, 356,
+ 356, 357, 355,
+ 357, 356, 358,
+ 358, 359, 357,
+ 351, 360, 361,
+ 361, 352, 351,
+ 352, 361, 362,
+ 362, 354, 352,
+ 354, 362, 363,
+ 363, 356, 354,
+ 356, 363, 364,
+ 364, 358, 356,
+ 360, 365, 366,
+ 366, 361, 360,
+ 361, 366, 367,
+ 367, 362, 361,
+ 362, 367, 368,
+ 368, 363, 362,
+ 363, 368, 369,
+ 369, 364, 363,
+ 365, 370, 371,
+ 371, 366, 365,
+ 366, 371, 372,
+ 372, 367, 366,
+ 367, 372, 373,
+ 373, 368, 367,
+ 368, 373, 374,
+ 374, 369, 368,
+ 375, 376, 377,
+ 377, 378, 375,
+ 378, 377, 379,
+ 379, 380, 378,
+ 380, 379, 381,
+ 381, 382, 380,
+ 382, 381, 383,
+ 383, 384, 382,
+ 376, 385, 386,
+ 386, 377, 376,
+ 377, 386, 387,
+ 387, 379, 377,
+ 379, 387, 388,
+ 388, 381, 379,
+ 381, 388, 389,
+ 389, 383, 381,
+ 385, 390, 391,
+ 391, 386, 385,
+ 386, 391, 392,
+ 392, 387, 386,
+ 387, 392, 393,
+ 393, 388, 387,
+ 388, 393, 394,
+ 394, 389, 388,
+ 390, 395, 396,
+ 396, 391, 390,
+ 391, 396, 397,
+ 397, 392, 391,
+ 392, 397, 398,
+ 398, 393, 392,
+ 393, 398, 399,
+ 399, 394, 393,
+ 400, 401, 402,
+ 402, 403, 400,
+ 403, 402, 404,
+ 404, 405, 403,
+ 405, 404, 406,
+ 406, 407, 405,
+ 407, 406, 408,
+ 408, 409, 407,
+ 401, 410, 411,
+ 411, 402, 401,
+ 402, 411, 412,
+ 412, 404, 402,
+ 404, 412, 413,
+ 413, 406, 404,
+ 406, 413, 414,
+ 414, 408, 406,
+ 410, 415, 416,
+ 416, 411, 410,
+ 411, 416, 417,
+ 417, 412, 411,
+ 412, 417, 418,
+ 418, 413, 412,
+ 413, 418, 419,
+ 419, 414, 413,
+ 415, 420, 421,
+ 421, 416, 415,
+ 416, 421, 422,
+ 422, 417, 416,
+ 417, 422, 423,
+ 423, 418, 417,
+ 418, 423, 424,
+ 424, 419, 418,
+ 425, 426, 427,
+ 427, 428, 425,
+ 428, 427, 429,
+ 429, 430, 428,
+ 430, 429, 431,
+ 431, 432, 430,
+ 432, 431, 433,
+ 433, 434, 432,
+ 426, 435, 436,
+ 436, 427, 426,
+ 427, 436, 437,
+ 437, 429, 427,
+ 429, 437, 438,
+ 438, 431, 429,
+ 431, 438, 439,
+ 439, 433, 431,
+ 435, 440, 441,
+ 441, 436, 435,
+ 436, 441, 442,
+ 442, 437, 436,
+ 437, 442, 443,
+ 443, 438, 437,
+ 438, 443, 444,
+ 444, 439, 438,
+ 440, 445, 446,
+ 446, 441, 440,
+ 441, 446, 447,
+ 447, 442, 441,
+ 442, 447, 448,
+ 448, 443, 442,
+ 443, 448, 449,
+ 449, 444, 443,
+ 450, 451, 452,
+ 452, 453, 450,
+ 453, 452, 454,
+ 454, 455, 453,
+ 455, 454, 456,
+ 456, 457, 455,
+ 457, 456, 458,
+ 458, 459, 457,
+ 451, 460, 461,
+ 461, 452, 451,
+ 452, 461, 462,
+ 462, 454, 452,
+ 454, 462, 463,
+ 463, 456, 454,
+ 456, 463, 464,
+ 464, 458, 456,
+ 460, 465, 466,
+ 466, 461, 460,
+ 461, 466, 467,
+ 467, 462, 461,
+ 462, 467, 468,
+ 468, 463, 462,
+ 463, 468, 469,
+ 469, 464, 463,
+ 465, 470, 471,
+ 471, 466, 465,
+ 466, 471, 472,
+ 472, 467, 466,
+ 467, 472, 473,
+ 473, 468, 467,
+ 468, 473, 474,
+ 474, 469, 468,
+ 475, 476, 477,
+ 477, 478, 475,
+ 478, 477, 479,
+ 479, 480, 478,
+ 480, 479, 481,
+ 481, 482, 480,
+ 482, 481, 483,
+ 483, 484, 482,
+ 476, 485, 486,
+ 486, 477, 476,
+ 477, 486, 487,
+ 487, 479, 477,
+ 479, 487, 488,
+ 488, 481, 479,
+ 481, 488, 489,
+ 489, 483, 481,
+ 485, 490, 491,
+ 491, 486, 485,
+ 486, 491, 492,
+ 492, 487, 486,
+ 487, 492, 493,
+ 493, 488, 487,
+ 488, 493, 494,
+ 494, 489, 488,
+ 490, 495, 496,
+ 496, 491, 490,
+ 491, 496, 497,
+ 497, 492, 491,
+ 492, 497, 498,
+ 498, 493, 492,
+ 493, 498, 499,
+ 499, 494, 493,
+ 500, 501, 502,
+ 502, 503, 500,
+ 503, 502, 504,
+ 504, 505, 503,
+ 505, 504, 506,
+ 506, 507, 505,
+ 507, 506, 508,
+ 508, 509, 507,
+ 501, 510, 511,
+ 511, 502, 501,
+ 502, 511, 512,
+ 512, 504, 502,
+ 504, 512, 513,
+ 513, 506, 504,
+ 506, 513, 514,
+ 514, 508, 506,
+ 510, 515, 516,
+ 516, 511, 510,
+ 511, 516, 517,
+ 517, 512, 511,
+ 512, 517, 518,
+ 518, 513, 512,
+ 513, 518, 519,
+ 519, 514, 513,
+ 515, 520, 521,
+ 521, 516, 515,
+ 516, 521, 522,
+ 522, 517, 516,
+ 517, 522, 523,
+ 523, 518, 517,
+ 518, 523, 524,
+ 524, 519, 518,
+ 525, 526, 527,
+ 527, 528, 525,
+ 528, 527, 529,
+ 529, 530, 528,
+ 530, 529, 531,
+ 531, 532, 530,
+ 532, 531, 533,
+ 533, 534, 532,
+ 526, 535, 536,
+ 536, 527, 526,
+ 527, 536, 537,
+ 537, 529, 527,
+ 529, 537, 538,
+ 538, 531, 529,
+ 531, 538, 539,
+ 539, 533, 531,
+ 535, 540, 541,
+ 541, 536, 535,
+ 536, 541, 542,
+ 542, 537, 536,
+ 537, 542, 543,
+ 543, 538, 537,
+ 538, 543, 544,
+ 544, 539, 538,
+ 540, 545, 546,
+ 546, 541, 540,
+ 541, 546, 547,
+ 547, 542, 541,
+ 542, 547, 548,
+ 548, 543, 542,
+ 543, 548, 549,
+ 549, 544, 543,
+ 550, 551, 552,
+ 552, 553, 550,
+ 553, 552, 554,
+ 554, 555, 553,
+ 555, 554, 556,
+ 556, 557, 555,
+ 557, 556, 558,
+ 558, 559, 557,
+ 551, 560, 561,
+ 561, 552, 551,
+ 552, 561, 562,
+ 562, 554, 552,
+ 554, 562, 563,
+ 563, 556, 554,
+ 556, 563, 564,
+ 564, 558, 556,
+ 560, 565, 566,
+ 566, 561, 560,
+ 561, 566, 567,
+ 567, 562, 561,
+ 562, 567, 568,
+ 568, 563, 562,
+ 563, 568, 569,
+ 569, 564, 563,
+ 565, 570, 571,
+ 571, 566, 565,
+ 566, 571, 572,
+ 572, 567, 566,
+ 567, 572, 573,
+ 573, 568, 567,
+ 568, 573, 574,
+ 574, 569, 568,
+ 575, 576, 577,
+ 577, 578, 575,
+ 578, 577, 579,
+ 579, 580, 578,
+ 580, 579, 581,
+ 581, 582, 580,
+ 582, 581, 583,
+ 583, 584, 582,
+ 576, 585, 586,
+ 586, 577, 576,
+ 577, 586, 587,
+ 587, 579, 577,
+ 579, 587, 588,
+ 588, 581, 579,
+ 581, 588, 589,
+ 589, 583, 581,
+ 585, 590, 591,
+ 591, 586, 585,
+ 586, 591, 592,
+ 592, 587, 586,
+ 587, 592, 593,
+ 593, 588, 587,
+ 588, 593, 594,
+ 594, 589, 588,
+ 590, 595, 596,
+ 596, 591, 590,
+ 591, 596, 597,
+ 597, 592, 591,
+ 592, 597, 598,
+ 598, 593, 592,
+ 593, 598, 599,
+ 599, 594, 593,
+ 600, 601, 602,
+ 602, 603, 600,
+ 603, 602, 604,
+ 604, 605, 603,
+ 605, 604, 606,
+ 606, 607, 605,
+ 607, 606, 608,
+ 608, 609, 607,
+ 601, 610, 611,
+ 611, 602, 601,
+ 602, 611, 612,
+ 612, 604, 602,
+ 604, 612, 613,
+ 613, 606, 604,
+ 606, 613, 614,
+ 614, 608, 606,
+ 610, 615, 616,
+ 616, 611, 610,
+ 611, 616, 617,
+ 617, 612, 611,
+ 612, 617, 618,
+ 618, 613, 612,
+ 613, 618, 619,
+ 619, 614, 613,
+ 615, 620, 621,
+ 621, 616, 615,
+ 616, 621, 622,
+ 622, 617, 616,
+ 617, 622, 623,
+ 623, 618, 617,
+ 618, 623, 624,
+ 624, 619, 618,
+ 625, 626, 627,
+ 627, 628, 625,
+ 628, 627, 629,
+ 629, 630, 628,
+ 630, 629, 631,
+ 631, 632, 630,
+ 632, 631, 633,
+ 633, 634, 632,
+ 626, 635, 636,
+ 636, 627, 626,
+ 627, 636, 637,
+ 637, 629, 627,
+ 629, 637, 638,
+ 638, 631, 629,
+ 631, 638, 639,
+ 639, 633, 631,
+ 635, 640, 641,
+ 641, 636, 635,
+ 636, 641, 642,
+ 642, 637, 636,
+ 637, 642, 643,
+ 643, 638, 637,
+ 638, 643, 644,
+ 644, 639, 638,
+ 640, 645, 646,
+ 646, 641, 640,
+ 641, 646, 647,
+ 647, 642, 641,
+ 642, 647, 648,
+ 648, 643, 642,
+ 643, 648, 649,
+ 649, 644, 643,
+ 650, 651, 652,
+ 652, 653, 650,
+ 653, 652, 654,
+ 654, 655, 653,
+ 655, 654, 656,
+ 656, 657, 655,
+ 657, 656, 658,
+ 658, 659, 657,
+ 651, 660, 661,
+ 661, 652, 651,
+ 652, 661, 662,
+ 662, 654, 652,
+ 654, 662, 663,
+ 663, 656, 654,
+ 656, 663, 664,
+ 664, 658, 656,
+ 660, 665, 666,
+ 666, 661, 660,
+ 661, 666, 667,
+ 667, 662, 661,
+ 662, 667, 668,
+ 668, 663, 662,
+ 663, 668, 669,
+ 669, 664, 663,
+ 665, 670, 671,
+ 671, 666, 665,
+ 666, 671, 672,
+ 672, 667, 666,
+ 667, 672, 673,
+ 673, 668, 667,
+ 668, 673, 674,
+ 674, 669, 668,
+ 675, 676, 677,
+ 677, 678, 675,
+ 678, 677, 679,
+ 679, 680, 678,
+ 680, 679, 681,
+ 681, 682, 680,
+ 682, 681, 683,
+ 683, 684, 682,
+ 676, 685, 686,
+ 686, 677, 676,
+ 677, 686, 687,
+ 687, 679, 677,
+ 679, 687, 688,
+ 688, 681, 679,
+ 681, 688, 689,
+ 689, 683, 681,
+ 685, 690, 691,
+ 691, 686, 685,
+ 686, 691, 692,
+ 692, 687, 686,
+ 687, 692, 693,
+ 693, 688, 687,
+ 688, 693, 694,
+ 694, 689, 688,
+ 690, 695, 696,
+ 696, 691, 690,
+ 691, 696, 697,
+ 697, 692, 691,
+ 692, 697, 698,
+ 698, 693, 692,
+ 693, 698, 699,
+ 699, 694, 693,
+ 700, 701, 702,
+ 702, 703, 700,
+ 703, 702, 704,
+ 704, 705, 703,
+ 705, 704, 706,
+ 706, 707, 705,
+ 707, 706, 708,
+ 708, 709, 707,
+ 701, 710, 711,
+ 711, 702, 701,
+ 702, 711, 712,
+ 712, 704, 702,
+ 704, 712, 713,
+ 713, 706, 704,
+ 706, 713, 714,
+ 714, 708, 706,
+ 710, 715, 716,
+ 716, 711, 710,
+ 711, 716, 717,
+ 717, 712, 711,
+ 712, 717, 718,
+ 718, 713, 712,
+ 713, 718, 719,
+ 719, 714, 713,
+ 715, 720, 721,
+ 721, 716, 715,
+ 716, 721, 722,
+ 722, 717, 716,
+ 717, 722, 723,
+ 723, 718, 717,
+ 718, 723, 724,
+ 724, 719, 718,
+ 725, 726, 727,
+ 727, 728, 725,
+ 728, 727, 729,
+ 729, 730, 728,
+ 730, 729, 731,
+ 731, 732, 730,
+ 732, 731, 733,
+ 733, 734, 732,
+ 726, 735, 736,
+ 736, 727, 726,
+ 727, 736, 737,
+ 737, 729, 727,
+ 729, 737, 738,
+ 738, 731, 729,
+ 731, 738, 739,
+ 739, 733, 731,
+ 735, 740, 741,
+ 741, 736, 735,
+ 736, 741, 742,
+ 742, 737, 736,
+ 737, 742, 743,
+ 743, 738, 737,
+ 738, 743, 744,
+ 744, 739, 738,
+ 740, 745, 746,
+ 746, 741, 740,
+ 741, 746, 747,
+ 747, 742, 741,
+ 742, 747, 748,
+ 748, 743, 742,
+ 743, 748, 749,
+ 749, 744, 743,
+ 750, 751, 752,
+ 752, 753, 750,
+ 753, 752, 754,
+ 754, 755, 753,
+ 755, 754, 756,
+ 756, 757, 755,
+ 757, 756, 758,
+ 758, 759, 757,
+ 751, 760, 761,
+ 761, 752, 751,
+ 752, 761, 762,
+ 762, 754, 752,
+ 754, 762, 763,
+ 763, 756, 754,
+ 756, 763, 764,
+ 764, 758, 756,
+ 760, 765, 766,
+ 766, 761, 760,
+ 761, 766, 767,
+ 767, 762, 761,
+ 762, 767, 768,
+ 768, 763, 762,
+ 763, 768, 769,
+ 769, 764, 763,
+ 765, 770, 771,
+ 771, 766, 765,
+ 766, 771, 772,
+ 772, 767, 766,
+ 767, 772, 773,
+ 773, 768, 767,
+ 768, 773, 774,
+ 774, 769, 768,
+ 775, 776, 777,
+ 777, 778, 775,
+ 778, 777, 779,
+ 779, 780, 778,
+ 780, 779, 781,
+ 781, 782, 780,
+ 782, 781, 783,
+ 783, 784, 782,
+ 776, 785, 786,
+ 786, 777, 776,
+ 777, 786, 787,
+ 787, 779, 777,
+ 779, 787, 788,
+ 788, 781, 779,
+ 781, 788, 789,
+ 789, 783, 781,
+ 785, 790, 791,
+ 791, 786, 785,
+ 786, 791, 792,
+ 792, 787, 786,
+ 787, 792, 793,
+ 793, 788, 787,
+ 788, 793, 794,
+ 794, 789, 788,
+ 790, 795, 796,
+ 796, 791, 790,
+ 791, 796, 797,
+ 797, 792, 791,
+ 792, 797, 798,
+ 798, 793, 792,
+ 793, 798, 799,
+ 799, 794, 793,
+};
diff --git a/demos/smoke/README.md b/demos/smoke/README.md
new file mode 100644
index 000000000..da0c7a9f9
--- /dev/null
+++ b/demos/smoke/README.md
@@ -0,0 +1 @@
+This demo demonstrates multi-threaded command buffer recording.
diff --git a/demos/smoke/Shell.cpp b/demos/smoke/Shell.cpp
new file mode 100644
index 000000000..e774e45ea
--- /dev/null
+++ b/demos/smoke/Shell.cpp
@@ -0,0 +1,591 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <array>
+#include <iostream>
+#include <string>
+#include <sstream>
+#include <set>
+#include "Helpers.h"
+#include "Shell.h"
+#include "Game.h"
+
+Shell::Shell(Game &game)
+ : game_(game), settings_(game.settings()), ctx_(),
+ game_tick_(1.0f / settings_.ticks_per_second), game_time_(game_tick_)
+{
+ // require generic WSI extensions
+ instance_extensions_.push_back(VK_KHR_SURFACE_EXTENSION_NAME);
+ device_extensions_.push_back(VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+
+ // require "standard" validation layers
+ if (settings_.validate) {
+ device_layers_.push_back("VK_LAYER_LUNARG_standard_validation");
+ instance_layers_.push_back("VK_LAYER_LUNARG_standard_validation");
+
+ instance_extensions_.push_back(VK_EXT_DEBUG_REPORT_EXTENSION_NAME);
+ }
+}
+
+void Shell::log(LogPriority priority, const char *msg)
+{
+ std::ostream &st = (priority >= LOG_ERR) ? std::cerr : std::cout;
+ st << msg << "\n";
+}
+
+void Shell::init_vk()
+{
+ vk::init_dispatch_table_top(load_vk());
+
+ init_instance();
+ vk::init_dispatch_table_middle(ctx_.instance, false);
+
+ init_debug_report();
+ init_physical_dev();
+}
+
+void Shell::cleanup_vk()
+{
+ if (settings_.validate)
+ vk::DestroyDebugReportCallbackEXT(ctx_.instance, ctx_.debug_report, nullptr);
+
+ vk::DestroyInstance(ctx_.instance, nullptr);
+}
+
+bool Shell::debug_report_callback(VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT obj_type,
+ uint64_t object,
+ size_t location,
+ int32_t msg_code,
+ const char *layer_prefix,
+ const char *msg)
+{
+ LogPriority prio = LOG_WARN;
+ if (flags & VK_DEBUG_REPORT_ERROR_BIT_EXT)
+ prio = LOG_ERR;
+ else if (flags & (VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT))
+ prio = LOG_WARN;
+ else if (flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)
+ prio = LOG_INFO;
+ else if (flags & VK_DEBUG_REPORT_DEBUG_BIT_EXT)
+ prio = LOG_DEBUG;
+
+ std::stringstream ss;
+ ss << layer_prefix << ": " << msg;
+
+ log(prio, ss.str().c_str());
+
+ return false;
+}
+
+void Shell::assert_all_instance_layers() const
+{
+ // enumerate instance layer
+ std::vector<VkLayerProperties> layers;
+ vk::enumerate(layers);
+
+ std::set<std::string> layer_names;
+ for (const auto &layer : layers)
+ layer_names.insert(layer.layerName);
+
+ // all listed instance layers are required
+ for (const auto &name : instance_layers_) {
+ if (layer_names.find(name) == layer_names.end()) {
+ std::stringstream ss;
+ ss << "instance layer " << name << " is missing";
+ throw std::runtime_error(ss.str());
+ }
+ }
+}
+
+void Shell::assert_all_instance_extensions() const
+{
+ // enumerate instance extensions
+ std::vector<VkExtensionProperties> exts;
+ vk::enumerate(nullptr, exts);
+
+ std::set<std::string> ext_names;
+ for (const auto &ext : exts)
+ ext_names.insert(ext.extensionName);
+
+ // all listed instance extensions are required
+ for (const auto &name : instance_extensions_) {
+ if (ext_names.find(name) == ext_names.end()) {
+ std::stringstream ss;
+ ss << "instance extension " << name << " is missing";
+ throw std::runtime_error(ss.str());
+ }
+ }
+}
+
+bool Shell::has_all_device_layers(VkPhysicalDevice phy) const
+{
+ // enumerate device layers
+ std::vector<VkLayerProperties> layers;
+ vk::enumerate(phy, layers);
+
+ std::set<std::string> layer_names;
+ for (const auto &layer : layers)
+ layer_names.insert(layer.layerName);
+
+ // all listed device layers are required
+ for (const auto &name : device_layers_) {
+ if (layer_names.find(name) == layer_names.end())
+ return false;
+ }
+
+ return true;
+}
+
+bool Shell::has_all_device_extensions(VkPhysicalDevice phy) const
+{
+ // enumerate device extensions
+ std::vector<VkExtensionProperties> exts;
+ vk::enumerate(phy, nullptr, exts);
+
+ std::set<std::string> ext_names;
+ for (const auto &ext : exts)
+ ext_names.insert(ext.extensionName);
+
+ // all listed device extensions are required
+ for (const auto &name : device_extensions_) {
+ if (ext_names.find(name) == ext_names.end())
+ return false;
+ }
+
+ return true;
+}
+
+void Shell::init_instance()
+{
+ assert_all_instance_layers();
+ assert_all_instance_extensions();
+
+ VkApplicationInfo app_info = {};
+ app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
+ app_info.pApplicationName = settings_.name.c_str();
+ app_info.applicationVersion = 0;
+ app_info.apiVersion = VK_API_VERSION_1_0;
+
+ VkInstanceCreateInfo instance_info = {};
+ instance_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ instance_info.pApplicationInfo = &app_info;
+ instance_info.enabledLayerCount = static_cast<uint32_t>(instance_layers_.size());
+ instance_info.ppEnabledLayerNames = instance_layers_.data();
+ instance_info.enabledExtensionCount = static_cast<uint32_t>(instance_extensions_.size());
+ instance_info.ppEnabledExtensionNames = instance_extensions_.data();
+
+ vk::assert_success(vk::CreateInstance(&instance_info, nullptr, &ctx_.instance));
+}
+
+void Shell::init_debug_report()
+{
+ if (!settings_.validate)
+ return;
+
+ VkDebugReportCallbackCreateInfoEXT debug_report_info = {};
+ debug_report_info.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
+
+ debug_report_info.flags = VK_DEBUG_REPORT_WARNING_BIT_EXT |
+ VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT |
+ VK_DEBUG_REPORT_ERROR_BIT_EXT;
+    if (settings_.validate_verbose) {
+        // add the verbose bits to, rather than replace, the default flags,
+        // so warnings and errors are still reported in verbose mode
+        debug_report_info.flags |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT |
+                                   VK_DEBUG_REPORT_DEBUG_BIT_EXT;
+    }
+
+ debug_report_info.pfnCallback = debug_report_callback;
+ debug_report_info.pUserData = reinterpret_cast<void *>(this);
+
+ vk::assert_success(vk::CreateDebugReportCallbackEXT(ctx_.instance,
+ &debug_report_info, nullptr, &ctx_.debug_report));
+}
+
+void Shell::init_physical_dev()
+{
+ // enumerate physical devices
+ std::vector<VkPhysicalDevice> phys;
+ vk::assert_success(vk::enumerate(ctx_.instance, phys));
+
+ ctx_.physical_dev = VK_NULL_HANDLE;
+ for (auto phy : phys) {
+ if (!has_all_device_layers(phy) || !has_all_device_extensions(phy))
+ continue;
+
+ // get queue properties
+ std::vector<VkQueueFamilyProperties> queues;
+ vk::get(phy, queues);
+
+ int game_queue_family = -1, present_queue_family = -1;
+ for (uint32_t i = 0; i < queues.size(); i++) {
+ const VkQueueFamilyProperties &q = queues[i];
+
+            // game queues require only VK_QUEUE_GRAPHICS_BIT
+ const VkFlags game_queue_flags = VK_QUEUE_GRAPHICS_BIT;
+ if (game_queue_family < 0 &&
+ (q.queueFlags & game_queue_flags) == game_queue_flags)
+ game_queue_family = i;
+
+ // present queue must support the surface
+ if (present_queue_family < 0 && can_present(phy, i))
+ present_queue_family = i;
+
+ if (game_queue_family >= 0 && present_queue_family >= 0)
+ break;
+ }
+
+ if (game_queue_family >= 0 && present_queue_family >= 0) {
+ ctx_.physical_dev = phy;
+ ctx_.game_queue_family = game_queue_family;
+ ctx_.present_queue_family = present_queue_family;
+ break;
+ }
+ }
+
+ if (ctx_.physical_dev == VK_NULL_HANDLE)
+ throw std::runtime_error("failed to find any capable Vulkan physical device");
+}
+
+void Shell::create_context()
+{
+ create_dev();
+ vk::init_dispatch_table_bottom(ctx_.instance, ctx_.dev);
+
+ vk::GetDeviceQueue(ctx_.dev, ctx_.game_queue_family, 0, &ctx_.game_queue);
+ vk::GetDeviceQueue(ctx_.dev, ctx_.present_queue_family, 0, &ctx_.present_queue);
+
+ create_back_buffers();
+
+ // initialize ctx_.{surface,format} before attach_shell
+ create_swapchain();
+
+ game_.attach_shell(*this);
+}
+
+void Shell::destroy_context()
+{
+ if (ctx_.dev == VK_NULL_HANDLE)
+ return;
+
+ vk::DeviceWaitIdle(ctx_.dev);
+
+ destroy_swapchain();
+
+ game_.detach_shell();
+
+ destroy_back_buffers();
+
+ ctx_.game_queue = VK_NULL_HANDLE;
+ ctx_.present_queue = VK_NULL_HANDLE;
+
+ vk::DestroyDevice(ctx_.dev, nullptr);
+ ctx_.dev = VK_NULL_HANDLE;
+}
+
+void Shell::create_dev()
+{
+ VkDeviceCreateInfo dev_info = {};
+ dev_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
+
+ const std::vector<float> queue_priorities(settings_.queue_count, 0.0f);
+ std::array<VkDeviceQueueCreateInfo, 2> queue_info = {};
+ queue_info[0].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
+ queue_info[0].queueFamilyIndex = ctx_.game_queue_family;
+ queue_info[0].queueCount = settings_.queue_count;
+ queue_info[0].pQueuePriorities = queue_priorities.data();
+
+ if (ctx_.game_queue_family != ctx_.present_queue_family) {
+ queue_info[1].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
+ queue_info[1].queueFamilyIndex = ctx_.present_queue_family;
+ queue_info[1].queueCount = 1;
+ queue_info[1].pQueuePriorities = queue_priorities.data();
+
+ dev_info.queueCreateInfoCount = 2;
+ } else {
+ dev_info.queueCreateInfoCount = 1;
+ }
+
+ dev_info.pQueueCreateInfos = queue_info.data();
+
+ dev_info.enabledLayerCount = static_cast<uint32_t>(device_layers_.size());
+ dev_info.ppEnabledLayerNames = device_layers_.data();
+ dev_info.enabledExtensionCount = static_cast<uint32_t>(device_extensions_.size());
+ dev_info.ppEnabledExtensionNames = device_extensions_.data();
+
+ // disable all features
+ VkPhysicalDeviceFeatures features = {};
+ dev_info.pEnabledFeatures = &features;
+
+ vk::assert_success(vk::CreateDevice(ctx_.physical_dev, &dev_info, nullptr, &ctx_.dev));
+}
+
+void Shell::create_back_buffers()
+{
+ VkSemaphoreCreateInfo sem_info = {};
+ sem_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
+
+ VkFenceCreateInfo fence_info = {};
+ fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ fence_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;
+
+    // BackBuffer tracks which swapchain image and its associated sync
+    // primitives are busy.  Having more BackBuffers than swapchain images
+    // may allow us to replace a CPU wait on present_fence with a GPU wait
+    // on acquire_semaphore.
+ const int count = settings_.back_buffer_count + 1;
+ for (int i = 0; i < count; i++) {
+ BackBuffer buf = {};
+ vk::assert_success(vk::CreateSemaphore(ctx_.dev, &sem_info, nullptr, &buf.acquire_semaphore));
+ vk::assert_success(vk::CreateSemaphore(ctx_.dev, &sem_info, nullptr, &buf.render_semaphore));
+ vk::assert_success(vk::CreateFence(ctx_.dev, &fence_info, nullptr, &buf.present_fence));
+
+ ctx_.back_buffers.push(buf);
+ }
+}
+
+void Shell::destroy_back_buffers()
+{
+ while (!ctx_.back_buffers.empty()) {
+ const auto &buf = ctx_.back_buffers.front();
+
+ vk::DestroySemaphore(ctx_.dev, buf.acquire_semaphore, nullptr);
+ vk::DestroySemaphore(ctx_.dev, buf.render_semaphore, nullptr);
+ vk::DestroyFence(ctx_.dev, buf.present_fence, nullptr);
+
+ ctx_.back_buffers.pop();
+ }
+}
+
+void Shell::create_swapchain()
+{
+ ctx_.surface = create_surface(ctx_.instance);
+
+ VkBool32 supported;
+ vk::assert_success(vk::GetPhysicalDeviceSurfaceSupportKHR(ctx_.physical_dev,
+ ctx_.present_queue_family, ctx_.surface, &supported));
+ // this should be guaranteed by the platform-specific can_present call
+ assert(supported);
+
+ std::vector<VkSurfaceFormatKHR> formats;
+ vk::get(ctx_.physical_dev, ctx_.surface, formats);
+ ctx_.format = formats[0];
+
+ // defer to resize_swapchain()
+ ctx_.swapchain = VK_NULL_HANDLE;
+ ctx_.extent.width = (uint32_t) -1;
+ ctx_.extent.height = (uint32_t) -1;
+}
+
+void Shell::destroy_swapchain()
+{
+ if (ctx_.swapchain != VK_NULL_HANDLE) {
+ game_.detach_swapchain();
+
+ vk::DestroySwapchainKHR(ctx_.dev, ctx_.swapchain, nullptr);
+ ctx_.swapchain = VK_NULL_HANDLE;
+ }
+
+ vk::DestroySurfaceKHR(ctx_.instance, ctx_.surface, nullptr);
+ ctx_.surface = VK_NULL_HANDLE;
+}
+
+void Shell::resize_swapchain(uint32_t width_hint, uint32_t height_hint)
+{
+ VkSurfaceCapabilitiesKHR caps;
+ vk::assert_success(vk::GetPhysicalDeviceSurfaceCapabilitiesKHR(ctx_.physical_dev,
+ ctx_.surface, &caps));
+
+ VkExtent2D extent = caps.currentExtent;
+ // use the hints
+ if (extent.width == (uint32_t) -1) {
+ extent.width = width_hint;
+ extent.height = height_hint;
+ }
+    // clamp width to the surface capabilities, guarding against broken hints
+ if (extent.width < caps.minImageExtent.width)
+ extent.width = caps.minImageExtent.width;
+ else if (extent.width > caps.maxImageExtent.width)
+ extent.width = caps.maxImageExtent.width;
+ // clamp height
+ if (extent.height < caps.minImageExtent.height)
+ extent.height = caps.minImageExtent.height;
+ else if (extent.height > caps.maxImageExtent.height)
+ extent.height = caps.maxImageExtent.height;
+
+ if (ctx_.extent.width == extent.width && ctx_.extent.height == extent.height)
+ return;
+
+ uint32_t image_count = settings_.back_buffer_count;
+ if (image_count < caps.minImageCount)
+ image_count = caps.minImageCount;
+    // a maxImageCount of 0 means there is no limit
+    else if (caps.maxImageCount > 0 && image_count > caps.maxImageCount)
+        image_count = caps.maxImageCount;
+
+ assert(caps.supportedUsageFlags & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT);
+ assert(caps.supportedTransforms & caps.currentTransform);
+ assert(caps.supportedCompositeAlpha & (VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR |
+ VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR));
+ VkCompositeAlphaFlagBitsKHR composite_alpha =
+ (caps.supportedCompositeAlpha & VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR) ?
+ VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR : VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
+
+ std::vector<VkPresentModeKHR> modes;
+ vk::get(ctx_.physical_dev, ctx_.surface, modes);
+
+ // FIFO is the only mode universally supported
+ VkPresentModeKHR mode = VK_PRESENT_MODE_FIFO_KHR;
+ for (auto m : modes) {
+ if ((settings_.vsync && m == VK_PRESENT_MODE_MAILBOX_KHR) ||
+ (!settings_.vsync && m == VK_PRESENT_MODE_IMMEDIATE_KHR)) {
+ mode = m;
+ break;
+ }
+ }
+
+ VkSwapchainCreateInfoKHR swapchain_info = {};
+ swapchain_info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
+ swapchain_info.surface = ctx_.surface;
+ swapchain_info.minImageCount = image_count;
+ swapchain_info.imageFormat = ctx_.format.format;
+ swapchain_info.imageColorSpace = ctx_.format.colorSpace;
+ swapchain_info.imageExtent = extent;
+ swapchain_info.imageArrayLayers = 1;
+ swapchain_info.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
+
+ std::vector<uint32_t> queue_families(1, ctx_.game_queue_family);
+ if (ctx_.game_queue_family != ctx_.present_queue_family) {
+ queue_families.push_back(ctx_.present_queue_family);
+
+ swapchain_info.imageSharingMode = VK_SHARING_MODE_CONCURRENT;
+ swapchain_info.queueFamilyIndexCount = (uint32_t)queue_families.size();
+ swapchain_info.pQueueFamilyIndices = queue_families.data();
+ } else {
+ swapchain_info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
+ }
+
+    swapchain_info.preTransform = caps.currentTransform;
+ swapchain_info.compositeAlpha = composite_alpha;
+ swapchain_info.presentMode = mode;
+ swapchain_info.clipped = true;
+ swapchain_info.oldSwapchain = ctx_.swapchain;
+
+ vk::assert_success(vk::CreateSwapchainKHR(ctx_.dev, &swapchain_info, nullptr, &ctx_.swapchain));
+ ctx_.extent = extent;
+
+ // destroy the old swapchain
+ if (swapchain_info.oldSwapchain != VK_NULL_HANDLE) {
+ game_.detach_swapchain();
+
+ vk::DeviceWaitIdle(ctx_.dev);
+ vk::DestroySwapchainKHR(ctx_.dev, swapchain_info.oldSwapchain, nullptr);
+ }
+
+ game_.attach_swapchain();
+}
+
+void Shell::add_game_time(float time)
+{
+ int max_ticks = 3;
+
+ if (!settings_.no_tick)
+ game_time_ += time;
+
+ while (game_time_ >= game_tick_ && max_ticks--) {
+ game_.on_tick();
+ game_time_ -= game_tick_;
+ }
+}
+
+void Shell::acquire_back_buffer()
+{
+ // acquire just once when not presenting
+ if (settings_.no_present &&
+ ctx_.acquired_back_buffer.acquire_semaphore != VK_NULL_HANDLE)
+ return;
+
+ auto &buf = ctx_.back_buffers.front();
+
+    // wait for the present fence, which signals once the acquire and
+    // render semaphores are no longer in use
+ vk::assert_success(vk::WaitForFences(ctx_.dev, 1, &buf.present_fence,
+ true, UINT64_MAX));
+ // reset the fence
+ vk::assert_success(vk::ResetFences(ctx_.dev, 1, &buf.present_fence));
+
+ vk::assert_success(vk::AcquireNextImageKHR(ctx_.dev, ctx_.swapchain,
+ UINT64_MAX, buf.acquire_semaphore, VK_NULL_HANDLE,
+ &buf.image_index));
+
+ ctx_.acquired_back_buffer = buf;
+ ctx_.back_buffers.pop();
+}
+
+void Shell::present_back_buffer()
+{
+ const auto &buf = ctx_.acquired_back_buffer;
+
+ if (!settings_.no_render)
+ game_.on_frame(game_time_ / game_tick_);
+
+ if (settings_.no_present) {
+ fake_present();
+ return;
+ }
+
+ VkPresentInfoKHR present_info = {};
+ present_info.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
+ present_info.waitSemaphoreCount = 1;
+ present_info.pWaitSemaphores = (settings_.no_render) ?
+ &buf.acquire_semaphore : &buf.render_semaphore;
+ present_info.swapchainCount = 1;
+ present_info.pSwapchains = &ctx_.swapchain;
+ present_info.pImageIndices = &buf.image_index;
+
+ vk::assert_success(vk::QueuePresentKHR(ctx_.present_queue, &present_info));
+
+    // an empty submit whose only purpose is to signal the present fence
+    vk::assert_success(vk::QueueSubmit(ctx_.present_queue, 0, nullptr, buf.present_fence));
+ ctx_.back_buffers.push(buf);
+}
+
+void Shell::fake_present()
+{
+ const auto &buf = ctx_.acquired_back_buffer;
+
+ assert(settings_.no_present);
+
+ // wait render semaphore and signal acquire semaphore
+ if (!settings_.no_render) {
+ VkPipelineStageFlags stage = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
+ VkSubmitInfo submit_info = {};
+ submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
+ submit_info.waitSemaphoreCount = 1;
+ submit_info.pWaitSemaphores = &buf.render_semaphore;
+ submit_info.pWaitDstStageMask = &stage;
+ submit_info.signalSemaphoreCount = 1;
+ submit_info.pSignalSemaphores = &buf.acquire_semaphore;
+ vk::assert_success(vk::QueueSubmit(ctx_.game_queue, 1, &submit_info, VK_NULL_HANDLE));
+ }
+
+ // push the buffer back just once for Shell::cleanup_vk
+ if (buf.acquire_semaphore != ctx_.back_buffers.back().acquire_semaphore)
+ ctx_.back_buffers.push(buf);
+}
diff --git a/demos/smoke/Shell.h b/demos/smoke/Shell.h
new file mode 100644
index 000000000..2aa0b6f48
--- /dev/null
+++ b/demos/smoke/Shell.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SHELL_H
+#define SHELL_H
+
+#include <queue>
+#include <vector>
+#include <stdexcept>
+#include <vulkan/vulkan.h>
+
+#include "Game.h"
+
+class Game;
+
+class Shell {
+public:
+ Shell(const Shell &sh) = delete;
+ Shell &operator=(const Shell &sh) = delete;
+ virtual ~Shell() {}
+
+ struct BackBuffer {
+ uint32_t image_index;
+
+ VkSemaphore acquire_semaphore;
+ VkSemaphore render_semaphore;
+
+ // signaled when this struct is ready for reuse
+ VkFence present_fence;
+ };
+
+ struct Context {
+ VkInstance instance;
+ VkDebugReportCallbackEXT debug_report;
+
+ VkPhysicalDevice physical_dev;
+ uint32_t game_queue_family;
+ uint32_t present_queue_family;
+
+ VkDevice dev;
+ VkQueue game_queue;
+ VkQueue present_queue;
+
+ std::queue<BackBuffer> back_buffers;
+
+ VkSurfaceKHR surface;
+ VkSurfaceFormatKHR format;
+
+ VkSwapchainKHR swapchain;
+ VkExtent2D extent;
+
+ BackBuffer acquired_back_buffer;
+ };
+ const Context &context() const { return ctx_; }
+
+ enum LogPriority {
+ LOG_DEBUG,
+ LOG_INFO,
+ LOG_WARN,
+ LOG_ERR,
+ };
+ virtual void log(LogPriority priority, const char *msg);
+
+ virtual void run() = 0;
+ virtual void quit() = 0;
+
+protected:
+ Shell(Game &game);
+
+ void init_vk();
+ void cleanup_vk();
+
+ void create_context();
+ void destroy_context();
+
+ void resize_swapchain(uint32_t width_hint, uint32_t height_hint);
+
+ void add_game_time(float time);
+
+ void acquire_back_buffer();
+ void present_back_buffer();
+
+ Game &game_;
+ const Game::Settings &settings_;
+
+ std::vector<const char *> instance_layers_;
+ std::vector<const char *> instance_extensions_;
+
+ std::vector<const char *> device_layers_;
+ std::vector<const char *> device_extensions_;
+
+private:
+ bool debug_report_callback(VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT obj_type,
+ uint64_t object,
+ size_t location,
+ int32_t msg_code,
+ const char *layer_prefix,
+ const char *msg);
+ static VKAPI_ATTR VkBool32 VKAPI_CALL debug_report_callback(
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT obj_type,
+ uint64_t object,
+ size_t location,
+ int32_t msg_code,
+ const char *layer_prefix,
+ const char *msg,
+ void *user_data)
+ {
+ Shell *shell = reinterpret_cast<Shell *>(user_data);
+ return shell->debug_report_callback(flags, obj_type, object, location, msg_code, layer_prefix, msg);
+ }
+
+ void assert_all_instance_layers() const;
+ void assert_all_instance_extensions() const;
+
+ bool has_all_device_layers(VkPhysicalDevice phy) const;
+ bool has_all_device_extensions(VkPhysicalDevice phy) const;
+
+ // called by init_vk
+ virtual PFN_vkGetInstanceProcAddr load_vk() = 0;
+ virtual bool can_present(VkPhysicalDevice phy, uint32_t queue_family) = 0;
+ void init_instance();
+ void init_debug_report();
+ void init_physical_dev();
+
+ // called by create_context
+ void create_dev();
+ void create_back_buffers();
+ void destroy_back_buffers();
+ virtual VkSurfaceKHR create_surface(VkInstance instance) = 0;
+ void create_swapchain();
+ void destroy_swapchain();
+
+ void fake_present();
+
+ Context ctx_;
+
+ const float game_tick_;
+ float game_time_;
+};
+
+#endif // SHELL_H
diff --git a/demos/smoke/ShellAndroid.cpp b/demos/smoke/ShellAndroid.cpp
new file mode 100644
index 000000000..4b813b467
--- /dev/null
+++ b/demos/smoke/ShellAndroid.cpp
@@ -0,0 +1,227 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <dlfcn.h>
+#include <time.h>
+#include <android/log.h>
+
+#include "Helpers.h"
+#include "Game.h"
+#include "ShellAndroid.h"
+
+namespace {
+
+// copied from ShellXcb.cpp
+class PosixTimer {
+public:
+ PosixTimer()
+ {
+ reset();
+ }
+
+ void reset()
+ {
+ clock_gettime(CLOCK_MONOTONIC, &start_);
+ }
+
+ double get() const
+ {
+ struct timespec now;
+ clock_gettime(CLOCK_MONOTONIC, &now);
+
+ constexpr long one_s_in_ns = 1000 * 1000 * 1000;
+ constexpr double one_s_in_ns_d = static_cast<double>(one_s_in_ns);
+
+ time_t s = now.tv_sec - start_.tv_sec;
+ long ns;
+        if (now.tv_nsec >= start_.tv_nsec) {
+ ns = now.tv_nsec - start_.tv_nsec;
+ } else {
+ assert(s > 0);
+ s--;
+ ns = one_s_in_ns - (start_.tv_nsec - now.tv_nsec);
+ }
+
+ return static_cast<double>(s) + static_cast<double>(ns) / one_s_in_ns_d;
+ }
+
+private:
+ struct timespec start_;
+};
+
+} // namespace
+
+ShellAndroid::ShellAndroid(android_app &app, Game &game) : Shell(game), app_(app)
+{
+ instance_extensions_.push_back(VK_KHR_ANDROID_SURFACE_EXTENSION_NAME);
+
+ app_dummy();
+ app_.userData = this;
+ app_.onAppCmd = on_app_cmd;
+ app_.onInputEvent = on_input_event;
+
+ init_vk();
+}
+
+ShellAndroid::~ShellAndroid()
+{
+ cleanup_vk();
+ dlclose(lib_handle_);
+}
+
+void ShellAndroid::log(LogPriority priority, const char *msg)
+{
+ int prio;
+
+ switch (priority) {
+ case LOG_DEBUG:
+ prio = ANDROID_LOG_DEBUG;
+ break;
+ case LOG_INFO:
+ prio = ANDROID_LOG_INFO;
+ break;
+ case LOG_WARN:
+ prio = ANDROID_LOG_WARN;
+ break;
+ case LOG_ERR:
+ prio = ANDROID_LOG_ERROR;
+ break;
+ default:
+ prio = ANDROID_LOG_UNKNOWN;
+ break;
+ }
+
+ __android_log_write(prio, settings_.name.c_str(), msg);
+}
+
+PFN_vkGetInstanceProcAddr ShellAndroid::load_vk()
+{
+ const char filename[] = "libvulkan.so";
+ void *handle = nullptr, *symbol = nullptr;
+
+ handle = dlopen(filename, RTLD_LAZY);
+ if (handle)
+ symbol = dlsym(handle, "vkGetInstanceProcAddr");
+ if (!symbol) {
+ if (handle)
+ dlclose(handle);
+
+ throw std::runtime_error(dlerror());
+ }
+
+ lib_handle_ = handle;
+
+ return reinterpret_cast<PFN_vkGetInstanceProcAddr>(symbol);
+}
+
+VkSurfaceKHR ShellAndroid::create_surface(VkInstance instance)
+{
+ VkAndroidSurfaceCreateInfoKHR surface_info = {};
+ surface_info.sType = VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR;
+ surface_info.window = app_.window;
+
+ VkSurfaceKHR surface;
+ vk::assert_success(vk::CreateAndroidSurfaceKHR(instance, &surface_info, nullptr, &surface));
+
+ return surface;
+}
+
+void ShellAndroid::on_app_cmd(int32_t cmd)
+{
+ switch (cmd) {
+ case APP_CMD_INIT_WINDOW:
+ create_context();
+ resize_swapchain(0, 0);
+ break;
+ case APP_CMD_TERM_WINDOW:
+ destroy_context();
+ break;
+ case APP_CMD_WINDOW_RESIZED:
+ resize_swapchain(0, 0);
+ break;
+ case APP_CMD_STOP:
+ ANativeActivity_finish(app_.activity);
+ break;
+ default:
+ break;
+ }
+}
+
+int32_t ShellAndroid::on_input_event(const AInputEvent *event)
+{
+ if (AInputEvent_getType(event) != AINPUT_EVENT_TYPE_MOTION)
+ return false;
+
+ bool handled = false;
+
+ switch (AMotionEvent_getAction(event) & AMOTION_EVENT_ACTION_MASK) {
+ case AMOTION_EVENT_ACTION_UP:
+ game_.on_key(Game::KEY_SPACE);
+ handled = true;
+ break;
+ default:
+ break;
+ }
+
+ return handled;
+}
+
+void ShellAndroid::quit()
+{
+ ANativeActivity_finish(app_.activity);
+}
+
+void ShellAndroid::run()
+{
+ PosixTimer timer;
+
+ double current_time = timer.get();
+
+ while (true) {
+ struct android_poll_source *source;
+ while (true) {
+ int timeout = (settings_.animate && app_.window) ? 0 : -1;
+ if (ALooper_pollAll(timeout, nullptr, nullptr,
+ reinterpret_cast<void **>(&source)) < 0)
+ break;
+
+ if (source)
+ source->process(&app_, source);
+ }
+
+ if (app_.destroyRequested)
+ break;
+
+ if (!app_.window)
+ continue;
+
+ acquire_back_buffer();
+
+ double t = timer.get();
+ add_game_time(static_cast<float>(t - current_time));
+
+ present_back_buffer();
+
+ current_time = t;
+ }
+}
diff --git a/demos/smoke/ShellAndroid.h b/demos/smoke/ShellAndroid.h
new file mode 100644
index 000000000..dae3d0726
--- /dev/null
+++ b/demos/smoke/ShellAndroid.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SHELL_ANDROID_H
+#define SHELL_ANDROID_H
+
+#include <android_native_app_glue.h>
+#include "Shell.h"
+
+class ShellAndroid : public Shell {
+public:
+ ShellAndroid(android_app &app, Game &game);
+ ~ShellAndroid();
+
+ void log(LogPriority priority, const char *msg);
+
+ void run();
+ void quit();
+
+private:
+ PFN_vkGetInstanceProcAddr load_vk();
+ bool can_present(VkPhysicalDevice phy, uint32_t queue_family) { return true; }
+
+ VkSurfaceKHR create_surface(VkInstance instance);
+
+ void on_app_cmd(int32_t cmd);
+ int32_t on_input_event(const AInputEvent *event);
+
+ static inline void on_app_cmd(android_app *app, int32_t cmd);
+ static inline int32_t on_input_event(android_app *app, AInputEvent *event);
+
+ android_app &app_;
+
+ void *lib_handle_;
+};
+
+void ShellAndroid::on_app_cmd(android_app *app, int32_t cmd)
+{
+ auto android = reinterpret_cast<ShellAndroid *>(app->userData);
+ android->on_app_cmd(cmd);
+}
+
+int32_t ShellAndroid::on_input_event(android_app *app, AInputEvent *event)
+{
+ auto android = reinterpret_cast<ShellAndroid *>(app->userData);
+ return android->on_input_event(event);
+}
+
+#endif // SHELL_ANDROID_H
diff --git a/demos/smoke/ShellWin32.cpp b/demos/smoke/ShellWin32.cpp
new file mode 100644
index 000000000..1a1a844cc
--- /dev/null
+++ b/demos/smoke/ShellWin32.cpp
@@ -0,0 +1,256 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <iostream>
+#include <sstream>
+
+#include "Helpers.h"
+#include "Game.h"
+#include "ShellWin32.h"
+
+namespace {
+
+class Win32Timer {
+public:
+ Win32Timer()
+ {
+ LARGE_INTEGER freq;
+ QueryPerformanceFrequency(&freq);
+ freq_ = static_cast<double>(freq.QuadPart);
+
+ reset();
+ }
+
+ void reset()
+ {
+ QueryPerformanceCounter(&start_);
+ }
+
+ double get() const
+ {
+ LARGE_INTEGER now;
+ QueryPerformanceCounter(&now);
+
+ return static_cast<double>(now.QuadPart - start_.QuadPart) / freq_;
+ }
+
+private:
+ double freq_;
+ LARGE_INTEGER start_;
+};
+
+} // namespace
+
+ShellWin32::ShellWin32(Game &game) : Shell(game), hwnd_(nullptr)
+{
+ instance_extensions_.push_back(VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
+ init_vk();
+}
+
+ShellWin32::~ShellWin32()
+{
+ cleanup_vk();
+ FreeLibrary(hmodule_);
+}
+
+void ShellWin32::create_window()
+{
+ const std::string class_name(settings_.name + "WindowClass");
+
+ hinstance_ = GetModuleHandle(nullptr);
+
+ WNDCLASSEX win_class = {};
+ win_class.cbSize = sizeof(WNDCLASSEX);
+ win_class.style = CS_HREDRAW | CS_VREDRAW;
+ win_class.lpfnWndProc = window_proc;
+ win_class.hInstance = hinstance_;
+ win_class.hCursor = LoadCursor(nullptr, IDC_ARROW);
+ win_class.lpszClassName = class_name.c_str();
+ RegisterClassEx(&win_class);
+
+ const DWORD win_style =
+ WS_CLIPSIBLINGS | WS_CLIPCHILDREN | WS_VISIBLE | WS_OVERLAPPEDWINDOW;
+
+ RECT win_rect = { 0, 0, settings_.initial_width, settings_.initial_height };
+ AdjustWindowRect(&win_rect, win_style, false);
+
+ hwnd_ = CreateWindowEx(WS_EX_APPWINDOW,
+ class_name.c_str(),
+ settings_.name.c_str(),
+ win_style,
+ 0,
+ 0,
+ win_rect.right - win_rect.left,
+ win_rect.bottom - win_rect.top,
+ nullptr,
+ nullptr,
+ hinstance_,
+ nullptr);
+
+ SetForegroundWindow(hwnd_);
+ SetWindowLongPtr(hwnd_, GWLP_USERDATA, (LONG_PTR) this);
+}
+
+PFN_vkGetInstanceProcAddr ShellWin32::load_vk()
+{
+ const char filename[] = "vulkan-1.dll";
+ HMODULE mod;
+ PFN_vkGetInstanceProcAddr get_proc;
+
+ mod = LoadLibrary(filename);
+ if (mod) {
+ get_proc = reinterpret_cast<PFN_vkGetInstanceProcAddr>(GetProcAddress(
+ mod, "vkGetInstanceProcAddr"));
+ }
+
+ if (!mod || !get_proc) {
+ std::stringstream ss;
+ ss << "failed to load " << filename;
+
+ if (mod)
+ FreeLibrary(mod);
+
+ throw std::runtime_error(ss.str());
+ }
+
+ hmodule_ = mod;
+
+ return get_proc;
+}
+
+bool ShellWin32::can_present(VkPhysicalDevice phy, uint32_t queue_family)
+{
+ return vk::GetPhysicalDeviceWin32PresentationSupportKHR(phy, queue_family);
+}
+
+VkSurfaceKHR ShellWin32::create_surface(VkInstance instance)
+{
+ VkWin32SurfaceCreateInfoKHR surface_info = {};
+ surface_info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
+ surface_info.hinstance = hinstance_;
+ surface_info.hwnd = hwnd_;
+
+ VkSurfaceKHR surface;
+ vk::assert_success(vk::CreateWin32SurfaceKHR(instance, &surface_info, nullptr, &surface));
+
+ return surface;
+}
+
+LRESULT ShellWin32::handle_message(UINT msg, WPARAM wparam, LPARAM lparam)
+{
+ switch (msg) {
+ case WM_SIZE:
+ {
+ UINT w = LOWORD(lparam);
+ UINT h = HIWORD(lparam);
+ resize_swapchain(w, h);
+ }
+ break;
+ case WM_KEYDOWN:
+ {
+ Game::Key key;
+
+ switch (wparam) {
+ case VK_ESCAPE:
+ key = Game::KEY_ESC;
+ break;
+ case VK_UP:
+ key = Game::KEY_UP;
+ break;
+ case VK_DOWN:
+ key = Game::KEY_DOWN;
+ break;
+ case VK_SPACE:
+ key = Game::KEY_SPACE;
+ break;
+ default:
+ key = Game::KEY_UNKNOWN;
+ break;
+ }
+
+ game_.on_key(key);
+ }
+ break;
+ case WM_CLOSE:
+ game_.on_key(Game::KEY_SHUTDOWN);
+ break;
+ case WM_DESTROY:
+ quit();
+ break;
+ default:
+        return DefWindowProc(hwnd_, msg, wparam, lparam);
+ }
+
+ return 0;
+}
+
+void ShellWin32::quit()
+{
+ PostQuitMessage(0);
+}
+
+void ShellWin32::run()
+{
+ create_window();
+
+ create_context();
+ resize_swapchain(settings_.initial_width, settings_.initial_height);
+
+ Win32Timer timer;
+ double current_time = timer.get();
+
+ while (true) {
+ bool quit = false;
+
+ assert(settings_.animate);
+
+ // process all messages
+ MSG msg;
+ while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
+ if (msg.message == WM_QUIT) {
+ quit = true;
+ break;
+ }
+
+ TranslateMessage(&msg);
+ DispatchMessage(&msg);
+ }
+
+ if (quit)
+ break;
+
+ acquire_back_buffer();
+
+ double t = timer.get();
+ add_game_time(static_cast<float>(t - current_time));
+
+ present_back_buffer();
+
+ current_time = t;
+ }
+
+ destroy_context();
+
+ DestroyWindow(hwnd_);
+}
diff --git a/demos/smoke/ShellWin32.h b/demos/smoke/ShellWin32.h
new file mode 100644
index 000000000..c5a136bd2
--- /dev/null
+++ b/demos/smoke/ShellWin32.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SHELL_WIN32_H
+#define SHELL_WIN32_H
+
+#include <windows.h>
+#include "Shell.h"
+
+class ShellWin32 : public Shell {
+public:
+ ShellWin32(Game &game);
+ ~ShellWin32();
+
+ void run();
+ void quit();
+
+private:
+
+ PFN_vkGetInstanceProcAddr load_vk();
+ bool can_present(VkPhysicalDevice phy, uint32_t queue_family);
+
+ void create_window();
+ VkSurfaceKHR create_surface(VkInstance instance);
+
+ static LRESULT CALLBACK window_proc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
+ {
+ ShellWin32 *shell = reinterpret_cast<ShellWin32 *>(GetWindowLongPtr(hwnd, GWLP_USERDATA));
+
+        // messages sent during CreateWindowEx arrive before
+        // GWLP_USERDATA is set, so shell may still be null here
+ if (!shell)
+ return DefWindowProc(hwnd, uMsg, wParam, lParam);
+
+ return shell->handle_message(uMsg, wParam, lParam);
+ }
+ LRESULT handle_message(UINT msg, WPARAM wparam, LPARAM lparam);
+
+ HINSTANCE hinstance_;
+ HWND hwnd_;
+
+ HMODULE hmodule_;
+};
+
+#endif // SHELL_WIN32_H
diff --git a/demos/smoke/ShellXcb.cpp b/demos/smoke/ShellXcb.cpp
new file mode 100644
index 000000000..e83896744
--- /dev/null
+++ b/demos/smoke/ShellXcb.cpp
@@ -0,0 +1,344 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <sstream>
+#include <dlfcn.h>
+#include <time.h>
+
+#include "Helpers.h"
+#include "Game.h"
+#include "ShellXcb.h"
+
+namespace {
+
+class PosixTimer {
+public:
+ PosixTimer()
+ {
+ reset();
+ }
+
+ void reset()
+ {
+ clock_gettime(CLOCK_MONOTONIC, &start_);
+ }
+
+ double get() const
+ {
+ struct timespec now;
+ clock_gettime(CLOCK_MONOTONIC, &now);
+
+ constexpr long one_s_in_ns = 1000 * 1000 * 1000;
+ constexpr double one_s_in_ns_d = static_cast<double>(one_s_in_ns);
+
+ time_t s = now.tv_sec - start_.tv_sec;
+ long ns;
+        if (now.tv_nsec >= start_.tv_nsec) {
+ ns = now.tv_nsec - start_.tv_nsec;
+ } else {
+ assert(s > 0);
+ s--;
+ ns = one_s_in_ns - (start_.tv_nsec - now.tv_nsec);
+ }
+
+ return static_cast<double>(s) + static_cast<double>(ns) / one_s_in_ns_d;
+ }
+
+private:
+ struct timespec start_;
+};
+
+xcb_intern_atom_cookie_t intern_atom_cookie(xcb_connection_t *c, const std::string &s)
+{
+ return xcb_intern_atom(c, false, s.size(), s.c_str());
+}
+
+xcb_atom_t intern_atom(xcb_connection_t *c, xcb_intern_atom_cookie_t cookie)
+{
+ xcb_atom_t atom = XCB_ATOM_NONE;
+ xcb_intern_atom_reply_t *reply = xcb_intern_atom_reply(c, cookie, nullptr);
+ if (reply) {
+ atom = reply->atom;
+ free(reply);
+ }
+
+ return atom;
+}
+
+} // namespace
+
+ShellXcb::ShellXcb(Game &game) : Shell(game)
+{
+ instance_extensions_.push_back(VK_KHR_XCB_SURFACE_EXTENSION_NAME);
+
+ init_connection();
+ init_vk();
+}
+
+ShellXcb::~ShellXcb()
+{
+ cleanup_vk();
+ dlclose(lib_handle_);
+
+ xcb_disconnect(c_);
+}
+
+void ShellXcb::init_connection()
+{
+ int scr;
+
+ c_ = xcb_connect(nullptr, &scr);
+ if (!c_ || xcb_connection_has_error(c_)) {
+ xcb_disconnect(c_);
+ throw std::runtime_error("failed to connect to the display server");
+ }
+
+ const xcb_setup_t *setup = xcb_get_setup(c_);
+ xcb_screen_iterator_t iter = xcb_setup_roots_iterator(setup);
+ while (scr-- > 0)
+ xcb_screen_next(&iter);
+
+ scr_ = iter.data;
+}
+
+void ShellXcb::create_window()
+{
+ win_ = xcb_generate_id(c_);
+
+ uint32_t value_mask, value_list[32];
+ value_mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
+ value_list[0] = scr_->black_pixel;
+ value_list[1] = XCB_EVENT_MASK_KEY_PRESS |
+ XCB_EVENT_MASK_STRUCTURE_NOTIFY;
+
+ xcb_create_window(c_,
+ XCB_COPY_FROM_PARENT,
+ win_, scr_->root, 0, 0,
+ settings_.initial_width, settings_.initial_height, 0,
+ XCB_WINDOW_CLASS_INPUT_OUTPUT,
+ scr_->root_visual,
+ value_mask, value_list);
+
+ xcb_intern_atom_cookie_t utf8_string_cookie = intern_atom_cookie(c_, "UTF8_STRING");
+ xcb_intern_atom_cookie_t _net_wm_name_cookie = intern_atom_cookie(c_, "_NET_WM_NAME");
+ xcb_intern_atom_cookie_t wm_protocols_cookie = intern_atom_cookie(c_, "WM_PROTOCOLS");
+ xcb_intern_atom_cookie_t wm_delete_window_cookie = intern_atom_cookie(c_, "WM_DELETE_WINDOW");
+
+ // set title
+ xcb_atom_t utf8_string = intern_atom(c_, utf8_string_cookie);
+ xcb_atom_t _net_wm_name = intern_atom(c_, _net_wm_name_cookie);
+ xcb_change_property(c_, XCB_PROP_MODE_REPLACE, win_, _net_wm_name,
+ utf8_string, 8, settings_.name.size(), settings_.name.c_str());
+
+ // advertise WM_DELETE_WINDOW
+ wm_protocols_ = intern_atom(c_, wm_protocols_cookie);
+ wm_delete_window_ = intern_atom(c_, wm_delete_window_cookie);
+ xcb_change_property(c_, XCB_PROP_MODE_REPLACE, win_, wm_protocols_,
+ XCB_ATOM_ATOM, 32, 1, &wm_delete_window_);
+}
+
+PFN_vkGetInstanceProcAddr ShellXcb::load_vk()
+{
+ const char filename[] = "libvulkan.so";
+    void *handle = nullptr, *symbol = nullptr;
+
+#ifdef UNINSTALLED_LOADER
+ handle = dlopen(UNINSTALLED_LOADER, RTLD_LAZY);
+ if (!handle)
+ handle = dlopen(filename, RTLD_LAZY);
+#else
+ handle = dlopen(filename, RTLD_LAZY);
+#endif
+
+ if (handle)
+ symbol = dlsym(handle, "vkGetInstanceProcAddr");
+
+ if (!handle || !symbol) {
+        std::stringstream ss;
+        ss << "failed to load " << filename << ": " << dlerror();
+
+ if (handle)
+ dlclose(handle);
+
+ throw std::runtime_error(ss.str());
+ }
+
+ lib_handle_ = handle;
+
+ return reinterpret_cast<PFN_vkGetInstanceProcAddr>(symbol);
+}
+
+bool ShellXcb::can_present(VkPhysicalDevice phy, uint32_t queue_family)
+{
+ return vk::GetPhysicalDeviceXcbPresentationSupportKHR(phy,
+ queue_family, c_, scr_->root_visual);
+}
+
+VkSurfaceKHR ShellXcb::create_surface(VkInstance instance)
+{
+ VkXcbSurfaceCreateInfoKHR surface_info = {};
+ surface_info.sType = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR;
+ surface_info.connection = c_;
+ surface_info.window = win_;
+
+ VkSurfaceKHR surface;
+ vk::assert_success(vk::CreateXcbSurfaceKHR(instance, &surface_info, nullptr, &surface));
+
+ return surface;
+}
+
+void ShellXcb::handle_event(const xcb_generic_event_t *ev)
+{
+ switch (ev->response_type & 0x7f) {
+ case XCB_CONFIGURE_NOTIFY:
+ {
+ const xcb_configure_notify_event_t *notify =
+ reinterpret_cast<const xcb_configure_notify_event_t *>(ev);
+ resize_swapchain(notify->width, notify->height);
+ }
+ break;
+ case XCB_KEY_PRESS:
+ {
+ const xcb_key_press_event_t *press =
+ reinterpret_cast<const xcb_key_press_event_t *>(ev);
+ Game::Key key;
+
+ // TODO translate xcb_keycode_t
+ switch (press->detail) {
+ case 9:
+ key = Game::KEY_ESC;
+ break;
+ case 111:
+ key = Game::KEY_UP;
+ break;
+ case 116:
+ key = Game::KEY_DOWN;
+ break;
+ case 65:
+ key = Game::KEY_SPACE;
+ break;
+ default:
+ key = Game::KEY_UNKNOWN;
+ break;
+ }
+
+ game_.on_key(key);
+ }
+ break;
+ case XCB_CLIENT_MESSAGE:
+ {
+ const xcb_client_message_event_t *msg =
+ reinterpret_cast<const xcb_client_message_event_t *>(ev);
+ if (msg->type == wm_protocols_ && msg->data.data32[0] == wm_delete_window_)
+ game_.on_key(Game::KEY_SHUTDOWN);
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+void ShellXcb::loop_wait()
+{
+ while (true) {
+ xcb_generic_event_t *ev = xcb_wait_for_event(c_);
+ if (!ev)
+ continue;
+
+ handle_event(ev);
+ free(ev);
+
+ if (quit_)
+ break;
+
+ acquire_back_buffer();
+ present_back_buffer();
+ }
+}
+
+void ShellXcb::loop_poll()
+{
+ PosixTimer timer;
+
+ double current_time = timer.get();
+ double profile_start_time = current_time;
+ int profile_present_count = 0;
+
+ while (true) {
+ // handle pending events
+ while (true) {
+ xcb_generic_event_t *ev = xcb_poll_for_event(c_);
+ if (!ev)
+ break;
+
+ handle_event(ev);
+ free(ev);
+ }
+
+ if (quit_)
+ break;
+
+ acquire_back_buffer();
+
+ double t = timer.get();
+ add_game_time(static_cast<float>(t - current_time));
+
+ present_back_buffer();
+
+ current_time = t;
+
+ profile_present_count++;
+ if (current_time - profile_start_time >= 5.0) {
+ const double fps = profile_present_count / (current_time - profile_start_time);
+ std::stringstream ss;
+ ss << profile_present_count << " presents in " <<
+ current_time - profile_start_time << " seconds " <<
+ "(FPS: " << fps << ")";
+ log(LOG_INFO, ss.str().c_str());
+
+ profile_start_time = current_time;
+ profile_present_count = 0;
+ }
+ }
+}
+
+void ShellXcb::run()
+{
+ create_window();
+ xcb_map_window(c_, win_);
+ xcb_flush(c_);
+
+ create_context();
+ resize_swapchain(settings_.initial_width, settings_.initial_height);
+
+ quit_ = false;
+ if (settings_.animate)
+ loop_poll();
+ else
+ loop_wait();
+
+ destroy_context();
+
+ xcb_destroy_window(c_, win_);
+ xcb_flush(c_);
+}
diff --git a/demos/smoke/ShellXcb.h b/demos/smoke/ShellXcb.h
new file mode 100644
index 000000000..89f9a436d
--- /dev/null
+++ b/demos/smoke/ShellXcb.h
@@ -0,0 +1,62 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SHELL_XCB_H
+#define SHELL_XCB_H
+
+#include <xcb/xcb.h>
+#include "Shell.h"
+
+class ShellXcb : public Shell {
+public:
+ ShellXcb(Game &game);
+ ~ShellXcb();
+
+ void run();
+ void quit() { quit_ = true; }
+
+private:
+ void init_connection();
+
+ PFN_vkGetInstanceProcAddr load_vk();
+ bool can_present(VkPhysicalDevice phy, uint32_t queue_family);
+
+ void create_window();
+ VkSurfaceKHR create_surface(VkInstance instance);
+
+ void handle_event(const xcb_generic_event_t *ev);
+ void loop_wait();
+ void loop_poll();
+
+ xcb_connection_t *c_;
+ xcb_screen_t *scr_;
+ xcb_window_t win_;
+
+ xcb_atom_t wm_protocols_;
+ xcb_atom_t wm_delete_window_;
+
+ void *lib_handle_;
+
+ bool quit_;
+};
+
+#endif // SHELL_XCB_H
diff --git a/demos/smoke/Simulation.cpp b/demos/smoke/Simulation.cpp
new file mode 100644
index 000000000..dab45d706
--- /dev/null
+++ b/demos/smoke/Simulation.cpp
@@ -0,0 +1,327 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <cassert>
+#include <cmath>
+#include <array>
+#include <glm/gtc/matrix_transform.hpp>
+#include "Simulation.h"
+
+namespace {
+
+class MeshPicker {
+public:
+ MeshPicker() :
+ pattern_({
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_ICOSPHERE,
+ Meshes::MESH_TEAPOT,
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_ICOSPHERE,
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_PYRAMID,
+ Meshes::MESH_PYRAMID,
+ }), cur_(-1)
+ {
+ }
+
+ Meshes::Type pick()
+ {
+ cur_ = (cur_ + 1) % pattern_.size();
+ return pattern_[cur_];
+ }
+
+ float scale(Meshes::Type type) const
+ {
+ float base = 0.005f;
+
+ switch (type) {
+ case Meshes::MESH_PYRAMID:
+ default:
+ return base * 1.0f;
+ case Meshes::MESH_ICOSPHERE:
+ return base * 3.0f;
+ case Meshes::MESH_TEAPOT:
+ return base * 10.0f;
+ }
+ }
+
+private:
+ const std::array<Meshes::Type, 10> pattern_;
+ int cur_;
+};
+
+class ColorPicker {
+public:
+ ColorPicker(unsigned int rng_seed) :
+ rng_(rng_seed),
+ red_(0.0f, 1.0f),
+ green_(0.0f, 1.0f),
+ blue_(0.0f, 1.0f)
+ {
+ }
+
+ glm::vec3 pick()
+ {
+ return glm::vec3{ red_(rng_),
+ green_(rng_),
+ blue_(rng_) };
+ }
+
+private:
+ std::mt19937 rng_;
+ std::uniform_real_distribution<float> red_;
+ std::uniform_real_distribution<float> green_;
+ std::uniform_real_distribution<float> blue_;
+};
+
+} // namespace
+
+Animation::Animation(unsigned int rng_seed, float scale)
+ : rng_(rng_seed), dir_(-1.0f, 1.0f), speed_(0.1f, 1.0f)
+{
+ float x = dir_(rng_);
+ float y = dir_(rng_);
+ float z = dir_(rng_);
+ if (std::abs(x) + std::abs(y) + std::abs(z) == 0.0f)
+ x = 1.0f;
+
+ current_.axis = glm::normalize(glm::vec3(x, y, z));
+
+ current_.speed = speed_(rng_);
+ current_.scale = scale;
+
+ current_.matrix = glm::scale(glm::mat4(1.0f), glm::vec3(current_.scale));
+}
+
+glm::mat4 Animation::transformation(float t)
+{
+ current_.matrix = glm::rotate(current_.matrix, current_.speed * t, current_.axis);
+
+ return current_.matrix;
+}
+
+class Curve {
+public:
+ virtual ~Curve() {}
+ virtual glm::vec3 evaluate(float t) = 0;
+};
+
+namespace {
+
+enum CurveType {
+ CURVE_RANDOM,
+ CURVE_CIRCLE,
+ CURVE_COUNT,
+};
+
+class RandomCurve : public Curve {
+public:
+ RandomCurve(unsigned int rng_seed)
+ : rng_(rng_seed), direction_(-0.3f, 0.3f), duration_(1.0f, 5.0f),
+ segment_start_(0.0f), segment_direction_(0.0f),
+ time_start_(0.0f), time_duration_(0.0f)
+ {
+ }
+
+ glm::vec3 evaluate(float t)
+ {
+ if (t >= time_start_ + time_duration_)
+ new_segment(t);
+
+ pos_ += unit_dir_ * (t - last_);
+ last_ = t;
+
+ return pos_;
+ }
+
+private:
+ void new_segment(float time_start)
+ {
+ segment_start_ += segment_direction_;
+ segment_direction_ = glm::vec3(direction_(rng_),
+ direction_(rng_),
+ direction_(rng_));
+
+ time_start_ = time_start;
+ time_duration_ = duration_(rng_);
+
+ unit_dir_ = segment_direction_ / time_duration_;
+ pos_ = segment_start_;
+ last_ = time_start_;
+ }
+
+ std::mt19937 rng_;
+ std::uniform_real_distribution<float> direction_;
+ std::uniform_real_distribution<float> duration_;
+
+ glm::vec3 segment_start_;
+ glm::vec3 segment_direction_;
+ float time_start_;
+ float time_duration_;
+
+ glm::vec3 unit_dir_;
+ glm::vec3 pos_;
+ float last_;
+};
+
+class CircleCurve : public Curve {
+public:
+ CircleCurve(float radius, glm::vec3 axis)
+ : r_(radius)
+ {
+ glm::vec3 a;
+
+ if (axis.x != 0.0f) {
+ a.x = -axis.z / axis.x;
+ a.y = 0.0f;
+ a.z = 1.0f;
+ } else if (axis.y != 0.0f) {
+ a.x = 1.0f;
+ a.y = -axis.x / axis.y;
+ a.z = 0.0f;
+ } else {
+ a.x = 1.0f;
+ a.y = 0.0f;
+ a.z = -axis.x / axis.z;
+ }
+
+ a_ = glm::normalize(a);
+ b_ = glm::normalize(glm::cross(a_, axis));
+ }
+
+ glm::vec3 evaluate(float t)
+ {
+ return (a_ * (glm::vec3(std::cos(t)) - glm::vec3(1.0f)) + b_ * glm::vec3(std::sin(t))) *
+ glm::vec3(r_);
+ }
+
+private:
+ float r_;
+ glm::vec3 a_;
+ glm::vec3 b_;
+};
+
+} // namespace
+
+Path::Path(unsigned int rng_seed)
+ : rng_(rng_seed), type_(0, CURVE_COUNT - 1), duration_(5.0f, 20.0f)
+{
+ // trigger a subpath generation
+ current_.end = -1.0f;
+ current_.now = 0.0f;
+}
+
+glm::vec3 Path::position(float t)
+{
+ current_.now += t;
+
+ while (current_.now >= current_.end)
+ generate_subpath();
+
+ return current_.origin + current_.curve->evaluate(current_.now - current_.start);
+}
+
+void Path::generate_subpath()
+{
+ float duration = duration_(rng_);
+ CurveType type = static_cast<CurveType>(type_(rng_));
+
+ if (current_.curve) {
+ current_.origin += current_.curve->evaluate(current_.end - current_.start);
+ current_.start = current_.end;
+ } else {
+ std::uniform_real_distribution<float> origin(0.0f, 2.0f);
+ current_.origin = glm::vec3(origin(rng_), origin(rng_), origin(rng_));
+ current_.start = current_.now;
+ }
+
+ current_.end = current_.start + duration;
+
+ Curve *curve;
+
+ switch (type) {
+ case CURVE_RANDOM:
+ curve = new RandomCurve(rng_());
+ break;
+ case CURVE_CIRCLE:
+ {
+ std::uniform_real_distribution<float> dir(-1.0f, 1.0f);
+ glm::vec3 axis(dir(rng_), dir(rng_), dir(rng_));
+ if (axis.x == 0.0f && axis.y == 0.0f && axis.z == 0.0f)
+ axis.x = 1.0f;
+
+        std::uniform_real_distribution<float> radius(0.02f, 0.2f);
+        curve = new CircleCurve(radius(rng_), axis);
+ }
+ break;
+ default:
+ assert(!"unreachable");
+ curve = nullptr;
+ break;
+ }
+
+ current_.curve.reset(curve);
+}
+
+Simulation::Simulation(int object_count)
+ : random_dev_()
+{
+ MeshPicker mesh;
+ ColorPicker color(random_dev_());
+
+ objects_.reserve(object_count);
+ for (int i = 0; i < object_count; i++) {
+ Meshes::Type type = mesh.pick();
+ float scale = mesh.scale(type);
+
+ objects_.emplace_back(Object{
+ type,
+            glm::vec3(0.5f + 0.5f * static_cast<float>(i) / object_count),
+ color.pick(),
+ Animation(random_dev_(), scale),
+ Path(random_dev_()),
+ });
+ }
+}
+
+void Simulation::set_frame_data_size(uint32_t size)
+{
+ uint32_t offset = 0;
+ for (auto &obj : objects_) {
+ obj.frame_data_offset = offset;
+ offset += size;
+ }
+}
+
+void Simulation::update(float time, int begin, int end)
+{
+ for (int i = begin; i < end; i++) {
+ auto &obj = objects_[i];
+
+ glm::vec3 pos = obj.path.position(time);
+ glm::mat4 trans = obj.animation.transformation(time);
+ obj.model = glm::translate(glm::mat4(1.0f), pos) * trans;
+ }
+}
diff --git a/demos/smoke/Simulation.h b/demos/smoke/Simulation.h
new file mode 100644
index 000000000..31241f9cf
--- /dev/null
+++ b/demos/smoke/Simulation.h
@@ -0,0 +1,112 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SIMULATION_H
+#define SIMULATION_H
+
+#include <memory>
+#include <random>
+#include <vector>
+
+#include <glm/glm.hpp>
+
+#include "Meshes.h"
+
+class Animation {
+public:
+ Animation(unsigned rng_seed, float scale);
+
+ glm::mat4 transformation(float t);
+
+private:
+ struct Data {
+ glm::vec3 axis;
+ float speed;
+ float scale;
+
+ glm::mat4 matrix;
+ };
+
+ std::mt19937 rng_;
+ std::uniform_real_distribution<float> dir_;
+ std::uniform_real_distribution<float> speed_;
+
+ Data current_;
+};
+
+class Curve;
+
+class Path {
+public:
+ Path(unsigned rng_seed);
+
+ glm::vec3 position(float t);
+
+private:
+ struct Subpath {
+ glm::vec3 origin;
+ float start;
+ float end;
+ float now;
+
+ std::shared_ptr<Curve> curve;
+ };
+
+ void generate_subpath();
+
+ std::mt19937 rng_;
+ std::uniform_int_distribution<> type_;
+ std::uniform_real_distribution<float> duration_;
+
+ Subpath current_;
+};
+
+class Simulation {
+public:
+ Simulation(int object_count);
+
+ struct Object {
+ Meshes::Type mesh;
+ glm::vec3 light_pos;
+ glm::vec3 light_color;
+
+ Animation animation;
+ Path path;
+
+ uint32_t frame_data_offset;
+
+ glm::mat4 model;
+ };
+
+ const std::vector<Object> &objects() const { return objects_; }
+
+ unsigned int rng_seed() { return random_dev_(); }
+
+ void set_frame_data_size(uint32_t size);
+ void update(float time, int begin, int end);
+
+private:
+ std::random_device random_dev_;
+ std::vector<Object> objects_;
+};
+
+#endif // SIMULATION_H
diff --git a/demos/smoke/Smoke.cpp b/demos/smoke/Smoke.cpp
new file mode 100644
index 000000000..ed6e01781
--- /dev/null
+++ b/demos/smoke/Smoke.cpp
@@ -0,0 +1,915 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <array>
+
+#include <glm/gtc/type_ptr.hpp>
+#include <glm/gtc/matrix_transform.hpp>
+
+#include "Helpers.h"
+#include "Smoke.h"
+#include "Meshes.h"
+#include "Shell.h"
+
+namespace {
+
+// TODO do not rely on compiler to use std140 layout
+// TODO move lower frequency data to another descriptor set
+struct ShaderParamBlock {
+ float light_pos[4];
+ float light_color[4];
+ float model[4 * 4];
+ float view_projection[4 * 4];
+};
+
+} // namespace
+
+Smoke::Smoke(const std::vector<std::string> &args)
+ : Game("Smoke", args), multithread_(true), use_push_constants_(false),
+ sim_paused_(false), sim_(5000), camera_(2.5f), frame_data_(),
+ render_pass_clear_value_({{ 0.0f, 0.1f, 0.2f, 1.0f }}),
+ render_pass_begin_info_(),
+ primary_cmd_begin_info_(), primary_cmd_submit_info_()
+{
+ for (auto it = args.begin(); it != args.end(); ++it) {
+ if (*it == "-s")
+ multithread_ = false;
+ else if (*it == "-p")
+ use_push_constants_ = true;
+ }
+
+ init_workers();
+}
+
+Smoke::~Smoke()
+{
+}
+
+void Smoke::init_workers()
+{
+ int worker_count = std::thread::hardware_concurrency();
+
+ // not enough cores
+ if (!multithread_ || worker_count < 2) {
+ multithread_ = false;
+ worker_count = 1;
+ }
+
+    const int object_per_worker = static_cast<int>(sim_.objects().size()) / worker_count;
+ int object_begin = 0, object_end = 0;
+
+ workers_.reserve(worker_count);
+ for (int i = 0; i < worker_count; i++) {
+ object_begin = object_end;
+ if (i < worker_count - 1)
+ object_end += object_per_worker;
+ else
+            object_end = static_cast<int>(sim_.objects().size());
+
+ Worker *worker = new Worker(*this, i, object_begin, object_end);
+ workers_.emplace_back(std::unique_ptr<Worker>(worker));
+ }
+}
+
+void Smoke::attach_shell(Shell &sh)
+{
+ Game::attach_shell(sh);
+
+ const Shell::Context &ctx = sh.context();
+ physical_dev_ = ctx.physical_dev;
+ dev_ = ctx.dev;
+ queue_ = ctx.game_queue;
+ queue_family_ = ctx.game_queue_family;
+ format_ = ctx.format.format;
+
+ vk::GetPhysicalDeviceProperties(physical_dev_, &physical_dev_props_);
+
+ if (use_push_constants_ &&
+ sizeof(ShaderParamBlock) > physical_dev_props_.limits.maxPushConstantsSize) {
+ shell_->log(Shell::LOG_WARN, "cannot enable push constants");
+ use_push_constants_ = false;
+ }
+
+ VkPhysicalDeviceMemoryProperties mem_props;
+ vk::GetPhysicalDeviceMemoryProperties(physical_dev_, &mem_props);
+ mem_flags_.reserve(mem_props.memoryTypeCount);
+ for (uint32_t i = 0; i < mem_props.memoryTypeCount; i++)
+ mem_flags_.push_back(mem_props.memoryTypes[i].propertyFlags);
+
+ meshes_ = new Meshes(dev_, mem_flags_);
+
+ create_render_pass();
+ create_shader_modules();
+ create_descriptor_set_layout();
+ create_pipeline_layout();
+ create_pipeline();
+
+ create_frame_data(2);
+
+ render_pass_begin_info_.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
+ render_pass_begin_info_.renderPass = render_pass_;
+ render_pass_begin_info_.clearValueCount = 1;
+ render_pass_begin_info_.pClearValues = &render_pass_clear_value_;
+
+ primary_cmd_begin_info_.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
+ primary_cmd_begin_info_.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
+
+ // we will render to the swapchain images
+ primary_cmd_submit_wait_stages_ = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
+
+ primary_cmd_submit_info_.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
+ primary_cmd_submit_info_.waitSemaphoreCount = 1;
+ primary_cmd_submit_info_.pWaitDstStageMask = &primary_cmd_submit_wait_stages_;
+ primary_cmd_submit_info_.commandBufferCount = 1;
+ primary_cmd_submit_info_.signalSemaphoreCount = 1;
+
+ if (multithread_) {
+ for (auto &worker : workers_)
+ worker->start();
+ }
+}
+
+void Smoke::detach_shell()
+{
+ if (multithread_) {
+ for (auto &worker : workers_)
+ worker->stop();
+ }
+
+ destroy_frame_data();
+
+ vk::DestroyPipeline(dev_, pipeline_, nullptr);
+ vk::DestroyPipelineLayout(dev_, pipeline_layout_, nullptr);
+ if (!use_push_constants_)
+ vk::DestroyDescriptorSetLayout(dev_, desc_set_layout_, nullptr);
+ vk::DestroyShaderModule(dev_, fs_, nullptr);
+ vk::DestroyShaderModule(dev_, vs_, nullptr);
+ vk::DestroyRenderPass(dev_, render_pass_, nullptr);
+
+ delete meshes_;
+
+ Game::detach_shell();
+}
+
+void Smoke::create_render_pass()
+{
+ VkAttachmentDescription attachment = {};
+ attachment.format = format_;
+ attachment.samples = VK_SAMPLE_COUNT_1_BIT;
+ attachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
+ attachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
+ attachment.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
+ attachment.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
+
+ VkAttachmentReference attachment_ref = {};
+ attachment_ref.attachment = 0;
+ attachment_ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
+
+ VkSubpassDescription subpass = {};
+ subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
+ subpass.colorAttachmentCount = 1;
+ subpass.pColorAttachments = &attachment_ref;
+
+ std::array<VkSubpassDependency, 2> subpass_deps;
+ subpass_deps[0].srcSubpass = VK_SUBPASS_EXTERNAL;
+ subpass_deps[0].dstSubpass = 0;
+ subpass_deps[0].srcStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
+ subpass_deps[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
+ subpass_deps[0].srcAccessMask = VK_ACCESS_MEMORY_READ_BIT;
+ subpass_deps[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
+ VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
+ subpass_deps[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
+
+ subpass_deps[1].srcSubpass = 0;
+ subpass_deps[1].dstSubpass = VK_SUBPASS_EXTERNAL;
+ subpass_deps[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
+ subpass_deps[1].dstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
+ subpass_deps[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
+ VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
+ subpass_deps[1].dstAccessMask = VK_ACCESS_MEMORY_READ_BIT;
+ subpass_deps[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
+
+ VkRenderPassCreateInfo render_pass_info = {};
+ render_pass_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ render_pass_info.attachmentCount = 1;
+ render_pass_info.pAttachments = &attachment;
+ render_pass_info.subpassCount = 1;
+ render_pass_info.pSubpasses = &subpass;
+ render_pass_info.dependencyCount = (uint32_t)subpass_deps.size();
+ render_pass_info.pDependencies = subpass_deps.data();
+
+ vk::assert_success(vk::CreateRenderPass(dev_, &render_pass_info, nullptr, &render_pass_));
+}
+
+void Smoke::create_shader_modules()
+{
+ VkShaderModuleCreateInfo sh_info = {};
+ sh_info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
+ if (use_push_constants_) {
+#include "Smoke.push_constant.vert.h"
+ sh_info.codeSize = sizeof(Smoke_push_constant_vert);
+ sh_info.pCode = Smoke_push_constant_vert;
+ } else {
+#include "Smoke.vert.h"
+ sh_info.codeSize = sizeof(Smoke_vert);
+ sh_info.pCode = Smoke_vert;
+ }
+ vk::assert_success(vk::CreateShaderModule(dev_, &sh_info, nullptr, &vs_));
+
+#include "Smoke.frag.h"
+ sh_info.codeSize = sizeof(Smoke_frag);
+ sh_info.pCode = Smoke_frag;
+ vk::assert_success(vk::CreateShaderModule(dev_, &sh_info, nullptr, &fs_));
+}
+
+void Smoke::create_descriptor_set_layout()
+{
+ if (use_push_constants_)
+ return;
+
+ VkDescriptorSetLayoutBinding layout_binding = {};
+ layout_binding.binding = 0;
+ layout_binding.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
+ layout_binding.descriptorCount = 1;
+ layout_binding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
+
+ VkDescriptorSetLayoutCreateInfo layout_info = {};
+ layout_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
+ layout_info.bindingCount = 1;
+ layout_info.pBindings = &layout_binding;
+
+ vk::assert_success(vk::CreateDescriptorSetLayout(dev_, &layout_info,
+ nullptr, &desc_set_layout_));
+}
+
+void Smoke::create_pipeline_layout()
+{
+ VkPushConstantRange push_const_range = {};
+
+ VkPipelineLayoutCreateInfo pipeline_layout_info = {};
+ pipeline_layout_info.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
+
+ if (use_push_constants_) {
+ push_const_range.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
+ push_const_range.offset = 0;
+ push_const_range.size = sizeof(ShaderParamBlock);
+
+ pipeline_layout_info.pushConstantRangeCount = 1;
+ pipeline_layout_info.pPushConstantRanges = &push_const_range;
+ } else {
+ pipeline_layout_info.setLayoutCount = 1;
+ pipeline_layout_info.pSetLayouts = &desc_set_layout_;
+ }
+
+ vk::assert_success(vk::CreatePipelineLayout(dev_, &pipeline_layout_info,
+ nullptr, &pipeline_layout_));
+}
+
+void Smoke::create_pipeline()
+{
+ VkPipelineShaderStageCreateInfo stage_info[2] = {};
+ stage_info[0].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ stage_info[0].stage = VK_SHADER_STAGE_VERTEX_BIT;
+ stage_info[0].module = vs_;
+ stage_info[0].pName = "main";
+ stage_info[1].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ stage_info[1].stage = VK_SHADER_STAGE_FRAGMENT_BIT;
+ stage_info[1].module = fs_;
+ stage_info[1].pName = "main";
+
+ VkPipelineViewportStateCreateInfo viewport_info = {};
+ viewport_info.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO;
+ // both dynamic
+ viewport_info.viewportCount = 1;
+ viewport_info.scissorCount = 1;
+
+ VkPipelineRasterizationStateCreateInfo rast_info = {};
+ rast_info.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
+ rast_info.depthClampEnable = false;
+ rast_info.rasterizerDiscardEnable = false;
+ rast_info.polygonMode = VK_POLYGON_MODE_FILL;
+ rast_info.cullMode = VK_CULL_MODE_NONE;
+ rast_info.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
+ rast_info.depthBiasEnable = false;
+ rast_info.lineWidth = 1.0f;
+
+ VkPipelineMultisampleStateCreateInfo multisample_info = {};
+ multisample_info.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
+ multisample_info.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
+ multisample_info.sampleShadingEnable = false;
+ multisample_info.pSampleMask = nullptr;
+ multisample_info.alphaToCoverageEnable = false;
+ multisample_info.alphaToOneEnable = false;
+
+ VkPipelineColorBlendAttachmentState blend_attachment = {};
+ blend_attachment.blendEnable = true;
+ blend_attachment.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
+ blend_attachment.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
+ blend_attachment.colorBlendOp = VK_BLEND_OP_ADD;
+ blend_attachment.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE;
+ blend_attachment.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
+ blend_attachment.alphaBlendOp = VK_BLEND_OP_ADD;
+ blend_attachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT |
+ VK_COLOR_COMPONENT_G_BIT |
+ VK_COLOR_COMPONENT_B_BIT |
+ VK_COLOR_COMPONENT_A_BIT;
+
+ VkPipelineColorBlendStateCreateInfo blend_info = {};
+ blend_info.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
+ blend_info.logicOpEnable = false;
+ blend_info.attachmentCount = 1;
+ blend_info.pAttachments = &blend_attachment;
+
+ std::array<VkDynamicState, 2> dynamic_states = {
+ VK_DYNAMIC_STATE_VIEWPORT,
+ VK_DYNAMIC_STATE_SCISSOR
+ };
+    VkPipelineDynamicStateCreateInfo dynamic_info = {};
+ dynamic_info.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
+ dynamic_info.dynamicStateCount = (uint32_t)dynamic_states.size();
+ dynamic_info.pDynamicStates = dynamic_states.data();
+
+ VkGraphicsPipelineCreateInfo pipeline_info = {};
+ pipeline_info.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ pipeline_info.stageCount = 2;
+ pipeline_info.pStages = stage_info;
+ pipeline_info.pVertexInputState = &meshes_->vertex_input_state();
+ pipeline_info.pInputAssemblyState = &meshes_->input_assembly_state();
+ pipeline_info.pTessellationState = nullptr;
+ pipeline_info.pViewportState = &viewport_info;
+ pipeline_info.pRasterizationState = &rast_info;
+ pipeline_info.pMultisampleState = &multisample_info;
+ pipeline_info.pDepthStencilState = nullptr;
+ pipeline_info.pColorBlendState = &blend_info;
+ pipeline_info.pDynamicState = &dynamic_info;
+ pipeline_info.layout = pipeline_layout_;
+ pipeline_info.renderPass = render_pass_;
+ pipeline_info.subpass = 0;
+ vk::assert_success(vk::CreateGraphicsPipelines(dev_, VK_NULL_HANDLE, 1, &pipeline_info, nullptr, &pipeline_));
+}
+
+void Smoke::create_frame_data(int count)
+{
+ frame_data_.resize(count);
+
+ create_fences();
+ create_command_buffers();
+
+ if (!use_push_constants_) {
+ create_buffers();
+ create_buffer_memory();
+ create_descriptor_sets();
+ }
+
+ frame_data_index_ = 0;
+}
+
+void Smoke::destroy_frame_data()
+{
+    if (!use_push_constants_) {
+        vk::DestroyDescriptorPool(dev_, desc_pool_, nullptr);
+
+        vk::UnmapMemory(dev_, frame_data_mem_);
+        vk::FreeMemory(dev_, frame_data_mem_, nullptr);
+
+        for (auto &data : frame_data_)
+            vk::DestroyBuffer(dev_, data.buf, nullptr);
+    }
+
+    for (auto cmd_pool : worker_cmd_pools_)
+        vk::DestroyCommandPool(dev_, cmd_pool, nullptr);
+    worker_cmd_pools_.clear();
+    vk::DestroyCommandPool(dev_, primary_cmd_pool_, nullptr);
+
+ for (auto &data : frame_data_)
+ vk::DestroyFence(dev_, data.fence, nullptr);
+
+ frame_data_.clear();
+}
+
+void Smoke::create_fences()
+{
+ VkFenceCreateInfo fence_info = {};
+ fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ fence_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;
+
+ for (auto &data : frame_data_)
+ vk::assert_success(vk::CreateFence(dev_, &fence_info, nullptr, &data.fence));
+}
+
+void Smoke::create_command_buffers()
+{
+ VkCommandPoolCreateInfo cmd_pool_info = {};
+ cmd_pool_info.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
+ cmd_pool_info.flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT;
+ cmd_pool_info.queueFamilyIndex = queue_family_;
+
+ VkCommandBufferAllocateInfo cmd_info = {};
+ cmd_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
+ cmd_info.commandBufferCount = static_cast<uint32_t>(frame_data_.size());
+
+ // create command pools and buffers
+ std::vector<VkCommandPool> cmd_pools(workers_.size() + 1, VK_NULL_HANDLE);
+ std::vector<std::vector<VkCommandBuffer>> cmds_vec(workers_.size() + 1,
+ std::vector<VkCommandBuffer>(frame_data_.size(), VK_NULL_HANDLE));
+ for (size_t i = 0; i < cmd_pools.size(); i++) {
+ auto &cmd_pool = cmd_pools[i];
+ auto &cmds = cmds_vec[i];
+
+ vk::assert_success(vk::CreateCommandPool(dev_, &cmd_pool_info,
+ nullptr, &cmd_pool));
+
+ cmd_info.commandPool = cmd_pool;
+        cmd_info.level = (i == cmd_pools.size() - 1) ?
+            VK_COMMAND_BUFFER_LEVEL_PRIMARY : VK_COMMAND_BUFFER_LEVEL_SECONDARY;
+
+ vk::assert_success(vk::AllocateCommandBuffers(dev_, &cmd_info, cmds.data()));
+ }
+
+ // update frame_data_
+ for (size_t i = 0; i < frame_data_.size(); i++) {
+ for (const auto &cmds : cmds_vec) {
+            if (&cmds == &cmds_vec.back()) {
+ frame_data_[i].primary_cmd = cmds[i];
+ } else {
+ frame_data_[i].worker_cmds.push_back(cmds[i]);
+ }
+ }
+ }
+
+ primary_cmd_pool_ = cmd_pools.back();
+ cmd_pools.pop_back();
+ worker_cmd_pools_ = cmd_pools;
+}
+
+void Smoke::create_buffers()
+{
+ VkDeviceSize object_data_size = sizeof(ShaderParamBlock);
+ // align object data to device limit
+ const VkDeviceSize &alignment =
+ physical_dev_props_.limits.minStorageBufferOffsetAlignment;
+ if (object_data_size % alignment)
+ object_data_size += alignment - (object_data_size % alignment);
+
+ // update simulation
+ sim_.set_frame_data_size(object_data_size);
+
+ VkBufferCreateInfo buf_info = {};
+ buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buf_info.size = object_data_size * sim_.objects().size();
+ buf_info.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;
+ buf_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
+
+ for (auto &data : frame_data_)
+ vk::assert_success(vk::CreateBuffer(dev_, &buf_info, nullptr, &data.buf));
+}
+
+void Smoke::create_buffer_memory()
+{
+ VkMemoryRequirements mem_reqs;
+ vk::GetBufferMemoryRequirements(dev_, frame_data_[0].buf, &mem_reqs);
+
+ VkDeviceSize aligned_size = mem_reqs.size;
+ if (aligned_size % mem_reqs.alignment)
+ aligned_size += mem_reqs.alignment - (aligned_size % mem_reqs.alignment);
+
+ // allocate memory
+ VkMemoryAllocateInfo mem_info = {};
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
+ mem_info.allocationSize = aligned_size * (frame_data_.size() - 1) +
+ mem_reqs.size;
+
+ for (uint32_t idx = 0; idx < mem_flags_.size(); idx++) {
+ if ((mem_reqs.memoryTypeBits & (1 << idx)) &&
+ (mem_flags_[idx] & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) &&
+ (mem_flags_[idx] & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) {
+            // the spec guarantees at least one HOST_VISIBLE | HOST_COHERENT type
+ mem_info.memoryTypeIndex = idx;
+ break;
+ }
+ }
+
+    vk::assert_success(vk::AllocateMemory(dev_, &mem_info, nullptr, &frame_data_mem_));
+
+ void *ptr;
+    vk::assert_success(vk::MapMemory(dev_, frame_data_mem_, 0, VK_WHOLE_SIZE, 0, &ptr));
+
+ VkDeviceSize offset = 0;
+ for (auto &data : frame_data_) {
+ vk::BindBufferMemory(dev_, data.buf, frame_data_mem_, offset);
+ data.base = reinterpret_cast<uint8_t *>(ptr) + offset;
+ offset += aligned_size;
+ }
+}
+
+void Smoke::create_descriptor_sets()
+{
+ VkDescriptorPoolSize desc_pool_size = {};
+ desc_pool_size.type = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
+    desc_pool_size.descriptorCount = static_cast<uint32_t>(frame_data_.size());
+
+ VkDescriptorPoolCreateInfo desc_pool_info = {};
+ desc_pool_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
+    desc_pool_info.maxSets = static_cast<uint32_t>(frame_data_.size());
+ desc_pool_info.poolSizeCount = 1;
+ desc_pool_info.pPoolSizes = &desc_pool_size;
+
+ // create descriptor pool
+ vk::assert_success(vk::CreateDescriptorPool(dev_, &desc_pool_info,
+ nullptr, &desc_pool_));
+
+ std::vector<VkDescriptorSetLayout> set_layouts(frame_data_.size(), desc_set_layout_);
+ VkDescriptorSetAllocateInfo set_info = {};
+ set_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
+ set_info.descriptorPool = desc_pool_;
+ set_info.descriptorSetCount = static_cast<uint32_t>(set_layouts.size());
+ set_info.pSetLayouts = set_layouts.data();
+
+ // create descriptor sets
+ std::vector<VkDescriptorSet> desc_sets(frame_data_.size(), VK_NULL_HANDLE);
+ vk::assert_success(vk::AllocateDescriptorSets(dev_, &set_info, desc_sets.data()));
+
+ std::vector<VkDescriptorBufferInfo> desc_bufs(frame_data_.size());
+ std::vector<VkWriteDescriptorSet> desc_writes(frame_data_.size());
+
+ for (size_t i = 0; i < frame_data_.size(); i++) {
+ auto &data = frame_data_[i];
+
+ data.desc_set = desc_sets[i];
+
+ VkDescriptorBufferInfo desc_buf = {};
+ desc_buf.buffer = data.buf;
+ desc_buf.offset = 0;
+ desc_buf.range = VK_WHOLE_SIZE;
+ desc_bufs[i] = desc_buf;
+
+ VkWriteDescriptorSet desc_write = {};
+ desc_write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
+ desc_write.dstSet = data.desc_set;
+ desc_write.dstBinding = 0;
+ desc_write.dstArrayElement = 0;
+ desc_write.descriptorCount = 1;
+ desc_write.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
+ desc_write.pBufferInfo = &desc_bufs[i];
+ desc_writes[i] = desc_write;
+ }
+
+ vk::UpdateDescriptorSets(dev_,
+ static_cast<uint32_t>(desc_writes.size()),
+ desc_writes.data(), 0, nullptr);
+}
+
+void Smoke::attach_swapchain()
+{
+ const Shell::Context &ctx = shell_->context();
+
+ prepare_viewport(ctx.extent);
+ prepare_framebuffers(ctx.swapchain);
+
+ update_camera();
+}
+
+void Smoke::detach_swapchain()
+{
+ for (auto fb : framebuffers_)
+ vk::DestroyFramebuffer(dev_, fb, nullptr);
+ for (auto view : image_views_)
+ vk::DestroyImageView(dev_, view, nullptr);
+
+ framebuffers_.clear();
+ image_views_.clear();
+ images_.clear();
+}
+
+void Smoke::prepare_viewport(const VkExtent2D &extent)
+{
+ extent_ = extent;
+
+ viewport_.x = 0.0f;
+ viewport_.y = 0.0f;
+ viewport_.width = static_cast<float>(extent.width);
+ viewport_.height = static_cast<float>(extent.height);
+ viewport_.minDepth = 0.0f;
+ viewport_.maxDepth = 1.0f;
+
+ scissor_.offset = { 0, 0 };
+ scissor_.extent = extent_;
+}
+
+void Smoke::prepare_framebuffers(VkSwapchainKHR swapchain)
+{
+ // get swapchain images
+ vk::get(dev_, swapchain, images_);
+
+ assert(framebuffers_.empty());
+ image_views_.reserve(images_.size());
+ framebuffers_.reserve(images_.size());
+ for (auto img : images_) {
+ VkImageViewCreateInfo view_info = {};
+ view_info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
+ view_info.image = img;
+ view_info.viewType = VK_IMAGE_VIEW_TYPE_2D;
+ view_info.format = format_;
+ view_info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
+ view_info.subresourceRange.levelCount = 1;
+ view_info.subresourceRange.layerCount = 1;
+
+ VkImageView view;
+ vk::assert_success(vk::CreateImageView(dev_, &view_info, nullptr, &view));
+ image_views_.push_back(view);
+
+ VkFramebufferCreateInfo fb_info = {};
+ fb_info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
+ fb_info.renderPass = render_pass_;
+ fb_info.attachmentCount = 1;
+ fb_info.pAttachments = &view;
+ fb_info.width = extent_.width;
+ fb_info.height = extent_.height;
+ fb_info.layers = 1;
+
+ VkFramebuffer fb;
+ vk::assert_success(vk::CreateFramebuffer(dev_, &fb_info, nullptr, &fb));
+ framebuffers_.push_back(fb);
+ }
+}
+
+void Smoke::update_camera()
+{
+ const glm::vec3 center(0.0f);
+    const glm::vec3 up(0.0f, 0.0f, 1.0f);
+ const glm::mat4 view = glm::lookAt(camera_.eye_pos, center, up);
+
+ float aspect = static_cast<float>(extent_.width) / static_cast<float>(extent_.height);
+ const glm::mat4 projection = glm::perspective(0.4f, aspect, 0.1f, 100.0f);
+
+ // Vulkan clip space has inverted Y and half Z.
+ const glm::mat4 clip(1.0f, 0.0f, 0.0f, 0.0f,
+ 0.0f, -1.0f, 0.0f, 0.0f,
+ 0.0f, 0.0f, 0.5f, 0.0f,
+ 0.0f, 0.0f, 0.5f, 1.0f);
+
+ camera_.view_projection = clip * projection * view;
+}
+
+void Smoke::draw_object(const Simulation::Object &obj, FrameData &data, VkCommandBuffer cmd) const
+{
+ if (use_push_constants_) {
+ ShaderParamBlock params;
+ memcpy(params.light_pos, glm::value_ptr(obj.light_pos), sizeof(obj.light_pos));
+ memcpy(params.light_color, glm::value_ptr(obj.light_color), sizeof(obj.light_color));
+ memcpy(params.model, glm::value_ptr(obj.model), sizeof(obj.model));
+ memcpy(params.view_projection, glm::value_ptr(camera_.view_projection), sizeof(camera_.view_projection));
+
+ vk::CmdPushConstants(cmd, pipeline_layout_, VK_SHADER_STAGE_VERTEX_BIT,
+ 0, sizeof(params), &params);
+ } else {
+ ShaderParamBlock *params =
+ reinterpret_cast<ShaderParamBlock *>(data.base + obj.frame_data_offset);
+ memcpy(params->light_pos, glm::value_ptr(obj.light_pos), sizeof(obj.light_pos));
+ memcpy(params->light_color, glm::value_ptr(obj.light_color), sizeof(obj.light_color));
+ memcpy(params->model, glm::value_ptr(obj.model), sizeof(obj.model));
+ memcpy(params->view_projection, glm::value_ptr(camera_.view_projection), sizeof(camera_.view_projection));
+
+ vk::CmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
+ pipeline_layout_, 0, 1, &data.desc_set, 1, &obj.frame_data_offset);
+ }
+
+ meshes_->cmd_draw(cmd, obj.mesh);
+}
+
+void Smoke::update_simulation(const Worker &worker)
+{
+ sim_.update(worker.tick_interval_, worker.object_begin_, worker.object_end_);
+}
+
+void Smoke::draw_objects(Worker &worker)
+{
+ auto &data = frame_data_[frame_data_index_];
+ auto cmd = data.worker_cmds[worker.index_];
+
+ VkCommandBufferInheritanceInfo inherit_info = {};
+ inherit_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
+ inherit_info.renderPass = render_pass_;
+ inherit_info.framebuffer = worker.fb_;
+
+ VkCommandBufferBeginInfo begin_info = {};
+ begin_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
+ begin_info.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
+ begin_info.pInheritanceInfo = &inherit_info;
+
+ vk::BeginCommandBuffer(cmd, &begin_info);
+
+ vk::CmdSetViewport(cmd, 0, 1, &viewport_);
+ vk::CmdSetScissor(cmd, 0, 1, &scissor_);
+
+ vk::CmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline_);
+
+ meshes_->cmd_bind_buffers(cmd);
+
+ for (int i = worker.object_begin_; i < worker.object_end_; i++) {
+ auto &obj = sim_.objects()[i];
+
+ draw_object(obj, data, cmd);
+ }
+
+ vk::EndCommandBuffer(cmd);
+}
+
+void Smoke::on_key(Key key)
+{
+ switch (key) {
+ case KEY_SHUTDOWN:
+ case KEY_ESC:
+ shell_->quit();
+ break;
+ case KEY_UP:
+ camera_.eye_pos -= glm::vec3(0.05f);
+ update_camera();
+ break;
+ case KEY_DOWN:
+ camera_.eye_pos += glm::vec3(0.05f);
+ update_camera();
+ break;
+ case KEY_SPACE:
+ sim_paused_ = !sim_paused_;
+ break;
+ default:
+ break;
+ }
+}
+
+void Smoke::on_tick()
+{
+ if (sim_paused_)
+ return;
+
+ for (auto &worker : workers_)
+ worker->update_simulation();
+}
+
+void Smoke::on_frame(float frame_pred)
+{
+ auto &data = frame_data_[frame_data_index_];
+
+ // wait for the last submission since we reuse frame data
+ vk::assert_success(vk::WaitForFences(dev_, 1, &data.fence, true, UINT64_MAX));
+ vk::assert_success(vk::ResetFences(dev_, 1, &data.fence));
+
+ const Shell::BackBuffer &back = shell_->context().acquired_back_buffer;
+
+ // ignore frame_pred
+ for (auto &worker : workers_)
+ worker->draw_objects(framebuffers_[back.image_index]);
+
+ VkResult res = vk::BeginCommandBuffer(data.primary_cmd, &primary_cmd_begin_info_);
+
+ VkBufferMemoryBarrier buf_barrier = {};
+ buf_barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
+ buf_barrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
+ buf_barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
+ buf_barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
+ buf_barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
+ buf_barrier.buffer = data.buf;
+ buf_barrier.offset = 0;
+ buf_barrier.size = VK_WHOLE_SIZE;
+ vk::CmdPipelineBarrier(data.primary_cmd,
+ VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
+ 0, 0, nullptr, 1, &buf_barrier, 0, nullptr);
+
+ render_pass_begin_info_.framebuffer = framebuffers_[back.image_index];
+ render_pass_begin_info_.renderArea.extent = extent_;
+ vk::CmdBeginRenderPass(data.primary_cmd, &render_pass_begin_info_,
+ VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS);
+
+ // record render pass commands
+ for (auto &worker : workers_)
+ worker->wait_idle();
+ vk::CmdExecuteCommands(data.primary_cmd,
+ static_cast<uint32_t>(data.worker_cmds.size()),
+ data.worker_cmds.data());
+
+ vk::CmdEndRenderPass(data.primary_cmd);
+ vk::EndCommandBuffer(data.primary_cmd);
+
+ // wait for the image to be owned and signal for render completion
+ primary_cmd_submit_info_.pWaitSemaphores = &back.acquire_semaphore;
+ primary_cmd_submit_info_.pCommandBuffers = &data.primary_cmd;
+ primary_cmd_submit_info_.pSignalSemaphores = &back.render_semaphore;
+
+ res = vk::QueueSubmit(queue_, 1, &primary_cmd_submit_info_, data.fence);
+
+ frame_data_index_ = (frame_data_index_ + 1) % frame_data_.size();
+
+ (void) res;
+}
+
+Smoke::Worker::Worker(Smoke &smoke, int index, int object_begin, int object_end)
+ : smoke_(smoke), index_(index),
+ object_begin_(object_begin), object_end_(object_end),
+ tick_interval_(1.0f / smoke.settings_.ticks_per_second), state_(INIT)
+{
+}
+
+void Smoke::Worker::start()
+{
+ state_ = IDLE;
+ thread_ = std::thread(Smoke::Worker::thread_loop, this);
+}
+
+void Smoke::Worker::stop()
+{
+ {
+ std::lock_guard<std::mutex> lock(mutex_);
+ state_ = INIT;
+ }
+ state_cv_.notify_one();
+
+ thread_.join();
+}
+
+void Smoke::Worker::update_simulation()
+{
+ {
+ std::lock_guard<std::mutex> lock(mutex_);
+ bool started = (state_ != INIT);
+
+ state_ = STEP;
+
+ // step directly
+ if (!started) {
+ smoke_.update_simulation(*this);
+ state_ = INIT;
+ }
+ }
+ state_cv_.notify_one();
+}
+
+void Smoke::Worker::draw_objects(VkFramebuffer fb)
+{
+ // wait for step_objects first
+ wait_idle();
+
+ {
+ std::lock_guard<std::mutex> lock(mutex_);
+ bool started = (state_ != INIT);
+
+ fb_ = fb;
+ state_ = DRAW;
+
+ // render directly
+ if (!started) {
+ smoke_.draw_objects(*this);
+ state_ = INIT;
+ }
+ }
+ state_cv_.notify_one();
+}
+
+void Smoke::Worker::wait_idle()
+{
+ std::unique_lock<std::mutex> lock(mutex_);
+ bool started = (state_ != INIT);
+
+ if (started)
+ state_cv_.wait(lock, [this] { return (state_ == IDLE); });
+}
+
+void Smoke::Worker::update_loop()
+{
+ while (true) {
+ std::unique_lock<std::mutex> lock(mutex_);
+
+ state_cv_.wait(lock, [this] { return (state_ != IDLE); });
+ if (state_ == INIT)
+ break;
+
+ assert(state_ == STEP || state_ == DRAW);
+ if (state_ == STEP)
+ smoke_.update_simulation(*this);
+ else
+ smoke_.draw_objects(*this);
+
+ state_ = IDLE;
+ lock.unlock();
+ state_cv_.notify_one();
+ }
+}
diff --git a/demos/smoke/Smoke.frag b/demos/smoke/Smoke.frag
new file mode 100644
index 000000000..e07a46f59
--- /dev/null
+++ b/demos/smoke/Smoke.frag
@@ -0,0 +1,12 @@
+#version 310 es
+
+precision highp float;
+
+in vec3 color;
+
+layout(location = 0) out vec4 fragcolor;
+
+void main()
+{
+ fragcolor = vec4(color, 0.5);
+}
diff --git a/demos/smoke/Smoke.h b/demos/smoke/Smoke.h
new file mode 100644
index 000000000..44bd4812c
--- /dev/null
+++ b/demos/smoke/Smoke.h
@@ -0,0 +1,195 @@
+/*
+ * Copyright (C) 2016 Google, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef SMOKE_H
+#define SMOKE_H
+
+#include <condition_variable>
+#include <memory>
+#include <mutex>
+#include <string>
+#include <thread>
+#include <vector>
+
+#include <vulkan/vulkan.h>
+#include <glm/glm.hpp>
+
+#include "Simulation.h"
+#include "Game.h"
+
+class Meshes;
+
+class Smoke : public Game {
+public:
+ Smoke(const std::vector<std::string> &args);
+ ~Smoke();
+
+ void attach_shell(Shell &sh);
+ void detach_shell();
+
+ void attach_swapchain();
+ void detach_swapchain();
+
+ void on_key(Key key);
+ void on_tick();
+
+ void on_frame(float frame_pred);
+
+private:
+ class Worker {
+ public:
+ Worker(Smoke &smoke, int index, int object_begin, int object_end);
+
+ void start();
+ void stop();
+ void update_simulation();
+ void draw_objects(VkFramebuffer fb);
+ void wait_idle();
+
+ Smoke &smoke_;
+
+ const int index_;
+ const int object_begin_;
+ const int object_end_;
+
+ const float tick_interval_;
+
+ VkFramebuffer fb_;
+
+ private:
+ enum State {
+ INIT,
+ IDLE,
+ STEP,
+ DRAW,
+ };
+
+ void update_loop();
+
+ static void thread_loop(Worker *worker) { worker->update_loop(); }
+
+ std::thread thread_;
+ std::mutex mutex_;
+ std::condition_variable state_cv_;
+ State state_;
+ };
+
+ struct Camera {
+ glm::vec3 eye_pos;
+ glm::mat4 view_projection;
+
+ Camera(float eye) : eye_pos(eye) {}
+ };
+
+ struct FrameData {
+ // signaled when this struct is ready for reuse
+ VkFence fence;
+
+ VkCommandBuffer primary_cmd;
+ std::vector<VkCommandBuffer> worker_cmds;
+
+ VkBuffer buf;
+ uint8_t *base;
+ VkDescriptorSet desc_set;
+ };
+
+ // called by the constructor
+ void init_workers();
+
+ bool multithread_;
+ bool use_push_constants_;
+
+ // called mostly by on_key
+ void update_camera();
+
+ bool sim_paused_;
+ Simulation sim_;
+ Camera camera_;
+
+ std::vector<std::unique_ptr<Worker>> workers_;
+
+ // called by attach_shell
+ void create_render_pass();
+ void create_shader_modules();
+ void create_descriptor_set_layout();
+ void create_pipeline_layout();
+ void create_pipeline();
+
+ void create_frame_data(int count);
+ void destroy_frame_data();
+ void create_fences();
+ void create_command_buffers();
+ void create_buffers();
+ void create_buffer_memory();
+ void create_descriptor_sets();
+
+ VkPhysicalDevice physical_dev_;
+ VkDevice dev_;
+ VkQueue queue_;
+ uint32_t queue_family_;
+ VkFormat format_;
+
+ VkPhysicalDeviceProperties physical_dev_props_;
+ std::vector<VkMemoryPropertyFlags> mem_flags_;
+
+ const Meshes *meshes_;
+
+ VkRenderPass render_pass_;
+ VkShaderModule vs_;
+ VkShaderModule fs_;
+ VkDescriptorSetLayout desc_set_layout_;
+ VkPipelineLayout pipeline_layout_;
+ VkPipeline pipeline_;
+
+ VkCommandPool primary_cmd_pool_;
+ std::vector<VkCommandPool> worker_cmd_pools_;
+ VkDescriptorPool desc_pool_;
+ VkDeviceMemory frame_data_mem_;
+ std::vector<FrameData> frame_data_;
+ int frame_data_index_;
+
+ VkClearValue render_pass_clear_value_;
+ VkRenderPassBeginInfo render_pass_begin_info_;
+
+ VkCommandBufferBeginInfo primary_cmd_begin_info_;
+ VkPipelineStageFlags primary_cmd_submit_wait_stages_;
+ VkSubmitInfo primary_cmd_submit_info_;
+
+ // called by attach_swapchain
+ void prepare_viewport(const VkExtent2D &extent);
+ void prepare_framebuffers(VkSwapchainKHR swapchain);
+
+ VkExtent2D extent_;
+ VkViewport viewport_;
+ VkRect2D scissor_;
+
+ std::vector<VkImage> images_;
+ std::vector<VkImageView> image_views_;
+ std::vector<VkFramebuffer> framebuffers_;
+
+ // called by workers
+ void update_simulation(const Worker &worker);
+ void draw_object(const Simulation::Object &obj, FrameData &data, VkCommandBuffer cmd) const;
+ void draw_objects(Worker &worker);
+};
+
+#endif // SMOKE_H
diff --git a/demos/smoke/Smoke.push_constant.vert b/demos/smoke/Smoke.push_constant.vert
new file mode 100644
index 000000000..e2357fb06
--- /dev/null
+++ b/demos/smoke/Smoke.push_constant.vert
@@ -0,0 +1,27 @@
+#version 310 es
+
+layout(location = 0) in vec3 in_pos;
+layout(location = 1) in vec3 in_normal;
+
+layout(std140, push_constant) uniform param_block {
+ vec3 light_pos;
+ vec3 light_color;
+ mat4 model;
+ mat4 view_projection;
+} params;
+
+out vec3 color;
+
+void main()
+{
+ vec3 world_light = vec3(params.model * vec4(params.light_pos, 1.0));
+ vec3 world_pos = vec3(params.model * vec4(in_pos, 1.0));
+ vec3 world_normal = mat3(params.model) * in_normal;
+
+ vec3 light_dir = world_light - world_pos;
+ float brightness = dot(light_dir, world_normal) / length(light_dir) / length(world_normal);
+ brightness = abs(brightness);
+
+ gl_Position = params.view_projection * vec4(world_pos, 1.0);
+ color = params.light_color * brightness;
+}
diff --git a/demos/smoke/Smoke.vert b/demos/smoke/Smoke.vert
new file mode 100644
index 000000000..60bda60b5
--- /dev/null
+++ b/demos/smoke/Smoke.vert
@@ -0,0 +1,27 @@
+#version 310 es
+
+layout(location = 0) in vec3 in_pos;
+layout(location = 1) in vec3 in_normal;
+
+layout(std140, set = 0, binding = 0) readonly buffer param_block {
+ vec3 light_pos;
+ vec3 light_color;
+ mat4 model;
+ mat4 view_projection;
+} params;
+
+out vec3 color;
+
+void main()
+{
+ vec3 world_light = vec3(params.model * vec4(params.light_pos, 1.0));
+ vec3 world_pos = vec3(params.model * vec4(in_pos, 1.0));
+ vec3 world_normal = mat3(params.model) * in_normal;
+
+ vec3 light_dir = world_light - world_pos;
+ float brightness = dot(light_dir, world_normal) / length(light_dir) / length(world_normal);
+ brightness = abs(brightness);
+
+ gl_Position = params.view_projection * vec4(world_pos, 1.0);
+ color = params.light_color * brightness;
+}
diff --git a/demos/smoke/android/build-and-install b/demos/smoke/android/build-and-install
new file mode 100755
index 000000000..cbdaf0ac6
--- /dev/null
+++ b/demos/smoke/android/build-and-install
@@ -0,0 +1,30 @@
+#!/bin/sh
+
+set -e
+
+SDK_DIR="$HOME/android/android-sdk-linux"
+NDK_DIR="$HOME/android/android-ndk-r10e"
+
+generate_local_properties() {
+ : > local.properties
+ echo "sdk.dir=${SDK_DIR}" >> local.properties
+ echo "ndk.dir=${NDK_DIR}" >> local.properties
+}
+
+build() {
+ ./gradlew build
+}
+
+install() {
+    adb uninstall com.example.Smoke || true  # tolerate "not installed" so set -e does not abort
+ adb install build/outputs/apk/android-fat-debug.apk
+}
+
+run() {
+ adb shell am start com.example.Smoke/android.app.NativeActivity
+}
+
+generate_local_properties
+build
+install
+#run
diff --git a/demos/smoke/android/build.gradle b/demos/smoke/android/build.gradle
new file mode 100644
index 000000000..d04ba8eed
--- /dev/null
+++ b/demos/smoke/android/build.gradle
@@ -0,0 +1,87 @@
+buildscript {
+ repositories {
+ jcenter()
+ }
+
+ dependencies {
+ classpath 'com.android.tools.build:gradle-experimental:0.6.0-alpha3'
+ }
+}
+
+apply plugin: 'com.android.model.application'
+
+def demosDir = "../.."
+def smokeDir = "${demosDir}/demos/smoke"
+def glmDir = "${demosDir}/../libs"
+def vulkanDir = "${demosDir}/../Vulkan-Docs/src"
+
+Properties properties = new Properties()
+properties.load(project.rootProject.file('local.properties').newDataInputStream())
+def ndkDir = properties.getProperty('ndk.dir')
+
+model {
+ android {
+ compileSdkVersion = 23
+ buildToolsVersion = "23.0.2"
+
+ defaultConfig.with {
+ applicationId = "com.example.Smoke"
+ minSdkVersion.apiLevel = 22
+ targetSdkVersion.apiLevel = 22
+ versionCode = 1
+ versionName = "0.1"
+ }
+ }
+
+ android.ndk {
+ moduleName = "Smoke"
+ toolchain = "clang"
+ stl = "c++_static"
+
+ cppFlags.addAll(["-std=c++11", "-fexceptions"])
+ cppFlags.addAll(["-Wall", "-Wextra", "-Wno-unused-parameter"])
+
+ cppFlags.addAll([
+ "-DVK_NO_PROTOTYPES",
+ "-DVK_USE_PLATFORM_ANDROID_KHR",
+ "-DGLM_FORCE_RADIANS",
+ ])
+
+ cppFlags.addAll([
+ "-I${file("${ndkDir}/sources/android/native_app_glue")}".toString(),
+ "-I${file("${vulkanDir}")}".toString(),
+ "-I${file("${glmDir}")}".toString(),
+ "-I${file("src/main/jni")}".toString(),
+ ])
+
+ ldLibs.addAll(["android", "log", "dl"])
+ }
+
+ android.sources {
+ main {
+ jni {
+ source {
+ srcDir "${ndkDir}/sources/android/native_app_glue"
+ srcDir "${smokeDir}"
+ exclude 'ShellXcb.cpp'
+ exclude 'ShellWin32.cpp'
+ }
+ }
+ }
+ }
+
+ android.buildTypes {
+ release {
+ ndk.with {
+ debuggable = true
+ }
+ }
+ }
+
+ android.productFlavors {
+ create ("fat") {
+ ndk.abiFilters.add("armeabi-v7a")
+ ndk.abiFilters.add("x86")
+ }
+ }
+}
diff --git a/demos/smoke/android/gradle/wrapper/gradle-wrapper.jar b/demos/smoke/android/gradle/wrapper/gradle-wrapper.jar
new file mode 100644
index 000000000..13372aef5
--- /dev/null
+++ b/demos/smoke/android/gradle/wrapper/gradle-wrapper.jar
Binary files differ
diff --git a/demos/smoke/android/gradle/wrapper/gradle-wrapper.properties b/demos/smoke/android/gradle/wrapper/gradle-wrapper.properties
new file mode 100644
index 000000000..0fa19137e
--- /dev/null
+++ b/demos/smoke/android/gradle/wrapper/gradle-wrapper.properties
@@ -0,0 +1,6 @@
+#Wed Jan 27 08:20:52 CST 2016
+distributionBase=GRADLE_USER_HOME
+distributionPath=wrapper/dists
+zipStoreBase=GRADLE_USER_HOME
+zipStorePath=wrapper/dists
+distributionUrl=https\://services.gradle.org/distributions/gradle-2.9-bin.zip
diff --git a/demos/smoke/android/gradlew b/demos/smoke/android/gradlew
new file mode 100755
index 000000000..9d82f7891
--- /dev/null
+++ b/demos/smoke/android/gradlew
@@ -0,0 +1,160 @@
+#!/usr/bin/env bash
+
+##############################################################################
+##
+## Gradle start up script for UN*X
+##
+##############################################################################
+
+# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
+DEFAULT_JVM_OPTS=""
+
+APP_NAME="Gradle"
+APP_BASE_NAME=`basename "$0"`
+
+# Use the maximum available, or set MAX_FD != -1 to use that value.
+MAX_FD="maximum"
+
+warn ( ) {
+ echo "$*"
+}
+
+die ( ) {
+ echo
+ echo "$*"
+ echo
+ exit 1
+}
+
+# OS specific support (must be 'true' or 'false').
+cygwin=false
+msys=false
+darwin=false
+case "`uname`" in
+ CYGWIN* )
+ cygwin=true
+ ;;
+ Darwin* )
+ darwin=true
+ ;;
+ MINGW* )
+ msys=true
+ ;;
+esac
+
+# Attempt to set APP_HOME
+# Resolve links: $0 may be a link
+PRG="$0"
+# Need this for relative symlinks.
+while [ -h "$PRG" ] ; do
+ ls=`ls -ld "$PRG"`
+ link=`expr "$ls" : '.*-> \(.*\)$'`
+ if expr "$link" : '/.*' > /dev/null; then
+ PRG="$link"
+ else
+ PRG=`dirname "$PRG"`"/$link"
+ fi
+done
+SAVED="`pwd`"
+cd "`dirname \"$PRG\"`/" >/dev/null
+APP_HOME="`pwd -P`"
+cd "$SAVED" >/dev/null
+
+CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
+
+# Determine the Java command to use to start the JVM.
+if [ -n "$JAVA_HOME" ] ; then
+ if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
+ # IBM's JDK on AIX uses strange locations for the executables
+ JAVACMD="$JAVA_HOME/jre/sh/java"
+ else
+ JAVACMD="$JAVA_HOME/bin/java"
+ fi
+ if [ ! -x "$JAVACMD" ] ; then
+ die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
+
+Please set the JAVA_HOME variable in your environment to match the
+location of your Java installation."
+ fi
+else
+ JAVACMD="java"
+ which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
+
+Please set the JAVA_HOME variable in your environment to match the
+location of your Java installation."
+fi
+
+# Increase the maximum file descriptors if we can.
+if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
+ MAX_FD_LIMIT=`ulimit -H -n`
+ if [ $? -eq 0 ] ; then
+ if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
+ MAX_FD="$MAX_FD_LIMIT"
+ fi
+ ulimit -n $MAX_FD
+ if [ $? -ne 0 ] ; then
+ warn "Could not set maximum file descriptor limit: $MAX_FD"
+ fi
+ else
+ warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
+ fi
+fi
+
+# For Darwin, add options to specify how the application appears in the dock
+if $darwin; then
+ GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
+fi
+
+# For Cygwin, switch paths to Windows format before running java
+if $cygwin ; then
+ APP_HOME=`cygpath --path --mixed "$APP_HOME"`
+ CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
+ JAVACMD=`cygpath --unix "$JAVACMD"`
+
+ # We build the pattern for arguments to be converted via cygpath
+ ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
+ SEP=""
+ for dir in $ROOTDIRSRAW ; do
+ ROOTDIRS="$ROOTDIRS$SEP$dir"
+ SEP="|"
+ done
+ OURCYGPATTERN="(^($ROOTDIRS))"
+ # Add a user-defined pattern to the cygpath arguments
+ if [ "$GRADLE_CYGPATTERN" != "" ] ; then
+ OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
+ fi
+ # Now convert the arguments - kludge to limit ourselves to /bin/sh
+ i=0
+ for arg in "$@" ; do
+ CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
+ CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option
+
+ if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition
+ eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
+ else
+ eval `echo args$i`="\"$arg\""
+ fi
+ i=$((i+1))
+ done
+ case $i in
+ (0) set -- ;;
+ (1) set -- "$args0" ;;
+ (2) set -- "$args0" "$args1" ;;
+ (3) set -- "$args0" "$args1" "$args2" ;;
+ (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
+ (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
+ (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
+ (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
+ (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
+ (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
+ esac
+fi
+
+# Split up the JVM_OPTS and GRADLE_OPTS values into an array, following the shell quoting and substitution rules
+function splitJvmOpts() {
+ JVM_OPTS=("$@")
+}
+eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
+JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"
+
+exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"
diff --git a/demos/smoke/android/gradlew.bat b/demos/smoke/android/gradlew.bat
new file mode 100644
index 000000000..aec99730b
--- /dev/null
+++ b/demos/smoke/android/gradlew.bat
@@ -0,0 +1,90 @@
+@if "%DEBUG%" == "" @echo off
+@rem ##########################################################################
+@rem
+@rem Gradle startup script for Windows
+@rem
+@rem ##########################################################################
+
+@rem Set local scope for the variables with windows NT shell
+if "%OS%"=="Windows_NT" setlocal
+
+@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
+set DEFAULT_JVM_OPTS=
+
+set DIRNAME=%~dp0
+if "%DIRNAME%" == "" set DIRNAME=.
+set APP_BASE_NAME=%~n0
+set APP_HOME=%DIRNAME%
+
+@rem Find java.exe
+if defined JAVA_HOME goto findJavaFromJavaHome
+
+set JAVA_EXE=java.exe
+%JAVA_EXE% -version >NUL 2>&1
+if "%ERRORLEVEL%" == "0" goto init
+
+echo.
+echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
+echo.
+echo Please set the JAVA_HOME variable in your environment to match the
+echo location of your Java installation.
+
+goto fail
+
+:findJavaFromJavaHome
+set JAVA_HOME=%JAVA_HOME:"=%
+set JAVA_EXE=%JAVA_HOME%/bin/java.exe
+
+if exist "%JAVA_EXE%" goto init
+
+echo.
+echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
+echo.
+echo Please set the JAVA_HOME variable in your environment to match the
+echo location of your Java installation.
+
+goto fail
+
+:init
+@rem Get command-line arguments, handling Windows variants
+
+if not "%OS%" == "Windows_NT" goto win9xME_args
+if "%@eval[2+2]" == "4" goto 4NT_args
+
+:win9xME_args
+@rem Slurp the command line arguments.
+set CMD_LINE_ARGS=
+set _SKIP=2
+
+:win9xME_args_slurp
+if "x%~1" == "x" goto execute
+
+set CMD_LINE_ARGS=%*
+goto execute
+
+:4NT_args
+@rem Get arguments from the 4NT Shell from JP Software
+set CMD_LINE_ARGS=%$
+
+:execute
+@rem Setup the command line
+
+set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
+
+@rem Execute Gradle
+"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
+
+:end
+@rem End local scope for the variables with windows NT shell
+if "%ERRORLEVEL%"=="0" goto mainEnd
+
+:fail
+rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
+rem the _cmd.exe /c_ return code!
+if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
+exit /b 1
+
+:mainEnd
+if "%OS%"=="Windows_NT" endlocal
+
+:omega
diff --git a/demos/smoke/android/src/main/AndroidManifest.xml b/demos/smoke/android/src/main/AndroidManifest.xml
new file mode 100644
index 000000000..c68af879d
--- /dev/null
+++ b/demos/smoke/android/src/main/AndroidManifest.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="utf-8"?>
+<manifest xmlns:android="http://schemas.android.com/apk/res/android"
+ package="com.example.Smoke">
+
+ <application android:label="@string/app_name"
+ android:hasCode="false"
+ android:allowBackup="false">
+ <activity android:name="android.app.NativeActivity"
+ android:label="@string/app_name">
+
+ <meta-data android:name="android.app.lib_name"
+ android:value="Smoke"/>
+
+ <intent-filter>
+ <action android:name="android.intent.action.MAIN" />
+ <category android:name="android.intent.category.LAUNCHER" />
+ </intent-filter>
+ </activity>
+ </application>
+</manifest>
diff --git a/demos/smoke/android/src/main/jni/Smoke.frag.h b/demos/smoke/android/src/main/jni/Smoke.frag.h
new file mode 100644
index 000000000..5149ad6b3
--- /dev/null
+++ b/demos/smoke/android/src/main/jni/Smoke.frag.h
@@ -0,0 +1,78 @@
+#include <stdint.h>
+
+#if 0
+/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.frag
+Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.
+
+
+Linked fragment stage:
+
+
+// Module Version 10000
+// Generated by (magic number): 80001
+// Id's are bound by 19
+
+ Capability Shader
+ 1: ExtInstImport "GLSL.std.450"
+ MemoryModel Logical GLSL450
+ EntryPoint Fragment 4 "main" 9 12
+ ExecutionMode 4 OriginLowerLeft
+ Source ESSL 310
+ Name 4 "main"
+ Name 9 "fragcolor"
+ Name 12 "color"
+ Decorate 9(fragcolor) Location 0
+ 2: TypeVoid
+ 3: TypeFunction 2
+ 6: TypeFloat 32
+ 7: TypeVector 6(float) 4
+ 8: TypePointer Output 7(fvec4)
+ 9(fragcolor): 8(ptr) Variable Output
+ 10: TypeVector 6(float) 3
+ 11: TypePointer Input 10(fvec3)
+ 12(color): 11(ptr) Variable Input
+ 14: 6(float) Constant 1056964608
+ 4(main): 2 Function None 3
+ 5: Label
+ 13: 10(fvec3) Load 12(color)
+ 15: 6(float) CompositeExtract 13 0
+ 16: 6(float) CompositeExtract 13 1
+ 17: 6(float) CompositeExtract 13 2
+ 18: 7(fvec4) CompositeConstruct 15 16 17 14
+ Store 9(fragcolor) 18
+ Return
+ FunctionEnd
+#endif
+
+static const uint32_t Smoke_frag[120] = {
+ 0x07230203, 0x00010000, 0x00080001, 0x00000013,
+ 0x00000000, 0x00020011, 0x00000001, 0x0006000b,
+ 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e,
+ 0x00000000, 0x0003000e, 0x00000000, 0x00000001,
+ 0x0007000f, 0x00000004, 0x00000004, 0x6e69616d,
+ 0x00000000, 0x00000009, 0x0000000c, 0x00030010,
+ 0x00000004, 0x00000008, 0x00030003, 0x00000001,
+ 0x00000136, 0x00040005, 0x00000004, 0x6e69616d,
+ 0x00000000, 0x00050005, 0x00000009, 0x67617266,
+ 0x6f6c6f63, 0x00000072, 0x00040005, 0x0000000c,
+ 0x6f6c6f63, 0x00000072, 0x00040047, 0x00000009,
+ 0x0000001e, 0x00000000, 0x00020013, 0x00000002,
+ 0x00030021, 0x00000003, 0x00000002, 0x00030016,
+ 0x00000006, 0x00000020, 0x00040017, 0x00000007,
+ 0x00000006, 0x00000004, 0x00040020, 0x00000008,
+ 0x00000003, 0x00000007, 0x0004003b, 0x00000008,
+ 0x00000009, 0x00000003, 0x00040017, 0x0000000a,
+ 0x00000006, 0x00000003, 0x00040020, 0x0000000b,
+ 0x00000001, 0x0000000a, 0x0004003b, 0x0000000b,
+ 0x0000000c, 0x00000001, 0x0004002b, 0x00000006,
+ 0x0000000e, 0x3f000000, 0x00050036, 0x00000002,
+ 0x00000004, 0x00000000, 0x00000003, 0x000200f8,
+ 0x00000005, 0x0004003d, 0x0000000a, 0x0000000d,
+ 0x0000000c, 0x00050051, 0x00000006, 0x0000000f,
+ 0x0000000d, 0x00000000, 0x00050051, 0x00000006,
+ 0x00000010, 0x0000000d, 0x00000001, 0x00050051,
+ 0x00000006, 0x00000011, 0x0000000d, 0x00000002,
+ 0x00070050, 0x00000007, 0x00000012, 0x0000000f,
+ 0x00000010, 0x00000011, 0x0000000e, 0x0003003e,
+ 0x00000009, 0x00000012, 0x000100fd, 0x00010038,
+};
diff --git a/demos/smoke/android/src/main/jni/Smoke.push_constant.vert.h b/demos/smoke/android/src/main/jni/Smoke.push_constant.vert.h
new file mode 100644
index 000000000..db571a020
--- /dev/null
+++ b/demos/smoke/android/src/main/jni/Smoke.push_constant.vert.h
@@ -0,0 +1,352 @@
+#include <stdint.h>
+
+#if 0
+/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.push_constant.vert
+Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.
+
+
+Linked vertex stage:
+
+
+// Module Version 10000
+// Generated by (magic number): 80001
+// Id's are bound by 108
+
+ Capability Shader
+ 1: ExtInstImport "GLSL.std.450"
+ MemoryModel Logical GLSL450
+ EntryPoint Vertex 4 "main" 38 67 89 102
+ Source ESSL 310
+ Name 4 "main"
+ Name 9 "world_light"
+ Name 12 "param_block"
+ MemberName 12(param_block) 0 "light_pos"
+ MemberName 12(param_block) 1 "light_color"
+ MemberName 12(param_block) 2 "model"
+ MemberName 12(param_block) 3 "view_projection"
+ Name 14 "params"
+ Name 34 "world_pos"
+ Name 38 "in_pos"
+ Name 49 "world_normal"
+ Name 67 "in_normal"
+ Name 70 "light_dir"
+ Name 75 "brightness"
+ Name 87 "gl_PerVertex"
+ MemberName 87(gl_PerVertex) 0 "gl_Position"
+ MemberName 87(gl_PerVertex) 1 "gl_PointSize"
+ Name 89 ""
+ Name 102 "color"
+ MemberDecorate 12(param_block) 0 Offset 0
+ MemberDecorate 12(param_block) 1 Offset 16
+ MemberDecorate 12(param_block) 2 ColMajor
+ MemberDecorate 12(param_block) 2 Offset 32
+ MemberDecorate 12(param_block) 2 MatrixStride 16
+ MemberDecorate 12(param_block) 3 ColMajor
+ MemberDecorate 12(param_block) 3 Offset 96
+ MemberDecorate 12(param_block) 3 MatrixStride 16
+ Decorate 12(param_block) Block
+ Decorate 14(params) DescriptorSet 0
+ Decorate 38(in_pos) Location 0
+ Decorate 67(in_normal) Location 1
+ MemberDecorate 87(gl_PerVertex) 0 BuiltIn Position
+ MemberDecorate 87(gl_PerVertex) 1 BuiltIn PointSize
+ Decorate 87(gl_PerVertex) Block
+ 2: TypeVoid
+ 3: TypeFunction 2
+ 6: TypeFloat 32
+ 7: TypeVector 6(float) 3
+ 8: TypePointer Function 7(fvec3)
+ 10: TypeVector 6(float) 4
+ 11: TypeMatrix 10(fvec4) 4
+ 12(param_block): TypeStruct 7(fvec3) 7(fvec3) 11 11
+ 13: TypePointer PushConstant 12(param_block)
+ 14(params): 13(ptr) Variable PushConstant
+ 15: TypeInt 32 1
+ 16: 15(int) Constant 2
+ 17: TypePointer PushConstant 11
+ 20: 15(int) Constant 0
+ 21: TypePointer PushConstant 7(fvec3)
+ 24: 6(float) Constant 1065353216
+ 37: TypePointer Input 7(fvec3)
+ 38(in_pos): 37(ptr) Variable Input
+ 52: TypeMatrix 7(fvec3) 3
+ 53: 6(float) Constant 0
+ 67(in_normal): 37(ptr) Variable Input
+ 74: TypePointer Function 6(float)
+87(gl_PerVertex): TypeStruct 10(fvec4) 6(float)
+ 88: TypePointer Output 87(gl_PerVertex)
+ 89: 88(ptr) Variable Output
+ 90: 15(int) Constant 3
+ 99: TypePointer Output 10(fvec4)
+ 101: TypePointer Output 7(fvec3)
+ 102(color): 101(ptr) Variable Output
+ 103: 15(int) Constant 1
+ 4(main): 2 Function None 3
+ 5: Label
+ 9(world_light): 8(ptr) Variable Function
+ 34(world_pos): 8(ptr) Variable Function
+49(world_normal): 8(ptr) Variable Function
+ 70(light_dir): 8(ptr) Variable Function
+ 75(brightness): 74(ptr) Variable Function
+ 18: 17(ptr) AccessChain 14(params) 16
+ 19: 11 Load 18
+ 22: 21(ptr) AccessChain 14(params) 20
+ 23: 7(fvec3) Load 22
+ 25: 6(float) CompositeExtract 23 0
+ 26: 6(float) CompositeExtract 23 1
+ 27: 6(float) CompositeExtract 23 2
+ 28: 10(fvec4) CompositeConstruct 25 26 27 24
+ 29: 10(fvec4) MatrixTimesVector 19 28
+ 30: 6(float) CompositeExtract 29 0
+ 31: 6(float) CompositeExtract 29 1
+ 32: 6(float) CompositeExtract 29 2
+ 33: 7(fvec3) CompositeConstruct 30 31 32
+ Store 9(world_light) 33
+ 35: 17(ptr) AccessChain 14(params) 16
+ 36: 11 Load 35
+ 39: 7(fvec3) Load 38(in_pos)
+ 40: 6(float) CompositeExtract 39 0
+ 41: 6(float) CompositeExtract 39 1
+ 42: 6(float) CompositeExtract 39 2
+ 43: 10(fvec4) CompositeConstruct 40 41 42 24
+ 44: 10(fvec4) MatrixTimesVector 36 43
+ 45: 6(float) CompositeExtract 44 0
+ 46: 6(float) CompositeExtract 44 1
+ 47: 6(float) CompositeExtract 44 2
+ 48: 7(fvec3) CompositeConstruct 45 46 47
+ Store 34(world_pos) 48
+ 50: 17(ptr) AccessChain 14(params) 16
+ 51: 11 Load 50
+ 54: 6(float) CompositeExtract 51 0 0
+ 55: 6(float) CompositeExtract 51 0 1
+ 56: 6(float) CompositeExtract 51 0 2
+ 57: 6(float) CompositeExtract 51 1 0
+ 58: 6(float) CompositeExtract 51 1 1
+ 59: 6(float) CompositeExtract 51 1 2
+ 60: 6(float) CompositeExtract 51 2 0
+ 61: 6(float) CompositeExtract 51 2 1
+ 62: 6(float) CompositeExtract 51 2 2
+ 63: 7(fvec3) CompositeConstruct 54 55 56
+ 64: 7(fvec3) CompositeConstruct 57 58 59
+ 65: 7(fvec3) CompositeConstruct 60 61 62
+ 66: 52 CompositeConstruct 63 64 65
+ 68: 7(fvec3) Load 67(in_normal)
+ 69: 7(fvec3) MatrixTimesVector 66 68
+ Store 49(world_normal) 69
+ 71: 7(fvec3) Load 9(world_light)
+ 72: 7(fvec3) Load 34(world_pos)
+ 73: 7(fvec3) FSub 71 72
+ Store 70(light_dir) 73
+ 76: 7(fvec3) Load 70(light_dir)
+ 77: 7(fvec3) Load 49(world_normal)
+ 78: 6(float) Dot 76 77
+ 79: 7(fvec3) Load 70(light_dir)
+ 80: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 79
+ 81: 6(float) FDiv 78 80
+ 82: 7(fvec3) Load 49(world_normal)
+ 83: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 82
+ 84: 6(float) FDiv 81 83
+ Store 75(brightness) 84
+ 85: 6(float) Load 75(brightness)
+ 86: 6(float) ExtInst 1(GLSL.std.450) 4(FAbs) 85
+ Store 75(brightness) 86
+ 91: 17(ptr) AccessChain 14(params) 90
+ 92: 11 Load 91
+ 93: 7(fvec3) Load 34(world_pos)
+ 94: 6(float) CompositeExtract 93 0
+ 95: 6(float) CompositeExtract 93 1
+ 96: 6(float) CompositeExtract 93 2
+ 97: 10(fvec4) CompositeConstruct 94 95 96 24
+ 98: 10(fvec4) MatrixTimesVector 92 97
+ 100: 99(ptr) AccessChain 89 20
+ Store 100 98
+ 104: 21(ptr) AccessChain 14(params) 103
+ 105: 7(fvec3) Load 104
+ 106: 6(float) Load 75(brightness)
+ 107: 7(fvec3) VectorTimesScalar 105 106
+ Store 102(color) 107
+ Return
+ FunctionEnd
+#endif
+
+static const uint32_t Smoke_push_constant_vert[715] = {
+ 0x07230203, 0x00010000, 0x00080001, 0x0000006c,
+ 0x00000000, 0x00020011, 0x00000001, 0x0006000b,
+ 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e,
+ 0x00000000, 0x0003000e, 0x00000000, 0x00000001,
+ 0x0009000f, 0x00000000, 0x00000004, 0x6e69616d,
+ 0x00000000, 0x00000026, 0x00000043, 0x00000059,
+ 0x00000066, 0x00030003, 0x00000001, 0x00000136,
+ 0x00040005, 0x00000004, 0x6e69616d, 0x00000000,
+ 0x00050005, 0x00000009, 0x6c726f77, 0x696c5f64,
+ 0x00746867, 0x00050005, 0x0000000c, 0x61726170,
+ 0x6c625f6d, 0x006b636f, 0x00060006, 0x0000000c,
+ 0x00000000, 0x6867696c, 0x6f705f74, 0x00000073,
+ 0x00060006, 0x0000000c, 0x00000001, 0x6867696c,
+ 0x6f635f74, 0x00726f6c, 0x00050006, 0x0000000c,
+ 0x00000002, 0x65646f6d, 0x0000006c, 0x00070006,
+ 0x0000000c, 0x00000003, 0x77656976, 0x6f72705f,
+ 0x7463656a, 0x006e6f69, 0x00040005, 0x0000000e,
+ 0x61726170, 0x0000736d, 0x00050005, 0x00000022,
+ 0x6c726f77, 0x6f705f64, 0x00000073, 0x00040005,
+ 0x00000026, 0x705f6e69, 0x0000736f, 0x00060005,
+ 0x00000031, 0x6c726f77, 0x6f6e5f64, 0x6c616d72,
+ 0x00000000, 0x00050005, 0x00000043, 0x6e5f6e69,
+ 0x616d726f, 0x0000006c, 0x00050005, 0x00000046,
+ 0x6867696c, 0x69645f74, 0x00000072, 0x00050005,
+ 0x0000004b, 0x67697262, 0x656e7468, 0x00007373,
+ 0x00060005, 0x00000057, 0x505f6c67, 0x65567265,
+ 0x78657472, 0x00000000, 0x00060006, 0x00000057,
+ 0x00000000, 0x505f6c67, 0x7469736f, 0x006e6f69,
+ 0x00070006, 0x00000057, 0x00000001, 0x505f6c67,
+ 0x746e696f, 0x657a6953, 0x00000000, 0x00030005,
+ 0x00000059, 0x00000000, 0x00040005, 0x00000066,
+ 0x6f6c6f63, 0x00000072, 0x00050048, 0x0000000c,
+ 0x00000000, 0x00000023, 0x00000000, 0x00050048,
+ 0x0000000c, 0x00000001, 0x00000023, 0x00000010,
+ 0x00040048, 0x0000000c, 0x00000002, 0x00000005,
+ 0x00050048, 0x0000000c, 0x00000002, 0x00000023,
+ 0x00000020, 0x00050048, 0x0000000c, 0x00000002,
+ 0x00000007, 0x00000010, 0x00040048, 0x0000000c,
+ 0x00000003, 0x00000005, 0x00050048, 0x0000000c,
+ 0x00000003, 0x00000023, 0x00000060, 0x00050048,
+ 0x0000000c, 0x00000003, 0x00000007, 0x00000010,
+ 0x00030047, 0x0000000c, 0x00000002, 0x00040047,
+ 0x0000000e, 0x00000022, 0x00000000, 0x00040047,
+ 0x00000026, 0x0000001e, 0x00000000, 0x00040047,
+ 0x00000043, 0x0000001e, 0x00000001, 0x00050048,
+ 0x00000057, 0x00000000, 0x0000000b, 0x00000000,
+ 0x00050048, 0x00000057, 0x00000001, 0x0000000b,
+ 0x00000001, 0x00030047, 0x00000057, 0x00000002,
+ 0x00020013, 0x00000002, 0x00030021, 0x00000003,
+ 0x00000002, 0x00030016, 0x00000006, 0x00000020,
+ 0x00040017, 0x00000007, 0x00000006, 0x00000003,
+ 0x00040020, 0x00000008, 0x00000007, 0x00000007,
+ 0x00040017, 0x0000000a, 0x00000006, 0x00000004,
+ 0x00040018, 0x0000000b, 0x0000000a, 0x00000004,
+ 0x0006001e, 0x0000000c, 0x00000007, 0x00000007,
+ 0x0000000b, 0x0000000b, 0x00040020, 0x0000000d,
+ 0x00000009, 0x0000000c, 0x0004003b, 0x0000000d,
+ 0x0000000e, 0x00000009, 0x00040015, 0x0000000f,
+ 0x00000020, 0x00000001, 0x0004002b, 0x0000000f,
+ 0x00000010, 0x00000002, 0x00040020, 0x00000011,
+ 0x00000009, 0x0000000b, 0x0004002b, 0x0000000f,
+ 0x00000014, 0x00000000, 0x00040020, 0x00000015,
+ 0x00000009, 0x00000007, 0x0004002b, 0x00000006,
+ 0x00000018, 0x3f800000, 0x00040020, 0x00000025,
+ 0x00000001, 0x00000007, 0x0004003b, 0x00000025,
+ 0x00000026, 0x00000001, 0x00040018, 0x00000034,
+ 0x00000007, 0x00000003, 0x0004002b, 0x00000006,
+ 0x00000035, 0x00000000, 0x0004003b, 0x00000025,
+ 0x00000043, 0x00000001, 0x00040020, 0x0000004a,
+ 0x00000007, 0x00000006, 0x0004001e, 0x00000057,
+ 0x0000000a, 0x00000006, 0x00040020, 0x00000058,
+ 0x00000003, 0x00000057, 0x0004003b, 0x00000058,
+ 0x00000059, 0x00000003, 0x0004002b, 0x0000000f,
+ 0x0000005a, 0x00000003, 0x00040020, 0x00000063,
+ 0x00000003, 0x0000000a, 0x00040020, 0x00000065,
+ 0x00000003, 0x00000007, 0x0004003b, 0x00000065,
+ 0x00000066, 0x00000003, 0x0004002b, 0x0000000f,
+ 0x00000067, 0x00000001, 0x00050036, 0x00000002,
+ 0x00000004, 0x00000000, 0x00000003, 0x000200f8,
+ 0x00000005, 0x0004003b, 0x00000008, 0x00000009,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000022,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000031,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000046,
+ 0x00000007, 0x0004003b, 0x0000004a, 0x0000004b,
+ 0x00000007, 0x00050041, 0x00000011, 0x00000012,
+ 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b,
+ 0x00000013, 0x00000012, 0x00050041, 0x00000015,
+ 0x00000016, 0x0000000e, 0x00000014, 0x0004003d,
+ 0x00000007, 0x00000017, 0x00000016, 0x00050051,
+ 0x00000006, 0x00000019, 0x00000017, 0x00000000,
+ 0x00050051, 0x00000006, 0x0000001a, 0x00000017,
+ 0x00000001, 0x00050051, 0x00000006, 0x0000001b,
+ 0x00000017, 0x00000002, 0x00070050, 0x0000000a,
+ 0x0000001c, 0x00000019, 0x0000001a, 0x0000001b,
+ 0x00000018, 0x00050091, 0x0000000a, 0x0000001d,
+ 0x00000013, 0x0000001c, 0x00050051, 0x00000006,
+ 0x0000001e, 0x0000001d, 0x00000000, 0x00050051,
+ 0x00000006, 0x0000001f, 0x0000001d, 0x00000001,
+ 0x00050051, 0x00000006, 0x00000020, 0x0000001d,
+ 0x00000002, 0x00060050, 0x00000007, 0x00000021,
+ 0x0000001e, 0x0000001f, 0x00000020, 0x0003003e,
+ 0x00000009, 0x00000021, 0x00050041, 0x00000011,
+ 0x00000023, 0x0000000e, 0x00000010, 0x0004003d,
+ 0x0000000b, 0x00000024, 0x00000023, 0x0004003d,
+ 0x00000007, 0x00000027, 0x00000026, 0x00050051,
+ 0x00000006, 0x00000028, 0x00000027, 0x00000000,
+ 0x00050051, 0x00000006, 0x00000029, 0x00000027,
+ 0x00000001, 0x00050051, 0x00000006, 0x0000002a,
+ 0x00000027, 0x00000002, 0x00070050, 0x0000000a,
+ 0x0000002b, 0x00000028, 0x00000029, 0x0000002a,
+ 0x00000018, 0x00050091, 0x0000000a, 0x0000002c,
+ 0x00000024, 0x0000002b, 0x00050051, 0x00000006,
+ 0x0000002d, 0x0000002c, 0x00000000, 0x00050051,
+ 0x00000006, 0x0000002e, 0x0000002c, 0x00000001,
+ 0x00050051, 0x00000006, 0x0000002f, 0x0000002c,
+ 0x00000002, 0x00060050, 0x00000007, 0x00000030,
+ 0x0000002d, 0x0000002e, 0x0000002f, 0x0003003e,
+ 0x00000022, 0x00000030, 0x00050041, 0x00000011,
+ 0x00000032, 0x0000000e, 0x00000010, 0x0004003d,
+ 0x0000000b, 0x00000033, 0x00000032, 0x00060051,
+ 0x00000006, 0x00000036, 0x00000033, 0x00000000,
+ 0x00000000, 0x00060051, 0x00000006, 0x00000037,
+ 0x00000033, 0x00000000, 0x00000001, 0x00060051,
+ 0x00000006, 0x00000038, 0x00000033, 0x00000000,
+ 0x00000002, 0x00060051, 0x00000006, 0x00000039,
+ 0x00000033, 0x00000001, 0x00000000, 0x00060051,
+ 0x00000006, 0x0000003a, 0x00000033, 0x00000001,
+ 0x00000001, 0x00060051, 0x00000006, 0x0000003b,
+ 0x00000033, 0x00000001, 0x00000002, 0x00060051,
+ 0x00000006, 0x0000003c, 0x00000033, 0x00000002,
+ 0x00000000, 0x00060051, 0x00000006, 0x0000003d,
+ 0x00000033, 0x00000002, 0x00000001, 0x00060051,
+ 0x00000006, 0x0000003e, 0x00000033, 0x00000002,
+ 0x00000002, 0x00060050, 0x00000007, 0x0000003f,
+ 0x00000036, 0x00000037, 0x00000038, 0x00060050,
+ 0x00000007, 0x00000040, 0x00000039, 0x0000003a,
+ 0x0000003b, 0x00060050, 0x00000007, 0x00000041,
+ 0x0000003c, 0x0000003d, 0x0000003e, 0x00060050,
+ 0x00000034, 0x00000042, 0x0000003f, 0x00000040,
+ 0x00000041, 0x0004003d, 0x00000007, 0x00000044,
+ 0x00000043, 0x00050091, 0x00000007, 0x00000045,
+ 0x00000042, 0x00000044, 0x0003003e, 0x00000031,
+ 0x00000045, 0x0004003d, 0x00000007, 0x00000047,
+ 0x00000009, 0x0004003d, 0x00000007, 0x00000048,
+ 0x00000022, 0x00050083, 0x00000007, 0x00000049,
+ 0x00000047, 0x00000048, 0x0003003e, 0x00000046,
+ 0x00000049, 0x0004003d, 0x00000007, 0x0000004c,
+ 0x00000046, 0x0004003d, 0x00000007, 0x0000004d,
+ 0x00000031, 0x00050094, 0x00000006, 0x0000004e,
+ 0x0000004c, 0x0000004d, 0x0004003d, 0x00000007,
+ 0x0000004f, 0x00000046, 0x0006000c, 0x00000006,
+ 0x00000050, 0x00000001, 0x00000042, 0x0000004f,
+ 0x00050088, 0x00000006, 0x00000051, 0x0000004e,
+ 0x00000050, 0x0004003d, 0x00000007, 0x00000052,
+ 0x00000031, 0x0006000c, 0x00000006, 0x00000053,
+ 0x00000001, 0x00000042, 0x00000052, 0x00050088,
+ 0x00000006, 0x00000054, 0x00000051, 0x00000053,
+ 0x0003003e, 0x0000004b, 0x00000054, 0x0004003d,
+ 0x00000006, 0x00000055, 0x0000004b, 0x0006000c,
+ 0x00000006, 0x00000056, 0x00000001, 0x00000004,
+ 0x00000055, 0x0003003e, 0x0000004b, 0x00000056,
+ 0x00050041, 0x00000011, 0x0000005b, 0x0000000e,
+ 0x0000005a, 0x0004003d, 0x0000000b, 0x0000005c,
+ 0x0000005b, 0x0004003d, 0x00000007, 0x0000005d,
+ 0x00000022, 0x00050051, 0x00000006, 0x0000005e,
+ 0x0000005d, 0x00000000, 0x00050051, 0x00000006,
+ 0x0000005f, 0x0000005d, 0x00000001, 0x00050051,
+ 0x00000006, 0x00000060, 0x0000005d, 0x00000002,
+ 0x00070050, 0x0000000a, 0x00000061, 0x0000005e,
+ 0x0000005f, 0x00000060, 0x00000018, 0x00050091,
+ 0x0000000a, 0x00000062, 0x0000005c, 0x00000061,
+ 0x00050041, 0x00000063, 0x00000064, 0x00000059,
+ 0x00000014, 0x0003003e, 0x00000064, 0x00000062,
+ 0x00050041, 0x00000015, 0x00000068, 0x0000000e,
+ 0x00000067, 0x0004003d, 0x00000007, 0x00000069,
+ 0x00000068, 0x0004003d, 0x00000006, 0x0000006a,
+ 0x0000004b, 0x0005008e, 0x00000007, 0x0000006b,
+ 0x00000069, 0x0000006a, 0x0003003e, 0x00000066,
+ 0x0000006b, 0x000100fd, 0x00010038,
+};
diff --git a/demos/smoke/android/src/main/jni/Smoke.vert.h b/demos/smoke/android/src/main/jni/Smoke.vert.h
new file mode 100644
index 000000000..ac59ed48c
--- /dev/null
+++ b/demos/smoke/android/src/main/jni/Smoke.vert.h
@@ -0,0 +1,354 @@
+#include <stdint.h>
+
+#if 0
+/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.vert
+Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.
+
+
+Linked vertex stage:
+
+
+// Module Version 10000
+// Generated by (magic number): 80001
+// Id's are bound by 108
+
+ Capability Shader
+ 1: ExtInstImport "GLSL.std.450"
+ MemoryModel Logical GLSL450
+ EntryPoint Vertex 4 "main" 38 67 89 102
+ Source ESSL 310
+ Name 4 "main"
+ Name 9 "world_light"
+ Name 12 "param_block"
+ MemberName 12(param_block) 0 "light_pos"
+ MemberName 12(param_block) 1 "light_color"
+ MemberName 12(param_block) 2 "model"
+ MemberName 12(param_block) 3 "view_projection"
+ Name 14 "params"
+ Name 34 "world_pos"
+ Name 38 "in_pos"
+ Name 49 "world_normal"
+ Name 67 "in_normal"
+ Name 70 "light_dir"
+ Name 75 "brightness"
+ Name 87 "gl_PerVertex"
+ MemberName 87(gl_PerVertex) 0 "gl_Position"
+ MemberName 87(gl_PerVertex) 1 "gl_PointSize"
+ Name 89 ""
+ Name 102 "color"
+ MemberDecorate 12(param_block) 0 Offset 0
+ MemberDecorate 12(param_block) 1 Offset 16
+ MemberDecorate 12(param_block) 2 ColMajor
+ MemberDecorate 12(param_block) 2 Offset 32
+ MemberDecorate 12(param_block) 2 MatrixStride 16
+ MemberDecorate 12(param_block) 3 ColMajor
+ MemberDecorate 12(param_block) 3 Offset 96
+ MemberDecorate 12(param_block) 3 MatrixStride 16
+ Decorate 12(param_block) BufferBlock
+ Decorate 14(params) DescriptorSet 0
+ Decorate 14(params) Binding 0
+ Decorate 38(in_pos) Location 0
+ Decorate 67(in_normal) Location 1
+ MemberDecorate 87(gl_PerVertex) 0 BuiltIn Position
+ MemberDecorate 87(gl_PerVertex) 1 BuiltIn PointSize
+ Decorate 87(gl_PerVertex) Block
+ 2: TypeVoid
+ 3: TypeFunction 2
+ 6: TypeFloat 32
+ 7: TypeVector 6(float) 3
+ 8: TypePointer Function 7(fvec3)
+ 10: TypeVector 6(float) 4
+ 11: TypeMatrix 10(fvec4) 4
+ 12(param_block): TypeStruct 7(fvec3) 7(fvec3) 11 11
+ 13: TypePointer Uniform 12(param_block)
+ 14(params): 13(ptr) Variable Uniform
+ 15: TypeInt 32 1
+ 16: 15(int) Constant 2
+ 17: TypePointer Uniform 11
+ 20: 15(int) Constant 0
+ 21: TypePointer Uniform 7(fvec3)
+ 24: 6(float) Constant 1065353216
+ 37: TypePointer Input 7(fvec3)
+ 38(in_pos): 37(ptr) Variable Input
+ 52: TypeMatrix 7(fvec3) 3
+ 53: 6(float) Constant 0
+ 67(in_normal): 37(ptr) Variable Input
+ 74: TypePointer Function 6(float)
+87(gl_PerVertex): TypeStruct 10(fvec4) 6(float)
+ 88: TypePointer Output 87(gl_PerVertex)
+ 89: 88(ptr) Variable Output
+ 90: 15(int) Constant 3
+ 99: TypePointer Output 10(fvec4)
+ 101: TypePointer Output 7(fvec3)
+ 102(color): 101(ptr) Variable Output
+ 103: 15(int) Constant 1
+ 4(main): 2 Function None 3
+ 5: Label
+ 9(world_light): 8(ptr) Variable Function
+ 34(world_pos): 8(ptr) Variable Function
+49(world_normal): 8(ptr) Variable Function
+ 70(light_dir): 8(ptr) Variable Function
+ 75(brightness): 74(ptr) Variable Function
+ 18: 17(ptr) AccessChain 14(params) 16
+ 19: 11 Load 18
+ 22: 21(ptr) AccessChain 14(params) 20
+ 23: 7(fvec3) Load 22
+ 25: 6(float) CompositeExtract 23 0
+ 26: 6(float) CompositeExtract 23 1
+ 27: 6(float) CompositeExtract 23 2
+ 28: 10(fvec4) CompositeConstruct 25 26 27 24
+ 29: 10(fvec4) MatrixTimesVector 19 28
+ 30: 6(float) CompositeExtract 29 0
+ 31: 6(float) CompositeExtract 29 1
+ 32: 6(float) CompositeExtract 29 2
+ 33: 7(fvec3) CompositeConstruct 30 31 32
+ Store 9(world_light) 33
+ 35: 17(ptr) AccessChain 14(params) 16
+ 36: 11 Load 35
+ 39: 7(fvec3) Load 38(in_pos)
+ 40: 6(float) CompositeExtract 39 0
+ 41: 6(float) CompositeExtract 39 1
+ 42: 6(float) CompositeExtract 39 2
+ 43: 10(fvec4) CompositeConstruct 40 41 42 24
+ 44: 10(fvec4) MatrixTimesVector 36 43
+ 45: 6(float) CompositeExtract 44 0
+ 46: 6(float) CompositeExtract 44 1
+ 47: 6(float) CompositeExtract 44 2
+ 48: 7(fvec3) CompositeConstruct 45 46 47
+ Store 34(world_pos) 48
+ 50: 17(ptr) AccessChain 14(params) 16
+ 51: 11 Load 50
+ 54: 6(float) CompositeExtract 51 0 0
+ 55: 6(float) CompositeExtract 51 0 1
+ 56: 6(float) CompositeExtract 51 0 2
+ 57: 6(float) CompositeExtract 51 1 0
+ 58: 6(float) CompositeExtract 51 1 1
+ 59: 6(float) CompositeExtract 51 1 2
+ 60: 6(float) CompositeExtract 51 2 0
+ 61: 6(float) CompositeExtract 51 2 1
+ 62: 6(float) CompositeExtract 51 2 2
+ 63: 7(fvec3) CompositeConstruct 54 55 56
+ 64: 7(fvec3) CompositeConstruct 57 58 59
+ 65: 7(fvec3) CompositeConstruct 60 61 62
+ 66: 52 CompositeConstruct 63 64 65
+ 68: 7(fvec3) Load 67(in_normal)
+ 69: 7(fvec3) MatrixTimesVector 66 68
+ Store 49(world_normal) 69
+ 71: 7(fvec3) Load 9(world_light)
+ 72: 7(fvec3) Load 34(world_pos)
+ 73: 7(fvec3) FSub 71 72
+ Store 70(light_dir) 73
+ 76: 7(fvec3) Load 70(light_dir)
+ 77: 7(fvec3) Load 49(world_normal)
+ 78: 6(float) Dot 76 77
+ 79: 7(fvec3) Load 70(light_dir)
+ 80: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 79
+ 81: 6(float) FDiv 78 80
+ 82: 7(fvec3) Load 49(world_normal)
+ 83: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 82
+ 84: 6(float) FDiv 81 83
+ Store 75(brightness) 84
+ 85: 6(float) Load 75(brightness)
+ 86: 6(float) ExtInst 1(GLSL.std.450) 4(FAbs) 85
+ Store 75(brightness) 86
+ 91: 17(ptr) AccessChain 14(params) 90
+ 92: 11 Load 91
+ 93: 7(fvec3) Load 34(world_pos)
+ 94: 6(float) CompositeExtract 93 0
+ 95: 6(float) CompositeExtract 93 1
+ 96: 6(float) CompositeExtract 93 2
+ 97: 10(fvec4) CompositeConstruct 94 95 96 24
+ 98: 10(fvec4) MatrixTimesVector 92 97
+ 100: 99(ptr) AccessChain 89 20
+ Store 100 98
+ 104: 21(ptr) AccessChain 14(params) 103
+ 105: 7(fvec3) Load 104
+ 106: 6(float) Load 75(brightness)
+ 107: 7(fvec3) VectorTimesScalar 105 106
+ Store 102(color) 107
+ Return
+ FunctionEnd
+#endif
+
+static const uint32_t Smoke_vert[719] = {
+ 0x07230203, 0x00010000, 0x00080001, 0x0000006c,
+ 0x00000000, 0x00020011, 0x00000001, 0x0006000b,
+ 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e,
+ 0x00000000, 0x0003000e, 0x00000000, 0x00000001,
+ 0x0009000f, 0x00000000, 0x00000004, 0x6e69616d,
+ 0x00000000, 0x00000026, 0x00000043, 0x00000059,
+ 0x00000066, 0x00030003, 0x00000001, 0x00000136,
+ 0x00040005, 0x00000004, 0x6e69616d, 0x00000000,
+ 0x00050005, 0x00000009, 0x6c726f77, 0x696c5f64,
+ 0x00746867, 0x00050005, 0x0000000c, 0x61726170,
+ 0x6c625f6d, 0x006b636f, 0x00060006, 0x0000000c,
+ 0x00000000, 0x6867696c, 0x6f705f74, 0x00000073,
+ 0x00060006, 0x0000000c, 0x00000001, 0x6867696c,
+ 0x6f635f74, 0x00726f6c, 0x00050006, 0x0000000c,
+ 0x00000002, 0x65646f6d, 0x0000006c, 0x00070006,
+ 0x0000000c, 0x00000003, 0x77656976, 0x6f72705f,
+ 0x7463656a, 0x006e6f69, 0x00040005, 0x0000000e,
+ 0x61726170, 0x0000736d, 0x00050005, 0x00000022,
+ 0x6c726f77, 0x6f705f64, 0x00000073, 0x00040005,
+ 0x00000026, 0x705f6e69, 0x0000736f, 0x00060005,
+ 0x00000031, 0x6c726f77, 0x6f6e5f64, 0x6c616d72,
+ 0x00000000, 0x00050005, 0x00000043, 0x6e5f6e69,
+ 0x616d726f, 0x0000006c, 0x00050005, 0x00000046,
+ 0x6867696c, 0x69645f74, 0x00000072, 0x00050005,
+ 0x0000004b, 0x67697262, 0x656e7468, 0x00007373,
+ 0x00060005, 0x00000057, 0x505f6c67, 0x65567265,
+ 0x78657472, 0x00000000, 0x00060006, 0x00000057,
+ 0x00000000, 0x505f6c67, 0x7469736f, 0x006e6f69,
+ 0x00070006, 0x00000057, 0x00000001, 0x505f6c67,
+ 0x746e696f, 0x657a6953, 0x00000000, 0x00030005,
+ 0x00000059, 0x00000000, 0x00040005, 0x00000066,
+ 0x6f6c6f63, 0x00000072, 0x00050048, 0x0000000c,
+ 0x00000000, 0x00000023, 0x00000000, 0x00050048,
+ 0x0000000c, 0x00000001, 0x00000023, 0x00000010,
+ 0x00040048, 0x0000000c, 0x00000002, 0x00000005,
+ 0x00050048, 0x0000000c, 0x00000002, 0x00000023,
+ 0x00000020, 0x00050048, 0x0000000c, 0x00000002,
+ 0x00000007, 0x00000010, 0x00040048, 0x0000000c,
+ 0x00000003, 0x00000005, 0x00050048, 0x0000000c,
+ 0x00000003, 0x00000023, 0x00000060, 0x00050048,
+ 0x0000000c, 0x00000003, 0x00000007, 0x00000010,
+ 0x00030047, 0x0000000c, 0x00000003, 0x00040047,
+ 0x0000000e, 0x00000022, 0x00000000, 0x00040047,
+ 0x0000000e, 0x00000021, 0x00000000, 0x00040047,
+ 0x00000026, 0x0000001e, 0x00000000, 0x00040047,
+ 0x00000043, 0x0000001e, 0x00000001, 0x00050048,
+ 0x00000057, 0x00000000, 0x0000000b, 0x00000000,
+ 0x00050048, 0x00000057, 0x00000001, 0x0000000b,
+ 0x00000001, 0x00030047, 0x00000057, 0x00000002,
+ 0x00020013, 0x00000002, 0x00030021, 0x00000003,
+ 0x00000002, 0x00030016, 0x00000006, 0x00000020,
+ 0x00040017, 0x00000007, 0x00000006, 0x00000003,
+ 0x00040020, 0x00000008, 0x00000007, 0x00000007,
+ 0x00040017, 0x0000000a, 0x00000006, 0x00000004,
+ 0x00040018, 0x0000000b, 0x0000000a, 0x00000004,
+ 0x0006001e, 0x0000000c, 0x00000007, 0x00000007,
+ 0x0000000b, 0x0000000b, 0x00040020, 0x0000000d,
+ 0x00000002, 0x0000000c, 0x0004003b, 0x0000000d,
+ 0x0000000e, 0x00000002, 0x00040015, 0x0000000f,
+ 0x00000020, 0x00000001, 0x0004002b, 0x0000000f,
+ 0x00000010, 0x00000002, 0x00040020, 0x00000011,
+ 0x00000002, 0x0000000b, 0x0004002b, 0x0000000f,
+ 0x00000014, 0x00000000, 0x00040020, 0x00000015,
+ 0x00000002, 0x00000007, 0x0004002b, 0x00000006,
+ 0x00000018, 0x3f800000, 0x00040020, 0x00000025,
+ 0x00000001, 0x00000007, 0x0004003b, 0x00000025,
+ 0x00000026, 0x00000001, 0x00040018, 0x00000034,
+ 0x00000007, 0x00000003, 0x0004002b, 0x00000006,
+ 0x00000035, 0x00000000, 0x0004003b, 0x00000025,
+ 0x00000043, 0x00000001, 0x00040020, 0x0000004a,
+ 0x00000007, 0x00000006, 0x0004001e, 0x00000057,
+ 0x0000000a, 0x00000006, 0x00040020, 0x00000058,
+ 0x00000003, 0x00000057, 0x0004003b, 0x00000058,
+ 0x00000059, 0x00000003, 0x0004002b, 0x0000000f,
+ 0x0000005a, 0x00000003, 0x00040020, 0x00000063,
+ 0x00000003, 0x0000000a, 0x00040020, 0x00000065,
+ 0x00000003, 0x00000007, 0x0004003b, 0x00000065,
+ 0x00000066, 0x00000003, 0x0004002b, 0x0000000f,
+ 0x00000067, 0x00000001, 0x00050036, 0x00000002,
+ 0x00000004, 0x00000000, 0x00000003, 0x000200f8,
+ 0x00000005, 0x0004003b, 0x00000008, 0x00000009,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000022,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000031,
+ 0x00000007, 0x0004003b, 0x00000008, 0x00000046,
+ 0x00000007, 0x0004003b, 0x0000004a, 0x0000004b,
+ 0x00000007, 0x00050041, 0x00000011, 0x00000012,
+ 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b,
+ 0x00000013, 0x00000012, 0x00050041, 0x00000015,
+ 0x00000016, 0x0000000e, 0x00000014, 0x0004003d,
+ 0x00000007, 0x00000017, 0x00000016, 0x00050051,
+ 0x00000006, 0x00000019, 0x00000017, 0x00000000,
+ 0x00050051, 0x00000006, 0x0000001a, 0x00000017,
+ 0x00000001, 0x00050051, 0x00000006, 0x0000001b,
+ 0x00000017, 0x00000002, 0x00070050, 0x0000000a,
+ 0x0000001c, 0x00000019, 0x0000001a, 0x0000001b,
+ 0x00000018, 0x00050091, 0x0000000a, 0x0000001d,
+ 0x00000013, 0x0000001c, 0x00050051, 0x00000006,
+ 0x0000001e, 0x0000001d, 0x00000000, 0x00050051,
+ 0x00000006, 0x0000001f, 0x0000001d, 0x00000001,
+ 0x00050051, 0x00000006, 0x00000020, 0x0000001d,
+ 0x00000002, 0x00060050, 0x00000007, 0x00000021,
+ 0x0000001e, 0x0000001f, 0x00000020, 0x0003003e,
+ 0x00000009, 0x00000021, 0x00050041, 0x00000011,
+ 0x00000023, 0x0000000e, 0x00000010, 0x0004003d,
+ 0x0000000b, 0x00000024, 0x00000023, 0x0004003d,
+ 0x00000007, 0x00000027, 0x00000026, 0x00050051,
+ 0x00000006, 0x00000028, 0x00000027, 0x00000000,
+ 0x00050051, 0x00000006, 0x00000029, 0x00000027,
+ 0x00000001, 0x00050051, 0x00000006, 0x0000002a,
+ 0x00000027, 0x00000002, 0x00070050, 0x0000000a,
+ 0x0000002b, 0x00000028, 0x00000029, 0x0000002a,
+ 0x00000018, 0x00050091, 0x0000000a, 0x0000002c,
+ 0x00000024, 0x0000002b, 0x00050051, 0x00000006,
+ 0x0000002d, 0x0000002c, 0x00000000, 0x00050051,
+ 0x00000006, 0x0000002e, 0x0000002c, 0x00000001,
+ 0x00050051, 0x00000006, 0x0000002f, 0x0000002c,
+ 0x00000002, 0x00060050, 0x00000007, 0x00000030,
+ 0x0000002d, 0x0000002e, 0x0000002f, 0x0003003e,
+ 0x00000022, 0x00000030, 0x00050041, 0x00000011,
+ 0x00000032, 0x0000000e, 0x00000010, 0x0004003d,
+ 0x0000000b, 0x00000033, 0x00000032, 0x00060051,
+ 0x00000006, 0x00000036, 0x00000033, 0x00000000,
+ 0x00000000, 0x00060051, 0x00000006, 0x00000037,
+ 0x00000033, 0x00000000, 0x00000001, 0x00060051,
+ 0x00000006, 0x00000038, 0x00000033, 0x00000000,
+ 0x00000002, 0x00060051, 0x00000006, 0x00000039,
+ 0x00000033, 0x00000001, 0x00000000, 0x00060051,
+ 0x00000006, 0x0000003a, 0x00000033, 0x00000001,
+ 0x00000001, 0x00060051, 0x00000006, 0x0000003b,
+ 0x00000033, 0x00000001, 0x00000002, 0x00060051,
+ 0x00000006, 0x0000003c, 0x00000033, 0x00000002,
+ 0x00000000, 0x00060051, 0x00000006, 0x0000003d,
+ 0x00000033, 0x00000002, 0x00000001, 0x00060051,
+ 0x00000006, 0x0000003e, 0x00000033, 0x00000002,
+ 0x00000002, 0x00060050, 0x00000007, 0x0000003f,
+ 0x00000036, 0x00000037, 0x00000038, 0x00060050,
+ 0x00000007, 0x00000040, 0x00000039, 0x0000003a,
+ 0x0000003b, 0x00060050, 0x00000007, 0x00000041,
+ 0x0000003c, 0x0000003d, 0x0000003e, 0x00060050,
+ 0x00000034, 0x00000042, 0x0000003f, 0x00000040,
+ 0x00000041, 0x0004003d, 0x00000007, 0x00000044,
+ 0x00000043, 0x00050091, 0x00000007, 0x00000045,
+ 0x00000042, 0x00000044, 0x0003003e, 0x00000031,
+ 0x00000045, 0x0004003d, 0x00000007, 0x00000047,
+ 0x00000009, 0x0004003d, 0x00000007, 0x00000048,
+ 0x00000022, 0x00050083, 0x00000007, 0x00000049,
+ 0x00000047, 0x00000048, 0x0003003e, 0x00000046,
+ 0x00000049, 0x0004003d, 0x00000007, 0x0000004c,
+ 0x00000046, 0x0004003d, 0x00000007, 0x0000004d,
+ 0x00000031, 0x00050094, 0x00000006, 0x0000004e,
+ 0x0000004c, 0x0000004d, 0x0004003d, 0x00000007,
+ 0x0000004f, 0x00000046, 0x0006000c, 0x00000006,
+ 0x00000050, 0x00000001, 0x00000042, 0x0000004f,
+ 0x00050088, 0x00000006, 0x00000051, 0x0000004e,
+ 0x00000050, 0x0004003d, 0x00000007, 0x00000052,
+ 0x00000031, 0x0006000c, 0x00000006, 0x00000053,
+ 0x00000001, 0x00000042, 0x00000052, 0x00050088,
+ 0x00000006, 0x00000054, 0x00000051, 0x00000053,
+ 0x0003003e, 0x0000004b, 0x00000054, 0x0004003d,
+ 0x00000006, 0x00000055, 0x0000004b, 0x0006000c,
+ 0x00000006, 0x00000056, 0x00000001, 0x00000004,
+ 0x00000055, 0x0003003e, 0x0000004b, 0x00000056,
+ 0x00050041, 0x00000011, 0x0000005b, 0x0000000e,
+ 0x0000005a, 0x0004003d, 0x0000000b, 0x0000005c,
+ 0x0000005b, 0x0004003d, 0x00000007, 0x0000005d,
+ 0x00000022, 0x00050051, 0x00000006, 0x0000005e,
+ 0x0000005d, 0x00000000, 0x00050051, 0x00000006,
+ 0x0000005f, 0x0000005d, 0x00000001, 0x00050051,
+ 0x00000006, 0x00000060, 0x0000005d, 0x00000002,
+ 0x00070050, 0x0000000a, 0x00000061, 0x0000005e,
+ 0x0000005f, 0x00000060, 0x00000018, 0x00050091,
+ 0x0000000a, 0x00000062, 0x0000005c, 0x00000061,
+ 0x00050041, 0x00000063, 0x00000064, 0x00000059,
+ 0x00000014, 0x0003003e, 0x00000064, 0x00000062,
+ 0x00050041, 0x00000015, 0x00000068, 0x0000000e,
+ 0x00000067, 0x0004003d, 0x00000007, 0x00000069,
+ 0x00000068, 0x0004003d, 0x00000006, 0x0000006a,
+ 0x0000004b, 0x0005008e, 0x00000007, 0x0000006b,
+ 0x00000069, 0x0000006a, 0x0003003e, 0x00000066,
+ 0x0000006b, 0x000100fd, 0x00010038,
+};
diff --git a/demos/smoke/android/src/main/res/values/strings.xml b/demos/smoke/android/src/main/res/values/strings.xml
new file mode 100644
index 000000000..c82554f1c
--- /dev/null
+++ b/demos/smoke/android/src/main/res/values/strings.xml
@@ -0,0 +1,4 @@
+<?xml version="1.0" encoding="utf-8"?>
+<resources>
+ <string name="app_name">Smoke</string>
+</resources>
diff --git a/demos/smoke/generate-dispatch-table b/demos/smoke/generate-dispatch-table
new file mode 100755
index 000000000..803cf52dc
--- /dev/null
+++ b/demos/smoke/generate-dispatch-table
@@ -0,0 +1,498 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2016 Google, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+
+"""Generate Vulkan dispatch table.
+"""
+
+import os
+import sys
+
+class Command(object):
+ PLATFORM = 0
+ LOADER = 1
+ INSTANCE = 2
+ DEVICE = 3
+
+ def __init__(self, name, dispatch):
+ self.name = name
+ self.dispatch = dispatch
+ self.ty = self._get_type()
+
+ @staticmethod
+ def valid_c_typedef(c):
+ return (c.startswith("typedef") and
+ c.endswith(");") and
+ "*PFN_vkVoidFunction" not in c)
+
+ @classmethod
+ def from_c_typedef(cls, c):
+ name_begin = c.find("*PFN_vk") + 7
+ name_end = c.find(")(", name_begin)
+ name = c[name_begin:name_end]
+
+ dispatch_begin = name_end + 2
+ dispatch_end = c.find(" ", dispatch_begin)
+ dispatch = c[dispatch_begin:dispatch_end]
+ if not dispatch.startswith("Vk"):
+ dispatch = None
+
+ return cls(name, dispatch)
+
+ def _get_type(self):
+ if self.dispatch:
+ if self.dispatch in ["VkDevice", "VkQueue", "VkCommandBuffer"]:
+ return self.DEVICE
+ else:
+ return self.INSTANCE
+ else:
+ if self.name in ["GetInstanceProcAddr"]:
+ return self.PLATFORM
+ else:
+ return self.LOADER
+
+ def __repr__(self):
+ return "Command(name=%s, dispatch=%s)" % \
+ (repr(self.name), repr(self.dispatch))
+
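The slicing logic in `Command.from_c_typedef` above can be exercised on its own. The sketch below duplicates that logic outside the class so its behavior on a real `vulkan.h` typedef is visible; the function name `parse_typedef` is illustrative and not part of the script.

```python
# Standalone copy of the slicing logic in Command.from_c_typedef above.
# "parse_typedef" is an illustrative name, not part of the script.
def parse_typedef(c):
    name_begin = c.find("*PFN_vk") + 7      # skip past "*PFN_vk"
    name_end = c.find(")(", name_begin)     # the command name ends at ")("
    name = c[name_begin:name_end]
    dispatch_begin = name_end + 2           # type of the first parameter
    dispatch_end = c.find(" ", dispatch_begin)
    dispatch = c[dispatch_begin:dispatch_end]
    # non-"Vk" first parameters (e.g. "const ...") mean no dispatchable object
    return name, (dispatch if dispatch.startswith("Vk") else None)

line = ("typedef void (VKAPI_PTR *PFN_vkDestroyDevice)"
        "(VkDevice device, const VkAllocationCallbacks* pAllocator);")
print(parse_typedef(line))  # -> ('DestroyDevice', 'VkDevice')
```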
+class Extension(object):
+ def __init__(self, name, version, guard=None, commands=[]):
+ self.name = name
+ self.version = version
+ self.guard = guard
+ self.commands = commands[:]
+
+ def add_command(self, cmd):
+ self.commands.append(cmd)
+
+ def __repr__(self):
+ lines = []
+ lines.append("Extension(name=%s, version=%s, guard=%s, commands=[" %
+ (repr(self.name), repr(self.version), repr(self.guard)))
+
+ for cmd in self.commands:
+ lines.append(" %s," % repr(cmd))
+
+ lines.append("])")
+
+ return "\n".join(lines)
+
+# generated by "generate-dispatch-table parse vulkan.h"
+vk_core = Extension(name='VK_core', version=0, guard=None, commands=[
+ Command(name='CreateInstance', dispatch=None),
+ Command(name='DestroyInstance', dispatch='VkInstance'),
+ Command(name='EnumeratePhysicalDevices', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceFeatures', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceFormatProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceImageFormatProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceQueueFamilyProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceMemoryProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetInstanceProcAddr', dispatch='VkInstance'),
+ Command(name='GetDeviceProcAddr', dispatch='VkDevice'),
+ Command(name='CreateDevice', dispatch='VkPhysicalDevice'),
+ Command(name='DestroyDevice', dispatch='VkDevice'),
+ Command(name='EnumerateInstanceExtensionProperties', dispatch=None),
+ Command(name='EnumerateDeviceExtensionProperties', dispatch='VkPhysicalDevice'),
+ Command(name='EnumerateInstanceLayerProperties', dispatch=None),
+ Command(name='EnumerateDeviceLayerProperties', dispatch='VkPhysicalDevice'),
+ Command(name='GetDeviceQueue', dispatch='VkDevice'),
+ Command(name='QueueSubmit', dispatch='VkQueue'),
+ Command(name='QueueWaitIdle', dispatch='VkQueue'),
+ Command(name='DeviceWaitIdle', dispatch='VkDevice'),
+ Command(name='AllocateMemory', dispatch='VkDevice'),
+ Command(name='FreeMemory', dispatch='VkDevice'),
+ Command(name='MapMemory', dispatch='VkDevice'),
+ Command(name='UnmapMemory', dispatch='VkDevice'),
+ Command(name='FlushMappedMemoryRanges', dispatch='VkDevice'),
+ Command(name='InvalidateMappedMemoryRanges', dispatch='VkDevice'),
+ Command(name='GetDeviceMemoryCommitment', dispatch='VkDevice'),
+ Command(name='BindBufferMemory', dispatch='VkDevice'),
+ Command(name='BindImageMemory', dispatch='VkDevice'),
+ Command(name='GetBufferMemoryRequirements', dispatch='VkDevice'),
+ Command(name='GetImageMemoryRequirements', dispatch='VkDevice'),
+ Command(name='GetImageSparseMemoryRequirements', dispatch='VkDevice'),
+ Command(name='GetPhysicalDeviceSparseImageFormatProperties', dispatch='VkPhysicalDevice'),
+ Command(name='QueueBindSparse', dispatch='VkQueue'),
+ Command(name='CreateFence', dispatch='VkDevice'),
+ Command(name='DestroyFence', dispatch='VkDevice'),
+ Command(name='ResetFences', dispatch='VkDevice'),
+ Command(name='GetFenceStatus', dispatch='VkDevice'),
+ Command(name='WaitForFences', dispatch='VkDevice'),
+ Command(name='CreateSemaphore', dispatch='VkDevice'),
+ Command(name='DestroySemaphore', dispatch='VkDevice'),
+ Command(name='CreateEvent', dispatch='VkDevice'),
+ Command(name='DestroyEvent', dispatch='VkDevice'),
+ Command(name='GetEventStatus', dispatch='VkDevice'),
+ Command(name='SetEvent', dispatch='VkDevice'),
+ Command(name='ResetEvent', dispatch='VkDevice'),
+ Command(name='CreateQueryPool', dispatch='VkDevice'),
+ Command(name='DestroyQueryPool', dispatch='VkDevice'),
+ Command(name='GetQueryPoolResults', dispatch='VkDevice'),
+ Command(name='CreateBuffer', dispatch='VkDevice'),
+ Command(name='DestroyBuffer', dispatch='VkDevice'),
+ Command(name='CreateBufferView', dispatch='VkDevice'),
+ Command(name='DestroyBufferView', dispatch='VkDevice'),
+ Command(name='CreateImage', dispatch='VkDevice'),
+ Command(name='DestroyImage', dispatch='VkDevice'),
+ Command(name='GetImageSubresourceLayout', dispatch='VkDevice'),
+ Command(name='CreateImageView', dispatch='VkDevice'),
+ Command(name='DestroyImageView', dispatch='VkDevice'),
+ Command(name='CreateShaderModule', dispatch='VkDevice'),
+ Command(name='DestroyShaderModule', dispatch='VkDevice'),
+ Command(name='CreatePipelineCache', dispatch='VkDevice'),
+ Command(name='DestroyPipelineCache', dispatch='VkDevice'),
+ Command(name='GetPipelineCacheData', dispatch='VkDevice'),
+ Command(name='MergePipelineCaches', dispatch='VkDevice'),
+ Command(name='CreateGraphicsPipelines', dispatch='VkDevice'),
+ Command(name='CreateComputePipelines', dispatch='VkDevice'),
+ Command(name='DestroyPipeline', dispatch='VkDevice'),
+ Command(name='CreatePipelineLayout', dispatch='VkDevice'),
+ Command(name='DestroyPipelineLayout', dispatch='VkDevice'),
+ Command(name='CreateSampler', dispatch='VkDevice'),
+ Command(name='DestroySampler', dispatch='VkDevice'),
+ Command(name='CreateDescriptorSetLayout', dispatch='VkDevice'),
+ Command(name='DestroyDescriptorSetLayout', dispatch='VkDevice'),
+ Command(name='CreateDescriptorPool', dispatch='VkDevice'),
+ Command(name='DestroyDescriptorPool', dispatch='VkDevice'),
+ Command(name='ResetDescriptorPool', dispatch='VkDevice'),
+ Command(name='AllocateDescriptorSets', dispatch='VkDevice'),
+ Command(name='FreeDescriptorSets', dispatch='VkDevice'),
+ Command(name='UpdateDescriptorSets', dispatch='VkDevice'),
+ Command(name='CreateFramebuffer', dispatch='VkDevice'),
+ Command(name='DestroyFramebuffer', dispatch='VkDevice'),
+ Command(name='CreateRenderPass', dispatch='VkDevice'),
+ Command(name='DestroyRenderPass', dispatch='VkDevice'),
+ Command(name='GetRenderAreaGranularity', dispatch='VkDevice'),
+ Command(name='CreateCommandPool', dispatch='VkDevice'),
+ Command(name='DestroyCommandPool', dispatch='VkDevice'),
+ Command(name='ResetCommandPool', dispatch='VkDevice'),
+ Command(name='AllocateCommandBuffers', dispatch='VkDevice'),
+ Command(name='FreeCommandBuffers', dispatch='VkDevice'),
+ Command(name='BeginCommandBuffer', dispatch='VkCommandBuffer'),
+ Command(name='EndCommandBuffer', dispatch='VkCommandBuffer'),
+ Command(name='ResetCommandBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdBindPipeline', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetViewport', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetScissor', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetLineWidth', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetDepthBias', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetBlendConstants', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetDepthBounds', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetStencilCompareMask', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetStencilWriteMask', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetStencilReference', dispatch='VkCommandBuffer'),
+ Command(name='CmdBindDescriptorSets', dispatch='VkCommandBuffer'),
+ Command(name='CmdBindIndexBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdBindVertexBuffers', dispatch='VkCommandBuffer'),
+ Command(name='CmdDraw', dispatch='VkCommandBuffer'),
+ Command(name='CmdDrawIndexed', dispatch='VkCommandBuffer'),
+ Command(name='CmdDrawIndirect', dispatch='VkCommandBuffer'),
+ Command(name='CmdDrawIndexedIndirect', dispatch='VkCommandBuffer'),
+ Command(name='CmdDispatch', dispatch='VkCommandBuffer'),
+ Command(name='CmdDispatchIndirect', dispatch='VkCommandBuffer'),
+ Command(name='CmdCopyBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdCopyImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdBlitImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdCopyBufferToImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdCopyImageToBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdUpdateBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdFillBuffer', dispatch='VkCommandBuffer'),
+ Command(name='CmdClearColorImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdClearDepthStencilImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdClearAttachments', dispatch='VkCommandBuffer'),
+ Command(name='CmdResolveImage', dispatch='VkCommandBuffer'),
+ Command(name='CmdSetEvent', dispatch='VkCommandBuffer'),
+ Command(name='CmdResetEvent', dispatch='VkCommandBuffer'),
+ Command(name='CmdWaitEvents', dispatch='VkCommandBuffer'),
+ Command(name='CmdPipelineBarrier', dispatch='VkCommandBuffer'),
+ Command(name='CmdBeginQuery', dispatch='VkCommandBuffer'),
+ Command(name='CmdEndQuery', dispatch='VkCommandBuffer'),
+ Command(name='CmdResetQueryPool', dispatch='VkCommandBuffer'),
+ Command(name='CmdWriteTimestamp', dispatch='VkCommandBuffer'),
+ Command(name='CmdCopyQueryPoolResults', dispatch='VkCommandBuffer'),
+ Command(name='CmdPushConstants', dispatch='VkCommandBuffer'),
+ Command(name='CmdBeginRenderPass', dispatch='VkCommandBuffer'),
+ Command(name='CmdNextSubpass', dispatch='VkCommandBuffer'),
+ Command(name='CmdEndRenderPass', dispatch='VkCommandBuffer'),
+ Command(name='CmdExecuteCommands', dispatch='VkCommandBuffer'),
+])
+
+vk_khr_surface = Extension(name='VK_KHR_surface', version=25, guard=None, commands=[
+ Command(name='DestroySurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceSurfaceSupportKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceSurfaceCapabilitiesKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceSurfaceFormatsKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceSurfacePresentModesKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_khr_swapchain = Extension(name='VK_KHR_swapchain', version=67, guard=None, commands=[
+ Command(name='CreateSwapchainKHR', dispatch='VkDevice'),
+ Command(name='DestroySwapchainKHR', dispatch='VkDevice'),
+ Command(name='GetSwapchainImagesKHR', dispatch='VkDevice'),
+ Command(name='AcquireNextImageKHR', dispatch='VkDevice'),
+ Command(name='QueuePresentKHR', dispatch='VkQueue'),
+])
+
+vk_khr_display = Extension(name='VK_KHR_display', version=21, guard=None, commands=[
+ Command(name='GetPhysicalDeviceDisplayPropertiesKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetPhysicalDeviceDisplayPlanePropertiesKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetDisplayPlaneSupportedDisplaysKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetDisplayModePropertiesKHR', dispatch='VkPhysicalDevice'),
+ Command(name='CreateDisplayModeKHR', dispatch='VkPhysicalDevice'),
+ Command(name='GetDisplayPlaneCapabilitiesKHR', dispatch='VkPhysicalDevice'),
+ Command(name='CreateDisplayPlaneSurfaceKHR', dispatch='VkInstance'),
+])
+
+vk_khr_display_swapchain = Extension(name='VK_KHR_display_swapchain', version=9, guard=None, commands=[
+ Command(name='CreateSharedSwapchainsKHR', dispatch='VkDevice'),
+])
+
+vk_khr_xlib_surface = Extension(name='VK_KHR_xlib_surface', version=6, guard='VK_USE_PLATFORM_XLIB_KHR', commands=[
+ Command(name='CreateXlibSurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceXlibPresentationSupportKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_khr_xcb_surface = Extension(name='VK_KHR_xcb_surface', version=6, guard='VK_USE_PLATFORM_XCB_KHR', commands=[
+ Command(name='CreateXcbSurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceXcbPresentationSupportKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_khr_wayland_surface = Extension(name='VK_KHR_wayland_surface', version=5, guard='VK_USE_PLATFORM_WAYLAND_KHR', commands=[
+ Command(name='CreateWaylandSurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceWaylandPresentationSupportKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_khr_mir_surface = Extension(name='VK_KHR_mir_surface', version=4, guard='VK_USE_PLATFORM_MIR_KHR', commands=[
+ Command(name='CreateMirSurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceMirPresentationSupportKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_khr_android_surface = Extension(name='VK_KHR_android_surface', version=6, guard='VK_USE_PLATFORM_ANDROID_KHR', commands=[
+ Command(name='CreateAndroidSurfaceKHR', dispatch='VkInstance'),
+])
+
+vk_khr_win32_surface = Extension(name='VK_KHR_win32_surface', version=5, guard='VK_USE_PLATFORM_WIN32_KHR', commands=[
+ Command(name='CreateWin32SurfaceKHR', dispatch='VkInstance'),
+ Command(name='GetPhysicalDeviceWin32PresentationSupportKHR', dispatch='VkPhysicalDevice'),
+])
+
+vk_ext_debug_report = Extension(name='VK_EXT_debug_report', version=1, guard=None, commands=[
+ Command(name='CreateDebugReportCallbackEXT', dispatch='VkInstance'),
+ Command(name='DestroyDebugReportCallbackEXT', dispatch='VkInstance'),
+ Command(name='DebugReportMessageEXT', dispatch='VkInstance'),
+])
+
+extensions = [
+ vk_core,
+ vk_khr_surface,
+ vk_khr_swapchain,
+ vk_khr_display,
+ vk_khr_display_swapchain,
+ vk_khr_xlib_surface,
+ vk_khr_xcb_surface,
+ vk_khr_wayland_surface,
+ vk_khr_mir_surface,
+ vk_khr_android_surface,
+ vk_khr_win32_surface,
+ vk_ext_debug_report,
+]
+
+def generate_header(guard):
+ lines = []
+ lines.append("// This file is generated.")
+ lines.append("#ifndef %s" % guard)
+ lines.append("#define %s" % guard)
+ lines.append("")
+ lines.append("#include <vulkan/vulkan.h>")
+ lines.append("")
+ lines.append("namespace vk {")
+ lines.append("")
+
+ for ext in extensions:
+ if ext.guard:
+ lines.append("#ifdef %s" % ext.guard)
+
+ lines.append("// %s" % ext.name)
+ for cmd in ext.commands:
+ lines.append("extern PFN_vk%s %s;" % (cmd.name, cmd.name))
+
+ if ext.guard:
+ lines.append("#endif")
+ lines.append("")
+
+ lines.append("void init_dispatch_table_top(PFN_vkGetInstanceProcAddr get_instance_proc_addr);")
+ lines.append("void init_dispatch_table_middle(VkInstance instance, bool include_bottom);")
+ lines.append("void init_dispatch_table_bottom(VkInstance instance, VkDevice dev);")
+ lines.append("")
+ lines.append("} // namespace vk")
+ lines.append("")
+ lines.append("#endif // %s" % guard)
+
+ return "\n".join(lines)
+
+def get_proc_addr(dispatchable, cmd, guard=None):
+ if dispatchable == "dev":
+ func = "GetDeviceProcAddr"
+ else:
+ func = "GetInstanceProcAddr"
+
+ c = " %s = reinterpret_cast<PFN_vk%s>(%s(%s, \"vk%s\"));" % \
+ (cmd.name, cmd.name, func, dispatchable, cmd.name)
+
+ if guard:
+ c = ("#ifdef %s\n" % guard) + c + "\n#endif"
+
+ return c
+
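The C++ line that `get_proc_addr` above emits for one command can be previewed with this standalone sketch, which mirrors the same format string (`emit` is an illustrative name, not part of the script):

```python
# Mirrors the format string used by get_proc_addr above; "emit" is an
# illustrative name. Device-dispatched commands resolve through
# GetDeviceProcAddr, everything else through GetInstanceProcAddr.
def emit(dispatchable, name, guard=None):
    func = "GetDeviceProcAddr" if dispatchable == "dev" else "GetInstanceProcAddr"
    c = '    %s = reinterpret_cast<PFN_vk%s>(%s(%s, "vk%s"));' % (
        name, name, func, dispatchable, name)
    if guard:
        # platform-specific commands get wrapped in their #ifdef guard
        c = "#ifdef %s\n%s\n#endif" % (guard, c)
    return c

print(emit("dev", "QueueSubmit"))
```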
+def generate_source(header):
+ lines = []
+ lines.append("// This file is generated.")
+ lines.append("#include \"%s\"" % header)
+ lines.append("")
+ lines.append("namespace vk {")
+ lines.append("")
+
+ commands_by_types = {}
+ get_instance_proc_addr = None
+ get_device_proc_addr = None
+ for ext in extensions:
+ if ext.guard:
+ lines.append("#ifdef %s" % ext.guard)
+
+ for cmd in ext.commands:
+ lines.append("PFN_vk%s %s;" % (cmd.name, cmd.name))
+
+ if cmd.ty not in commands_by_types:
+ commands_by_types[cmd.ty] = []
+ commands_by_types[cmd.ty].append([cmd, ext.guard])
+
+ if cmd.name == "GetInstanceProcAddr":
+ get_instance_proc_addr = cmd
+ elif cmd.name == "GetDeviceProcAddr":
+ get_device_proc_addr = cmd
+
+ if ext.guard:
+ lines.append("#endif")
+ lines.append("")
+
+ lines.append("void init_dispatch_table_top(PFN_vkGetInstanceProcAddr get_instance_proc_addr)")
+ lines.append("{")
+ lines.append(" GetInstanceProcAddr = get_instance_proc_addr;")
+ lines.append("")
+ for cmd, guard in commands_by_types[Command.LOADER]:
+ lines.append(get_proc_addr("VK_NULL_HANDLE", cmd, guard))
+ lines.append("}")
+ lines.append("")
+
+ lines.append("void init_dispatch_table_middle(VkInstance instance, bool include_bottom)")
+ lines.append("{")
+ lines.append(get_proc_addr("instance", get_instance_proc_addr))
+ lines.append("")
+ for cmd, guard in commands_by_types[Command.INSTANCE]:
+ if cmd == get_instance_proc_addr:
+ continue
+ lines.append(get_proc_addr("instance", cmd, guard))
+ lines.append("")
+ lines.append(" if (!include_bottom)")
+ lines.append(" return;")
+ lines.append("")
+ for cmd, guard in commands_by_types[Command.DEVICE]:
+ lines.append(get_proc_addr("instance", cmd, guard))
+ lines.append("}")
+ lines.append("")
+
+ lines.append("void init_dispatch_table_bottom(VkInstance instance, VkDevice dev)")
+ lines.append("{")
+ lines.append(get_proc_addr("instance", get_device_proc_addr))
+ lines.append(get_proc_addr("dev", get_device_proc_addr))
+ lines.append("")
+ for cmd, guard in commands_by_types[Command.DEVICE]:
+ if cmd == get_device_proc_addr:
+ continue
+ lines.append(get_proc_addr("dev", cmd, guard))
+ lines.append("}")
+
+ lines.append("")
+ lines.append("} // namespace vk")
+
+ return "\n".join(lines)
+
+def parse_vulkan_h(filename):
+ extensions = []
+
+ with open(filename, "r") as f:
+ current_ext = None
+ ext_guard = None
+ spec_version = None
+
+ for line in f:
+ line = line.strip()
+
+ if line.startswith("#define VK_API_VERSION"):
+ minor_end = line.rfind(",")
+ minor_begin = line.rfind(",", 0, minor_end) + 1
+ spec_version = int(line[minor_begin:minor_end])
+ # add core
+ current_ext = Extension("VK_core", spec_version)
+ extensions.append(current_ext)
+ elif Command.valid_c_typedef(line):
+ current_ext.add_command(Command.from_c_typedef(line))
+ elif line.startswith("#ifdef VK_USE_PLATFORM"):
+ guard_begin = line.find(" ") + 1
+ ext_guard = line[guard_begin:]
+ elif line.startswith("#define") and "SPEC_VERSION " in line:
+ version_begin = line.rfind(" ") + 1
+ spec_version = int(line[version_begin:])
+ elif line.startswith("#define") and "EXTENSION_NAME " in line:
+ name_end = line.rfind("\"")
+ name_begin = line.rfind("\"", 0, name_end) + 1
+ name = line[name_begin:name_end]
+ # add extension
+ current_ext = Extension(name, spec_version, ext_guard)
+ extensions.append(current_ext)
+ elif ext_guard and line.startswith("#endif") and ext_guard in line:
+ ext_guard = None
+
+ for ext in extensions:
+ print("%s = %s" % (ext.name.lower(), repr(ext)))
+ print("")
+
+ print("extensions = [")
+ for ext in extensions:
+ print(" %s," % ext.name.lower())
+ print("]")
+
+if __name__ == "__main__":
+ if sys.argv[1] == "parse":
+ parse_vulkan_h(sys.argv[2])
+ else:
+ filename = sys.argv[1]
+ base = os.path.basename(filename)
+ contents = []
+
+ if base.endswith(".h"):
+ contents = generate_header(base.replace(".", "_").upper())
+ elif base.endswith(".cpp"):
+ contents = generate_source(base.replace(".cpp", ".h"))
+
+ with open(filename, "w") as f:
+ print(contents, file=f)
diff --git a/demos/smoke/glsl-to-spirv b/demos/smoke/glsl-to-spirv
new file mode 100755
index 000000000..f2104362c
--- /dev/null
+++ b/demos/smoke/glsl-to-spirv
@@ -0,0 +1,100 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2016 Google, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+
+"""Compile GLSL to SPIR-V.
+
+Depends on glslangValidator.
+"""
+
+import os
+import sys
+import subprocess
+import struct
+import re
+
+SPIRV_MAGIC = 0x07230203
+COLUMNS = 4
+INDENT = 4
+
+in_filename = sys.argv[1]
+out_filename = sys.argv[2] if len(sys.argv) > 2 else None
+validator = sys.argv[3] if len(sys.argv) > 3 else \
+ "../../../glslang/build/install/bin/glslangValidator"
+
+def identifierize(s):
+ # translate invalid chars
+ s = re.sub("[^0-9a-zA-Z_]", "_", s)
+ # translate leading digits
+ return re.sub("^[^a-zA-Z_]+", "_", s)
+
+def compile_glsl(filename, tmpfile):
+ # invoke glslangValidator
+ try:
+ args = [validator, "-V", "-H", "-o", tmpfile, filename]
+ output = subprocess.check_output(args, universal_newlines=True)
+ except subprocess.CalledProcessError as e:
+ print(e.output, file=sys.stderr)
+ exit(1)
+
+ # read the temp file into a list of SPIR-V words
+ words = []
+ with open(tmpfile, "rb") as f:
+ data = f.read()
+ assert(len(data) and len(data) % 4 == 0)
+
+ # determine endianness
+ fmt = ("<" if data[0] == (SPIRV_MAGIC & 0xff) else ">") + "I"
+ for i in range(0, len(data), 4):
+ words.append(struct.unpack(fmt, data[i:(i + 4)])[0])
+
+ assert(words[0] == SPIRV_MAGIC)
+
+
+ # remove temp file
+ os.remove(tmpfile)
+
+ return (words, output.rstrip())
+
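The endianness check in `compile_glsl` above works because the SPIR-V magic number is the first word of the binary: if the first byte equals the magic's low byte, the file is little-endian. This sketch isolates that logic (`unpack_words` is an illustrative name):

```python
import struct

SPIRV_MAGIC = 0x07230203

# Standalone sketch of the endianness detection in compile_glsl above:
# peek at the first byte and pick the struct format accordingly.
def unpack_words(data):
    assert len(data) and len(data) % 4 == 0
    fmt = ("<" if data[0] == (SPIRV_MAGIC & 0xff) else ">") + "I"
    words = [struct.unpack(fmt, data[i:i + 4])[0]
             for i in range(0, len(data), 4)]
    assert words[0] == SPIRV_MAGIC   # sanity check, as in the script
    return words

little = struct.pack("<I", SPIRV_MAGIC)  # bytes 03 02 23 07
big = struct.pack(">I", SPIRV_MAGIC)     # bytes 07 23 02 03
assert unpack_words(little) == unpack_words(big) == [SPIRV_MAGIC]
```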
+base = os.path.basename(in_filename)
+words, comments = compile_glsl(in_filename, base + ".tmp")
+
+literals = []
+for i in range(0, len(words), COLUMNS):
+ columns = ["0x%08x" % word for word in words[i:(i + COLUMNS)]]
+ literals.append(" " * INDENT + ", ".join(columns) + ",")
+
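The loop above groups the SPIR-V words into rows of `COLUMNS` hex literals, each row indented by `INDENT` spaces, for the generated C array. Run standalone on small dummy data it produces:

```python
# Standalone run of the literal-formatting loop above, with dummy words.
COLUMNS = 4
INDENT = 4

words = list(range(6))
literals = []
for i in range(0, len(words), COLUMNS):
    columns = ["0x%08x" % word for word in words[i:(i + COLUMNS)]]
    literals.append(" " * INDENT + ", ".join(columns) + ",")

print("\n".join(literals))
```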
+header = """#include <stdint.h>
+
+#if 0
+%s
+#endif
+
+static const uint32_t %s[%d] = {
+%s
+};
+""" % (comments, identifierize(base), len(words), "\n".join(literals))
+
+if out_filename:
+ with open(out_filename, "w") as f:
+ print(header, end="", file=f)
+else:
+ print(header, end="")
diff --git a/demos/tri.c b/demos/tri.c
index 5910c3766..c17d668c2 100644
--- a/demos/tri.c
+++ b/demos/tri.c
@@ -170,8 +170,6 @@ struct demo {
bool prepared;
bool use_staging_buffer;
- VkAllocationCallbacks allocator;
-
VkInstance inst;
VkPhysicalDevice gpu;
VkDevice device;
@@ -249,6 +247,7 @@ struct demo {
PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallback;
PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallback;
VkDebugReportCallbackEXT msg_callback;
+ PFN_vkDebugReportMessageEXT DebugReportMessage;
float depthStencil;
float depthIncrement;
@@ -314,7 +313,9 @@ static void demo_flush_init_cmd(struct demo *demo) {
static void demo_set_image_layout(struct demo *demo, VkImage image,
VkImageAspectFlags aspectMask,
VkImageLayout old_image_layout,
- VkImageLayout new_image_layout) {
+ VkImageLayout new_image_layout,
+ VkAccessFlagBits srcAccessMask) {
+
VkResult U_ASSERT_ONLY err;
if (demo->setup_cmd == VK_NULL_HANDLE) {
@@ -352,7 +353,7 @@ static void demo_set_image_layout(struct demo *demo, VkImage image,
VkImageMemoryBarrier image_memory_barrier = {
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = NULL,
- .srcAccessMask = 0,
+ .srcAccessMask = srcAccessMask,
.dstAccessMask = 0,
.oldLayout = old_image_layout,
.newLayout = new_image_layout,
@@ -515,7 +516,8 @@ static void demo_draw(struct demo *demo) {
demo_set_image_layout(demo, demo->buffers[demo->current_buffer].image,
VK_IMAGE_ASPECT_COLOR_BIT,
VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
- VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
+ 0);
demo_flush_init_cmd(demo);
// Wait for the present complete semaphore to be signaled to ensure
@@ -705,7 +707,8 @@ static void demo_prepare_buffers(struct demo *demo) {
// to that state
demo_set_image_layout(
demo, demo->buffers[i].image, VK_IMAGE_ASPECT_COLOR_BIT,
- VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR);
+ VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
+ 0);
color_attachment_view.image = demo->buffers[i].image;
@@ -787,7 +790,8 @@ static void demo_prepare_depth(struct demo *demo) {
demo_set_image_layout(demo, demo->depth.image, VK_IMAGE_ASPECT_DEPTH_BIT,
VK_IMAGE_LAYOUT_UNDEFINED,
- VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
+ 0);
/* create image view */
view.image = demo->depth.image;
@@ -820,6 +824,7 @@ demo_prepare_texture_image(struct demo *demo, const uint32_t *tex_colors,
.tiling = tiling,
.usage = usage,
.flags = 0,
+ .initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED
};
VkMemoryAllocateInfo mem_alloc = {
.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
@@ -878,7 +883,8 @@ demo_prepare_texture_image(struct demo *demo, const uint32_t *tex_colors,
tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
demo_set_image_layout(demo, tex_obj->image, VK_IMAGE_ASPECT_COLOR_BIT,
- VK_IMAGE_LAYOUT_UNDEFINED, tex_obj->imageLayout);
+ VK_IMAGE_LAYOUT_PREINITIALIZED, tex_obj->imageLayout,
+ VK_ACCESS_HOST_WRITE_BIT);
/* setting the image layout does not reference the actual memory so no need
* to add a mem ref */
}
@@ -930,12 +936,14 @@ static void demo_prepare_textures(struct demo *demo) {
demo_set_image_layout(demo, staging_texture.image,
VK_IMAGE_ASPECT_COLOR_BIT,
staging_texture.imageLayout,
- VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+ 0);
demo_set_image_layout(demo, demo->textures[i].image,
VK_IMAGE_ASPECT_COLOR_BIT,
demo->textures[i].imageLayout,
- VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+ 0);
VkImageCopy copy_region = {
.srcSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1},
@@ -953,7 +961,8 @@ static void demo_prepare_textures(struct demo *demo) {
demo_set_image_layout(demo, demo->textures[i].image,
VK_IMAGE_ASPECT_COLOR_BIT,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
- demo->textures[i].imageLayout);
+ demo->textures[i].imageLayout,
+ 0);
demo_flush_init_cmd(demo);
@@ -1522,9 +1531,14 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
break;
}
case WM_SIZE:
- demo.width = lParam & 0xffff;
- demo.height = lParam & 0xffff0000 >> 16;
- demo_resize(&demo);
+ // Resize the application to the new window size, except when
+ // it was minimized. Vulkan doesn't support images or swapchains
+ // with width=0 and height=0.
+ if (wParam != SIZE_MINIMIZED) {
+ demo.width = lParam & 0xffff;
+ demo.height = lParam & 0xffff0000 >> 16;
+ demo_resize(&demo);
+ }
break;
default:
break;
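One subtlety worth noting in the `WM_SIZE` lines this hunk keeps: in C, `>>` binds tighter than `&`, so `lParam & 0xffff0000 >> 16` parses as `lParam & (0xffff0000 >> 16)`, i.e. it extracts the low word again rather than the high word. Python shares this precedence, so the effect can be demonstrated directly (the packed value below is a hypothetical example):

```python
# ">>" binds tighter than "&" (in C and in Python), so the unparenthesized
# expression masks with 0xffff and yields the LOW word, not the high word.
lParam = 0x00480064  # hypothetical WM_SIZE lParam: height=0x0048, width=0x0064

low = lParam & 0xffff0000 >> 16     # parses as lParam & 0xffff -> 0x0064
high = (lParam & 0xffff0000) >> 16  # intended high-word extraction -> 0x0048

assert low == 0x0064 and high == 0x0048
```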
@@ -1699,76 +1713,75 @@ static VkBool32 demo_check_layers(uint32_t check_count, char **check_names,
return 1;
}
-VKAPI_ATTR void *VKAPI_CALL myrealloc(void *pUserData, void *pOriginal,
- size_t size, size_t alignment,
- VkSystemAllocationScope allocationScope) {
- return realloc(pOriginal, size);
-}
-
-VKAPI_ATTR void *VKAPI_CALL myalloc(void *pUserData, size_t size,
- size_t alignment,
- VkSystemAllocationScope allocationScope) {
-#ifdef _MSC_VER
- return _aligned_malloc(size, alignment);
-#else
- return aligned_alloc(alignment, size);
-#endif
-}
-
-VKAPI_ATTR void VKAPI_CALL myfree(void *pUserData, void *pMemory) {
-#ifdef _MSC_VER
- _aligned_free(pMemory);
-#else
- free(pMemory);
-#endif
-}
-
static void demo_init_vk(struct demo *demo) {
VkResult err;
uint32_t instance_extension_count = 0;
uint32_t instance_layer_count = 0;
uint32_t device_validation_layer_count = 0;
+ char **instance_validation_layers = NULL;
demo->enabled_extension_count = 0;
demo->enabled_layer_count = 0;
- char *instance_validation_layers[] = {
- "VK_LAYER_LUNARG_mem_tracker",
- "VK_LAYER_GOOGLE_unique_objects",
+ char *instance_validation_layers_alt1[] = {
+ "VK_LAYER_LUNARG_standard_validation"
};
- demo->device_validation_layers[0] = "VK_LAYER_LUNARG_mem_tracker";
- demo->device_validation_layers[1] = "VK_LAYER_GOOGLE_unique_objects";
- device_validation_layer_count = 2;
+ char *instance_validation_layers_alt2[] = {
+ "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation",
+ "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker",
+ "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation",
+ "VK_LAYER_LUNARG_swapchain", "VK_LAYER_GOOGLE_unique_objects"
+ };
/* Look for validation layers */
VkBool32 validation_found = 0;
- err = vkEnumerateInstanceLayerProperties(&instance_layer_count, NULL);
- assert(!err);
+ if (demo->validate) {
- if (instance_layer_count > 0) {
- VkLayerProperties *instance_layers =
- malloc(sizeof(VkLayerProperties) * instance_layer_count);
- err = vkEnumerateInstanceLayerProperties(&instance_layer_count,
- instance_layers);
+ err = vkEnumerateInstanceLayerProperties(&instance_layer_count, NULL);
assert(!err);
- if (demo->validate) {
+ instance_validation_layers = instance_validation_layers_alt1;
+ if (instance_layer_count > 0) {
+ VkLayerProperties *instance_layers =
+ malloc(sizeof (VkLayerProperties) * instance_layer_count);
+ err = vkEnumerateInstanceLayerProperties(&instance_layer_count,
+ instance_layers);
+ assert(!err);
+
+
validation_found = demo_check_layers(
- ARRAY_SIZE(instance_validation_layers),
- instance_validation_layers, instance_layer_count,
- instance_layers);
- demo->enabled_layer_count = ARRAY_SIZE(instance_validation_layers);
+ ARRAY_SIZE(instance_validation_layers_alt1),
+ instance_validation_layers, instance_layer_count,
+ instance_layers);
+ if (validation_found) {
+ demo->enabled_layer_count = ARRAY_SIZE(instance_validation_layers_alt1);
+ demo->device_validation_layers[0] = "VK_LAYER_LUNARG_standard_validation";
+ device_validation_layer_count = 1;
+ } else {
+ // use alternative set of validation layers
+ instance_validation_layers = instance_validation_layers_alt2;
+ demo->enabled_layer_count = ARRAY_SIZE(instance_validation_layers_alt2);
+ validation_found = demo_check_layers(
+ ARRAY_SIZE(instance_validation_layers_alt2),
+ instance_validation_layers, instance_layer_count,
+ instance_layers);
+ device_validation_layer_count =
+ ARRAY_SIZE(instance_validation_layers_alt2);
+ for (uint32_t i = 0; i < device_validation_layer_count; i++) {
+ demo->device_validation_layers[i] =
+ instance_validation_layers[i];
+ }
+ }
+ free(instance_layers);
}
- free(instance_layers);
- }
-
- if (demo->validate && !validation_found) {
- ERR_EXIT("vkEnumerateInstanceLayerProperties failed to find"
- "required validation layer.\n\n"
- "Please look at the Getting Started guide for additional "
- "information.\n",
- "vkCreateInstance Failure");
+ if (!validation_found) {
+ ERR_EXIT("vkEnumerateInstanceLayerProperties failed to find"
+ "required validation layer.\n\n"
+ "Please look at the Getting Started guide for additional "
+ "information.\n",
+ "vkCreateInstance Failure");
+ }
}
/* Look for instance extensions */
@@ -1856,7 +1869,7 @@ static void demo_init_vk(struct demo *demo) {
.applicationVersion = 0,
.pEngineName = APP_SHORT_NAME,
.engineVersion = 0,
- .apiVersion = VK_API_VERSION,
+ .apiVersion = VK_API_VERSION_1_0,
};
VkInstanceCreateInfo inst_info = {
.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
@@ -1870,11 +1883,7 @@ static void demo_init_vk(struct demo *demo) {
uint32_t gpu_count;
- demo->allocator.pfnAllocation = myalloc;
- demo->allocator.pfnFree = myfree;
- demo->allocator.pfnReallocation = myrealloc;
-
- err = vkCreateInstance(&inst_info, &demo->allocator, &demo->inst);
+ err = vkCreateInstance(&inst_info, NULL, &demo->inst);
if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
ERR_EXIT("Cannot find a compatible Vulkan installable client driver "
"(ICD).\n\nPlease look at the Getting Started guide for "
@@ -1913,40 +1922,41 @@ static void demo_init_vk(struct demo *demo) {
}
/* Look for validation layers */
- validation_found = 0;
- demo->enabled_layer_count = 0;
- uint32_t device_layer_count = 0;
- err =
- vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count, NULL);
- assert(!err);
-
- if (device_layer_count > 0) {
- VkLayerProperties *device_layers =
- malloc(sizeof(VkLayerProperties) * device_layer_count);
- err = vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count,
- device_layers);
+ if (demo->validate) {
+ validation_found = 0;
+ demo->enabled_layer_count = 0;
+ uint32_t device_layer_count = 0;
+ err =
+ vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count, NULL);
assert(!err);
- if (demo->validate) {
+ if (device_layer_count > 0) {
+ VkLayerProperties *device_layers =
+                malloc(sizeof(VkLayerProperties) * device_layer_count);
+ err = vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count,
+ device_layers);
+ assert(!err);
+
+
validation_found = demo_check_layers(device_validation_layer_count,
- demo->device_validation_layers,
- device_layer_count,
- device_layers);
+ demo->device_validation_layers,
+ device_layer_count,
+ device_layers);
demo->enabled_layer_count = device_validation_layer_count;
- }
- free(device_layers);
- }
+ free(device_layers);
+ }
- if (demo->validate && !validation_found) {
- ERR_EXIT("vkEnumerateDeviceLayerProperties failed to find "
- "a required validation layer.\n\n"
- "Please look at the Getting Started guide for additional "
- "information.\n",
- "vkCreateDevice Failure");
+ if (!validation_found) {
+ ERR_EXIT("vkEnumerateDeviceLayerProperties failed to find "
+ "a required validation layer.\n\n"
+ "Please look at the Getting Started guide for additional "
+ "information.\n",
+ "vkCreateDevice Failure");
+ }
}
- /* Loog for device extensions */
+ /* Look for device extensions */
uint32_t device_extension_count = 0;
VkBool32 swapchainExtFound = 0;
demo->enabled_extension_count = 0;
@@ -1990,11 +2000,27 @@ static void demo_init_vk(struct demo *demo) {
demo->CreateDebugReportCallback =
(PFN_vkCreateDebugReportCallbackEXT)vkGetInstanceProcAddr(
demo->inst, "vkCreateDebugReportCallbackEXT");
+ demo->DestroyDebugReportCallback =
+ (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr(
+ demo->inst, "vkDestroyDebugReportCallbackEXT");
if (!demo->CreateDebugReportCallback) {
ERR_EXIT(
"GetProcAddr: Unable to find vkCreateDebugReportCallbackEXT\n",
"vkGetProcAddr Failure");
}
+ if (!demo->DestroyDebugReportCallback) {
+ ERR_EXIT(
+ "GetProcAddr: Unable to find vkDestroyDebugReportCallbackEXT\n",
+ "vkGetProcAddr Failure");
+ }
+ demo->DebugReportMessage =
+ (PFN_vkDebugReportMessageEXT)vkGetInstanceProcAddr(
+ demo->inst, "vkDebugReportMessageEXT");
+ if (!demo->DebugReportMessage) {
+ ERR_EXIT("GetProcAddr: Unable to find vkDebugReportMessageEXT\n",
+ "vkGetProcAddr Failure");
+ }
+
VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
dbgCreateInfo.flags =
@@ -2042,6 +2068,14 @@ static void demo_init_vk(struct demo *demo) {
demo->queue_props);
assert(demo->queue_count >= 1);
+ VkPhysicalDeviceFeatures features;
+ vkGetPhysicalDeviceFeatures(demo->gpu, &features);
+
+ if (!features.shaderClipDistance) {
+ ERR_EXIT("Required device feature `shaderClipDistance` not supported\n",
+ "GetPhysicalDeviceFeatures failure");
+ }
+
// Graphics queue and MemMgr queue can be separate.
// TODO: Add support for separate queues, including synchronization,
// and appropriate tracking for QueueSubmit
@@ -2058,6 +2092,10 @@ static void demo_init_device(struct demo *demo) {
.queueCount = 1,
.pQueuePriorities = queue_priorities};
+ VkPhysicalDeviceFeatures features = {
+ .shaderClipDistance = VK_TRUE,
+ };
+
VkDeviceCreateInfo device = {
.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
@@ -2070,6 +2108,7 @@ static void demo_init_device(struct demo *demo) {
: NULL),
.enabledExtensionCount = demo->enabled_extension_count,
.ppEnabledExtensionNames = (const char *const *)demo->extension_names,
+ .pEnabledFeatures = &features,
};
err = vkCreateDevice(demo->gpu, &device, NULL, &demo->device);
@@ -2228,6 +2267,8 @@ static void demo_init(struct demo *demo, const int argc, const char *argv[])
if (strncmp(pCmdLine, "--use_staging", strlen("--use_staging")) == 0)
demo->use_staging_buffer = true;
+ else if (strncmp(pCmdLine, "--validate", strlen("--validate")) == 0)
+ demo->validate = true;
else if (strlen(pCmdLine) != 0) {
fprintf(stderr, "Do not recognize argument \"%s\".\n", pCmdLine);
argv_error = true;
@@ -2236,10 +2277,12 @@ static void demo_init(struct demo *demo, const int argc, const char *argv[])
for (int i = 0; i < argc; i++) {
if (strncmp(argv[i], "--use_staging", strlen("--use_staging")) == 0)
demo->use_staging_buffer = true;
+ if (strncmp(argv[i], "--validate", strlen("--validate")) == 0)
+ demo->validate = true;
}
#endif // _WIN32
if (argv_error) {
- fprintf(stderr, "Usage:\n %s [--use_staging]\n", APP_SHORT_NAME);
+ fprintf(stderr, "Usage:\n %s [--use_staging] [--validate]\n", APP_SHORT_NAME);
fflush(stderr);
exit(1);
}
@@ -2297,8 +2340,11 @@ static void demo_cleanup(struct demo *demo) {
free(demo->buffers);
vkDestroyDevice(demo->device, NULL);
+ if (demo->validate) {
+ demo->DestroyDebugReportCallback(demo->inst, demo->msg_callback, NULL);
+ }
vkDestroySurfaceKHR(demo->inst, demo->surface, NULL);
- vkDestroyInstance(demo->inst, &demo->allocator);
+ vkDestroyInstance(demo->inst, NULL);
free(demo->queue_props);
diff --git a/demos/vulkaninfo.c b/demos/vulkaninfo.c
index 8549b3a5c..7fb13f572 100644
--- a/demos/vulkaninfo.c
+++ b/demos/vulkaninfo.c
@@ -48,11 +48,17 @@
#define snprintf _snprintf
-bool consoleCreated = false;
+// Returns nonzero if the console is used only for this process. Will return
+// zero if another process (such as cmd.exe) is also attached.
+static int ConsoleIsExclusive(void) {
+ DWORD pids[2];
+ DWORD num_pids = GetConsoleProcessList(pids, ARRAYSIZE(pids));
+ return num_pids <= 1;
+}
#define WAIT_FOR_CONSOLE_DESTROY \
do { \
- if (consoleCreated) \
+ if (ConsoleIsExclusive()) \
Sleep(INFINITE); \
} while (0)
#else
@@ -573,7 +579,7 @@ static void app_create_instance(struct app_instance *inst) {
.applicationVersion = 1,
.pEngineName = APP_SHORT_NAME,
.engineVersion = 1,
- .apiVersion = VK_API_VERSION,
+ .apiVersion = VK_API_VERSION_1_0,
};
VkInstanceCreateInfo inst_info = {
.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
@@ -1133,6 +1139,32 @@ static void app_gpu_dump(const struct app_gpu *gpu) {
app_dev_dump(&gpu->dev);
}
+#ifdef _WIN32
+// Enlarges the console window to have a large scrollback size.
+static void ConsoleEnlarge(void) {
+ HANDLE consoleHandle = GetStdHandle(STD_OUTPUT_HANDLE);
+
+ // make the console window bigger
+    CONSOLE_SCREEN_BUFFER_INFO csbi;
+    COORD bufferSize;
+    if (GetConsoleScreenBufferInfo(consoleHandle, &csbi))
+    {
+        bufferSize.X = csbi.dwSize.X + 30;
+        bufferSize.Y = 20000;
+        SetConsoleScreenBufferSize(consoleHandle, bufferSize);
+
+        SMALL_RECT r;
+        r.Left = r.Top = 0;
+        r.Right = csbi.dwSize.X - 1 + 30;
+        r.Bottom = 50;
+        SetConsoleWindowInfo(consoleHandle, true, &r);
+    }
+
+ // change the console window title
+ SetConsoleTitle(TEXT(APP_SHORT_NAME));
+}
+#endif
+
int main(int argc, char **argv) {
unsigned int major, minor, patch;
struct app_gpu gpus[MAX_GPUS];
@@ -1141,9 +1173,14 @@ int main(int argc, char **argv) {
VkResult err;
struct app_instance inst;
- major = VK_API_VERSION >> 22;
- minor = (VK_API_VERSION >> 12) & 0x3ff;
- patch = VK_API_VERSION & 0xfff;
+#ifdef _WIN32
+ if (ConsoleIsExclusive())
+ ConsoleEnlarge();
+#endif
+
+ major = VK_API_VERSION_1_0 >> 22;
+ minor = (VK_API_VERSION_1_0 >> 12) & 0x3ff;
+ patch = VK_HEADER_VERSION & 0xfff;
printf("===========\n");
printf("VULKAN INFO\n");
printf("===========\n\n");
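The shift-and-mask arithmetic above follows Vulkan's packed version layout: 10 bits of major, 10 bits of minor, 12 bits of patch. A small sketch of the encoding and decoding (Python, mirroring the `VK_MAKE_VERSION` macro from vulkan.h):

```python
def make_version(major, minor, patch):
    # VK_MAKE_VERSION packs a 10-bit major, 10-bit minor, 12-bit patch.
    return (major << 22) | (minor << 12) | patch

VK_API_VERSION_1_0 = make_version(1, 0, 0)

major = VK_API_VERSION_1_0 >> 22
minor = (VK_API_VERSION_1_0 >> 12) & 0x3FF
patch = VK_API_VERSION_1_0 & 0xFFF
```

Since `VK_API_VERSION_1_0` carries patch level 0, the diff reads the patch from `VK_HEADER_VERSION` instead.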
@@ -1200,58 +1237,11 @@ int main(int argc, char **argv) {
app_destroy_instance(&inst);
- return 0;
-}
-
-#ifdef _WIN32
-
-// Create a console window with a large scrollback size to which to send stdout.
-// Returns true if console window was successfully created, false otherwise.
-bool SetStdOutToNewConsole() {
- // don't do anything if we already have a console
- if (GetStdHandle(STD_OUTPUT_HANDLE))
- return false;
-
- // allocate a console for this app
- if (!AllocConsole())
- return false;
-
- // redirect unbuffered STDOUT to the console
- HANDLE consoleHandle = GetStdHandle(STD_OUTPUT_HANDLE);
- int fileDescriptor = _open_osfhandle((intptr_t)consoleHandle, _O_TEXT);
- FILE *fp = _fdopen(fileDescriptor, "w");
- *stdout = *fp;
- setvbuf(stdout, NULL, _IONBF, 0);
-
- // make the console window bigger
- CONSOLE_SCREEN_BUFFER_INFO csbi;
- COORD bufferSize;
- if (GetConsoleScreenBufferInfo(consoleHandle, &csbi))
- {
- bufferSize.X = csbi.dwSize.X + 30;
- bufferSize.Y = 20000;
- SetConsoleScreenBufferSize(consoleHandle, bufferSize);
- }
-
- SMALL_RECT r;
- r.Left = r.Top = 0;
- r.Right = csbi.dwSize.X - 1 + 30;
- r.Bottom = 50;
- SetConsoleWindowInfo(consoleHandle, true, &r);
-
- // change the console window title
- SetConsoleTitle(TEXT(APP_SHORT_NAME));
-
- return true;
-}
-
-int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PSTR pCmdLine,
- int nCmdShow) {
- char *argv = pCmdLine;
- consoleCreated = SetStdOutToNewConsole();
- main(1, &argv);
fflush(stdout);
- if (consoleCreated)
+#ifdef _WIN32
+ if (ConsoleIsExclusive())
Sleep(INFINITE);
-}
#endif
+
+ return 0;
+}
diff --git a/generator.py b/generator.py
index 2ab8e4902..8e4cebafa 100644
--- a/generator.py
+++ b/generator.py
@@ -327,7 +327,7 @@ class ThreadGeneratorOptions(GeneratorOptions):
# ParamCheckerGeneratorOptions - subclass of GeneratorOptions.
#
-# Adds options used by ParamCheckerOutputGenerator objects during param checker
+# Adds options used by ParamCheckerOutputGenerator objects during parameter validation
# generation.
#
# Additional members
@@ -2707,26 +2707,35 @@ class ParamCheckerOutputGenerator(OutputGenerator):
OutputGenerator.__init__(self, errFile, warnFile, diagFile)
self.INDENT_SPACES = 4
# Commands to ignore
- self.blacklist = ['vkCreateInstance', 'vkCreateDevice', 'vkGetInstanceProcAddr', 'vkGetDeviceProcAddr',
- 'vkEnumerateInstanceLayerProperties', 'vkEnumerateInstanceExtensionsProperties',
- 'vkEnumerateDeviceLayerProperties', 'vkEnumerateDeviceExtensionsProperties',
- 'vkCreateDebugReportCallbackEXT', 'vkDebugReportMessageEXT']
+ self.blacklist = [
+ 'vkGetInstanceProcAddr',
+ 'vkGetDeviceProcAddr',
+ 'vkEnumerateInstanceLayerProperties',
+ 'vkEnumerateInstanceExtensionsProperties',
+ 'vkEnumerateDeviceLayerProperties',
+ 'vkEnumerateDeviceExtensionsProperties',
+ 'vkCreateDebugReportCallbackEXT',
+ 'vkDebugReportMessageEXT']
# Internal state - accumulators for different inner block text
self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
- self.stypes = []
- self.structTypes = dict()
- self.commands = []
+ self.structNames = [] # List of Vulkan struct typenames
+ self.stypes = [] # Values from the VkStructureType enumeration
+ self.structTypes = dict() # Map of Vulkan struct typename to required VkStructureType
+ self.commands = [] # List of CommandData records for all Vulkan commands
+ self.structMembers = [] # List of StructMemberData records for all Vulkan structs
+ self.validatedStructs = set() # Set of structs containing members that require validation
# Named tuples to store struct and command data
self.StructType = namedtuple('StructType', ['name', 'value'])
- self.CommandParam = namedtuple('CommandParam', ['type', 'name', 'ispointer', 'isstaticarray', 'isoptional', 'iscount', 'len', 'cdecl'])
+ self.CommandParam = namedtuple('CommandParam', ['type', 'name', 'ispointer', 'isstaticarray', 'isoptional', 'iscount', 'len', 'extstructs', 'cdecl'])
self.CommandData = namedtuple('CommandData', ['name', 'params', 'cdecl'])
+ self.StructMemberData = namedtuple('StructMemberData', ['name', 'members'])
#
def incIndent(self, indent):
inc = ' ' * self.INDENT_SPACES
if indent:
return indent + inc
return inc
-
+ #
def decIndent(self, indent):
if indent and (len(indent) > self.INDENT_SPACES):
return indent[:-self.INDENT_SPACES]
@@ -2750,7 +2759,8 @@ class ParamCheckerOutputGenerator(OutputGenerator):
#
# Headers
write('#include "vulkan/vulkan.h"', file=self.outFile)
- write('#include "param_checker_utils.h"', file=self.outFile)
+ write('#include "vk_layer_extension_utils.h"', file=self.outFile)
+ write('#include "parameter_validation_utils.h"', file=self.outFile)
#
# Macros
self.newline()
@@ -2774,9 +2784,12 @@ class ParamCheckerOutputGenerator(OutputGenerator):
# end function prototypes separately for this feature. They're only
# printed in endFeature().
self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
+ self.structNames = []
self.stypes = []
self.structTypes = dict()
self.commands = []
+ self.structMembers = []
+ self.validatedStructs = set()
def endFeature(self):
# C-specific
# Actually write the interface to the output file.
@@ -2787,7 +2800,10 @@ class ParamCheckerOutputGenerator(OutputGenerator):
# or move it below the 'for section...' loop.
if (self.featureExtraProtect != None):
write('#ifdef', self.featureExtraProtect, file=self.outFile)
- # Generate the command text from the captured data
+ # Generate the struct member checking code from the captured data
+ self.prepareStructMemberData()
+ self.processStructMemberData()
+ # Generate the command parameter checking code from the captured data
self.processCmdData()
if (self.sections['command']):
if (self.genOpts.protectProto):
@@ -2814,6 +2830,7 @@ class ParamCheckerOutputGenerator(OutputGenerator):
# generating a structure. Otherwise, emit the tag text.
category = typeElem.get('category')
if (category == 'struct' or category == 'union'):
+ self.structNames.append(name)
self.genStruct(typeinfo, name)
#
# Struct parameter check generation.
@@ -2825,18 +2842,29 @@ class ParamCheckerOutputGenerator(OutputGenerator):
# structs etc.)
def genStruct(self, typeinfo, typeName):
OutputGenerator.genStruct(self, typeinfo, typeName)
- for member in typeinfo.elem.findall('.//member'):
+ members = typeinfo.elem.findall('.//member')
+ #
+ # Iterate over members once to get length parameters for arrays
+ lens = set()
+ for member in members:
+ len = self.getLen(member)
+ if len:
+ lens.add(len)
+ #
+ # Generate member info
+ membersInfo = []
+ for member in members:
# Get the member's type and name
- t = self.getTypeNameTuple(member)
- type = t[0]
- name = t[1]
- value = ''
+ info = self.getTypeNameTuple(member)
+ type = info[0]
+ name = info[1]
+ stypeValue = ''
# Process VkStructureType
if type == 'VkStructureType':
# Extract the required struct type value from the comments
# embedded in the original text defining the 'typeinfo' element
rawXml = etree.tostring(typeinfo.elem).decode('ascii')
- result = re.search('VK_STRUCTURE_TYPE_\w+', rawXml)
+ result = re.search(r'VK_STRUCTURE_TYPE_\w+', rawXml)
if result:
value = result.group(0)
# Make sure value is valid
@@ -2844,10 +2872,34 @@ class ParamCheckerOutputGenerator(OutputGenerator):
# print('WARNING: {} is not part of the VkStructureType enumeration [{}]'.format(value, typeName))
else:
value = '<ERROR>'
- # Store the required value
+ # Store the required type value
self.structTypes[typeName] = self.StructType(name=name, value=value)
- #
- # Group (e.g. C "enum" type) generation.
+ #
+ # Store pointer/array/string info
+ # Check for parameter name in lens set
+ iscount = False
+ if name in lens:
+ iscount = True
+ # The pNext members are not tagged as optional, but are treated as
+ # optional for parameter NULL checks. Static array members
+ # are also treated as optional to skip NULL pointer validation, as
+ # they won't be NULL.
+ isstaticarray = self.paramIsStaticArray(member)
+ isoptional = False
+ if self.paramIsOptional(member) or (name == 'pNext') or (isstaticarray):
+ isoptional = True
+ membersInfo.append(self.CommandParam(type=type, name=name,
+ ispointer=self.paramIsPointer(member),
+ isstaticarray=isstaticarray,
+ isoptional=isoptional,
+ iscount=iscount,
+ len=self.getLen(member),
+ extstructs=member.attrib.get('validextensionstructs') if name == 'pNext' else None,
+ cdecl=self.makeCParamDecl(member, 0)))
+ self.structMembers.append(self.StructMemberData(name=typeName, members=membersInfo))
+ #
+ # Capture group (e.g. C "enum" type) info to be used for
+ # param check code generation.
# These are concatenated together with other types.
def genGroup(self, groupinfo, groupName):
OutputGenerator.genGroup(self, groupinfo, groupName)
@@ -2857,13 +2909,12 @@ class ParamCheckerOutputGenerator(OutputGenerator):
name = elem.get('name')
self.stypes.append(name)
#
- # Command generation
+ # Capture command parameter info to be used for param
+ # check code generation.
def genCmd(self, cmdinfo, name):
OutputGenerator.genCmd(self, cmdinfo, name)
if name not in self.blacklist:
- proto = cmdinfo.elem.find('proto') # Function name and return type
params = cmdinfo.elem.findall('param')
- usages = cmdinfo.elem.findall('validity/usage')
# Get list of array lengths
lens = set()
for param in params:
@@ -2884,23 +2935,27 @@ class ParamCheckerOutputGenerator(OutputGenerator):
isoptional=self.paramIsOptional(param),
iscount=iscount,
len=self.getLen(param),
+ extstructs=None,
cdecl=self.makeCParamDecl(param, 0)))
self.commands.append(self.CommandData(name=name, params=paramsInfo, cdecl=self.makeCDecls(cmdinfo.elem)[0]))
#
# Check if the parameter passed in is a pointer
def paramIsPointer(self, param):
- ispointer = False
+ ispointer = 0
paramtype = param.find('type')
- if paramtype.tail is not None and '*' in paramtype.tail:
- ispointer = True
+ if (paramtype.tail is not None) and ('*' in paramtype.tail):
+ ispointer = paramtype.tail.count('*')
+ elif paramtype.text[:4] == 'PFN_':
+ # Treat function pointer typedefs as a pointer to a single value
+ ispointer = 1
return ispointer
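The rewritten `paramIsPointer` returns a pointer depth rather than a boolean flag; counting the `*` characters in the type element's tail text captures double pointers such as `const char* const*`, and `PFN_` function-pointer typedefs count as one level. A standalone sketch of the same logic (Python, with the XML element reduced to its text/tail strings):

```python
def pointer_level(type_text, type_tail):
    # Count the '*' characters trailing the <type> tag; function
    # pointer typedefs (PFN_*) are one level of indirection.
    if type_tail and '*' in type_tail:
        return type_tail.count('*')
    if type_text.startswith('PFN_'):
        return 1
    return 0
```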
#
# Check if the parameter passed in is a static array
def paramIsStaticArray(self, param):
- isstaticarray = False
- tail = param.find('name').tail
- if tail and tail[0] == '[':
- isstaticarray = True
+ isstaticarray = 0
+ paramname = param.find('name')
+ if (paramname.tail is not None) and ('[' in paramname.tail):
+ isstaticarray = paramname.tail.count('[')
return isstaticarray
#
# Check if the parameter passed in is optional
@@ -2928,10 +2983,18 @@ class ParamCheckerOutputGenerator(OutputGenerator):
#
# Retrieve the value of the len tag
def getLen(self, param):
+ result = None
len = param.attrib.get('len')
if len and len != 'null-terminated':
- return len
- return None
+ # For string arrays, 'len' can look like 'count,null-terminated',
+ # indicating that we have a null terminated array of strings. We
+ # strip the null-terminated from the 'len' field and only return
+ # the parameter specifying the string count
+ if 'null-terminated' in len:
+ result = len.split(',')[0]
+ else:
+ result = len
+ return result
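The new `getLen` handling of `'count,null-terminated'` can be checked in isolation. A sketch under the same rules (Python):

```python
def get_len(len_attr):
    # A bare 'null-terminated' marks a string, not a counted array.
    if not len_attr or len_attr == 'null-terminated':
        return None
    # 'enabledLayerCount,null-terminated' marks an array of strings;
    # keep only the parameter that carries the string count.
    if 'null-terminated' in len_attr:
        return len_attr.split(',')[0]
    return len_attr
```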
#
# Retrieve the type and name for a parameter
def getTypeNameTuple(self, param):
@@ -2950,101 +3013,253 @@ class ParamCheckerOutputGenerator(OutputGenerator):
if param.name == name:
return param
return None
+ #
+    # Get the length parameter record for the specified parameter name
+ def getLenParam(self, params, name):
+ lenParam = None
+ if name:
+ if '->' in name:
+ # The count is obtained by dereferencing a member of a struct parameter
+ lenParam = self.CommandParam(name=name, iscount=True, ispointer=False, isoptional=False, type=None, len=None, isstaticarray=None, extstructs=None, cdecl=None)
+ elif 'latexmath' in name:
+            result = re.search(r'mathit\{(\w+)\}', name)
+ lenParam = self.getParamByName(params, result.group(1))
+ elif '/' in name:
+ # Len specified as an equation such as dataSize/4
+ lenParam = self.getParamByName(params, name.split('/')[0])
+ else:
+ lenParam = self.getParamByName(params, name)
+ return lenParam
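`getLenParam` normalizes the different shapes a `len` attribute can take before resolving it to a parameter record. The name-extraction part can be sketched as follows (Python; the latexmath example string is illustrative, not copied from vk.xml):

```python
import re

def len_param_name(len_expr):
    # Struct-member dereferences are kept whole; the caller
    # synthesizes a CommandParam for them instead of looking one up.
    if '->' in len_expr:
        return len_expr
    # latexmath expressions embed the count parameter in \mathit{...}
    if 'latexmath' in len_expr:
        return re.search(r'mathit\{(\w+)\}', len_expr).group(1)
    if '/' in len_expr:  # equation form, e.g. 'dataSize/4'
        return len_expr.split('/')[0]
    return len_expr
```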
+ #
+ # Convert a vulkan.h command declaration into a parameter_validation.h definition
def getCmdDef(self, cmd):
- # TODO: Override makeCDecls
#
# Strip the trailing ';' and split into individual lines
lines = cmd.cdecl[:-1].split('\n')
# Replace Vulkan prototype
- lines[0] = 'static VkBool32 param_check_' + cmd.name + '('
- # Replace the first argument with debug_report_data
- lines[1] = ' debug_report_data* report_data,'
+ lines[0] = 'static VkBool32 parameter_validation_' + cmd.name + '('
+ # Replace the first argument with debug_report_data, when the first
+ # argument is a handle (not vkCreateInstance)
+ reportData = ' debug_report_data*'.ljust(self.genOpts.alignFuncParam) + 'report_data,'
+ if cmd.name != 'vkCreateInstance':
+ lines[1] = reportData
+ else:
+ lines.insert(1, reportData)
return '\n'.join(lines)
#
- # Generate the command text from the captured data
- def processCmdData(self):
- indent = self.incIndent(None)
- for command in self.commands:
- cmdBody = ''
- unused = []
- for param in command.params:
+ # Generate the code to check for a NULL dereference before calling the
+ # validation function
+ def genCheckedLengthCall(self, indent, name, expr):
+ count = name.count('->')
+ if count:
+ checkedExpr = ''
+ localIndent = indent
+ elements = name.split('->')
+ # Open the if expression blocks
+ for i in range(0, count):
+ checkedExpr += localIndent + 'if ({} != NULL) {{\n'.format('->'.join(elements[0:i+1]))
+ localIndent = self.incIndent(localIndent)
+ # Add the validation expression
+ checkedExpr += localIndent + expr
+ # Close the if blocks
+ for i in range(0, count):
+ localIndent = self.decIndent(localIndent)
+ checkedExpr += localIndent + '}\n'
+ return checkedExpr
+ # No if statements were required
+ return indent + expr
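`genCheckedLengthCall` wraps the validation expression in one NULL check per `->` in the length name, so the generated C never dereferences a NULL struct pointer just to read a count. A direct transcription of that generator (Python):

```python
def checked_length_call(indent, name, expr, step='    '):
    # Emit one guard per dereference level:
    #   if (a != NULL) { if (a->b != NULL) { <expr> } }
    count = name.count('->')
    if count == 0:
        return indent + expr
    out, local = '', indent
    elems = name.split('->')
    for i in range(count):
        out += local + 'if (%s != NULL) {\n' % '->'.join(elems[:i + 1])
        local += step
    out += local + expr
    for _ in range(count):
        local = local[:-len(step)]
        out += local + '}\n'
    return out
```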
+ #
+ # Generate the parameter checking code
+ def genFuncBody(self, indent, name, values, valuePrefix, variablePrefix, structName):
+ funcBody = ''
+ unused = []
+ for value in values:
+ checkExpr = '' # Code to check the current parameter
+ #
+ # Check for NULL pointers, ignore the inout count parameters that
+ # will be validated with their associated array
+ if (value.ispointer or value.isstaticarray) and not value.iscount:
#
- # Check for NULL pointers, ignore the inout count parameters that
- # will be validated with their associated array
- if (param.ispointer or param.isstaticarray) and not param.iscount:
- #
- # Parameters for function argument generation
- req = 'VK_TRUE' # Paramerter can be NULL
- cpReq = 'VK_TRUE' # Count pointer can be NULL
- cvReq = 'VK_TRUE' # Count value can be 0
- lenParam = None
- #
- # Generate required parameter string for the pointer and count values
- if param.isoptional:
- req = 'VK_FALSE'
- if param.len:
- # The parameter is an array with an explicit count parameter
- # TODO: Better handling for special case counts and counts from struct members
- if param.len in ['pAllocateInfo->descriptorSetCount', 'pAllocateInfo->commandBufferCount']:
- lenParam = self.CommandParam(name=param.len, iscount=True, ispointer=False, isoptional=False, type=None, len=None, isstaticarray=None, cdecl=None)
- elif param.len == 'dataSize/4':
- lenParam = self.getParamByName(command.params, 'dataSize')
- else:
- lenParam = self.getParamByName(command.params, param.len)
- if lenParam.ispointer:
- # Count parameters that are pointers are inout
- if type(lenParam.isoptional) is list:
- if lenParam.isoptional[0]:
- cpReq = 'VK_FALSE'
- if lenParam.isoptional[1]:
- cvReq = 'VK_FALSE'
- else:
- if lenParam.isoptional:
- cpReq = 'VK_FALSE'
+ # Generate the full name of the value, which will be printed in
+ # the error message, by adding the variable prefix to the
+ # value name
+ valueDisplayName = '(std::string({}) + std::string("{}")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}"'.format(value.name)
+ #
+ # Parameters for function argument generation
+            req = 'VK_TRUE' # Parameter can be NULL
+ cpReq = 'VK_TRUE' # Count pointer can be NULL
+ cvReq = 'VK_TRUE' # Count value can be 0
+ lenParam = None
+ #
+ # Generate required/optional parameter strings for the pointer and count values
+ if value.isoptional:
+ req = 'VK_FALSE'
+ if value.len:
+ # The parameter is an array with an explicit count parameter
+ lenParam = self.getLenParam(values, value.len)
+ if not lenParam: print(value.len)
+ if lenParam.ispointer:
+ # Count parameters that are pointers are inout
+ if type(lenParam.isoptional) is list:
+ if lenParam.isoptional[0]:
+ cpReq = 'VK_FALSE'
+ if lenParam.isoptional[1]:
+ cvReq = 'VK_FALSE'
else:
if lenParam.isoptional:
- cvReq = 'VK_FALSE'
- #
- # If this is a pointer to a struct with an sType field, verify the type
- if param.type in self.structTypes:
- # Add this command to the file; TODO: pre-determine this
- cmdBody += '\n'
- #
- stype = self.structTypes[param.type]
- if lenParam:
- # This is an array
- if lenParam.ispointer:
- cmdBody += indent + 'skipCall |= validate_struct_type_array(report_data, "{}", "{}", "{}", "{}", {}, {}, {}, {}, {}, {});\n'.format(command.name, lenParam.name, param.name, stype.value, lenParam.name, param.name, stype.value, cpReq, cvReq, req)
- else:
- cmdBody += indent + 'skipCall |= validate_struct_type_array(report_data, "{}", "{}", "{}", "{}", {}, {}, {}, {}, {});\n'.format(command.name, lenParam.name, param.name, stype.value, lenParam.name, param.name, stype.value, cvReq, req)
+ cpReq = 'VK_FALSE'
+ else:
+ if lenParam.isoptional:
+ cvReq = 'VK_FALSE'
+ #
+ # If this is a pointer to a struct with an sType field, verify the type
+ if value.type in self.structTypes:
+ stype = self.structTypes[value.type]
+ if lenParam:
+ # This is an array
+ if lenParam.ispointer:
+ # When the length parameter is a pointer, there is an extra Boolean parameter in the function call to indicate if it is required
+ checkExpr = 'skipCall |= validate_struct_type_array(report_data, {}, "{ln}", {dn}, "{sv}", {pf}{ln}, {pf}{vn}, {sv}, {}, {}, {});\n'.format(name, cpReq, cvReq, req, ln=lenParam.name, dn=valueDisplayName, vn=value.name, sv=stype.value, pf=valuePrefix)
else:
- cmdBody += indent + 'skipCall |= validate_struct_type(report_data, "{}", "{}", "{}", {}, {}, {});\n'.format(command.name, param.name, stype.value, param.name, stype.value, req)
+ checkExpr = 'skipCall |= validate_struct_type_array(report_data, {}, "{ln}", {dn}, "{sv}", {pf}{ln}, {pf}{vn}, {sv}, {}, {});\n'.format(name, cvReq, req, ln=lenParam.name, dn=valueDisplayName, vn=value.name, sv=stype.value, pf=valuePrefix)
else:
- if lenParam:
- # Add this command to the file; TODO: pre-determine this
- cmdBody += '\n'
- #
- # This is an array
- if lenParam.ispointer:
- cmdBody += indent + 'skipCall |= validate_array(report_data, "{}", "{}", "{}", {}, {}, {}, {}, {});\n'.format(command.name, lenParam.name, param.name, lenParam.name, param.name, cpReq, cvReq, req)
- else:
- cmdBody += indent + 'skipCall |= validate_array(report_data, "{}", "{}", "{}", {}, {}, {}, {});\n'.format(command.name, lenParam.name, param.name, lenParam.name, param.name, cvReq, req)
- elif not param.isoptional:
- # Add this command to the file; TODO: pre-determine this
- cmdBody += '\n'
- #
- cmdBody += indent + 'skipCall |= validate_required_pointer(report_data, "{}", "{}", {});\n'.format(command.name, param.name, param.name)
+ checkExpr = 'skipCall |= validate_struct_type(report_data, {}, {}, "{sv}", {}{vn}, {sv}, {});\n'.format(name, valueDisplayName, valuePrefix, req, vn=value.name, sv=stype.value)
+ elif value.name == 'pNext':
+ # We need to ignore VkDeviceCreateInfo and VkInstanceCreateInfo, as the loader manipulates them in a way that is not documented in vk.xml
+ if not structName in ['VkDeviceCreateInfo', 'VkInstanceCreateInfo']:
+ # Generate an array of acceptable VkStructureType values for pNext
+ extStructCount = 0
+ extStructVar = 'NULL'
+ extStructNames = 'NULL'
+ if value.extstructs:
+ structs = value.extstructs.split(',')
+ checkExpr = 'const VkStructureType allowedStructs[] = {' + ', '.join([self.structTypes[s].value for s in structs]) + '};\n' + indent
+ extStructCount = 'ARRAY_SIZE(allowedStructs)'
+ extStructVar = 'allowedStructs'
+ extStructNames = '"' + ', '.join(structs) + '"'
+ checkExpr += 'skipCall |= validate_struct_pnext(report_data, {}, {}, {}, {}{vn}, {}, {});\n'.format(name, valueDisplayName, extStructNames, valuePrefix, extStructCount, extStructVar, vn=value.name)
+ else:
+ if lenParam:
+ # This is an array
+ if lenParam.ispointer:
+ # If count and array parameters are optional, there
+ # will be no validation
+ if req == 'VK_TRUE' or cpReq == 'VK_TRUE' or cvReq == 'VK_TRUE':
+ # When the length parameter is a pointer, there is an extra Boolean parameter in the function call to indicate if it is required
+ checkExpr = 'skipCall |= validate_array(report_data, {}, "{ln}", {dn}, {pf}{ln}, {pf}{vn}, {}, {}, {});\n'.format(name, cpReq, cvReq, req, ln=lenParam.name, dn=valueDisplayName, vn=value.name, pf=valuePrefix)
else:
- unused.append(param.name)
- elif not param.iscount:
- unused.append(param.name)
+ # If count and array parameters are optional, there
+ # will be no validation
+ if req == 'VK_TRUE' or cvReq == 'VK_TRUE':
+ funcName = 'validate_array' if value.type != 'char' else 'validate_string_array'
+ checkExpr = 'skipCall |= {}(report_data, {}, "{ln}", {dn}, {pf}{ln}, {pf}{vn}, {}, {});\n'.format(funcName, name, cvReq, req, ln=lenParam.name, dn=valueDisplayName, vn=value.name, pf=valuePrefix)
+ elif not value.isoptional:
+ # Function pointers need a reinterpret_cast to void*
+ if value.type[:4] == 'PFN_':
+ checkExpr = 'skipCall |= validate_required_pointer(report_data, {}, {}, reinterpret_cast<const void*>({}{vn}));\n'.format(name, valueDisplayName, valuePrefix, vn=value.name)
+ else:
+ checkExpr = 'skipCall |= validate_required_pointer(report_data, {}, {}, {}{vn});\n'.format(name, valueDisplayName, valuePrefix, vn=value.name)
+ #
+ # If this is a pointer to a struct, see if it contains members
+ # that need to be checked
+ if value.type in self.validatedStructs:
+ if checkExpr:
+ checkExpr += '\n' + indent
+ #
+            # The name prefix used when reporting an error with a struct member (e.g. the 'pCreateInfo->' in 'pCreateInfo->sType')
+ prefix = '(std::string({}) + std::string("{}->")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}->"'.format(value.name)
+ checkExpr += 'skipCall |= parameter_validation_{}(report_data, {}, {}, {}{});\n'.format(value.type, name, prefix, valuePrefix, value.name)
+ elif value.type in self.validatedStructs:
+                # The name prefix used when reporting an error with a struct member (e.g. the 'pCreateInfo->' in 'pCreateInfo->sType')
+ prefix = '(std::string({}) + std::string("{}.")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}."'.format(value.name)
+ checkExpr += 'skipCall |= parameter_validation_{}(report_data, {}, {}, &({}{}));\n'.format(value.type, name, prefix, valuePrefix, value.name)
+ #
+ # Append the parameter check to the function body for the current command
+ if checkExpr:
+ funcBody += '\n'
+ if lenParam and ('->' in lenParam.name):
+ # Add checks to ensure the validation call does not dereference a NULL pointer to obtain the count
+ funcBody += self.genCheckedLengthCall(indent, lenParam.name, checkExpr)
+ else:
+ funcBody += indent + checkExpr
+ elif not value.iscount:
+ # The parameter is not checked (counts will be checked with
+ # their associated array)
+ unused.append(value.name)
+ return funcBody, unused
+ #
+ # Post-process the collected struct member data to create a list of structs
+ # with members that need to be validated
+ def prepareStructMemberData(self):
+ for struct in self.structMembers:
+ for member in struct.members:
+ if not member.iscount:
+ lenParam = self.getLenParam(struct.members, member.len)
+ # The sType needs to be validated
+                # A required array/count needs to be validated
+ # A required pointer needs to be validated
+ validated = False
+ if member.type in self.structTypes:
+ validated = True
+ elif member.ispointer and lenParam: # This is an array
+ # Make sure len is not optional
+ if lenParam.ispointer:
+ if not lenParam.isoptional[0] or not lenParam.isoptional[1] or not member.isoptional:
+ validated = True
+ else:
+ if not lenParam.isoptional or not member.isoptional:
+ validated = True
+ elif member.ispointer and not member.isoptional:
+ validated = True
+ #
+ if validated:
+ self.validatedStructs.add(struct.name)
+ # Second pass to check for struct members that are structs
+ # requiring validation
+ for member in struct.members:
+ if member.type in self.validatedStructs:
+ self.validatedStructs.add(struct.name)
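The decision rules in `prepareStructMemberData` can be restated as a standalone predicate. This is an illustrative reconstruction, not the generator's actual code; the dict keys stand in for the member tuple fields:

```python
def needs_validation(member, len_param, struct_types):
    """A struct member needs generated checks when it carries an sType,
    is a required array with a non-fully-optional count, or is a
    required pointer (mirrors the first pass above)."""
    if member['type'] in struct_types:
        return True                                  # sType must be validated
    if member['ispointer'] and len_param is not None:  # array member
        if len_param['ispointer']:
            # isoptional is a pair: (count pointer optional, count value optional)
            opt = len_param['isoptional']
            return not opt[0] or not opt[1] or not member['isoptional']
        return not len_param['isoptional'] or not member['isoptional']
    return member['ispointer'] and not member['isoptional']
```

The second pass in the generator then propagates this property upward: a struct also needs validation if any of its members is itself a validated struct.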
+ #
+ # Generate the struct member check code from the captured data
+ def processStructMemberData(self):
+ indent = self.incIndent(None)
+ for struct in self.structMembers:
+ # The string returned by genFuncBody will be nested in an if check
+ # for a NULL pointer, so needs its indent incremented
+ funcBody, unused = self.genFuncBody(self.incIndent(indent), 'pFuncName', struct.members, 'pStruct->', 'pVariableName', struct.name)
+ if funcBody:
+ cmdDef = 'static VkBool32 parameter_validation_{}(\n'.format(struct.name)
+ cmdDef += ' debug_report_data*'.ljust(self.genOpts.alignFuncParam) + ' report_data,\n'
+ cmdDef += ' const char*'.ljust(self.genOpts.alignFuncParam) + ' pFuncName,\n'
+ cmdDef += ' const char*'.ljust(self.genOpts.alignFuncParam) + ' pVariableName,\n'
+ cmdDef += ' const {}*'.format(struct.name).ljust(self.genOpts.alignFuncParam) + ' pStruct)\n'
+ cmdDef += '{\n'
+ cmdDef += indent + 'VkBool32 skipCall = VK_FALSE;\n'
+ cmdDef += '\n'
+ cmdDef += indent + 'if (pStruct != NULL) {'
+ cmdDef += funcBody
+ cmdDef += indent + '}\n'
+ cmdDef += '\n'
+ cmdDef += indent + 'return skipCall;\n'
+ cmdDef += '}\n'
+ self.appendSection('command', cmdDef)
+ #
+ # Generate the command param check code from the captured data
+ def processCmdData(self):
+ indent = self.incIndent(None)
+ for command in self.commands:
+ cmdBody, unused = self.genFuncBody(indent, '"{}"'.format(command.name), command.params, '', None, None)
if cmdBody:
cmdDef = self.getCmdDef(command) + '\n'
cmdDef += '{\n'
- indDnt = self.incIndent(None)
+ # Process unused parameters
# Ignore the first dispatch handle parameter, which is not
- # processed by param_check
- for name in unused[1:]:
+ # processed by parameter_validation (except for vkCreateInstance, which
+ # does not have a handle as its first parameter)
+ startIndex = 1
+ if command.name == 'vkCreateInstance':
+ startIndex = 0
+ for name in unused[startIndex:]:
cmdDef += indent + 'UNUSED_PARAMETER({});\n'.format(name)
if len(unused) > 1:
cmdDef += '\n'
@@ -3054,4 +3269,3 @@ class ParamCheckerOutputGenerator(OutputGenerator):
cmdDef += indent + 'return skipCall;\n'
cmdDef += '}\n'
self.appendSection('command', cmdDef)
-
diff --git a/genvk.py b/genvk.py
index 7831f406c..88e49a067 100755
--- a/genvk.py
+++ b/genvk.py
@@ -273,7 +273,7 @@ buildList = [
],
[ ParamCheckerOutputGenerator,
ParamCheckerGeneratorOptions(
- filename = 'param_check.h',
+ filename = 'parameter_validation.h',
apiname = 'vulkan',
profile = None,
versions = allVersions,
diff --git a/glslang_revision b/glslang_revision
index f18b0980c..c48dee1a3 100644
--- a/glslang_revision
+++ b/glslang_revision
@@ -1 +1 @@
-6c292d3
+3c5b1e6b31aca0eb52fe7e82a963ff735f1de31b
diff --git a/include/vulkan/vk_debug_marker_layer.h b/include/vulkan/vk_debug_marker_layer.h
deleted file mode 100644
index e882b02b4..000000000
--- a/include/vulkan/vk_debug_marker_layer.h
+++ /dev/null
@@ -1,44 +0,0 @@
-//
-// File: vk_debug_marker_layer.h
-//
-/*
- * Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials are
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included in
- * all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS.
- *
- * Authors:
- * Jon Ashburn <jon@lunarg.com>
- * Courtney Goeltzenleuchter <courtney@lunarg.com>
- */
-
-#pragma once
-
-#include "vulkan.h"
-#include "vk_lunarg_debug_marker.h"
-#include "vk_layer.h"
-
-typedef struct VkLayerDebugMarkerDispatchTable_ {
- PFN_vkCmdDbgMarkerBegin CmdDbgMarkerBegin;
- PFN_vkCmdDbgMarkerEnd CmdDbgMarkerEnd;
- PFN_vkDbgSetObjectTag DbgSetObjectTag;
- PFN_vkDbgSetObjectName DbgSetObjectName;
-} VkLayerDebugMarkerDispatchTable;
diff --git a/include/vulkan/vk_icd.h b/include/vulkan/vk_icd.h
index 60b29e037..fdb1e6e43 100644
--- a/include/vulkan/vk_icd.h
+++ b/include/vulkan/vk_icd.h
@@ -30,7 +30,7 @@
#ifndef VKICD_H
#define VKICD_H
-#include "vk_platform.h"
+#include "vulkan.h"
/*
* The ICD must reserve space for a pointer for the loader's dispatch
@@ -65,6 +65,7 @@ typedef enum _VkIcdWsiPlatform {
VK_ICD_WSI_PLATFORM_WIN32,
VK_ICD_WSI_PLATFORM_XCB,
VK_ICD_WSI_PLATFORM_XLIB,
+ VK_ICD_WSI_PLATFORM_DISPLAY
} VkIcdWsiPlatform;
typedef struct _VkIcdSurfaceBase {
@@ -111,4 +112,14 @@ typedef struct _VkIcdSurfaceXlib {
} VkIcdSurfaceXlib;
#endif // VK_USE_PLATFORM_XLIB_KHR
+typedef struct _VkIcdSurfaceDisplay {
+ VkIcdSurfaceBase base;
+ VkDisplayModeKHR displayMode;
+ uint32_t planeIndex;
+ uint32_t planeStackIndex;
+ VkSurfaceTransformFlagBitsKHR transform;
+ float globalAlpha;
+ VkDisplayPlaneAlphaFlagBitsKHR alphaMode;
+ VkExtent2D imageExtent;
+} VkIcdSurfaceDisplay;
#endif // VKICD_H
diff --git a/include/vulkan/vk_layer.h b/include/vulkan/vk_layer.h
index 248704340..95d880255 100644
--- a/include/vulkan/vk_layer.h
+++ b/include/vulkan/vk_layer.h
@@ -34,7 +34,6 @@
#pragma once
#include "vulkan.h"
-#include "vk_lunarg_debug_marker.h"
#if defined(__GNUC__) && __GNUC__ >= 4
#define VK_LAYER_EXPORT __attribute__((visibility("default")))
#elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590)
@@ -226,6 +225,20 @@ typedef struct VkLayerInstanceDispatchTable_ {
#ifdef VK_USE_PLATFORM_ANDROID_KHR
PFN_vkCreateAndroidSurfaceKHR CreateAndroidSurfaceKHR;
#endif
+ PFN_vkGetPhysicalDeviceDisplayPropertiesKHR
+ GetPhysicalDeviceDisplayPropertiesKHR;
+ PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR
+ GetPhysicalDeviceDisplayPlanePropertiesKHR;
+ PFN_vkGetDisplayPlaneSupportedDisplaysKHR
+ GetDisplayPlaneSupportedDisplaysKHR;
+ PFN_vkGetDisplayModePropertiesKHR
+ GetDisplayModePropertiesKHR;
+ PFN_vkCreateDisplayModeKHR
+ CreateDisplayModeKHR;
+ PFN_vkGetDisplayPlaneCapabilitiesKHR
+ GetDisplayPlaneCapabilitiesKHR;
+ PFN_vkCreateDisplayPlaneSurfaceKHR
+ CreateDisplayPlaneSurfaceKHR;
} VkLayerInstanceDispatchTable;
// LL node for tree of dbg callback functions
diff --git a/include/vulkan/vk_lunarg_debug_marker.h b/include/vulkan/vk_lunarg_debug_marker.h
deleted file mode 100644
index edff2b9ee..000000000
--- a/include/vulkan/vk_lunarg_debug_marker.h
+++ /dev/null
@@ -1,98 +0,0 @@
-//
-// File: vk_lunarg_debug_marker.h
-//
-/*
- * Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials are
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included in
- * all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS.
- *
- * Authors:
- * Jon Ashburn <jon@lunarg.com>
- * Courtney Goeltzenleuchter <courtney@lunarg.com>
- */
-
-#ifndef __VK_DEBUG_MARKER_H__
-#define __VK_DEBUG_MARKER_H__
-
-#include "vulkan.h"
-
-#define VK_DEBUG_MARKER_EXTENSION_NUMBER 6
-#define VK_DEBUG_MARKER_EXTENSION_REVISION 1
-#ifdef __cplusplus
-extern "C" {
-#endif // __cplusplus
-
-/*
-***************************************************************************************************
-* DebugMarker Vulkan Extension API
-***************************************************************************************************
-*/
-
-#define DEBUG_MARKER_EXTENSION_NAME "VK_LUNARG_DEBUG_MARKER"
-
-// ------------------------------------------------------------------------------------------------
-// Enumerations
-
-#define VK_DEBUG_MARKER_ENUM_EXTEND(type, id) \
- ((type)(VK_DEBUG_MARKER_EXTENSION_NUMBER * -1000 + (id)))
-
-#define VK_OBJECT_INFO_TYPE_DBG_OBJECT_TAG \
- VK_DEBUG_MARKER_ENUM_EXTEND(VkDbgObjectInfoType, 0)
-#define VK_OBJECT_INFO_TYPE_DBG_OBJECT_NAME \
- VK_DEBUG_MARKER_ENUM_EXTEND(VkDbgObjectInfoType, 1)
-
-// ------------------------------------------------------------------------------------------------
-// API functions
-
-typedef void(VKAPI_PTR *PFN_vkCmdDbgMarkerBegin)(VkCommandBuffer commandBuffer,
- const char *pMarker);
-typedef void(VKAPI_PTR *PFN_vkCmdDbgMarkerEnd)(VkCommandBuffer commandBuffer);
-typedef VkResult(VKAPI_PTR *PFN_vkDbgSetObjectTag)(
- VkDevice device, VkDebugReportObjectTypeEXT objType, uint64_t object,
- size_t tagSize, const void *pTag);
-typedef VkResult(VKAPI_PTR *PFN_vkDbgSetObjectName)(
- VkDevice device, VkDebugReportObjectTypeEXT objType, uint64_t object,
- size_t nameSize, const char *pName);
-
-#ifndef VK_NO_PROTOTYPES
-
-// DebugMarker extension entrypoints
-VKAPI_ATTR void VKAPI_CALL
-vkCmdDbgMarkerBegin(VkCommandBuffer commandBuffer, const char *pMarker);
-
-VKAPI_ATTR void VKAPI_CALL vkCmdDbgMarkerEnd(VkCommandBuffer commandBuffer);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-vkDbgSetObjectTag(VkDevice device, VkDebugReportObjectTypeEXT objType,
- uint64_t object, size_t tagSize, const void *pTag);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-vkDbgSetObjectName(VkDevice device, VkDebugReportObjectTypeEXT objType,
- uint64_t object, size_t nameSize, const char *pName);
-
-#endif // VK_NO_PROTOTYPES
-
-#ifdef __cplusplus
-} // extern "C"
-#endif // __cplusplus
-
-#endif // __VK_DEBUG_MARKER_H__
diff --git a/include/vulkan/vk_platform.h b/include/vulkan/vk_platform.h
index 075a18cab..f5a5243b8 100644
--- a/include/vulkan/vk_platform.h
+++ b/include/vulkan/vk_platform.h
@@ -2,7 +2,7 @@
// File: vk_platform.h
//
/*
-** Copyright (c) 2014-2016 The Khronos Group Inc.
+** Copyright (c) 2014-2015 The Khronos Group Inc.
**
** Permission is hereby granted, free of charge, to any person obtaining a
** copy of this software and/or associated documentation files (the
@@ -25,8 +25,8 @@
*/
-#ifndef __VK_PLATFORM_H__
-#define __VK_PLATFORM_H__
+#ifndef VK_PLATFORM_H_
+#define VK_PLATFORM_H_
#ifdef __cplusplus
extern "C"
@@ -124,4 +124,4 @@ extern "C"
#include <xcb/xcb.h>
#endif
-#endif // __VK_PLATFORM_H__
+#endif
diff --git a/include/vulkan/vulkan.h b/include/vulkan/vulkan.h
index cd6a71ac1..567671a92 100644
--- a/include/vulkan/vulkan.h
+++ b/include/vulkan/vulkan.h
@@ -1,5 +1,5 @@
-#ifndef __vulkan_h_
-#define __vulkan_h_ 1
+#ifndef VULKAN_H_
+#define VULKAN_H_ 1
#ifdef __cplusplus
extern "C" {
@@ -40,12 +40,18 @@ extern "C" {
#define VK_MAKE_VERSION(major, minor, patch) \
(((major) << 22) | ((minor) << 12) | (patch))
-// Vulkan API version supported by this file
-#define VK_API_VERSION VK_MAKE_VERSION(1, 0, 3)
+// DEPRECATED: This define has been removed. Specific version defines (e.g. VK_API_VERSION_1_0), or the VK_MAKE_VERSION macro, should be used instead.
+//#define VK_API_VERSION VK_MAKE_VERSION(1, 0, 0)
+
+// Vulkan 1.0 version number
+#define VK_API_VERSION_1_0 VK_MAKE_VERSION(1, 0, 0)
#define VK_VERSION_MAJOR(version) ((uint32_t)(version) >> 22)
#define VK_VERSION_MINOR(version) (((uint32_t)(version) >> 12) & 0x3ff)
#define VK_VERSION_PATCH(version) ((uint32_t)(version) & 0xfff)
+// Version of this file
+#define VK_HEADER_VERSION 6
+
#define VK_NULL_HANDLE 0
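The replacement macros pack a version into one 32-bit value: major in bits 31..22, minor in bits 21..12, patch in bits 11..0 (note the extraction masks: 10 bits for minor, 12 for patch). A quick Python sketch of the packing and extraction:

```python
def vk_make_version(major, minor, patch):
    # VK_MAKE_VERSION: ((major) << 22) | ((minor) << 12) | (patch)
    return (major << 22) | (minor << 12) | patch

def vk_version_major(version):
    return version >> 22

def vk_version_minor(version):
    return (version >> 12) & 0x3FF

def vk_version_patch(version):
    return version & 0xFFF

VK_API_VERSION_1_0 = vk_make_version(1, 0, 0)
```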
@@ -142,6 +148,7 @@ typedef enum VkResult {
VK_ERROR_OUT_OF_DATE_KHR = -1000001004,
VK_ERROR_INCOMPATIBLE_DISPLAY_KHR = -1000003001,
VK_ERROR_VALIDATION_FAILED_EXT = -1000011001,
+ VK_ERROR_INVALID_SHADER_NV = -1000012000,
VK_RESULT_BEGIN_RANGE = VK_ERROR_FORMAT_NOT_SUPPORTED,
VK_RESULT_END_RANGE = VK_INCOMPLETE,
VK_RESULT_RANGE_SIZE = (VK_INCOMPLETE - VK_ERROR_FORMAT_NOT_SUPPORTED + 1),
@@ -209,7 +216,7 @@ typedef enum VkStructureType {
VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR = 1000007000,
VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR = 1000008000,
VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR = 1000009000,
- VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT = 1000011000,
+ VK_STRUCTURE_TYPE_DEBUG_REPORT_CALLBACK_CREATE_INFO_EXT = 1000011000,
VK_STRUCTURE_TYPE_BEGIN_RANGE = VK_STRUCTURE_TYPE_APPLICATION_INFO,
VK_STRUCTURE_TYPE_END_RANGE = VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO,
VK_STRUCTURE_TYPE_RANGE_SIZE = (VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO - VK_STRUCTURE_TYPE_APPLICATION_INFO + 1),
@@ -679,6 +686,7 @@ typedef enum VkDynamicState {
typedef enum VkFilter {
VK_FILTER_NEAREST = 0,
VK_FILTER_LINEAR = 1,
+ VK_FILTER_CUBIC_IMG = 1000015000,
VK_FILTER_BEGIN_RANGE = VK_FILTER_NEAREST,
VK_FILTER_END_RANGE = VK_FILTER_LINEAR,
VK_FILTER_RANGE_SIZE = (VK_FILTER_LINEAR - VK_FILTER_NEAREST + 1),
@@ -701,8 +709,8 @@ typedef enum VkSamplerAddressMode {
VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER = 3,
VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE = 4,
VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE = VK_SAMPLER_ADDRESS_MODE_REPEAT,
- VK_SAMPLER_ADDRESS_MODE_END_RANGE = VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE,
- VK_SAMPLER_ADDRESS_MODE_RANGE_SIZE = (VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE - VK_SAMPLER_ADDRESS_MODE_REPEAT + 1),
+ VK_SAMPLER_ADDRESS_MODE_END_RANGE = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
+ VK_SAMPLER_ADDRESS_MODE_RANGE_SIZE = (VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER - VK_SAMPLER_ADDRESS_MODE_REPEAT + 1),
VK_SAMPLER_ADDRESS_MODE_MAX_ENUM = 0x7FFFFFFF
} VkSamplerAddressMode;
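The range fix above follows the header convention that `*_BEGIN_RANGE`/`*_END_RANGE`/`*_RANGE_SIZE` span only core enum values; `VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE` (value 4) is now provided by the `VK_KHR_sampler_mirror_clamp_to_edge` extension (added later in this same header) and is excluded. A sketch of the convention:

```python
# Core VkSamplerAddressMode values; extension-provided values such as
# MIRROR_CLAMP_TO_EDGE (= 4) do not participate in the range macros.
VK_SAMPLER_ADDRESS_MODE_REPEAT = 0
VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT = 1
VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE = 2
VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER = 3

BEGIN_RANGE = VK_SAMPLER_ADDRESS_MODE_REPEAT
END_RANGE = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER
RANGE_SIZE = END_RANGE - BEGIN_RANGE + 1  # counts only the core values
```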
@@ -808,6 +816,7 @@ typedef enum VkFormatFeatureFlagBits {
VK_FORMAT_FEATURE_BLIT_SRC_BIT = 0x00000400,
VK_FORMAT_FEATURE_BLIT_DST_BIT = 0x00000800,
VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT = 0x00001000,
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_CUBIC_BIT_IMG = 0x00002000,
} VkFormatFeatureFlagBits;
typedef VkFlags VkFormatFeatureFlags;
@@ -979,7 +988,7 @@ typedef enum VkShaderStageFlagBits {
VK_SHADER_STAGE_GEOMETRY_BIT = 0x00000008,
VK_SHADER_STAGE_FRAGMENT_BIT = 0x00000010,
VK_SHADER_STAGE_COMPUTE_BIT = 0x00000020,
- VK_SHADER_STAGE_ALL_GRAPHICS = 0x1F,
+ VK_SHADER_STAGE_ALL_GRAPHICS = 0x0000001F,
VK_SHADER_STAGE_ALL = 0x7FFFFFFF,
} VkShaderStageFlagBits;
typedef VkFlags VkPipelineVertexInputStateCreateFlags;
@@ -992,7 +1001,7 @@ typedef enum VkCullModeFlagBits {
VK_CULL_MODE_NONE = 0,
VK_CULL_MODE_FRONT_BIT = 0x00000001,
VK_CULL_MODE_BACK_BIT = 0x00000002,
- VK_CULL_MODE_FRONT_AND_BACK = 0x3,
+ VK_CULL_MODE_FRONT_AND_BACK = 0x00000003,
} VkCullModeFlagBits;
typedef VkFlags VkCullModeFlags;
typedef VkFlags VkPipelineMultisampleStateCreateFlags;
@@ -1083,7 +1092,7 @@ typedef VkFlags VkCommandBufferResetFlags;
typedef enum VkStencilFaceFlagBits {
VK_STENCIL_FACE_FRONT_BIT = 0x00000001,
VK_STENCIL_FACE_BACK_BIT = 0x00000002,
- VK_STENCIL_FRONT_AND_BACK = 0x3,
+ VK_STENCIL_FRONT_AND_BACK = 0x00000003,
} VkStencilFaceFlagBits;
typedef VkFlags VkStencilFaceFlags;
@@ -3326,8 +3335,8 @@ typedef enum VkDisplayPlaneAlphaFlagBitsKHR {
VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_BIT_KHR = 0x00000004,
VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_PREMULTIPLIED_BIT_KHR = 0x00000008,
} VkDisplayPlaneAlphaFlagBitsKHR;
-typedef VkFlags VkDisplayModeCreateFlagsKHR;
typedef VkFlags VkDisplayPlaneAlphaFlagsKHR;
+typedef VkFlags VkDisplayModeCreateFlagsKHR;
typedef VkFlags VkDisplaySurfaceCreateFlagsKHR;
typedef struct VkDisplayPropertiesKHR {
@@ -3667,11 +3676,17 @@ VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(
#endif
#endif /* VK_USE_PLATFORM_WIN32_KHR */
+#define VK_KHR_sampler_mirror_clamp_to_edge 1
+#define VK_KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE_SPEC_VERSION 1
+#define VK_KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE_EXTENSION_NAME "VK_KHR_sampler_mirror_clamp_to_edge"
+
+
#define VK_EXT_debug_report 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDebugReportCallbackEXT)
-#define VK_EXT_DEBUG_REPORT_SPEC_VERSION 1
+#define VK_EXT_DEBUG_REPORT_SPEC_VERSION 2
#define VK_EXT_DEBUG_REPORT_EXTENSION_NAME "VK_EXT_debug_report"
+#define VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT VK_STRUCTURE_TYPE_DEBUG_REPORT_CALLBACK_CREATE_INFO_EXT
typedef enum VkDebugReportObjectTypeEXT {
@@ -3768,6 +3783,16 @@ VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
const char* pMessage);
#endif
+#define VK_NV_glsl_shader 1
+#define VK_NV_GLSL_SHADER_SPEC_VERSION 1
+#define VK_NV_GLSL_SHADER_EXTENSION_NAME "VK_NV_glsl_shader"
+
+
+#define VK_IMG_filter_cubic 1
+#define VK_IMG_FILTER_CUBIC_SPEC_VERSION 1
+#define VK_IMG_FILTER_CUBIC_EXTENSION_NAME "VK_IMG_filter_cubic"
+
+
#ifdef __cplusplus
}
#endif
diff --git a/layers/.clang-format b/layers/.clang-format
new file mode 100644
index 000000000..cd70ac163
--- /dev/null
+++ b/layers/.clang-format
@@ -0,0 +1,6 @@
+---
+# We'll use defaults from the LLVM style, but with 4 columns indentation.
+BasedOnStyle: LLVM
+IndentWidth: 4
+ColumnLimit: 132
+...
diff --git a/layers/CMakeLists.txt b/layers/CMakeLists.txt
index 978e9cd98..27e112894 100644
--- a/layers/CMakeLists.txt
+++ b/layers/CMakeLists.txt
@@ -9,7 +9,7 @@ endmacro()
macro(run_vk_layer_generate subcmd output)
add_custom_command(OUTPUT ${output}
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-layer-generate.py ${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h > ${output}
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-layer-generate.py ${DisplayServer} ${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h > ${output}
DEPENDS ${PROJECT_SOURCE_DIR}/vk-layer-generate.py ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h ${PROJECT_SOURCE_DIR}/vulkan.py
)
endmacro()
@@ -20,14 +20,12 @@ macro(run_vk_layer_xml_generate subcmd output)
DEPENDS ${PROJECT_SOURCE_DIR}/vk.xml ${PROJECT_SOURCE_DIR}/generator.py ${PROJECT_SOURCE_DIR}/genvk.py ${PROJECT_SOURCE_DIR}/reg.py
)
endmacro()
-
set(LAYER_JSON_FILES
- VkLayer_draw_state
+ VkLayer_core_validation
VkLayer_image
- VkLayer_mem_tracker
VkLayer_object_tracker
VkLayer_unique_objects
- VkLayer_param_checker
+ VkLayer_parameter_validation
VkLayer_swapchain
VkLayer_threading
VkLayer_device_limits
@@ -62,7 +60,7 @@ endif()
if (WIN32)
macro(add_vk_layer target)
add_custom_command(OUTPUT VkLayer_${target}.def
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py win-def-file VkLayer_${target} layer > VkLayer_${target}.def
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py ${DisplayServer} win-def-file VkLayer_${target} layer > VkLayer_${target}.def
DEPENDS ${PROJECT_SOURCE_DIR}/vk-generate.py ${PROJECT_SOURCE_DIR}/vk.py
)
add_library(VkLayer_${target} SHARED ${ARGN} VkLayer_${target}.def)
@@ -86,25 +84,14 @@ include_directories(
${CMAKE_CURRENT_SOURCE_DIR}/../loader
${CMAKE_CURRENT_SOURCE_DIR}/../include/vulkan
${CMAKE_CURRENT_BINARY_DIR}
- ${PROJECT_SOURCE_DIR}/../glslang/SPIRV
+ ${GLSLANG_SPIRV_INCLUDE_DIR}
)
if (WIN32)
set (CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -D_CRT_SECURE_NO_WARNINGS")
set (CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -D_CRT_SECURE_NO_WARNINGS")
-
- # For VS 2015, which uses compiler version 1900, draw_state.cpp fails with too many objects
- # without either optimizations enabled, or setting the /bigobj compilation option. Since
- # optimizations are enabled in a release build, this only affects the debug build. For now,
- # enable /bigobj mode for all debug layer files. An alternative for the future is to split
- # draw_state.cpp into multiple files, which will also alleviate the compilation error.
- if (MSVC AND NOT (MSVC_VERSION LESS 1900))
- set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
- set (CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
- else()
- set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS")
- set (CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS")
- endif()
+ set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
+ set (CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
endif()
if (NOT WIN32)
set (CMAKE_CXX_FLAGS "-std=c++11")
@@ -113,7 +100,7 @@ if (NOT WIN32)
endif()
add_custom_command(OUTPUT vk_dispatch_table_helper.h
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py dispatch-table-ops layer > vk_dispatch_table_helper.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py ${DisplayServer} dispatch-table-ops layer > vk_dispatch_table_helper.h
DEPENDS ${PROJECT_SOURCE_DIR}/vk-generate.py ${PROJECT_SOURCE_DIR}/vulkan.py)
run_vk_helper(gen_enum_string_helper vk_enum_string_helper.h)
@@ -148,7 +135,7 @@ add_custom_target(generate_vk_layer_helpers DEPENDS
run_vk_layer_generate(object_tracker object_tracker.cpp)
run_vk_layer_xml_generate(Threading thread_check.h)
run_vk_layer_generate(unique_objects unique_objects.cpp)
-run_vk_layer_xml_generate(ParamChecker param_check.h)
+run_vk_layer_xml_generate(ParamChecker parameter_validation.h)
add_library(layer_utils SHARED vk_layer_config.cpp vk_layer_extension_utils.cpp vk_layer_utils.cpp)
if (WIN32)
@@ -158,14 +145,12 @@ if (WIN32)
else()
install(TARGETS layer_utils DESTINATION ${PROJECT_BINARY_DIR}/install_staging)
endif()
-
-add_vk_layer(draw_state draw_state.cpp vk_layer_debug_marker_table.cpp vk_layer_table.cpp)
-add_vk_layer(device_limits device_limits.cpp vk_layer_debug_marker_table.cpp vk_layer_table.cpp vk_layer_utils.cpp)
-add_vk_layer(mem_tracker mem_tracker.cpp vk_layer_table.cpp)
+add_vk_layer(core_validation core_validation.cpp vk_layer_table.cpp)
+add_vk_layer(device_limits device_limits.cpp vk_layer_table.cpp vk_layer_utils.cpp)
add_vk_layer(image image.cpp vk_layer_table.cpp)
add_vk_layer(swapchain swapchain.cpp vk_layer_table.cpp)
# generated
add_vk_layer(object_tracker object_tracker.cpp vk_layer_table.cpp)
add_vk_layer(threading threading.cpp thread_check.h vk_layer_table.cpp)
add_vk_layer(unique_objects unique_objects.cpp vk_layer_table.cpp vk_safe_struct.cpp)
-add_vk_layer(param_checker param_checker.cpp param_check.h vk_layer_debug_marker_table.cpp vk_layer_table.cpp)
+add_vk_layer(parameter_validation parameter_validation.cpp parameter_validation.h vk_layer_table.cpp)
diff --git a/layers/README.md b/layers/README.md
index b6dc298e7..dcc81d722 100644
--- a/layers/README.md
+++ b/layers/README.md
@@ -33,30 +33,33 @@ Note that some layers are code-generated and will therefore exist in the directo
### Layer Details
For complete details of current validation layers, including all of the validation checks that they perform, please refer to the document layers/vk_validation_layer_details.md. Below is a brief overview of each layer.
+### Standard Validation
+This is a meta-layer managed by the loader. (name = VK_LAYER_LUNARG_standard_validation) - specifying this layer name will cause the loader to load all of the standard validation layers (listed below) in the following optimal order: VK_LAYER_GOOGLE_threading, VK_LAYER_LUNARG_parameter_validation, VK_LAYER_LUNARG_device_limits, VK_LAYER_LUNARG_object_tracker, VK_LAYER_LUNARG_image, VK_LAYER_LUNARG_core_validation, VK_LAYER_LUNARG_swapchain, and VK_LAYER_GOOGLE_unique_objects. Other layers can be specified and the loader will remove duplicates.
+
### Print Object Stats
(build dir)/layers/object_tracker.cpp (name=VK_LAYER_LUNARG_object_tracker) - Track object creation, use, and destruction. As objects are created, they're stored in a map. As objects are used, the layer verifies they exist in the map, flagging errors for unknown objects. As objects are destroyed, they're removed from the map. At vkDestroyDevice() and vkDestroyInstance() times, if any objects have not been destroyed, they are reported as leaked objects. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
-### Validate Draw State and Shaders
-layers/draw\_state.cpp (name=VK_LAYER_LUNARG_draw_state) - DrawState tracks the Descriptor Set, Pipeline State, Shaders, and dynamic state performing some point validation as states are created and used, and further validation at each Draw call. Of primary interest is making sure that the resources bound to Descriptor Sets correctly align with the layout specified for the Set. Additionally DrawState include sharder validation (formerly separate ShaderChecker layer) that inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time. It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checks depends on the pair of pipeline stages involved. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
-
-### Track GPU Memory
-layers/mem\_tracker.cpp (name=VK_LAYER_LUNARG_mem_tracker) - The MemTracker layer tracks memory objects and references and validates that they are managed correctly by the application. This includes tracking object bindings, memory hazards, and memory object lifetimes. MemTracker validates several other hazard-related issues related to command buffers, fences, and memory mapping. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
+### Validate API State and Shaders
+layers/core\_validation.cpp (name=VK\_LAYER\_LUNARG\_core\_validation) - The core\_validation layer does the bulk of the API validation that requires storing state. Some of the state it tracks includes the Descriptor Set, Pipeline State, Shaders, dynamic state, and memory objects and bindings. It performs some point validation as states are created and used, and further validation at Draw call and QueueSubmit time. Of primary interest is making sure that the resources bound to Descriptor Sets correctly align with the layout specified for the Set. Also, all of the image and buffer layouts are validated to make sure explicit layout transitions are properly managed. Related to memory, core\_validation includes tracking object bindings, memory hazards, and memory object lifetimes. It also validates several other hazard-related issues related to command buffers, fences, and memory mapping. Additionally, core\_validation includes shader validation (formerly the separate shader\_checker layer) that inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time. It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checks depends on the pair of pipeline stages involved. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
### Check parameters
-layers/param_checker.cpp (name=VK_LAYER_LUNARG_param_checker) - Check the input parameters to API calls for validity. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
+layers/parameter_validation.cpp (name=VK_LAYER_LUNARG_parameter_validation) - Check the input parameters to API calls for validity. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
### Image parameters
-layers/image.cpp (name=VK_LAYER_LUNARG_image) - The Image layer is intended to validate image parameters, formats, and correct use. Images are a significant enough area that they were given a separate layer. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
+layers/image.cpp (name=VK_LAYER_LUNARG_image) - The image layer is intended to validate image parameters, formats, and correct use. Images are a significant enough area that they were given a separate layer. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
### Check threading
-<build dir>/layers/threading.cpp (name=VK_LAYER_GOOGLE_threading) - Check multithreading of API calls for validity. Currently this checks that only one thread at a time uses an object in free-threaded API calls. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
+layers/threading.cpp (name=VK_LAYER_GOOGLE_threading) - Check multithreading of API calls for validity. Currently this checks that only one thread at a time uses an object in free-threaded API calls. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
### Swapchain
-<build dir>/layer/swapchain.cpp (name=VK_LAYER_LUNARG_swapchain) - Check that WSI extensions are being used correctly.
+layers/swapchain.cpp (name=VK_LAYER_LUNARG_swapchain) - Check that WSI extensions are being used correctly.
### Device Limitations
layers/device_limits.cpp (name=VK_LAYER_LUNARG_device_limits) - This layer is intended to capture underlying device features and limitations and then flag errors if an app makes requests for unsupported features or exceeding limitations. This layer is a work in progress and currently only flags some high-level errors without flagging errors on specific feature and limitation. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout.
+### Unique Objects
+(build dir)/layers/unique_objects.cpp (name=VK_LAYER_GOOGLE_unique_objects) - The Vulkan specification allows objects that have non-unique handles. This makes tracking object lifetimes difficult in that it is unclear which object is being referenced on deletion. The unique_objects layer was created to address this problem. If loaded in the correct position (last, which is closest to the display driver) it will wrap all objects with a unique object representation, allowing proper object lifetime tracking. This layer does no validation on its own, and may not be required for the proper operation of all layers or all platforms. One sign that it is needed is the appearance of errors emitted from the object_tracker layer indicating the use of previously destroyed objects.
+
## Using Layers
1. Build VK loader using normal steps (cmake and make)
@@ -69,13 +72,13 @@ layers/device_limits.cpp (name=VK_LAYER_LUNARG_device_limits) - This layer is in
3. Create a vk_layer_settings.txt file in the same directory to specify how your layers should behave.
- Model it after the following example: [*vk_layer_settings.txt*](layers/vk_layer_settings.txt)
+ Model it after the following example: [*vk_layer_settings.txt*](vk_layer_settings.txt)
4. Specify which layers to activate by using
    vkCreateDevice and/or vkCreateInstance, or via environment variables.
- export VK\_INSTANCE\_LAYERS=VK_LAYER_LUNARG_param_checker:VK_LAYER_LUNARG_draw_state
- export VK\_DEVICE\_LAYERS=VK_LAYER_LUNARG_param_checker:VK_LAYER_LUNARG_draw_state
+ export VK\_INSTANCE\_LAYERS=VK\_LAYER\_LUNARG\_param\_checker:VK\_LAYER\_LUNARG\_core\_validation
+ export VK\_DEVICE\_LAYERS=VK\_LAYER\_LUNARG\_param\_checker:VK\_LAYER\_LUNARG\_core\_validation
cd build/tests; ./vkinfo
diff --git a/layers/core_validation.cpp b/layers/core_validation.cpp
new file mode 100644
index 000000000..63ca5362b
--- /dev/null
+++ b/layers/core_validation.cpp
@@ -0,0 +1,10932 @@
+/* Copyright (c) 2015-2016 The Khronos Group Inc.
+ * Copyright (c) 2015-2016 Valve Corporation
+ * Copyright (c) 2015-2016 LunarG, Inc.
+ * Copyright (C) 2015-2016 Google Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and/or associated documentation files (the "Materials"), to
+ * deal in the Materials without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Materials, and to permit persons to whom the Materials
+ * are furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ *
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
+ * USE OR OTHER DEALINGS IN THE MATERIALS
+ *
+ * Author: Cody Northrop <cnorthrop@google.com>
+ * Author: Michael Lentine <mlentine@google.com>
+ * Author: Tobin Ehlis <tobine@google.com>
+ * Author: Chia-I Wu <olv@google.com>
+ * Author: Chris Forbes <chrisf@ijw.co.nz>
+ * Author: Mark Lobodzinski <mark@lunarg.com>
+ * Author: Ian Elliott <ianelliott@google.com>
+ */
+
+// Allow use of STL min and max functions in Windows
+#define NOMINMAX
+
+// Turn on mem_tracker merged code
+#define MTMERGESOURCE 1
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+#include <unordered_map>
+#include <unordered_set>
+#include <map>
+#include <string>
+#include <iostream>
+#include <algorithm>
+#include <list>
+#include <SPIRV/spirv.hpp>
+#include <set>
+
+#include "vk_loader_platform.h"
+#include "vk_dispatch_table_helper.h"
+#include "vk_struct_string_helper_cpp.h"
+#if defined(__GNUC__)
+#pragma GCC diagnostic ignored "-Wwrite-strings"
+#endif
+#if defined(__GNUC__)
+#pragma GCC diagnostic warning "-Wwrite-strings"
+#endif
+#include "vk_struct_size_helper.h"
+#include "core_validation.h"
+#include "vk_layer_config.h"
+#include "vk_layer_table.h"
+#include "vk_layer_data.h"
+#include "vk_layer_logging.h"
+#include "vk_layer_extension_utils.h"
+#include "vk_layer_utils.h"
+
+#if defined __ANDROID__
+#include <android/log.h>
+#define LOGCONSOLE(...) ((void)__android_log_print(ANDROID_LOG_INFO, "DS", __VA_ARGS__))
+#else
+#define LOGCONSOLE(...) printf(__VA_ARGS__)
+#endif
+
+using std::list;
+using std::unique_ptr;
+using std::unordered_map;
+using std::unordered_set;
+using std::vector;
+
+#if MTMERGESOURCE
+// WSI Image Objects bypass usual Image Object creation methods. A special Memory
+// Object value will be used to identify them internally.
+static const VkDeviceMemory MEMTRACKER_SWAP_CHAIN_IMAGE_KEY = (VkDeviceMemory)(-1);
+#endif
+// Track command pools and their command buffers
+struct CMD_POOL_INFO {
+ VkCommandPoolCreateFlags createFlags;
+ uint32_t queueFamilyIndex;
+ list<VkCommandBuffer> commandBuffers; // list container of cmd buffers allocated from this pool
+};
+
+struct devExts {
+ VkBool32 wsi_enabled;
+ unordered_map<VkSwapchainKHR, SWAPCHAIN_NODE *> swapchainMap;
+ unordered_map<VkImage, VkSwapchainKHR> imageToSwapchainMap;
+};
+
+// fwd decls
+struct shader_module;
+struct render_pass;
+
+struct layer_data {
+ debug_report_data *report_data;
+ std::vector<VkDebugReportCallbackEXT> logging_callback;
+ VkLayerDispatchTable *device_dispatch_table;
+ VkLayerInstanceDispatchTable *instance_dispatch_table;
+#if MTMERGESOURCE
+// MTMERGESOURCE - stuff pulled directly from MT
+ uint64_t currentFenceId;
+ // Maps for tracking key structs related to mem_tracker state
+ unordered_map<VkDescriptorSet, MT_DESCRIPTOR_SET_INFO> descriptorSetMap;
+ // Images and Buffers are 2 objects that can have memory bound to them so they get special treatment
+ unordered_map<uint64_t, MT_OBJ_BINDING_INFO> imageBindingMap;
+ unordered_map<uint64_t, MT_OBJ_BINDING_INFO> bufferBindingMap;
+// MTMERGESOURCE - End of MT stuff
+#endif
+ devExts device_extensions;
+ vector<VkQueue> queues; // all queues under given device
+ // Global set of all cmdBuffers that are inFlight on this device
+ unordered_set<VkCommandBuffer> globalInFlightCmdBuffers;
+ // Layer specific data
+ unordered_map<VkSampler, unique_ptr<SAMPLER_NODE>> sampleMap;
+ unordered_map<VkImageView, VkImageViewCreateInfo> imageViewMap;
+ unordered_map<VkImage, IMAGE_NODE> imageMap;
+ unordered_map<VkBufferView, VkBufferViewCreateInfo> bufferViewMap;
+ unordered_map<VkBuffer, BUFFER_NODE> bufferMap;
+ unordered_map<VkPipeline, PIPELINE_NODE *> pipelineMap;
+ unordered_map<VkCommandPool, CMD_POOL_INFO> commandPoolMap;
+ unordered_map<VkDescriptorPool, DESCRIPTOR_POOL_NODE *> descriptorPoolMap;
+ unordered_map<VkDescriptorSet, SET_NODE *> setMap;
+ unordered_map<VkDescriptorSetLayout, LAYOUT_NODE *> descriptorSetLayoutMap;
+ unordered_map<VkPipelineLayout, PIPELINE_LAYOUT_NODE> pipelineLayoutMap;
+ unordered_map<VkDeviceMemory, DEVICE_MEM_INFO> memObjMap;
+ unordered_map<VkFence, FENCE_NODE> fenceMap;
+ unordered_map<VkQueue, QUEUE_NODE> queueMap;
+ unordered_map<VkEvent, EVENT_NODE> eventMap;
+ unordered_map<QueryObject, bool> queryToStateMap;
+ unordered_map<VkQueryPool, QUERY_POOL_NODE> queryPoolMap;
+ unordered_map<VkSemaphore, SEMAPHORE_NODE> semaphoreMap;
+ unordered_map<VkCommandBuffer, GLOBAL_CB_NODE *> commandBufferMap;
+ unordered_map<VkFramebuffer, FRAMEBUFFER_NODE> frameBufferMap;
+ unordered_map<VkImage, vector<ImageSubresourcePair>> imageSubresourceMap;
+ unordered_map<ImageSubresourcePair, IMAGE_LAYOUT_NODE> imageLayoutMap;
+ unordered_map<VkRenderPass, RENDER_PASS_NODE *> renderPassMap;
+ unordered_map<VkShaderModule, unique_ptr<shader_module>> shaderModuleMap;
+ // Current render pass
+ VkRenderPassBeginInfo renderPassBeginInfo;
+ uint32_t currentSubpass;
+
+ // Device specific data
+ PHYS_DEV_PROPERTIES_NODE physDevProperties;
+// MTMERGESOURCE - added a couple of fields to constructor initializer
+ layer_data()
+ : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr),
+#if MTMERGESOURCE
+ currentFenceId(1),
+#endif
+ device_extensions() {}
+};
+
+static const VkLayerProperties cv_global_layers[] = {{
+ "VK_LAYER_LUNARG_core_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
+}};
+
+template <class TCreateInfo> void ValidateLayerOrdering(const TCreateInfo &createInfo) {
+ bool foundLayer = false;
+ for (uint32_t i = 0; i < createInfo.enabledLayerCount; ++i) {
+ if (!strcmp(createInfo.ppEnabledLayerNames[i], cv_global_layers[0].layerName)) {
+ foundLayer = true;
+ }
+ // This has to be logged to console as we don't have a callback at this point.
+ if (!foundLayer && !strcmp(createInfo.ppEnabledLayerNames[i], "VK_LAYER_GOOGLE_unique_objects")) {
+ LOGCONSOLE("Cannot activate layer VK_LAYER_GOOGLE_unique_objects prior to activating %s.",
+ cv_global_layers[0].layerName);
+ }
+ }
+}
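The ordering rule this loop enforces — VK_LAYER_GOOGLE_unique_objects must not be activated before VK_LAYER_LUNARG_core_validation — can be restated as a standalone check. This is an illustrative sketch; `ordering_ok` is a hypothetical helper, not part of the layer:

```cpp
#include <cstring>
#include <vector>

// Returns true if unique_objects never appears in the enabled-layer list
// before core_validation has been seen (illustrative restatement of the
// ordering check above).
bool ordering_ok(const std::vector<const char *> &names) {
    bool saw_core_validation = false;
    for (const char *n : names) {
        if (!std::strcmp(n, "VK_LAYER_LUNARG_core_validation"))
            saw_core_validation = true;
        if (!std::strcmp(n, "VK_LAYER_GOOGLE_unique_objects") && !saw_core_validation)
            return false; // unique_objects activated too early
    }
    return true;
}
```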
+
+// Code imported from shader_checker
+static void build_def_index(shader_module *);
+
+// A forward iterator over spirv instructions. Provides easy access to len, opcode, and content words
+// without the caller needing to care too much about the physical SPIRV module layout.
+struct spirv_inst_iter {
+ std::vector<uint32_t>::const_iterator zero;
+ std::vector<uint32_t>::const_iterator it;
+
+ uint32_t len() { return *it >> 16; }
+ uint32_t opcode() { return *it & 0x0ffffu; }
+ uint32_t const &word(unsigned n) { return it[n]; }
+ uint32_t offset() { return (uint32_t)(it - zero); }
+
+ spirv_inst_iter() {}
+
+ spirv_inst_iter(std::vector<uint32_t>::const_iterator zero, std::vector<uint32_t>::const_iterator it) : zero(zero), it(it) {}
+
+ bool operator==(spirv_inst_iter const &other) { return it == other.it; }
+
+ bool operator!=(spirv_inst_iter const &other) { return it != other.it; }
+
+ spirv_inst_iter operator++(int) { /* x++ */
+ spirv_inst_iter ii = *this;
+ it += len();
+ return ii;
+ }
+
+ spirv_inst_iter operator++() { /* ++x; */
+ it += len();
+ return *this;
+ }
+
+ /* The iterator and the value are the same thing. */
+ spirv_inst_iter &operator*() { return *this; }
+ spirv_inst_iter const &operator*() const { return *this; }
+};
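The word layout the iterator above relies on — each instruction's first word packs its word count in the high 16 bits and its opcode in the low 16 — can be shown in isolation. A hedged sketch: `insn_len`, `insn_opcode`, and `count_insns` are illustrative names, not layer or SPIRV-Tools APIs:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Decode the packed first word of a SPIR-V instruction.
uint32_t insn_len(uint32_t first_word) { return first_word >> 16; }
uint32_t insn_opcode(uint32_t first_word) { return first_word & 0x0ffffu; }

// Walking an instruction stream just advances by insn_len() each step;
// this counts instructions starting at word index `start`.
size_t count_insns(const std::vector<uint32_t> &words, size_t start) {
    size_t n = 0;
    size_t i = start;
    while (i < words.size()) {
        uint32_t len = insn_len(words[i]);
        if (len == 0)
            break; // malformed stream; a real walker would report this
        i += len;
        ++n;
    }
    return n;
}
```

Note that `shader_module::begin()` above starts at word 5 because a SPIR-V module opens with a 5-word header before the first instruction.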
+
+struct shader_module {
+ /* the spirv image itself */
+ vector<uint32_t> words;
+ /* a mapping of <id> to the first word of its def. this is useful because walking type
+ * trees, constant expressions, etc requires jumping all over the instruction stream.
+ */
+ unordered_map<unsigned, unsigned> def_index;
+
+ shader_module(VkShaderModuleCreateInfo const *pCreateInfo)
+ : words((uint32_t *)pCreateInfo->pCode, (uint32_t *)pCreateInfo->pCode + pCreateInfo->codeSize / sizeof(uint32_t)),
+ def_index() {
+
+ build_def_index(this);
+ }
+
+ /* expose begin() / end() to enable range-based for */
+ spirv_inst_iter begin() const { return spirv_inst_iter(words.begin(), words.begin() + 5); } /* first insn */
+ spirv_inst_iter end() const { return spirv_inst_iter(words.begin(), words.end()); } /* just past last insn */
+ /* given an offset into the module, produce an iterator there. */
+ spirv_inst_iter at(unsigned offset) const { return spirv_inst_iter(words.begin(), words.begin() + offset); }
+
+ /* gets an iterator to the definition of an id */
+ spirv_inst_iter get_def(unsigned id) const {
+ auto it = def_index.find(id);
+ if (it == def_index.end()) {
+ return end();
+ }
+ return at(it->second);
+ }
+};
+
+// TODO : Do we need to guard access to layer_data_map w/ lock?
+static unordered_map<void *, layer_data *> layer_data_map;
+
+// TODO : This can be much smarter, using separate locks for separate global data
+static int globalLockInitialized = 0;
+static loader_platform_thread_mutex globalLock;
+#define MAX_TID 513
+static loader_platform_thread_id g_tidMapping[MAX_TID] = {0};
+static uint32_t g_maxTID = 0;
+#if MTMERGESOURCE
+// MTMERGESOURCE - start of direct pull
+static VkPhysicalDeviceMemoryProperties memProps;
+
+static void clear_cmd_buf_and_mem_references(layer_data *my_data, const VkCommandBuffer cb);
+
+#define MAX_BINDING 0xFFFFFFFF
+
+static MT_OBJ_BINDING_INFO *get_object_binding_info(layer_data *my_data, uint64_t handle, VkDebugReportObjectTypeEXT type) {
+ MT_OBJ_BINDING_INFO *retValue = NULL;
+ switch (type) {
+ case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: {
+ auto it = my_data->imageBindingMap.find(handle);
+ if (it != my_data->imageBindingMap.end())
+ return &(*it).second;
+ break;
+ }
+ case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: {
+ auto it = my_data->bufferBindingMap.find(handle);
+ if (it != my_data->bufferBindingMap.end())
+ return &(*it).second;
+ break;
+ }
+ default:
+ break;
+ }
+ return retValue;
+}
+// MTMERGESOURCE - end section
+#endif
+template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);
+
+// prototype
+static GLOBAL_CB_NODE *getCBNode(layer_data *, const VkCommandBuffer);
+
+#if MTMERGESOURCE
+static void delete_queue_info_list(layer_data *my_data) {
+ // Clear the queue map (entries require no additional cleanup)
+ my_data->queueMap.clear();
+}
+
+// Delete CBInfo from container and clear mem references to CB
+static void delete_cmd_buf_info(layer_data *my_data, VkCommandPool commandPool, const VkCommandBuffer cb) {
+ clear_cmd_buf_and_mem_references(my_data, cb);
+ // Delete the CBInfo info
+ my_data->commandPoolMap[commandPool].commandBuffers.remove(cb);
+ my_data->commandBufferMap.erase(cb);
+}
+
+static void add_object_binding_info(layer_data *my_data, const uint64_t handle, const VkDebugReportObjectTypeEXT type,
+ const VkDeviceMemory mem) {
+ switch (type) {
+ // Buffers and images are unique as their CreateInfo is in container struct
+ case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: {
+ auto pCI = &my_data->bufferBindingMap[handle];
+ pCI->mem = mem;
+ break;
+ }
+ case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: {
+ auto pCI = &my_data->imageBindingMap[handle];
+ pCI->mem = mem;
+ break;
+ }
+ default:
+ break;
+ }
+}
+
+static void add_object_create_info(layer_data *my_data, const uint64_t handle, const VkDebugReportObjectTypeEXT type,
+ const void *pCreateInfo) {
+ // TODO : For any CreateInfo struct that has ptrs, need to deep copy them and appropriately clean up on Destroy
+ switch (type) {
+ // Buffers and images are unique as their CreateInfo is in container struct
+ case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: {
+ auto pCI = &my_data->bufferBindingMap[handle];
+ memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
+ memcpy(&pCI->create_info.buffer, pCreateInfo, sizeof(VkBufferCreateInfo));
+ break;
+ }
+ case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: {
+ auto pCI = &my_data->imageBindingMap[handle];
+ memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
+ memcpy(&pCI->create_info.image, pCreateInfo, sizeof(VkImageCreateInfo));
+ break;
+ }
+ // Swapchain is a special case: it uses my_data->imageBindingMap, but copies in the
+ // VkSwapchainCreateInfoKHR's usage flags and sets the mem value to a unique key. These are used by
+ // vkCreateImageView and internal mem_tracker routines to distinguish swap chain images
+ case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT: {
+ auto pCI = &my_data->imageBindingMap[handle];
+ memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
+ pCI->mem = MEMTRACKER_SWAP_CHAIN_IMAGE_KEY;
+ pCI->valid = false;
+ pCI->create_info.image.usage =
+ const_cast<VkSwapchainCreateInfoKHR *>(static_cast<const VkSwapchainCreateInfoKHR *>(pCreateInfo))->imageUsage;
+ break;
+ }
+ default:
+ break;
+ }
+}
+
+// Record a new fenceId for this submission and, if a fence was provided, add it to our list of fences/fenceIds
+static VkBool32 add_fence_info(layer_data *my_data, VkFence fence, VkQueue queue, uint64_t *fenceId) {
+ VkBool32 skipCall = VK_FALSE;
+ *fenceId = my_data->currentFenceId++;
+
+ // If a fence was provided, record its id and owning queue, and validate its state
+ if (fence != VK_NULL_HANDLE) {
+ my_data->fenceMap[fence].fenceId = *fenceId;
+ my_data->fenceMap[fence].queue = queue;
+ // Validate that fence is in UNSIGNALED state
+ VkFenceCreateInfo *pFenceCI = &(my_data->fenceMap[fence].createInfo);
+ if (pFenceCI->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ (uint64_t)fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
+ "Fence %#" PRIxLEAST64 " submitted in SIGNALED state. Fences must be reset before being submitted",
+ (uint64_t)fence);
+ }
+ } else {
+ // TODO : Do we need to create an internal fence here for tracking purposes?
+ }
+ // Update most recently submitted fence and fenceId for Queue
+ my_data->queueMap[queue].lastSubmittedId = *fenceId;
+ return skipCall;
+}
+
+// Remove a fenceInfo from our list of fences/fenceIds
+static void delete_fence_info(layer_data *my_data, VkFence fence) { my_data->fenceMap.erase(fence); }
+
+// Record information when a fence is known to be signalled
+static void update_fence_tracking(layer_data *my_data, VkFence fence) {
+ auto fence_item = my_data->fenceMap.find(fence);
+ if (fence_item != my_data->fenceMap.end()) {
+ FENCE_NODE *pCurFenceInfo = &(*fence_item).second;
+ VkQueue queue = pCurFenceInfo->queue;
+ auto queue_item = my_data->queueMap.find(queue);
+ if (queue_item != my_data->queueMap.end()) {
+ QUEUE_NODE *pQueueInfo = &(*queue_item).second;
+ if (pQueueInfo->lastRetiredId < pCurFenceInfo->fenceId) {
+ pQueueInfo->lastRetiredId = pCurFenceInfo->fenceId;
+ }
+ }
+ }
+
+ // Update fence state in fenceCreateInfo structure
+ auto pFCI = &(my_data->fenceMap[fence].createInfo);
+ pFCI->flags = static_cast<VkFenceCreateFlags>(pFCI->flags | VK_FENCE_CREATE_SIGNALED_BIT);
+}
+
+// Helper routine that updates the fence list for a specific queue to all-retired
+static void retire_queue_fences(layer_data *my_data, VkQueue queue) {
+ QUEUE_NODE *pQueueInfo = &my_data->queueMap[queue];
+ // Set queue's lastRetired to lastSubmitted indicating all fences completed
+ pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
+}
+
+// Helper routine that updates all queues to all-retired
+static void retire_device_fences(layer_data *my_data, VkDevice device) {
+ // Process each queue for device
+ // TODO: Add multiple device support
+ for (auto ii = my_data->queueMap.begin(); ii != my_data->queueMap.end(); ++ii) {
+ // Set queue's lastRetired to lastSubmitted indicating all fences completed
+ QUEUE_NODE *pQueueInfo = &(*ii).second;
+ pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
+ }
+}
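The id-based bookkeeping above — a monotonically increasing fenceId per submission, with each queue tracking a lastSubmittedId and lastRetiredId — can be condensed into a toy model. The names below are illustrative, not the layer's structs:

```cpp
#include <cstdint>

// Condensed model of the fence-id tracking: each submission takes the next
// id; retiring a queue declares everything submitted so far complete, and a
// fence (or command buffer) is complete once its id has been retired.
struct QueueTracker {
    uint64_t last_submitted = 0;
    uint64_t last_retired = 0;

    uint64_t submit(uint64_t &next_fence_id) {
        last_submitted = next_fence_id++; // device-wide monotonic counter
        return last_submitted;
    }
    void retire_all() { last_retired = last_submitted; }
    bool completed(uint64_t fence_id) const { return fence_id <= last_retired; }
};
```

This mirrors how `checkCBCompleted` below decides completion: a command buffer's fenceId is compared against its queue's lastRetiredId.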
+
+// Helper function to validate correct usage bits set for buffers or images
+// Verify that (actual & desired) flags != 0 or,
+// if strict is true, verify that (actual & desired) flags == desired
+// In case of error, report it via dbg callbacks
+static VkBool32 validate_usage_flags(layer_data *my_data, void *disp_obj, VkFlags actual, VkFlags desired, VkBool32 strict,
+ uint64_t obj_handle, VkDebugReportObjectTypeEXT obj_type, char const *ty_str,
+ char const *func_name, char const *usage_str) {
+ VkBool32 correct_usage = VK_FALSE;
+ VkBool32 skipCall = VK_FALSE;
+ if (strict)
+ correct_usage = ((actual & desired) == desired);
+ else
+ correct_usage = ((actual & desired) != 0);
+ if (!correct_usage) {
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, obj_type, obj_handle, __LINE__,
+ MEMTRACK_INVALID_USAGE_FLAG, "MEM", "Invalid usage flag for %s %#" PRIxLEAST64
+ " used by %s. In this case, %s should have %s set during creation.",
+ ty_str, obj_handle, func_name, ty_str, usage_str);
+ }
+ return skipCall;
+}
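The strict/non-strict distinction above reduces to two bitmask tests: strict requires every desired bit to be set, non-strict requires at least one. A minimal standalone restatement (the `usage_ok` helper is hypothetical, not the layer's API):

```cpp
#include <cstdint>

// strict: all desired bits present; non-strict: any desired bit present.
bool usage_ok(uint32_t actual, uint32_t desired, bool strict) {
    return strict ? ((actual & desired) == desired)
                  : ((actual & desired) != 0);
}
```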
+
+// Helper function to validate usage flags for images
+// Pulls image info and then sends actual vs. desired usage off to helper above where
+// an error will be flagged if usage is not correct
+static VkBool32 validate_image_usage_flags(layer_data *my_data, void *disp_obj, VkImage image, VkFlags desired, VkBool32 strict,
+ char const *func_name, char const *usage_string) {
+ VkBool32 skipCall = VK_FALSE;
+ MT_OBJ_BINDING_INFO *pBindInfo = get_object_binding_info(my_data, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
+ if (pBindInfo) {
+ skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.image.usage, desired, strict, (uint64_t)image,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "image", func_name, usage_string);
+ }
+ return skipCall;
+}
+
+// Helper function to validate usage flags for buffers
+// Pulls buffer info and then sends actual vs. desired usage off to helper above where
+// an error will be flagged if usage is not correct
+static VkBool32 validate_buffer_usage_flags(layer_data *my_data, void *disp_obj, VkBuffer buffer, VkFlags desired, VkBool32 strict,
+ char const *func_name, char const *usage_string) {
+ VkBool32 skipCall = VK_FALSE;
+ MT_OBJ_BINDING_INFO *pBindInfo = get_object_binding_info(my_data, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
+ if (pBindInfo) {
+ skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.buffer.usage, desired, strict, (uint64_t)buffer,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "buffer", func_name, usage_string);
+ }
+ return skipCall;
+}
+
+// Return ptr to info in map container containing mem, or NULL if not found
+// Calls to this function should be wrapped in mutex
+static DEVICE_MEM_INFO *get_mem_obj_info(layer_data *dev_data, const VkDeviceMemory mem) {
+ auto item = dev_data->memObjMap.find(mem);
+ if (item != dev_data->memObjMap.end()) {
+ return &(*item).second;
+ } else {
+ return NULL;
+ }
+}
+
+static void add_mem_obj_info(layer_data *my_data, void *object, const VkDeviceMemory mem,
+ const VkMemoryAllocateInfo *pAllocateInfo) {
+ assert(object != NULL);
+
+ memcpy(&my_data->memObjMap[mem].allocInfo, pAllocateInfo, sizeof(VkMemoryAllocateInfo));
+ // TODO: Update for real hardware, actually process allocation info structures
+ my_data->memObjMap[mem].allocInfo.pNext = NULL;
+ my_data->memObjMap[mem].object = object;
+ my_data->memObjMap[mem].refCount = 0;
+ my_data->memObjMap[mem].mem = mem;
+ my_data->memObjMap[mem].image = VK_NULL_HANDLE;
+ my_data->memObjMap[mem].memRange.offset = 0;
+ my_data->memObjMap[mem].memRange.size = 0;
+ my_data->memObjMap[mem].pData = 0;
+ my_data->memObjMap[mem].pDriverData = 0;
+ my_data->memObjMap[mem].valid = false;
+}
+
+static VkBool32 validate_memory_is_valid(layer_data *dev_data, VkDeviceMemory mem, const char *functionName,
+ VkImage image = VK_NULL_HANDLE) {
+ if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
+ MT_OBJ_BINDING_INFO *pBindInfo =
+ get_object_binding_info(dev_data, reinterpret_cast<const uint64_t &>(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
+ if (pBindInfo && !pBindInfo->valid) {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
+ "%s: Cannot read invalid swapchain image %" PRIx64 ", please fill the memory before using.",
+ functionName, (uint64_t)(image));
+ }
+ } else {
+ DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
+ if (pMemObj && !pMemObj->valid) {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
+ "%s: Cannot read invalid memory %" PRIx64 ", please fill the memory before using.", functionName,
+ (uint64_t)(mem));
+ }
+ }
+ return false;
+}
+
+static void set_memory_valid(layer_data *dev_data, VkDeviceMemory mem, bool valid, VkImage image = VK_NULL_HANDLE) {
+ if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
+ MT_OBJ_BINDING_INFO *pBindInfo =
+ get_object_binding_info(dev_data, reinterpret_cast<const uint64_t &>(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
+ if (pBindInfo) {
+ pBindInfo->valid = valid;
+ }
+ } else {
+ DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
+ if (pMemObj) {
+ pMemObj->valid = valid;
+ }
+ }
+}
+
+// Find CB Info and add mem reference to list container
+// Find Mem Obj Info and add CB reference to list container
+static VkBool32 update_cmd_buf_and_mem_references(layer_data *dev_data, const VkCommandBuffer cb, const VkDeviceMemory mem,
+ const char *apiName) {
+ VkBool32 skipCall = VK_FALSE;
+
+ // Skip validation if this image was created through WSI
+ if (mem != MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
+
+ // First update CB binding in MemObj mini CB list
+ DEVICE_MEM_INFO *pMemInfo = get_mem_obj_info(dev_data, mem);
+ if (pMemInfo) {
+ // Search for cmd buffer object in memory object's binding list
+ VkBool32 found = VK_FALSE;
+ if (pMemInfo->pCommandBufferBindings.size() > 0) {
+ for (list<VkCommandBuffer>::iterator it = pMemInfo->pCommandBufferBindings.begin();
+ it != pMemInfo->pCommandBufferBindings.end(); ++it) {
+ if ((*it) == cb) {
+ found = VK_TRUE;
+ break;
+ }
+ }
+ }
+ // If not present, add to list
+ if (found == VK_FALSE) {
+ pMemInfo->pCommandBufferBindings.push_front(cb);
+ pMemInfo->refCount++;
+ }
+ // Now update CBInfo's Mem reference list
+ GLOBAL_CB_NODE *pCBNode = getCBNode(dev_data, cb);
+ // TODO: keep track of all destroyed CBs so we know if this is a stale or simply invalid object
+ if (pCBNode) {
+ // Search for memory object in cmd buffer's reference list
+ VkBool32 found = VK_FALSE;
+ if (pCBNode->pMemObjList.size() > 0) {
+ for (auto it = pCBNode->pMemObjList.begin(); it != pCBNode->pMemObjList.end(); ++it) {
+ if ((*it) == mem) {
+ found = VK_TRUE;
+ break;
+ }
+ }
+ }
+ // If not present, add to list
+ if (found == VK_FALSE) {
+ pCBNode->pMemObjList.push_front(mem);
+ }
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Free bindings related to CB
+static void clear_cmd_buf_and_mem_references(layer_data *dev_data, const VkCommandBuffer cb) {
+ GLOBAL_CB_NODE *pCBNode = getCBNode(dev_data, cb);
+
+ if (pCBNode) {
+ if (pCBNode->pMemObjList.size() > 0) {
+ list<VkDeviceMemory> mem_obj_list = pCBNode->pMemObjList;
+ for (list<VkDeviceMemory>::iterator it = mem_obj_list.begin(); it != mem_obj_list.end(); ++it) {
+ DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, *it);
+ if (pInfo) {
+ pInfo->pCommandBufferBindings.remove(cb);
+ pInfo->refCount--;
+ }
+ }
+ pCBNode->pMemObjList.clear();
+ }
+ pCBNode->activeDescriptorSets.clear();
+ pCBNode->validate_functions.clear();
+ }
+}
+
+// Delete the entire CB list
+static void delete_cmd_buf_info_list(layer_data *my_data) {
+ for (auto &cb_node : my_data->commandBufferMap) {
+ clear_cmd_buf_and_mem_references(my_data, cb_node.first);
+ }
+ my_data->commandBufferMap.clear();
+}
+
+// For given MemObjInfo, report Obj & CB bindings
+static VkBool32 reportMemReferencesAndCleanUp(layer_data *dev_data, DEVICE_MEM_INFO *pMemObjInfo) {
+ VkBool32 skipCall = VK_FALSE;
+ size_t cmdBufRefCount = pMemObjInfo->pCommandBufferBindings.size();
+ size_t objRefCount = pMemObjInfo->pObjBindings.size();
+
+ if ((pMemObjInfo->pCommandBufferBindings.size()) != 0) {
+ skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)pMemObjInfo->mem, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM",
+ "Attempting to free memory object %#" PRIxLEAST64 " which still contains " PRINTF_SIZE_T_SPECIFIER
+ " references",
+ (uint64_t)pMemObjInfo->mem, (cmdBufRefCount + objRefCount));
+ }
+
+ if (cmdBufRefCount > 0 && pMemObjInfo->pCommandBufferBindings.size() > 0) {
+ for (list<VkCommandBuffer>::const_iterator it = pMemObjInfo->pCommandBufferBindings.begin();
+ it != pMemObjInfo->pCommandBufferBindings.end(); ++it) {
+ // TODO : CommandBuffer should be source Obj here
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(*it), __LINE__, MEMTRACK_FREED_MEM_REF, "MEM",
+ "Command Buffer %p still has a reference to mem obj %#" PRIxLEAST64, (*it), (uint64_t)pMemObjInfo->mem);
+ }
+ // Clear the list of hanging references
+ pMemObjInfo->pCommandBufferBindings.clear();
+ }
+
+ if (objRefCount > 0 && pMemObjInfo->pObjBindings.size() > 0) {
+ for (auto it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, it->type, it->handle, __LINE__,
+ MEMTRACK_FREED_MEM_REF, "MEM", "VK Object %#" PRIxLEAST64 " still has a reference to mem obj %#" PRIxLEAST64,
+ it->handle, (uint64_t)pMemObjInfo->mem);
+ }
+ // Clear the list of hanging references
+ pMemObjInfo->pObjBindings.clear();
+ }
+ return skipCall;
+}
+
+static VkBool32 deleteMemObjInfo(layer_data *my_data, void *object, VkDeviceMemory mem) {
+ VkBool32 skipCall = VK_FALSE;
+ auto item = my_data->memObjMap.find(mem);
+ if (item != my_data->memObjMap.end()) {
+ my_data->memObjMap.erase(item);
+ } else {
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM",
+ "Request to delete memory object %#" PRIxLEAST64 " not present in memory Object Map", (uint64_t)mem);
+ }
+ return skipCall;
+}
+
+// Check if fence for given CB is completed
+static bool checkCBCompleted(layer_data *my_data, const VkCommandBuffer cb, bool *complete) {
+ GLOBAL_CB_NODE *pCBNode = getCBNode(my_data, cb);
+ VkBool32 skipCall = false;
+ *complete = true;
+
+ if (pCBNode) {
+ if (pCBNode->lastSubmittedQueue != NULL) {
+ VkQueue queue = pCBNode->lastSubmittedQueue;
+ QUEUE_NODE *pQueueInfo = &my_data->queueMap[queue];
+ if (pCBNode->fenceId > pQueueInfo->lastRetiredId) {
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)cb, __LINE__, MEMTRACK_NONE, "MEM",
+ "fence %#" PRIxLEAST64 " for CB %p has not been checked for completion",
+ (uint64_t)pCBNode->lastSubmittedFence, cb);
+ *complete = false;
+ }
+ }
+ }
+ return skipCall;
+}
+
+static VkBool32 freeMemObjInfo(layer_data *dev_data, void *object, VkDeviceMemory mem, VkBool32 internal) {
+ VkBool32 skipCall = VK_FALSE;
+ // Parse global list to find info w/ mem
+ DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, mem);
+ if (pInfo) {
+ if (pInfo->allocInfo.allocationSize == 0 && !internal) {
+ // TODO: Verify against Valid Use section
+ skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM",
+ "Attempting to free memory associated with a Persistent Image, %#" PRIxLEAST64 ", "
+ "this should not be explicitly freed\n",
+ (uint64_t)mem);
+ } else {
+ // Clear any CB bindings for completed CBs
+ // TODO : Is there a better place to do this?
+
+ bool commandBufferComplete = false;
+ assert(pInfo->object != VK_NULL_HANDLE);
+ list<VkCommandBuffer>::iterator it = pInfo->pCommandBufferBindings.begin();
+ list<VkCommandBuffer>::iterator temp;
+ while (pInfo->pCommandBufferBindings.size() > 0 && it != pInfo->pCommandBufferBindings.end()) {
+ skipCall |= checkCBCompleted(dev_data, *it, &commandBufferComplete);
+ if (commandBufferComplete) {
+ temp = it;
+ ++temp;
+ clear_cmd_buf_and_mem_references(dev_data, *it);
+ it = temp;
+ } else {
+ ++it;
+ }
+ }
+
+ // Now verify that no references to this mem obj remain and remove bindings
+ if (0 != pInfo->refCount) {
+ skipCall |= reportMemReferencesAndCleanUp(dev_data, pInfo);
+ }
+ // Delete mem obj info
+ skipCall |= deleteMemObjInfo(dev_data, object, mem);
+ }
+ }
+ return skipCall;
+}
+
+static const char *object_type_to_string(VkDebugReportObjectTypeEXT type) {
+ switch (type) {
+ case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT:
+ return "image";
+ break;
+ case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT:
+ return "buffer";
+ break;
+ case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT:
+ return "swapchain";
+ break;
+ default:
+ return "unknown";
+ }
+}
+
+// Remove object binding performs 3 tasks:
+// 1. Remove ObjectInfo from MemObjInfo list container of obj bindings & free it
+// 2. Decrement refCount for MemObjInfo
+// 3. Clear mem binding for image/buffer by setting its handle to 0
+// TODO : This currently applies only to Buffer, Image, and Swapchain objects; how should it be updated/customized?
+static VkBool32 clear_object_binding(layer_data *dev_data, void *dispObj, uint64_t handle, VkDebugReportObjectTypeEXT type) {
+ // TODO : Need to customize images/buffers/swapchains to track mem binding and clear it here appropriately
+ VkBool32 skipCall = VK_FALSE;
+ MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type);
+ if (pObjBindInfo) {
+ DEVICE_MEM_INFO *pMemObjInfo = get_mem_obj_info(dev_data, pObjBindInfo->mem);
+ // TODO : Make sure this is a reasonable way to reset mem binding
+ pObjBindInfo->mem = VK_NULL_HANDLE;
+ if (pMemObjInfo) {
+ // This obj is bound to a memory object. Remove the reference to this object in that memory object's list, decrement the
+ // memObj's refcount
+ // and set the objects memory binding pointer to NULL.
+ VkBool32 clearSucceeded = VK_FALSE;
+ for (auto it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
+ if ((it->handle == handle) && (it->type == type)) {
+ pMemObjInfo->refCount--;
+ pMemObjInfo->pObjBindings.erase(it);
+ clearSucceeded = VK_TRUE;
+ break;
+ }
+ }
+ if (VK_FALSE == clearSucceeded) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT,
+ "MEM", "While trying to clear mem binding for %s obj %#" PRIxLEAST64
+ ", unable to find that object referenced by mem obj %#" PRIxLEAST64,
+ object_type_to_string(type), handle, (uint64_t)pMemObjInfo->mem);
+ }
+ }
+ }
+ return skipCall;
+}
+
+ // For the NULL mem case, output an error
+ // Otherwise, make sure the given object is in the global object map;
+ // if a previous binding existed, output a validation error,
+ // else add a reference from the objectInfo to the memoryInfo
+ // and record the binding on the objInfo.
+ // A device (dispatchable object) is required for error logging.
+static VkBool32 set_mem_binding(layer_data *dev_data, void *dispatch_object, VkDeviceMemory mem, uint64_t handle,
+ VkDebugReportObjectTypeEXT type, const char *apiName) {
+ VkBool32 skipCall = VK_FALSE;
+ // Handle NULL case separately: binding an object to NULL memory is reported as an error
+ if (mem == VK_NULL_HANDLE) {
+ // TODO: Verify against Valid Use section of spec.
+ skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_MEM_OBJ,
+ "MEM", "In %s, attempting to Bind Obj(%#" PRIxLEAST64 ") to NULL", apiName, handle);
+ } else {
+ MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type);
+ if (!pObjBindInfo) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS,
+ "MEM", "In %s, attempting to update Binding of %s Obj(%#" PRIxLEAST64 ") that's not in global list()",
+ object_type_to_string(type), apiName, handle);
+ } else {
+ // non-null case so should have real mem obj
+ DEVICE_MEM_INFO *pMemInfo = get_mem_obj_info(dev_data, mem);
+ if (pMemInfo) {
+ // TODO : Need to track mem binding for obj and report conflict here
+ DEVICE_MEM_INFO *pPrevBinding = get_mem_obj_info(dev_data, pObjBindInfo->mem);
+ if (pPrevBinding != NULL) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_REBIND_OBJECT, "MEM",
+ "In %s, attempting to bind memory (%#" PRIxLEAST64 ") to object (%#" PRIxLEAST64
+ ") which has already been bound to mem object %#" PRIxLEAST64,
+ apiName, (uint64_t)mem, handle, (uint64_t)pPrevBinding->mem);
+ } else {
+ MT_OBJ_HANDLE_TYPE oht;
+ oht.handle = handle;
+ oht.type = type;
+ pMemInfo->pObjBindings.push_front(oht);
+ pMemInfo->refCount++;
+ // For image objects, make sure default memory state is correctly set
+ // TODO : What's the best/correct way to handle this?
+ if (VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT == type) {
+ VkImageCreateInfo ici = pObjBindInfo->create_info.image;
+ if (ici.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT)) {
+ // TODO:: More memory state transition stuff.
+ }
+ }
+ pObjBindInfo->mem = mem;
+ }
+ }
+ }
+ }
+ return skipCall;
+}
+
+// For the NULL mem case, clear any previous binding. Otherwise:
+// Make sure the given object is in its object map,
+// update the binding if a previous one existed,
+// add a reference from the objectInfo to the memoryInfo,
+// and add a reference off of the object's binding info.
+// Return VK_TRUE if a validation error was logged (call should be skipped), VK_FALSE otherwise
+static VkBool32 set_sparse_mem_binding(layer_data *dev_data, void *dispObject, VkDeviceMemory mem, uint64_t handle,
+ VkDebugReportObjectTypeEXT type, const char *apiName) {
+ VkBool32 skipCall = VK_FALSE;
+ // Handle NULL case separately, just clear previous binding & decrement reference
+ if (mem == VK_NULL_HANDLE) {
+ skipCall = clear_object_binding(dev_data, dispObject, handle, type);
+ } else {
+ MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type);
+ if (!pObjBindInfo) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM",
+ "In %s, attempting to update Binding of Obj(%#" PRIxLEAST64 ") that's not in global list()", apiName, handle);
+ }
+ // non-null case so should have real mem obj
+ DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, mem);
+ if (pInfo) {
+ // Search for object in memory object's binding list
+ VkBool32 found = VK_FALSE;
+ if (pInfo->pObjBindings.size() > 0) {
+ for (auto it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ if (((*it).handle == handle) && ((*it).type == type)) {
+ found = VK_TRUE;
+ break;
+ }
+ }
+ }
+ // If not present, add to list
+ if (found == VK_FALSE) {
+ MT_OBJ_HANDLE_TYPE oht;
+ oht.handle = handle;
+ oht.type = type;
+ pInfo->pObjBindings.push_front(oht);
+ pInfo->refCount++;
+ }
+ // Need to set mem binding for this object; guard against the missing-binding case reported above
+ if (pObjBindInfo) {
+ pObjBindInfo->mem = mem;
+ }
+ }
+ }
+ return skipCall;
+}
+
+template <typename T>
+void print_object_map_members(layer_data *my_data, void *dispObj, T const &objectName, VkDebugReportObjectTypeEXT objectType,
+ const char *objectStr) {
+ for (auto const &element : objectName) {
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objectType, 0, __LINE__, MEMTRACK_NONE, "MEM",
+ " %s Object list contains %s Object %#" PRIxLEAST64 " ", objectStr, objectStr, element.first);
+ }
+}
+
+// For given Object, get 'mem' obj that it's bound to or NULL if no binding
+static VkBool32 get_mem_binding_from_object(layer_data *my_data, void *dispObj, const uint64_t handle,
+ const VkDebugReportObjectTypeEXT type, VkDeviceMemory *mem) {
+ VkBool32 skipCall = VK_FALSE;
+ *mem = VK_NULL_HANDLE;
+ MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(my_data, handle, type);
+ if (pObjBindInfo) {
+ if (pObjBindInfo->mem) {
+ *mem = pObjBindInfo->mem;
+ } else {
+ skipCall =
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS,
+ "MEM", "Trying to get mem binding for object %#" PRIxLEAST64 " but object has no mem binding", handle);
+ }
+ } else {
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT,
+ "MEM", "Trying to get mem binding for object %#" PRIxLEAST64 " but no such object in %s list", handle,
+ object_type_to_string(type));
+ }
+ return skipCall;
+}
+
+// Print details of MemObjInfo list
+static void print_mem_list(layer_data *dev_data, void *dispObj) {
+ DEVICE_MEM_INFO *pInfo = NULL;
+
+ // Early out if info is not requested
+ if (!(dev_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
+ return;
+ }
+
+ // Just printing each msg individually for now; may want to package these into a single large print
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__,
+ MEMTRACK_NONE, "MEM", "Details of Memory Object list (of size " PRINTF_SIZE_T_SPECIFIER " elements)",
+ dev_data->memObjMap.size());
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__,
+ MEMTRACK_NONE, "MEM", "=============================");
+
+ if (dev_data->memObjMap.empty())
+ return;
+
+ for (auto ii = dev_data->memObjMap.begin(); ii != dev_data->memObjMap.end(); ++ii) {
+ pInfo = &(*ii).second;
+
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " ===MemObjInfo at %p===", (void *)pInfo);
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " Mem object: %#" PRIxLEAST64, (uint64_t)(pInfo->mem));
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " Ref Count: %u", pInfo->refCount);
+ if (0 != pInfo->allocInfo.allocationSize) {
+ string pAllocInfoMsg = vk_print_vkmemoryallocateinfo(&pInfo->allocInfo, "MEM(INFO): ");
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " Mem Alloc info:\n%s", pAllocInfoMsg.c_str());
+ } else {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " Mem Alloc info is NULL (alloc done by vkCreateSwapchainKHR())");
+ }
+
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " VK OBJECT Binding list of size " PRINTF_SIZE_T_SPECIFIER " elements:",
+ pInfo->pObjBindings.size());
+ if (pInfo->pObjBindings.size() > 0) {
+ for (list<MT_OBJ_HANDLE_TYPE>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ 0, __LINE__, MEMTRACK_NONE, "MEM", " VK OBJECT %" PRIu64, it->handle);
+ }
+ }
+
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM",
+ " VK Command Buffer (CB) binding list of size " PRINTF_SIZE_T_SPECIFIER " elements",
+ pInfo->pCommandBufferBindings.size());
+ if (pInfo->pCommandBufferBindings.size() > 0) {
+ for (list<VkCommandBuffer>::iterator it = pInfo->pCommandBufferBindings.begin();
+ it != pInfo->pCommandBufferBindings.end(); ++it) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ 0, __LINE__, MEMTRACK_NONE, "MEM", " VK CB %p", (*it));
+ }
+ }
+ }
+}
+
+static void printCBList(layer_data *my_data, void *dispObj) {
+ GLOBAL_CB_NODE *pCBInfo = NULL;
+
+ // Early out if info is not requested
+ if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
+ return;
+ }
+
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__,
+ MEMTRACK_NONE, "MEM", "Details of CB list (of size " PRINTF_SIZE_T_SPECIFIER " elements)",
+ my_data->commandBufferMap.size());
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__,
+ MEMTRACK_NONE, "MEM", "==================");
+
+ if (my_data->commandBufferMap.empty())
+ return;
+
+ for (auto &cb_node : my_data->commandBufferMap) {
+ pCBInfo = cb_node.second;
+
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " CB Info (%p) has CB %p, fenceId %" PRIx64 ", and fence %#" PRIxLEAST64,
+ (void *)pCBInfo, (void *)pCBInfo->commandBuffer, pCBInfo->fenceId, (uint64_t)pCBInfo->lastSubmittedFence);
+
+ if (pCBInfo->pMemObjList.size() <= 0)
+ continue;
+ for (list<VkDeviceMemory>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0,
+ __LINE__, MEMTRACK_NONE, "MEM", " Mem obj %" PRIu64, (uint64_t)(*it));
+ }
+ }
+}
+
+#endif
+
+ // Map actual TID to an index value and return that index
+ // This keeps TIDs in the range [0, MAX_TID) and simplifies compares between runs
+static uint32_t getTIDIndex() {
+ loader_platform_thread_id tid = loader_platform_get_thread_id();
+ for (uint32_t i = 0; i < g_maxTID; i++) {
+ if (tid == g_tidMapping[i])
+ return i;
+ }
+ // Don't yet have mapping; set it and return the newly assigned index.
+ // Assert before writing so we never store past the end of g_tidMapping.
+ assert(g_maxTID < MAX_TID);
+ uint32_t retVal = (uint32_t)g_maxTID;
+ g_tidMapping[g_maxTID++] = tid;
+ return retVal;
+}
+
+// Return a string representation of CMD_TYPE enum
+static string cmdTypeToString(CMD_TYPE cmd) {
+ switch (cmd) {
+ case CMD_BINDPIPELINE:
+ return "CMD_BINDPIPELINE";
+ case CMD_BINDPIPELINEDELTA:
+ return "CMD_BINDPIPELINEDELTA";
+ case CMD_SETVIEWPORTSTATE:
+ return "CMD_SETVIEWPORTSTATE";
+ case CMD_SETLINEWIDTHSTATE:
+ return "CMD_SETLINEWIDTHSTATE";
+ case CMD_SETDEPTHBIASSTATE:
+ return "CMD_SETDEPTHBIASSTATE";
+ case CMD_SETBLENDSTATE:
+ return "CMD_SETBLENDSTATE";
+ case CMD_SETDEPTHBOUNDSSTATE:
+ return "CMD_SETDEPTHBOUNDSSTATE";
+ case CMD_SETSTENCILREADMASKSTATE:
+ return "CMD_SETSTENCILREADMASKSTATE";
+ case CMD_SETSTENCILWRITEMASKSTATE:
+ return "CMD_SETSTENCILWRITEMASKSTATE";
+ case CMD_SETSTENCILREFERENCESTATE:
+ return "CMD_SETSTENCILREFERENCESTATE";
+ case CMD_BINDDESCRIPTORSETS:
+ return "CMD_BINDDESCRIPTORSETS";
+ case CMD_BINDINDEXBUFFER:
+ return "CMD_BINDINDEXBUFFER";
+ case CMD_BINDVERTEXBUFFER:
+ return "CMD_BINDVERTEXBUFFER";
+ case CMD_DRAW:
+ return "CMD_DRAW";
+ case CMD_DRAWINDEXED:
+ return "CMD_DRAWINDEXED";
+ case CMD_DRAWINDIRECT:
+ return "CMD_DRAWINDIRECT";
+ case CMD_DRAWINDEXEDINDIRECT:
+ return "CMD_DRAWINDEXEDINDIRECT";
+ case CMD_DISPATCH:
+ return "CMD_DISPATCH";
+ case CMD_DISPATCHINDIRECT:
+ return "CMD_DISPATCHINDIRECT";
+ case CMD_COPYBUFFER:
+ return "CMD_COPYBUFFER";
+ case CMD_COPYIMAGE:
+ return "CMD_COPYIMAGE";
+ case CMD_BLITIMAGE:
+ return "CMD_BLITIMAGE";
+ case CMD_COPYBUFFERTOIMAGE:
+ return "CMD_COPYBUFFERTOIMAGE";
+ case CMD_COPYIMAGETOBUFFER:
+ return "CMD_COPYIMAGETOBUFFER";
+ case CMD_CLONEIMAGEDATA:
+ return "CMD_CLONEIMAGEDATA";
+ case CMD_UPDATEBUFFER:
+ return "CMD_UPDATEBUFFER";
+ case CMD_FILLBUFFER:
+ return "CMD_FILLBUFFER";
+ case CMD_CLEARCOLORIMAGE:
+ return "CMD_CLEARCOLORIMAGE";
+ case CMD_CLEARATTACHMENTS:
+ return "CMD_CLEARATTACHMENTS";
+ case CMD_CLEARDEPTHSTENCILIMAGE:
+ return "CMD_CLEARDEPTHSTENCILIMAGE";
+ case CMD_RESOLVEIMAGE:
+ return "CMD_RESOLVEIMAGE";
+ case CMD_SETEVENT:
+ return "CMD_SETEVENT";
+ case CMD_RESETEVENT:
+ return "CMD_RESETEVENT";
+ case CMD_WAITEVENTS:
+ return "CMD_WAITEVENTS";
+ case CMD_PIPELINEBARRIER:
+ return "CMD_PIPELINEBARRIER";
+ case CMD_BEGINQUERY:
+ return "CMD_BEGINQUERY";
+ case CMD_ENDQUERY:
+ return "CMD_ENDQUERY";
+ case CMD_RESETQUERYPOOL:
+ return "CMD_RESETQUERYPOOL";
+ case CMD_COPYQUERYPOOLRESULTS:
+ return "CMD_COPYQUERYPOOLRESULTS";
+ case CMD_WRITETIMESTAMP:
+ return "CMD_WRITETIMESTAMP";
+ case CMD_INITATOMICCOUNTERS:
+ return "CMD_INITATOMICCOUNTERS";
+ case CMD_LOADATOMICCOUNTERS:
+ return "CMD_LOADATOMICCOUNTERS";
+ case CMD_SAVEATOMICCOUNTERS:
+ return "CMD_SAVEATOMICCOUNTERS";
+ case CMD_BEGINRENDERPASS:
+ return "CMD_BEGINRENDERPASS";
+ case CMD_ENDRENDERPASS:
+ return "CMD_ENDRENDERPASS";
+ default:
+ return "UNKNOWN";
+ }
+}
+
+// SPIRV utility functions
+static void build_def_index(shader_module *module) {
+ for (auto insn : *module) {
+ switch (insn.opcode()) {
+ /* Types */
+ case spv::OpTypeVoid:
+ case spv::OpTypeBool:
+ case spv::OpTypeInt:
+ case spv::OpTypeFloat:
+ case spv::OpTypeVector:
+ case spv::OpTypeMatrix:
+ case spv::OpTypeImage:
+ case spv::OpTypeSampler:
+ case spv::OpTypeSampledImage:
+ case spv::OpTypeArray:
+ case spv::OpTypeRuntimeArray:
+ case spv::OpTypeStruct:
+ case spv::OpTypeOpaque:
+ case spv::OpTypePointer:
+ case spv::OpTypeFunction:
+ case spv::OpTypeEvent:
+ case spv::OpTypeDeviceEvent:
+ case spv::OpTypeReserveId:
+ case spv::OpTypeQueue:
+ case spv::OpTypePipe:
+ module->def_index[insn.word(1)] = insn.offset();
+ break;
+
+ /* Fixed constants */
+ case spv::OpConstantTrue:
+ case spv::OpConstantFalse:
+ case spv::OpConstant:
+ case spv::OpConstantComposite:
+ case spv::OpConstantSampler:
+ case spv::OpConstantNull:
+ module->def_index[insn.word(2)] = insn.offset();
+ break;
+
+ /* Specialization constants */
+ case spv::OpSpecConstantTrue:
+ case spv::OpSpecConstantFalse:
+ case spv::OpSpecConstant:
+ case spv::OpSpecConstantComposite:
+ case spv::OpSpecConstantOp:
+ module->def_index[insn.word(2)] = insn.offset();
+ break;
+
+ /* Variables */
+ case spv::OpVariable:
+ module->def_index[insn.word(2)] = insn.offset();
+ break;
+
+ /* Functions */
+ case spv::OpFunction:
+ module->def_index[insn.word(2)] = insn.offset();
+ break;
+
+ default:
+ /* We don't care about any other defs for now. */
+ break;
+ }
+ }
+}
+
+static spirv_inst_iter find_entrypoint(shader_module *src, char const *name, VkShaderStageFlagBits stageBits) {
+ for (auto insn : *src) {
+ if (insn.opcode() == spv::OpEntryPoint) {
+ auto entrypointName = (char const *)&insn.word(3);
+ auto entrypointStageBits = 1u << insn.word(1);
+
+ if (!strcmp(entrypointName, name) && (entrypointStageBits & stageBits)) {
+ return insn;
+ }
+ }
+ }
+
+ return src->end();
+}
+
+bool shader_is_spirv(VkShaderModuleCreateInfo const *pCreateInfo) {
+ uint32_t const *words = (uint32_t const *)pCreateInfo->pCode;
+ size_t sizeInWords = pCreateInfo->codeSize / sizeof(uint32_t);
+
+ /* Just validate that the header makes sense. */
+ return sizeInWords >= 5 && words[0] == spv::MagicNumber && words[1] == spv::Version;
+}
+
+static char const *storage_class_name(unsigned sc) {
+ switch (sc) {
+ case spv::StorageClassInput:
+ return "input";
+ case spv::StorageClassOutput:
+ return "output";
+ case spv::StorageClassUniformConstant:
+ return "const uniform";
+ case spv::StorageClassUniform:
+ return "uniform";
+ case spv::StorageClassWorkgroup:
+ return "workgroup local";
+ case spv::StorageClassCrossWorkgroup:
+ return "workgroup global";
+ case spv::StorageClassPrivate:
+ return "private global";
+ case spv::StorageClassFunction:
+ return "function";
+ case spv::StorageClassGeneric:
+ return "generic";
+ case spv::StorageClassAtomicCounter:
+ return "atomic counter";
+ case spv::StorageClassImage:
+ return "image";
+ case spv::StorageClassPushConstant:
+ return "push constant";
+ default:
+ return "unknown";
+ }
+}
+
+/* get the value of an integral constant */
+unsigned get_constant_value(shader_module const *src, unsigned id) {
+ auto value = src->get_def(id);
+ assert(value != src->end());
+
+ if (value.opcode() != spv::OpConstant) {
+ /* TODO: Either ensure that the specialization transform is already performed on a module we're
+ considering here, OR -- specialize on the fly now.
+ */
+ return 1;
+ }
+
+ return value.word(3);
+}
+
+
+static void describe_type_inner(std::ostringstream &ss, shader_module const *src, unsigned type) {
+ auto insn = src->get_def(type);
+ assert(insn != src->end());
+
+ switch (insn.opcode()) {
+ case spv::OpTypeBool:
+ ss << "bool";
+ break;
+ case spv::OpTypeInt:
+ ss << (insn.word(3) ? 's' : 'u') << "int" << insn.word(2);
+ break;
+ case spv::OpTypeFloat:
+ ss << "float" << insn.word(2);
+ break;
+ case spv::OpTypeVector:
+ ss << "vec" << insn.word(3) << " of ";
+ describe_type_inner(ss, src, insn.word(2));
+ break;
+ case spv::OpTypeMatrix:
+ ss << "mat" << insn.word(3) << " of ";
+ describe_type_inner(ss, src, insn.word(2));
+ break;
+ case spv::OpTypeArray:
+ ss << "arr[" << get_constant_value(src, insn.word(3)) << "] of ";
+ describe_type_inner(ss, src, insn.word(2));
+ break;
+ case spv::OpTypePointer:
+ ss << "ptr to " << storage_class_name(insn.word(2)) << " ";
+ describe_type_inner(ss, src, insn.word(3));
+ break;
+ case spv::OpTypeStruct: {
+ ss << "struct of (";
+ for (unsigned i = 2; i < insn.len(); i++) {
+ describe_type_inner(ss, src, insn.word(i));
+ if (i == insn.len() - 1) {
+ ss << ")";
+ } else {
+ ss << ", ";
+ }
+ }
+ break;
+ }
+ case spv::OpTypeSampler:
+ ss << "sampler";
+ break;
+ case spv::OpTypeSampledImage:
+ ss << "sampler+";
+ describe_type_inner(ss, src, insn.word(2));
+ break;
+ case spv::OpTypeImage:
+ ss << "image(dim=" << insn.word(3) << ", sampled=" << insn.word(7) << ")";
+ break;
+ default:
+ ss << "oddtype";
+ break;
+ }
+}
+
+
+static std::string describe_type(shader_module const *src, unsigned type) {
+ std::ostringstream ss;
+ describe_type_inner(ss, src, type);
+ return ss.str();
+}
+
+
+static bool types_match(shader_module const *a, shader_module const *b, unsigned a_type, unsigned b_type, bool b_arrayed) {
+ /* walk two type trees together, and complain about differences */
+ auto a_insn = a->get_def(a_type);
+ auto b_insn = b->get_def(b_type);
+ assert(a_insn != a->end());
+ assert(b_insn != b->end());
+
+ if (b_arrayed && b_insn.opcode() == spv::OpTypeArray) {
+ /* we probably just found the extra level of arrayness in b_type: compare the type inside it to a_type */
+ return types_match(a, b, a_type, b_insn.word(2), false);
+ }
+
+ if (a_insn.opcode() != b_insn.opcode()) {
+ return false;
+ }
+
+ switch (a_insn.opcode()) {
+ /* if b_arrayed and we hit a leaf type, then we can't match -- there's nowhere for the extra OpTypeArray to be! */
+ case spv::OpTypeBool:
+ return !b_arrayed;
+ case spv::OpTypeInt:
+ /* match on width, signedness */
+ return a_insn.word(2) == b_insn.word(2) && a_insn.word(3) == b_insn.word(3) && !b_arrayed;
+ case spv::OpTypeFloat:
+ /* match on width */
+ return a_insn.word(2) == b_insn.word(2) && !b_arrayed;
+ case spv::OpTypeVector:
+ case spv::OpTypeMatrix:
+ /* match on element type, count. these all have the same layout. we don't get here if
+ * b_arrayed -- that is handled above. */
+ return !b_arrayed && types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) && a_insn.word(3) == b_insn.word(3);
+ case spv::OpTypeArray:
+ /* match on element type, count. these all have the same layout. we don't get here if
+ * b_arrayed. This differs from vector & matrix types in that the array size is the id of a constant instruction,
+ * not a literal within OpTypeArray */
+ return !b_arrayed && types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) &&
+ get_constant_value(a, a_insn.word(3)) == get_constant_value(b, b_insn.word(3));
+ case spv::OpTypeStruct:
+ /* match on all element types */
+ {
+ if (b_arrayed) {
+ /* for the purposes of matching different levels of arrayness, structs are leaves. */
+ return false;
+ }
+
+ if (a_insn.len() != b_insn.len()) {
+ return false; /* structs cannot match if member counts differ */
+ }
+
+ for (unsigned i = 2; i < a_insn.len(); i++) {
+ if (!types_match(a, b, a_insn.word(i), b_insn.word(i), b_arrayed)) {
+ return false;
+ }
+ }
+
+ return true;
+ }
+ case spv::OpTypePointer:
+ /* match on pointee type. storage class is expected to differ */
+ return types_match(a, b, a_insn.word(3), b_insn.word(3), b_arrayed);
+
+ default:
+ /* remaining types are CLisms, or may not appear in the interfaces we
+ * are interested in. Just claim no match.
+ */
+ return false;
+ }
+}
+
+static int value_or_default(std::unordered_map<unsigned, unsigned> const &map, unsigned id, int def) {
+ auto it = map.find(id);
+ if (it == map.end())
+ return def;
+ else
+ return it->second;
+}
+
+static unsigned get_locations_consumed_by_type(shader_module const *src, unsigned type, bool strip_array_level) {
+ auto insn = src->get_def(type);
+ assert(insn != src->end());
+
+ switch (insn.opcode()) {
+ case spv::OpTypePointer:
+ /* see through the ptr -- this is only ever at the toplevel for graphics shaders;
+ * we're never actually passing pointers around. */
+ return get_locations_consumed_by_type(src, insn.word(3), strip_array_level);
+ case spv::OpTypeArray:
+ if (strip_array_level) {
+ return get_locations_consumed_by_type(src, insn.word(2), false);
+ } else {
+ return get_constant_value(src, insn.word(3)) * get_locations_consumed_by_type(src, insn.word(2), false);
+ }
+ case spv::OpTypeMatrix:
+ /* num locations is the dimension * element size */
+ return insn.word(3) * get_locations_consumed_by_type(src, insn.word(2), false);
+ default:
+ /* everything else is just 1. */
+ return 1;
+
+ /* TODO: extend to handle 64bit scalar types, whose vectors may need
+ * multiple locations. */
+ }
+}
+
+typedef std::pair<unsigned, unsigned> location_t;
+typedef std::pair<unsigned, unsigned> descriptor_slot_t;
+
+struct interface_var {
+ uint32_t id;
+ uint32_t type_id;
+ uint32_t offset;
+ /* TODO: collect the name, too? Isn't required to be present. */
+};
+
+static spirv_inst_iter get_struct_type(shader_module const *src, spirv_inst_iter def, bool is_array_of_verts) {
+ while (true) {
+
+ if (def.opcode() == spv::OpTypePointer) {
+ def = src->get_def(def.word(3));
+ } else if (def.opcode() == spv::OpTypeArray && is_array_of_verts) {
+ def = src->get_def(def.word(2));
+ is_array_of_verts = false;
+ } else if (def.opcode() == spv::OpTypeStruct) {
+ return def;
+ } else {
+ return src->end();
+ }
+ }
+}
+
+static void collect_interface_block_members(layer_data *my_data, VkDevice dev, shader_module const *src,
+ std::map<location_t, interface_var> &out,
+ std::unordered_map<unsigned, unsigned> const &blocks, bool is_array_of_verts,
+ uint32_t id, uint32_t type_id) {
+ /* Walk down the type_id presented, trying to determine whether it's actually an interface block. */
+ auto type = get_struct_type(src, src->get_def(type_id), is_array_of_verts);
+ if (type == src->end() || blocks.find(type.word(1)) == blocks.end()) {
+ /* this isn't an interface block. */
+ return;
+ }
+
+ std::unordered_map<unsigned, unsigned> member_components;
+
+ /* Walk all the OpMemberDecorate for type's result id -- first pass, collect components. */
+ for (auto insn : *src) {
+ if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
+ unsigned member_index = insn.word(2);
+
+ if (insn.word(3) == spv::DecorationComponent) {
+ unsigned component = insn.word(4);
+ member_components[member_index] = component;
+ }
+ }
+ }
+
+ /* Second pass -- produce the output, from Location decorations */
+ for (auto insn : *src) {
+ if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
+ unsigned member_index = insn.word(2);
+ unsigned member_type_id = type.word(2 + member_index);
+
+ if (insn.word(3) == spv::DecorationLocation) {
+ unsigned location = insn.word(4);
+ unsigned num_locations = get_locations_consumed_by_type(src, member_type_id, false);
+ auto component_it = member_components.find(member_index);
+ unsigned component = component_it == member_components.end() ? 0 : component_it->second;
+
+ for (unsigned int offset = 0; offset < num_locations; offset++) {
+ interface_var v;
+ v.id = id;
+ /* TODO: member index in interface_var too? */
+ v.type_id = member_type_id;
+ v.offset = offset;
+ out[std::make_pair(location + offset, component)] = v;
+ }
+ }
+ }
+ }
+}
+
+static void collect_interface_by_location(layer_data *my_data, VkDevice dev, shader_module const *src, spirv_inst_iter entrypoint,
+ spv::StorageClass sinterface, std::map<location_t, interface_var> &out,
+ bool is_array_of_verts) {
+ std::unordered_map<unsigned, unsigned> var_locations;
+ std::unordered_map<unsigned, unsigned> var_builtins;
+ std::unordered_map<unsigned, unsigned> var_components;
+ std::unordered_map<unsigned, unsigned> blocks;
+
+ for (auto insn : *src) {
+
+ /* We consider two interface models: SSO rendezvous-by-location, and
+ * builtins. Complain about anything that fits neither model.
+ */
+ if (insn.opcode() == spv::OpDecorate) {
+ if (insn.word(2) == spv::DecorationLocation) {
+ var_locations[insn.word(1)] = insn.word(3);
+ }
+
+ if (insn.word(2) == spv::DecorationBuiltIn) {
+ var_builtins[insn.word(1)] = insn.word(3);
+ }
+
+ if (insn.word(2) == spv::DecorationComponent) {
+ var_components[insn.word(1)] = insn.word(3);
+ }
+
+ if (insn.word(2) == spv::DecorationBlock) {
+ blocks[insn.word(1)] = 1;
+ }
+ }
+ }
+
+ /* TODO: handle grouped decorations */
+ /* TODO: handle index=1 dual source outputs from FS -- two vars will
+ * have the same location, and we DONT want to clobber. */
+
+ /* find the end of the entrypoint's name string. additional zero bytes follow the actual null
+ terminator, to fill out the rest of the word - so we only need to look at the last byte in
+ the word to determine which word contains the terminator. */
+ auto word = 3;
+ while (entrypoint.word(word) & 0xff000000u) {
+ ++word;
+ }
+ ++word;
+
+ for (; word < entrypoint.len(); word++) {
+ auto insn = src->get_def(entrypoint.word(word));
+ assert(insn != src->end());
+ assert(insn.opcode() == spv::OpVariable);
+
+ if (insn.word(3) == sinterface) {
+ unsigned id = insn.word(2);
+ unsigned type = insn.word(1);
+
+ int location = value_or_default(var_locations, id, -1);
+ int builtin = value_or_default(var_builtins, id, -1);
+ unsigned component = value_or_default(var_components, id, 0); /* unspecified is OK, is 0 */
+
+ /* All variables and interface block members in the Input or Output storage classes
+ * must be decorated with either a builtin or an explicit location.
+ *
+ * TODO: integrate the interface block support here. For now, don't complain --
+ * a valid SPIRV module will only hit this path for the interface block case, as the
+ * individual members of the type are decorated, rather than variable declarations.
+ */
+
+ if (location != -1) {
+ /* A user-defined interface variable, with a location. Where a variable
+ * occupied multiple locations, emit one result for each. */
+ unsigned num_locations = get_locations_consumed_by_type(src, type, is_array_of_verts);
+ for (unsigned int offset = 0; offset < num_locations; offset++) {
+ interface_var v;
+ v.id = id;
+ v.type_id = type;
+ v.offset = offset;
+ out[std::make_pair(location + offset, component)] = v;
+ }
+ } else if (builtin == -1) {
+ /* An interface block instance */
+ collect_interface_block_members(my_data, dev, src, out, blocks, is_array_of_verts, id, type);
+ }
+ }
+ }
+}
+
+static void collect_interface_by_descriptor_slot(layer_data *my_data, VkDevice dev, shader_module const *src,
+ std::unordered_set<uint32_t> const &accessible_ids,
+ std::map<descriptor_slot_t, interface_var> &out) {
+
+ std::unordered_map<unsigned, unsigned> var_sets;
+ std::unordered_map<unsigned, unsigned> var_bindings;
+
+ for (auto insn : *src) {
+ /* All variables in the Uniform or UniformConstant storage classes are required to be decorated with both
+ * DecorationDescriptorSet and DecorationBinding.
+ */
+ if (insn.opcode() == spv::OpDecorate) {
+ if (insn.word(2) == spv::DecorationDescriptorSet) {
+ var_sets[insn.word(1)] = insn.word(3);
+ }
+
+ if (insn.word(2) == spv::DecorationBinding) {
+ var_bindings[insn.word(1)] = insn.word(3);
+ }
+ }
+ }
+
+ for (auto id : accessible_ids) {
+ auto insn = src->get_def(id);
+ assert(insn != src->end());
+
+ if (insn.opcode() == spv::OpVariable &&
+ (insn.word(3) == spv::StorageClassUniform || insn.word(3) == spv::StorageClassUniformConstant)) {
+ unsigned set = value_or_default(var_sets, insn.word(2), 0);
+ unsigned binding = value_or_default(var_bindings, insn.word(2), 0);
+
+ auto existing_it = out.find(std::make_pair(set, binding));
+ if (existing_it != out.end()) {
+ /* conflict within spv image */
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INCONSISTENT_SPIRV, "SC",
+ "var %d (type %d) in %s interface in descriptor slot (%u,%u) conflicts with existing definition",
+ insn.word(2), insn.word(1), storage_class_name(insn.word(3)), existing_it->first.first,
+ existing_it->first.second);
+ }
+
+ interface_var v;
+ v.id = insn.word(2);
+ v.type_id = insn.word(1);
+ out[std::make_pair(set, binding)] = v;
+ }
+ }
+}
+
+static bool validate_interface_between_stages(layer_data *my_data, VkDevice dev, shader_module const *producer,
+ spirv_inst_iter producer_entrypoint, char const *producer_name,
+ shader_module const *consumer, spirv_inst_iter consumer_entrypoint,
+ char const *consumer_name, bool consumer_arrayed_input) {
+ std::map<location_t, interface_var> outputs;
+ std::map<location_t, interface_var> inputs;
+
+ bool pass = true;
+
+ collect_interface_by_location(my_data, dev, producer, producer_entrypoint, spv::StorageClassOutput, outputs, false);
+ collect_interface_by_location(my_data, dev, consumer, consumer_entrypoint, spv::StorageClassInput, inputs,
+ consumer_arrayed_input);
+
+ auto a_it = outputs.begin();
+ auto b_it = inputs.begin();
+
+ /* maps sorted by key (location); walk them together to find mismatches */
+ while ((outputs.size() > 0 && a_it != outputs.end()) || (inputs.size() && b_it != inputs.end())) {
+ bool a_at_end = outputs.size() == 0 || a_it == outputs.end();
+ bool b_at_end = inputs.size() == 0 || b_it == inputs.end();
+ auto a_first = a_at_end ? std::make_pair(0u, 0u) : a_it->first;
+ auto b_first = b_at_end ? std::make_pair(0u, 0u) : b_it->first;
+
+ if (b_at_end || ((!a_at_end) && (a_first < b_first))) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
+ "%s writes to output location %u.%u which is not consumed by %s", producer_name, a_first.first,
+ a_first.second, consumer_name)) {
+ pass = false;
+ }
+ a_it++;
+ } else if (a_at_end || a_first > b_first) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC",
+ "%s consumes input location %u.%u which is not written by %s", consumer_name, b_first.first, b_first.second,
+ producer_name)) {
+ pass = false;
+ }
+ b_it++;
+ } else {
+ if (types_match(producer, consumer, a_it->second.type_id, b_it->second.type_id, consumer_arrayed_input)) {
+ /* OK! */
+ } else {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC", "Type mismatch on location %u.%u: '%s' vs '%s'",
+ a_first.first, a_first.second,
+ describe_type(producer, a_it->second.type_id).c_str(),
+ describe_type(consumer, b_it->second.type_id).c_str())) {
+ pass = false;
+ }
+ }
+ a_it++;
+ b_it++;
+ }
+ }
+
+ return pass;
+}
+
+enum FORMAT_TYPE {
+ FORMAT_TYPE_UNDEFINED,
+ FORMAT_TYPE_FLOAT, /* UNORM, SNORM, FLOAT, USCALED, SSCALED, SRGB -- anything we consider float in the shader */
+ FORMAT_TYPE_SINT,
+ FORMAT_TYPE_UINT,
+};
+
+static unsigned get_format_type(VkFormat fmt) {
+ switch (fmt) {
+ case VK_FORMAT_UNDEFINED:
+ return FORMAT_TYPE_UNDEFINED;
+ case VK_FORMAT_R8_SINT:
+ case VK_FORMAT_R8G8_SINT:
+ case VK_FORMAT_R8G8B8_SINT:
+ case VK_FORMAT_R8G8B8A8_SINT:
+ case VK_FORMAT_R16_SINT:
+ case VK_FORMAT_R16G16_SINT:
+ case VK_FORMAT_R16G16B16_SINT:
+ case VK_FORMAT_R16G16B16A16_SINT:
+ case VK_FORMAT_R32_SINT:
+ case VK_FORMAT_R32G32_SINT:
+ case VK_FORMAT_R32G32B32_SINT:
+ case VK_FORMAT_R32G32B32A32_SINT:
+ case VK_FORMAT_B8G8R8_SINT:
+ case VK_FORMAT_B8G8R8A8_SINT:
+ case VK_FORMAT_A2B10G10R10_SINT_PACK32:
+ case VK_FORMAT_A2R10G10B10_SINT_PACK32:
+ return FORMAT_TYPE_SINT;
+ case VK_FORMAT_R8_UINT:
+ case VK_FORMAT_R8G8_UINT:
+ case VK_FORMAT_R8G8B8_UINT:
+ case VK_FORMAT_R8G8B8A8_UINT:
+ case VK_FORMAT_R16_UINT:
+ case VK_FORMAT_R16G16_UINT:
+ case VK_FORMAT_R16G16B16_UINT:
+ case VK_FORMAT_R16G16B16A16_UINT:
+ case VK_FORMAT_R32_UINT:
+ case VK_FORMAT_R32G32_UINT:
+ case VK_FORMAT_R32G32B32_UINT:
+ case VK_FORMAT_R32G32B32A32_UINT:
+ case VK_FORMAT_B8G8R8_UINT:
+ case VK_FORMAT_B8G8R8A8_UINT:
+ case VK_FORMAT_A2B10G10R10_UINT_PACK32:
+ case VK_FORMAT_A2R10G10B10_UINT_PACK32:
+ return FORMAT_TYPE_UINT;
+ default:
+ return FORMAT_TYPE_FLOAT;
+ }
+}
+
+/* characterizes a SPIR-V type appearing in an interface to a fixed-function (FF) stage,
+ * for comparison against a VkFormat's characterization above. */
+static unsigned get_fundamental_type(shader_module const *src, unsigned type) {
+ auto insn = src->get_def(type);
+ assert(insn != src->end());
+
+ switch (insn.opcode()) {
+ case spv::OpTypeInt:
+ return insn.word(3) ? FORMAT_TYPE_SINT : FORMAT_TYPE_UINT;
+ case spv::OpTypeFloat:
+ return FORMAT_TYPE_FLOAT;
+ case spv::OpTypeVector:
+ return get_fundamental_type(src, insn.word(2));
+ case spv::OpTypeMatrix:
+ return get_fundamental_type(src, insn.word(2));
+ case spv::OpTypeArray:
+ return get_fundamental_type(src, insn.word(2));
+ case spv::OpTypePointer:
+ return get_fundamental_type(src, insn.word(3));
+ default:
+ return FORMAT_TYPE_UNDEFINED;
+ }
+}
+
+static uint32_t get_shader_stage_id(VkShaderStageFlagBits stage) {
+ uint32_t bit_pos = u_ffs(stage);
+ return bit_pos - 1;
+}
+
+static bool validate_vi_consistency(layer_data *my_data, VkDevice dev, VkPipelineVertexInputStateCreateInfo const *vi) {
+ /* walk the binding descriptions, which describe the step rate and stride of each vertex buffer.
+ * each binding should be specified only once.
+ */
+ std::unordered_map<uint32_t, VkVertexInputBindingDescription const *> bindings;
+ bool pass = true;
+
+ for (unsigned i = 0; i < vi->vertexBindingDescriptionCount; i++) {
+ auto desc = &vi->pVertexBindingDescriptions[i];
+ auto &binding = bindings[desc->binding];
+ if (binding) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INCONSISTENT_VI, "SC",
+ "Duplicate vertex input binding descriptions for binding %d", desc->binding)) {
+ pass = false;
+ }
+ } else {
+ binding = desc;
+ }
+ }
+
+ return pass;
+}
+
+static bool validate_vi_against_vs_inputs(layer_data *my_data, VkDevice dev, VkPipelineVertexInputStateCreateInfo const *vi,
+ shader_module const *vs, spirv_inst_iter entrypoint) {
+ std::map<location_t, interface_var> inputs;
+ bool pass = true;
+
+ collect_interface_by_location(my_data, dev, vs, entrypoint, spv::StorageClassInput, inputs, false);
+
+ /* Build index by location */
+ std::map<uint32_t, VkVertexInputAttributeDescription const *> attribs;
+ if (vi) {
+ for (unsigned i = 0; i < vi->vertexAttributeDescriptionCount; i++)
+ attribs[vi->pVertexAttributeDescriptions[i].location] = &vi->pVertexAttributeDescriptions[i];
+ }
+
+ auto it_a = attribs.begin();
+ auto it_b = inputs.begin();
+
+ while ((attribs.size() > 0 && it_a != attribs.end()) || (inputs.size() > 0 && it_b != inputs.end())) {
+ bool a_at_end = attribs.size() == 0 || it_a == attribs.end();
+ bool b_at_end = inputs.size() == 0 || it_b == inputs.end();
+ auto a_first = a_at_end ? 0 : it_a->first;
+ auto b_first = b_at_end ? 0 : it_b->first.first;
+ if (!a_at_end && (b_at_end || a_first < b_first)) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
+ "Vertex attribute at location %d not consumed by VS", a_first)) {
+ pass = false;
+ }
+ it_a++;
+ } else if (!b_at_end && (a_at_end || b_first < a_first)) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+                        __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC", "VS consumes input at location %d which is not provided",
+ b_first)) {
+ pass = false;
+ }
+ it_b++;
+ } else {
+ unsigned attrib_type = get_format_type(it_a->second->format);
+ unsigned input_type = get_fundamental_type(vs, it_b->second.type_id);
+
+ /* type checking */
+ if (attrib_type != FORMAT_TYPE_UNDEFINED && input_type != FORMAT_TYPE_UNDEFINED && attrib_type != input_type) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC",
+ "Attribute type of `%s` at location %d does not match VS input type of `%s`",
+ string_VkFormat(it_a->second->format), a_first,
+ describe_type(vs, it_b->second.type_id).c_str())) {
+ pass = false;
+ }
+ }
+
+ /* OK! */
+ it_a++;
+ it_b++;
+ }
+ }
+
+ return pass;
+}
+
+static bool validate_fs_outputs_against_render_pass(layer_data *my_data, VkDevice dev, shader_module const *fs,
+ spirv_inst_iter entrypoint, RENDER_PASS_NODE const *rp, uint32_t subpass) {
+ const std::vector<VkFormat> &color_formats = rp->subpassColorFormats[subpass];
+ std::map<location_t, interface_var> outputs;
+ bool pass = true;
+
+ /* TODO: dual source blend index (spv::DecIndex, zero if not provided) */
+
+ collect_interface_by_location(my_data, dev, fs, entrypoint, spv::StorageClassOutput, outputs, false);
+
+ auto it = outputs.begin();
+ uint32_t attachment = 0;
+
+ /* Walk attachment list and outputs together -- this is a little overpowered since attachments
+ * are currently dense, but the parallel with matching between shader stages is nice.
+ */
+
+ while ((outputs.size() > 0 && it != outputs.end()) || attachment < color_formats.size()) {
+ if (attachment == color_formats.size() || (it != outputs.end() && it->first.first < attachment)) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
+ "FS writes to output location %d with no matching attachment", it->first.first)) {
+ pass = false;
+ }
+ it++;
+ } else if (it == outputs.end() || it->first.first > attachment) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC", "Attachment %d not written by FS", attachment)) {
+ pass = false;
+ }
+ attachment++;
+ } else {
+ unsigned output_type = get_fundamental_type(fs, it->second.type_id);
+ unsigned att_type = get_format_type(color_formats[attachment]);
+
+ /* type checking */
+ if (att_type != FORMAT_TYPE_UNDEFINED && output_type != FORMAT_TYPE_UNDEFINED && att_type != output_type) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0,
+ __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC",
+ "Attachment %d of type `%s` does not match FS output type of `%s`", attachment,
+ string_VkFormat(color_formats[attachment]),
+ describe_type(fs, it->second.type_id).c_str())) {
+ pass = false;
+ }
+ }
+
+ /* OK! */
+ it++;
+ attachment++;
+ }
+ }
+
+ return pass;
+}
+
+/* For some analyses, we need to know about all ids referenced by the static call tree of a particular
+ * entrypoint. This is important for identifying the set of shader resources actually used by an entrypoint,
+ * for example.
+ * Note: we only explore parts of the image which might actually contain ids we care about for the above analyses.
+ * - NOT the shader input/output interfaces.
+ *
+ * TODO: The set of interesting opcodes here was determined by eyeballing the SPIR-V spec. It might be worth
+ * converting parts of this to be generated from the machine-readable spec instead.
+ */
+static void mark_accessible_ids(shader_module const *src, spirv_inst_iter entrypoint, std::unordered_set<uint32_t> &ids) {
+ std::unordered_set<uint32_t> worklist;
+ worklist.insert(entrypoint.word(2));
+
+ while (!worklist.empty()) {
+ auto id_iter = worklist.begin();
+ auto id = *id_iter;
+ worklist.erase(id_iter);
+
+ auto insn = src->get_def(id);
+ if (insn == src->end()) {
+            /* id is something we didn't collect in build_def_index. that's OK -- we'll stumble
+ * across all kinds of things here that we may not care about. */
+ continue;
+ }
+
+ /* try to add to the output set */
+ if (!ids.insert(id).second) {
+ continue; /* if we already saw this id, we don't want to walk it again. */
+ }
+
+ switch (insn.opcode()) {
+ case spv::OpFunction:
+ /* scan whole body of the function, enlisting anything interesting */
+ while (++insn, insn.opcode() != spv::OpFunctionEnd) {
+ switch (insn.opcode()) {
+ case spv::OpLoad:
+ case spv::OpAtomicLoad:
+ case spv::OpAtomicExchange:
+ case spv::OpAtomicCompareExchange:
+ case spv::OpAtomicCompareExchangeWeak:
+ case spv::OpAtomicIIncrement:
+ case spv::OpAtomicIDecrement:
+ case spv::OpAtomicIAdd:
+ case spv::OpAtomicISub:
+ case spv::OpAtomicSMin:
+ case spv::OpAtomicUMin:
+ case spv::OpAtomicSMax:
+ case spv::OpAtomicUMax:
+ case spv::OpAtomicAnd:
+ case spv::OpAtomicOr:
+ case spv::OpAtomicXor:
+ worklist.insert(insn.word(3)); /* ptr */
+ break;
+ case spv::OpStore:
+ case spv::OpAtomicStore:
+ worklist.insert(insn.word(1)); /* ptr */
+ break;
+ case spv::OpAccessChain:
+ case spv::OpInBoundsAccessChain:
+ worklist.insert(insn.word(3)); /* base ptr */
+ break;
+ case spv::OpSampledImage:
+ case spv::OpImageSampleImplicitLod:
+ case spv::OpImageSampleExplicitLod:
+ case spv::OpImageSampleDrefImplicitLod:
+ case spv::OpImageSampleDrefExplicitLod:
+ case spv::OpImageSampleProjImplicitLod:
+ case spv::OpImageSampleProjExplicitLod:
+ case spv::OpImageSampleProjDrefImplicitLod:
+ case spv::OpImageSampleProjDrefExplicitLod:
+ case spv::OpImageFetch:
+ case spv::OpImageGather:
+ case spv::OpImageDrefGather:
+ case spv::OpImageRead:
+ case spv::OpImage:
+ case spv::OpImageQueryFormat:
+ case spv::OpImageQueryOrder:
+ case spv::OpImageQuerySizeLod:
+ case spv::OpImageQuerySize:
+ case spv::OpImageQueryLod:
+ case spv::OpImageQueryLevels:
+ case spv::OpImageQuerySamples:
+ case spv::OpImageSparseSampleImplicitLod:
+ case spv::OpImageSparseSampleExplicitLod:
+ case spv::OpImageSparseSampleDrefImplicitLod:
+ case spv::OpImageSparseSampleDrefExplicitLod:
+ case spv::OpImageSparseSampleProjImplicitLod:
+ case spv::OpImageSparseSampleProjExplicitLod:
+ case spv::OpImageSparseSampleProjDrefImplicitLod:
+ case spv::OpImageSparseSampleProjDrefExplicitLod:
+ case spv::OpImageSparseFetch:
+ case spv::OpImageSparseGather:
+ case spv::OpImageSparseDrefGather:
+ case spv::OpImageTexelPointer:
+ worklist.insert(insn.word(3)); /* image or sampled image */
+ break;
+ case spv::OpImageWrite:
+ worklist.insert(insn.word(1)); /* image -- different operand order to above */
+ break;
+ case spv::OpFunctionCall:
+ for (auto i = 3; i < insn.len(); i++) {
+ worklist.insert(insn.word(i)); /* fn itself, and all args */
+ }
+ break;
+
+ case spv::OpExtInst:
+ for (auto i = 5; i < insn.len(); i++) {
+ worklist.insert(insn.word(i)); /* operands to ext inst */
+ }
+ break;
+ }
+ }
+ break;
+ }
+ }
+}
+
+struct shader_stage_attributes {
+ char const *const name;
+ bool arrayed_input;
+};
+
+static shader_stage_attributes shader_stage_attribs[] = {
+ {"vertex shader", false},
+ {"tessellation control shader", true},
+ {"tessellation evaluation shader", false},
+ {"geometry shader", true},
+ {"fragment shader", false},
+};
+
+static bool validate_push_constant_block_against_pipeline(layer_data *my_data, VkDevice dev,
+ std::vector<VkPushConstantRange> const *pushConstantRanges,
+ shader_module const *src, spirv_inst_iter type,
+ VkShaderStageFlagBits stage) {
+ bool pass = true;
+
+ /* strip off ptrs etc */
+ type = get_struct_type(src, type, false);
+ assert(type != src->end());
+
+ /* validate directly off the offsets. this isn't quite correct for arrays
+ * and matrices, but is a good first step. TODO: arrays, matrices, weird
+ * sizes */
+ for (auto insn : *src) {
+ if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
+
+ if (insn.word(3) == spv::DecorationOffset) {
+ unsigned offset = insn.word(4);
+ auto size = 4; /* bytes; TODO: calculate this based on the type */
+
+ bool found_range = false;
+ for (auto const &range : *pushConstantRanges) {
+ if (range.offset <= offset && range.offset + range.size >= offset + size) {
+ found_range = true;
+
+ if ((range.stageFlags & stage) == 0) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /* dev */ 0, __LINE__, SHADER_CHECKER_PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE, "SC",
+ "Push constant range covering variable starting at "
+ "offset %u not accessible from stage %s",
+ offset, string_VkShaderStageFlagBits(stage))) {
+ pass = false;
+ }
+ }
+
+ break;
+ }
+ }
+
+ if (!found_range) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /* dev */ 0, __LINE__, SHADER_CHECKER_PUSH_CONSTANT_OUT_OF_RANGE, "SC",
+ "Push constant range covering variable starting at "
+ "offset %u not declared in layout",
+ offset)) {
+ pass = false;
+ }
+ }
+ }
+ }
+ }
+
+ return pass;
+}
+
+static bool validate_push_constant_usage(layer_data *my_data, VkDevice dev,
+ std::vector<VkPushConstantRange> const *pushConstantRanges, shader_module const *src,
+ std::unordered_set<uint32_t> accessible_ids, VkShaderStageFlagBits stage) {
+ bool pass = true;
+
+ for (auto id : accessible_ids) {
+ auto def_insn = src->get_def(id);
+ if (def_insn.opcode() == spv::OpVariable && def_insn.word(3) == spv::StorageClassPushConstant) {
+ pass = validate_push_constant_block_against_pipeline(my_data, dev, pushConstantRanges, src,
+ src->get_def(def_insn.word(1)), stage) &&
+ pass;
+ }
+ }
+
+ return pass;
+}
+
+// For given pipelineLayout verify that the setLayout at slot.first
+// has the requested binding at slot.second
+static VkDescriptorSetLayoutBinding const * get_descriptor_binding(layer_data *my_data, vector<VkDescriptorSetLayout> *pipelineLayout, descriptor_slot_t slot) {
+
+ if (!pipelineLayout)
+ return nullptr;
+
+ if (slot.first >= pipelineLayout->size())
+ return nullptr;
+
+ auto const layout_node = my_data->descriptorSetLayoutMap[(*pipelineLayout)[slot.first]];
+
+ auto bindingIt = layout_node->bindingToIndexMap.find(slot.second);
+ if ((bindingIt == layout_node->bindingToIndexMap.end()) || (layout_node->createInfo.pBindings == NULL))
+ return nullptr;
+
+ assert(bindingIt->second < layout_node->createInfo.bindingCount);
+ return &layout_node->createInfo.pBindings[bindingIt->second];
+}
+
+// Block of code at start here for managing/tracking Pipeline state that this layer cares about
+
+static uint64_t g_drawCount[NUM_DRAW_TYPES] = {0, 0, 0, 0};
+
+// TODO : Should be tracking lastBound per commandBuffer and when draws occur, report based on that cmd buffer lastBound
+// Then need to synchronize the accesses based on cmd buffer so that if I'm reading state on one cmd buffer, updates
+// to that same cmd buffer by separate thread are not changing state from underneath us
+// Track the last cmd buffer touched by this thread
+
+static VkBool32 hasDrawCmd(GLOBAL_CB_NODE *pCB) {
+ for (uint32_t i = 0; i < NUM_DRAW_TYPES; i++) {
+ if (pCB->drawCount[i])
+ return VK_TRUE;
+ }
+ return VK_FALSE;
+}
+
+// Check object status for selected flag state
+static VkBool32 validate_status(layer_data *my_data, GLOBAL_CB_NODE *pNode, CBStatusFlags enable_mask, CBStatusFlags status_mask,
+ CBStatusFlags status_flag, VkFlags msg_flags, DRAW_STATE_ERROR error_code, const char *fail_msg) {
+ // If non-zero enable mask is present, check it against status but if enable_mask
+ // is 0 then no enable required so we should always just check status
+ if ((!enable_mask) || (enable_mask & pNode->status)) {
+ if ((pNode->status & status_mask) != status_flag) {
+ // TODO : How to pass dispatchable objects as srcObject? Here src obj should be cmd buffer
+ return log_msg(my_data->report_data, msg_flags, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, error_code,
+ "DS", "CB object %#" PRIxLEAST64 ": %s", (uint64_t)(pNode->commandBuffer), fail_msg);
+ }
+ }
+ return VK_FALSE;
+}
+
+// Retrieve pipeline node ptr for given pipeline object
+static PIPELINE_NODE *getPipeline(layer_data *my_data, const VkPipeline pipeline) {
+ if (my_data->pipelineMap.find(pipeline) == my_data->pipelineMap.end()) {
+ return NULL;
+ }
+ return my_data->pipelineMap[pipeline];
+}
+
+// Return VK_TRUE if for a given PSO, the given state enum is dynamic, else return VK_FALSE
+static VkBool32 isDynamic(const PIPELINE_NODE *pPipeline, const VkDynamicState state) {
+ if (pPipeline && pPipeline->graphicsPipelineCI.pDynamicState) {
+ for (uint32_t i = 0; i < pPipeline->graphicsPipelineCI.pDynamicState->dynamicStateCount; i++) {
+ if (state == pPipeline->graphicsPipelineCI.pDynamicState->pDynamicStates[i])
+ return VK_TRUE;
+ }
+ }
+ return VK_FALSE;
+}
+
+// Validate state stored as flags at time of draw call
+static VkBool32 validate_draw_state_flags(layer_data *my_data, GLOBAL_CB_NODE *pCB, VkBool32 indexedDraw) {
+ VkBool32 result;
+ result =
+ validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_VIEWPORT_SET, CBSTATUS_VIEWPORT_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ DRAWSTATE_VIEWPORT_NOT_BOUND, "Dynamic viewport state not set for this command buffer");
+ result |=
+ validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_SCISSOR_SET, CBSTATUS_SCISSOR_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ DRAWSTATE_SCISSOR_NOT_BOUND, "Dynamic scissor state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_LINE_WIDTH_SET, CBSTATUS_LINE_WIDTH_SET,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_LINE_WIDTH_NOT_BOUND,
+ "Dynamic line width state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_DEPTH_BIAS_SET, CBSTATUS_DEPTH_BIAS_SET,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BIAS_NOT_BOUND,
+ "Dynamic depth bias state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_COLOR_BLEND_WRITE_ENABLE, CBSTATUS_BLEND_SET, CBSTATUS_BLEND_SET,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_BLEND_NOT_BOUND,
+ "Dynamic blend object state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_DEPTH_WRITE_ENABLE, CBSTATUS_DEPTH_BOUNDS_SET, CBSTATUS_DEPTH_BOUNDS_SET,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND,
+ "Dynamic depth bounds state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_READ_MASK_SET,
+ CBSTATUS_STENCIL_READ_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND,
+ "Dynamic stencil read mask state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_WRITE_MASK_SET,
+ CBSTATUS_STENCIL_WRITE_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND,
+ "Dynamic stencil write mask state not set for this command buffer");
+ result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_REFERENCE_SET,
+ CBSTATUS_STENCIL_REFERENCE_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND,
+ "Dynamic stencil reference state not set for this command buffer");
+ if (indexedDraw)
+ result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_INDEX_BUFFER_BOUND, CBSTATUS_INDEX_BUFFER_BOUND,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_INDEX_BUFFER_NOT_BOUND,
+ "Index buffer object not bound to this command buffer when Indexed Draw attempted");
+ return result;
+}
+
+// Verify attachment reference compatibility according to spec
+// If one array is larger, treat missing elements of shorter array as VK_ATTACHMENT_UNUSED & the other array must match this
+// If both AttachmentReference arrays have the requested index, check their corresponding AttachmentDescriptions
+// to make sure that format and sample counts match.
+// If not, they are not compatible.
+static bool attachment_references_compatible(const uint32_t index, const VkAttachmentReference *pPrimary,
+ const uint32_t primaryCount, const VkAttachmentDescription *pPrimaryAttachments,
+ const VkAttachmentReference *pSecondary, const uint32_t secondaryCount,
+ const VkAttachmentDescription *pSecondaryAttachments) {
+ if (index >= primaryCount) { // Check secondary as if primary is VK_ATTACHMENT_UNUSED
+ if (VK_ATTACHMENT_UNUSED != pSecondary[index].attachment)
+ return false;
+ } else if (index >= secondaryCount) { // Check primary as if secondary is VK_ATTACHMENT_UNUSED
+ if (VK_ATTACHMENT_UNUSED != pPrimary[index].attachment)
+ return false;
+ } else { // format and sample count must match
+ if ((pPrimaryAttachments[pPrimary[index].attachment].format ==
+ pSecondaryAttachments[pSecondary[index].attachment].format) &&
+ (pPrimaryAttachments[pPrimary[index].attachment].samples ==
+ pSecondaryAttachments[pSecondary[index].attachment].samples))
+ return true;
+ }
+ // Format and sample counts didn't match
+ return false;
+}
+
+// For given primary and secondary RenderPass objects, verify that they're compatible
+static bool verify_renderpass_compatibility(layer_data *my_data, const VkRenderPass primaryRP, const VkRenderPass secondaryRP,
+ string &errorMsg) {
+ stringstream errorStr;
+ if (my_data->renderPassMap.find(primaryRP) == my_data->renderPassMap.end()) {
+ errorStr << "invalid VkRenderPass (" << primaryRP << ")";
+ errorMsg = errorStr.str();
+ return false;
+ } else if (my_data->renderPassMap.find(secondaryRP) == my_data->renderPassMap.end()) {
+ errorStr << "invalid VkRenderPass (" << secondaryRP << ")";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ // Trivial pass case is exact same RP
+ if (primaryRP == secondaryRP) {
+ return true;
+ }
+ const VkRenderPassCreateInfo *primaryRPCI = my_data->renderPassMap[primaryRP]->pCreateInfo;
+ const VkRenderPassCreateInfo *secondaryRPCI = my_data->renderPassMap[secondaryRP]->pCreateInfo;
+ if (primaryRPCI->subpassCount != secondaryRPCI->subpassCount) {
+ errorStr << "RenderPass for primary cmdBuffer has " << primaryRPCI->subpassCount
+ << " subpasses but renderPass for secondary cmdBuffer has " << secondaryRPCI->subpassCount << " subpasses.";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ uint32_t spIndex = 0;
+ for (spIndex = 0; spIndex < primaryRPCI->subpassCount; ++spIndex) {
+ // For each subpass, verify that corresponding color, input, resolve & depth/stencil attachment references are compatible
+ uint32_t primaryColorCount = primaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
+ uint32_t secondaryColorCount = secondaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
+ uint32_t colorMax = std::max(primaryColorCount, secondaryColorCount);
+ for (uint32_t cIdx = 0; cIdx < colorMax; ++cIdx) {
+ if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pColorAttachments, primaryColorCount,
+ primaryRPCI->pAttachments, secondaryRPCI->pSubpasses[spIndex].pColorAttachments,
+ secondaryColorCount, secondaryRPCI->pAttachments)) {
+ errorStr << "color attachments at index " << cIdx << " of subpass index " << spIndex << " are not compatible.";
+ errorMsg = errorStr.str();
+ return false;
+ } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pResolveAttachments,
+ primaryColorCount, primaryRPCI->pAttachments,
+ secondaryRPCI->pSubpasses[spIndex].pResolveAttachments,
+ secondaryColorCount, secondaryRPCI->pAttachments)) {
+ errorStr << "resolve attachments at index " << cIdx << " of subpass index " << spIndex << " are not compatible.";
+ errorMsg = errorStr.str();
+ return false;
+ } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment,
+ primaryColorCount, primaryRPCI->pAttachments,
+ secondaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment,
+ secondaryColorCount, secondaryRPCI->pAttachments)) {
+ errorStr << "depth/stencil attachments at index " << cIdx << " of subpass index " << spIndex
+ << " are not compatible.";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ }
+ uint32_t primaryInputCount = primaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
+ uint32_t secondaryInputCount = secondaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
+ uint32_t inputMax = std::max(primaryInputCount, secondaryInputCount);
+ for (uint32_t i = 0; i < inputMax; ++i) {
+            if (!attachment_references_compatible(i, primaryRPCI->pSubpasses[spIndex].pInputAttachments, primaryInputCount,
+                                                  primaryRPCI->pAttachments, secondaryRPCI->pSubpasses[spIndex].pInputAttachments,
+                                                  secondaryInputCount, secondaryRPCI->pAttachments)) {
+ errorStr << "input attachments at index " << i << " of subpass index " << spIndex << " are not compatible.";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ }
+ }
+ return true;
+}
+
+// For given SET_NODE, verify that its Set is compatible w/ the setLayout corresponding to pipelineLayout[layoutIndex]
+static bool verify_set_layout_compatibility(layer_data *my_data, const SET_NODE *pSet, const VkPipelineLayout layout,
+ const uint32_t layoutIndex, string &errorMsg) {
+ stringstream errorStr;
+ auto pipeline_layout_it = my_data->pipelineLayoutMap.find(layout);
+ if (pipeline_layout_it == my_data->pipelineLayoutMap.end()) {
+ errorStr << "invalid VkPipelineLayout (" << layout << ")";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ if (layoutIndex >= pipeline_layout_it->second.descriptorSetLayouts.size()) {
+ errorStr << "VkPipelineLayout (" << layout << ") only contains " << pipeline_layout_it->second.descriptorSetLayouts.size()
+ << " setLayouts corresponding to sets 0-" << pipeline_layout_it->second.descriptorSetLayouts.size() - 1
+ << ", but you're attempting to bind set to index " << layoutIndex;
+ errorMsg = errorStr.str();
+ return false;
+ }
+ // Get the specific setLayout from PipelineLayout that overlaps this set
+ LAYOUT_NODE *pLayoutNode = my_data->descriptorSetLayoutMap[pipeline_layout_it->second.descriptorSetLayouts[layoutIndex]];
+ if (pLayoutNode->layout == pSet->pLayout->layout) { // trivial pass case
+ return true;
+ }
+ size_t descriptorCount = pLayoutNode->descriptorTypes.size();
+ if (descriptorCount != pSet->pLayout->descriptorTypes.size()) {
+ errorStr << "setLayout " << layoutIndex << " from pipelineLayout " << layout << " has " << descriptorCount
+ << " descriptors, but corresponding set being bound has " << pSet->pLayout->descriptorTypes.size()
+ << " descriptors.";
+ errorMsg = errorStr.str();
+ return false; // trivial fail case
+ }
+ // Now need to check set against corresponding pipelineLayout to verify compatibility
+ for (size_t i = 0; i < descriptorCount; ++i) {
+ // Need to verify that layouts are identically defined
+ // TODO : Is below sufficient? Making sure that types & stageFlags match per descriptor
+ // do we also need to check immutable samplers?
+ if (pLayoutNode->descriptorTypes[i] != pSet->pLayout->descriptorTypes[i]) {
+ errorStr << "descriptor " << i << " for descriptorSet being bound is type '"
+ << string_VkDescriptorType(pSet->pLayout->descriptorTypes[i])
+ << "' but corresponding descriptor from pipelineLayout is type '"
+ << string_VkDescriptorType(pLayoutNode->descriptorTypes[i]) << "'";
+ errorMsg = errorStr.str();
+ return false;
+ }
+ if (pLayoutNode->stageFlags[i] != pSet->pLayout->stageFlags[i]) {
+ errorStr << "stageFlags " << i << " for descriptorSet being bound is " << pSet->pLayout->stageFlags[i]
+ << "' but corresponding descriptor from pipelineLayout has stageFlags " << pLayoutNode->stageFlags[i];
+ errorMsg = errorStr.str();
+ return false;
+ }
+ }
+ return true;
+}
+
+// Validate that data for each specialization entry is fully contained within the buffer.
+static VkBool32 validate_specialization_offsets(layer_data *my_data, VkPipelineShaderStageCreateInfo const *info) {
+ VkBool32 pass = VK_TRUE;
+
+ VkSpecializationInfo const *spec = info->pSpecializationInfo;
+
+ if (spec) {
+ for (auto i = 0u; i < spec->mapEntryCount; i++) {
+ if (spec->pMapEntries[i].offset + spec->pMapEntries[i].size > spec->dataSize) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_BAD_SPECIALIZATION, "SC",
+ "Specialization entry %u (for constant id %u) references memory outside provided "
+ "specialization data (bytes %u.." PRINTF_SIZE_T_SPECIFIER "; " PRINTF_SIZE_T_SPECIFIER
+ " bytes provided)",
+ i, spec->pMapEntries[i].constantID, spec->pMapEntries[i].offset,
+ spec->pMapEntries[i].offset + spec->pMapEntries[i].size - 1, spec->dataSize)) {
+
+ pass = VK_FALSE;
+ }
+ }
+ }
+ }
+
+ return pass;
+}
+
+static bool descriptor_type_match(layer_data *my_data, shader_module const *module, uint32_t type_id,
+ VkDescriptorType descriptor_type, unsigned &descriptor_count) {
+ auto type = module->get_def(type_id);
+
+ descriptor_count = 1;
+
+ /* Strip off any array or ptrs. Where we remove array levels, adjust the
+ * descriptor count for each dimension. */
+ while (type.opcode() == spv::OpTypeArray || type.opcode() == spv::OpTypePointer) {
+ if (type.opcode() == spv::OpTypeArray) {
+ descriptor_count *= get_constant_value(module, type.word(3));
+ type = module->get_def(type.word(2));
+ }
+ else {
+ type = module->get_def(type.word(3));
+ }
+ }
+
+ switch (type.opcode()) {
+ case spv::OpTypeStruct: {
+ for (auto insn : *module) {
+ if (insn.opcode() == spv::OpDecorate && insn.word(1) == type.word(1)) {
+ if (insn.word(2) == spv::DecorationBlock) {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER ||
+ descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC;
+ } else if (insn.word(2) == spv::DecorationBufferBlock) {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
+ descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
+ }
+ }
+ }
+
+ /* Invalid */
+ return false;
+ }
+
+ case spv::OpTypeSampler:
+ return descriptor_type == VK_DESCRIPTOR_TYPE_SAMPLER;
+
+ case spv::OpTypeSampledImage:
+ return descriptor_type == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
+
+ case spv::OpTypeImage: {
+ /* Many descriptor types can back an image type -- which one depends on
+ * the dimension and on whether the image will be used with a sampler.
+ * SPIR-V for Vulkan requires that Sampled be 1 or 2 -- leaving the
+ * decision to runtime is unacceptable.
+ */
+ auto dim = type.word(3);
+ auto sampled = type.word(7);
+
+ if (dim == spv::DimSubpassData) {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT;
+ } else if (dim == spv::DimBuffer) {
+ if (sampled == 1) {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER;
+ } else {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER;
+ }
+ } else if (sampled == 1) {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
+ } else {
+ return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
+ }
+ }
+
+ /* We shouldn't really see any other junk types -- but if we do, they're
+ * a mismatch.
+ */
+ default:
+ return false; /* Mismatch */
+ }
+}
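+
+// Illustrative example (hypothetical shader): descriptor_count accumulates one
+// factor per stripped array level, so a binding declared in GLSL as
+//     layout(set = 0, binding = 0) uniform sampler2D s[4][2];
+// requires a VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER binding with
+// descriptorCount >= 4 * 2 = 8.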
+
+static VkBool32 require_feature(layer_data *my_data, VkBool32 feature, char const *feature_name) {
+ if (!feature) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /* dev */ 0, __LINE__, SHADER_CHECKER_FEATURE_NOT_ENABLED, "SC",
+ "Shader requires VkPhysicalDeviceFeatures::%s but the feature "
+ "is not enabled on the device",
+ feature_name)) {
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static VkBool32 validate_shader_capabilities(layer_data *my_data, VkDevice dev, shader_module const *src)
+{
+ VkBool32 pass = VK_TRUE;
+
+ auto enabledFeatures = &my_data->physDevProperties.features;
+
+ for (auto insn : *src) {
+ if (insn.opcode() == spv::OpCapability) {
+ switch (insn.word(1)) {
+ case spv::CapabilityMatrix:
+ case spv::CapabilityShader:
+ case spv::CapabilityInputAttachment:
+ case spv::CapabilitySampled1D:
+ case spv::CapabilityImage1D:
+ case spv::CapabilitySampledBuffer:
+ case spv::CapabilityImageBuffer:
+ case spv::CapabilityImageQuery:
+ case spv::CapabilityDerivativeControl:
+ // Always supported by a Vulkan 1.0 implementation -- no feature bits.
+ break;
+
+ case spv::CapabilityGeometry:
+ pass &= require_feature(my_data, enabledFeatures->geometryShader, "geometryShader");
+ break;
+
+ case spv::CapabilityTessellation:
+ pass &= require_feature(my_data, enabledFeatures->tessellationShader, "tessellationShader");
+ break;
+
+ case spv::CapabilityFloat64:
+ pass &= require_feature(my_data, enabledFeatures->shaderFloat64, "shaderFloat64");
+ break;
+
+ case spv::CapabilityInt64:
+ pass &= require_feature(my_data, enabledFeatures->shaderInt64, "shaderInt64");
+ break;
+
+ case spv::CapabilityTessellationPointSize:
+ case spv::CapabilityGeometryPointSize:
+ pass &= require_feature(my_data, enabledFeatures->shaderTessellationAndGeometryPointSize,
+ "shaderTessellationAndGeometryPointSize");
+ break;
+
+ case spv::CapabilityImageGatherExtended:
+ pass &= require_feature(my_data, enabledFeatures->shaderImageGatherExtended, "shaderImageGatherExtended");
+ break;
+
+ case spv::CapabilityStorageImageMultisample:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageMultisample, "shaderStorageImageMultisample");
+ break;
+
+ case spv::CapabilityUniformBufferArrayDynamicIndexing:
+ pass &= require_feature(my_data, enabledFeatures->shaderUniformBufferArrayDynamicIndexing,
+ "shaderUniformBufferArrayDynamicIndexing");
+ break;
+
+ case spv::CapabilitySampledImageArrayDynamicIndexing:
+ pass &= require_feature(my_data, enabledFeatures->shaderSampledImageArrayDynamicIndexing,
+ "shaderSampledImageArrayDynamicIndexing");
+ break;
+
+ case spv::CapabilityStorageBufferArrayDynamicIndexing:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageBufferArrayDynamicIndexing,
+ "shaderStorageBufferArrayDynamicIndexing");
+ break;
+
+ case spv::CapabilityStorageImageArrayDynamicIndexing:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageArrayDynamicIndexing,
+ "shaderStorageImageArrayDynamicIndexing");
+ break;
+
+ case spv::CapabilityClipDistance:
+ pass &= require_feature(my_data, enabledFeatures->shaderClipDistance, "shaderClipDistance");
+ break;
+
+ case spv::CapabilityCullDistance:
+ pass &= require_feature(my_data, enabledFeatures->shaderCullDistance, "shaderCullDistance");
+ break;
+
+ case spv::CapabilityImageCubeArray:
+ pass &= require_feature(my_data, enabledFeatures->imageCubeArray, "imageCubeArray");
+ break;
+
+ case spv::CapabilitySampleRateShading:
+ pass &= require_feature(my_data, enabledFeatures->sampleRateShading, "sampleRateShading");
+ break;
+
+ case spv::CapabilitySparseResidency:
+ pass &= require_feature(my_data, enabledFeatures->shaderResourceResidency, "shaderResourceResidency");
+ break;
+
+ case spv::CapabilityMinLod:
+ pass &= require_feature(my_data, enabledFeatures->shaderResourceMinLod, "shaderResourceMinLod");
+ break;
+
+ case spv::CapabilitySampledCubeArray:
+ pass &= require_feature(my_data, enabledFeatures->imageCubeArray, "imageCubeArray");
+ break;
+
+ case spv::CapabilityImageMSArray:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageMultisample, "shaderStorageImageMultisample");
+ break;
+
+ case spv::CapabilityStorageImageExtendedFormats:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageExtendedFormats,
+ "shaderStorageImageExtendedFormats");
+ break;
+
+ case spv::CapabilityInterpolationFunction:
+ pass &= require_feature(my_data, enabledFeatures->sampleRateShading, "sampleRateShading");
+ break;
+
+ case spv::CapabilityStorageImageReadWithoutFormat:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageReadWithoutFormat,
+ "shaderStorageImageReadWithoutFormat");
+ break;
+
+ case spv::CapabilityStorageImageWriteWithoutFormat:
+ pass &= require_feature(my_data, enabledFeatures->shaderStorageImageWriteWithoutFormat,
+ "shaderStorageImageWriteWithoutFormat");
+ break;
+
+ case spv::CapabilityMultiViewport:
+ pass &= require_feature(my_data, enabledFeatures->multiViewport, "multiViewport");
+ break;
+
+ default:
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /* dev */0,
+ __LINE__, SHADER_CHECKER_BAD_CAPABILITY, "SC",
+ "Shader declares capability %u, not supported in Vulkan.",
+ insn.word(1)))
+ pass = VK_FALSE;
+ break;
+ }
+ }
+ }
+
+ return pass;
+}
+
+
+// Validate the shaders used by the given pipeline.
+// As a side effect, this function also records which descriptor sets are actually used by the pipeline.
+static VkBool32 validate_pipeline_shaders(layer_data *my_data, VkDevice dev, PIPELINE_NODE *pPipeline) {
+ VkGraphicsPipelineCreateInfo const *pCreateInfo = &pPipeline->graphicsPipelineCI;
+ /* We seem to allow pipeline stages to be specified out of order, so collect and identify them
+ * before trying to do anything more: */
+ int vertex_stage = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT);
+ int fragment_stage = get_shader_stage_id(VK_SHADER_STAGE_FRAGMENT_BIT);
+
+ shader_module *shaders[5];
+ memset(shaders, 0, sizeof(shaders));
+ spirv_inst_iter entrypoints[5];
+ memset(entrypoints, 0, sizeof(entrypoints));
+ RENDER_PASS_NODE const *rp = 0;
+ VkPipelineVertexInputStateCreateInfo const *vi = 0;
+ VkBool32 pass = VK_TRUE;
+
+ for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) {
+ VkPipelineShaderStageCreateInfo const *pStage = &pCreateInfo->pStages[i];
+ if (pStage->sType == VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO) {
+
+ if ((pStage->stage & (VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_FRAGMENT_BIT |
+ VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT)) == 0) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_UNKNOWN_STAGE, "SC", "Unknown shader stage %d", pStage->stage)) {
+ pass = VK_FALSE;
+ }
+ } else {
+ pass = validate_specialization_offsets(my_data, pStage) && pass;
+
+ auto stage_id = get_shader_stage_id(pStage->stage);
+ auto module = my_data->shaderModuleMap[pStage->module].get();
+ shaders[stage_id] = module;
+
+ /* find the entrypoint */
+ entrypoints[stage_id] = find_entrypoint(module, pStage->pName, pStage->stage);
+ if (entrypoints[stage_id] == module->end()) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_MISSING_ENTRYPOINT, "SC",
+ "No entrypoint found named `%s` for stage %s", pStage->pName,
+ string_VkShaderStageFlagBits(pStage->stage))) {
+ pass = VK_FALSE;
+ }
+ }
+
+ /* validate shader capabilities against enabled device features */
+ pass = validate_shader_capabilities(my_data, dev, module) && pass;
+
+ /* mark accessible ids */
+ std::unordered_set<uint32_t> accessible_ids;
+ mark_accessible_ids(module, entrypoints[stage_id], accessible_ids);
+
+ /* validate descriptor set layout against what the entrypoint actually uses */
+ std::map<descriptor_slot_t, interface_var> descriptor_uses;
+ collect_interface_by_descriptor_slot(my_data, dev, module, accessible_ids, descriptor_uses);
+
+ auto layouts = pCreateInfo->layout != VK_NULL_HANDLE
+ ? &(my_data->pipelineLayoutMap[pCreateInfo->layout].descriptorSetLayouts)
+ : nullptr;
+
+ for (auto use : descriptor_uses) {
+ // As a side-effect of this function, capture which sets are used by the pipeline
+ pPipeline->active_sets.insert(use.first.first);
+
+ /* find the matching binding */
+ auto binding = get_descriptor_binding(my_data, layouts, use.first);
+ unsigned required_descriptor_count;
+
+ if (!binding) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_MISSING_DESCRIPTOR, "SC",
+ "Shader uses descriptor slot %u.%u (used as type `%s`) but not declared in pipeline layout",
+ use.first.first, use.first.second, describe_type(module, use.second.type_id).c_str())) {
+ pass = VK_FALSE;
+ }
+ } else if (~binding->stageFlags & pStage->stage) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE, "SC",
+ "Shader uses descriptor slot %u.%u (used "
+ "as type `%s`) but descriptor not "
+ "accessible from stage %s",
+ use.first.first, use.first.second,
+ describe_type(module, use.second.type_id).c_str(),
+ string_VkShaderStageFlagBits(pStage->stage))) {
+ pass = VK_FALSE;
+ }
+ } else if (!descriptor_type_match(my_data, module, use.second.type_id, binding->descriptorType, /*out*/ required_descriptor_count)) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH, "SC",
+ "Type mismatch on descriptor slot "
+ "%u.%u (used as type `%s`) but "
+ "descriptor is of type %s",
+ use.first.first, use.first.second,
+ describe_type(module, use.second.type_id).c_str(),
+ string_VkDescriptorType(binding->descriptorType))) {
+ pass = VK_FALSE;
+ }
+ } else if (binding->descriptorCount < required_descriptor_count) {
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /*dev*/ 0, __LINE__, SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH, "SC",
+ "Shader expects at least %u descriptors for binding %u.%u (used as type `%s`) but only %u provided",
+ required_descriptor_count, use.first.first, use.first.second,
+ describe_type(module, use.second.type_id).c_str(),
+ binding->descriptorCount)) {
+ pass = VK_FALSE;
+ }
+ }
+ }
+
+ /* validate push constant usage */
+ pass =
+ validate_push_constant_usage(my_data, dev, &my_data->pipelineLayoutMap[pCreateInfo->layout].pushConstantRanges,
+ module, accessible_ids, pStage->stage) &&
+ pass;
+ }
+ }
+ }
+
+ if (pCreateInfo->renderPass != VK_NULL_HANDLE)
+ rp = my_data->renderPassMap[pCreateInfo->renderPass];
+
+ vi = pCreateInfo->pVertexInputState;
+
+ if (vi) {
+ pass = validate_vi_consistency(my_data, dev, vi) && pass;
+ }
+
+ if (shaders[vertex_stage]) {
+ pass = validate_vi_against_vs_inputs(my_data, dev, vi, shaders[vertex_stage], entrypoints[vertex_stage]) && pass;
+ }
+
+ /* TODO: enforce rules about which combinations of shader stages may be present */
+ int producer = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT);
+ int consumer = get_shader_stage_id(VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT);
+
+ while (!shaders[producer] && producer != fragment_stage) {
+ producer++;
+ consumer++;
+ }
+
+ for (; producer != fragment_stage && consumer <= fragment_stage; consumer++) {
+ assert(shaders[producer]);
+ if (shaders[consumer]) {
+ pass = validate_interface_between_stages(my_data, dev, shaders[producer], entrypoints[producer],
+ shader_stage_attribs[producer].name, shaders[consumer], entrypoints[consumer],
+ shader_stage_attribs[consumer].name,
+ shader_stage_attribs[consumer].arrayed_input) &&
+ pass;
+
+ producer = consumer;
+ }
+ }
+
+ if (shaders[fragment_stage] && rp) {
+ pass = validate_fs_outputs_against_render_pass(my_data, dev, shaders[fragment_stage], entrypoints[fragment_stage], rp,
+ pCreateInfo->subpass) &&
+ pass;
+ }
+
+ return pass;
+}
+
+// Return Set node ptr for specified set or else NULL
+static SET_NODE *getSetNode(layer_data *my_data, const VkDescriptorSet set) {
+ auto set_it = my_data->setMap.find(set);
+ if (set_it == my_data->setMap.end()) {
+ return NULL;
+ }
+ return set_it->second;
+}
+// For the given command buffer, verify that for each set in activeSetNodes,
+// any dynamic descriptor in that set has a valid dynamic offset bound.
+// To be valid, the dynamic offset combined with the offset and range from its
+// descriptor update must not overflow the size of the buffer being updated.
+static VkBool32 validate_dynamic_offsets(layer_data *my_data, const GLOBAL_CB_NODE *pCB, const vector<SET_NODE *> &activeSetNodes) {
+ VkBool32 result = VK_FALSE;
+
+ VkWriteDescriptorSet *pWDS = NULL;
+ uint32_t dynOffsetIndex = 0;
+ VkDeviceSize bufferSize = 0;
+ for (auto set_node : activeSetNodes) {
+ for (uint32_t i = 0; i < set_node->descriptorCount; ++i) {
+ // TODO: Add validation for descriptors dynamically skipped in shader
+ if (set_node->ppDescriptors[i] != NULL) {
+ switch (set_node->ppDescriptors[i]->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ pWDS = (VkWriteDescriptorSet *)set_node->ppDescriptors[i];
+ if ((pWDS->descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
+ (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
+ for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
+ bufferSize = my_data->bufferMap[pWDS->pBufferInfo[j].buffer].create_info->size;
+ uint32_t dynOffset = pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].dynamicOffsets[dynOffsetIndex];
+ if (pWDS->pBufferInfo[j].range == VK_WHOLE_SIZE) {
+ if ((dynOffset + pWDS->pBufferInfo[j].offset) > bufferSize) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ reinterpret_cast<const uint64_t &>(set_node->set), __LINE__,
+ DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, "DS",
+ "VkDescriptorSet (%#" PRIxLEAST64 ") bound as set #%u has range of "
+ "VK_WHOLE_SIZE, but its dynamic offset %#" PRIxLEAST32 " "
+ "combined with offset %#" PRIxLEAST64 " oversteps its buffer (%#" PRIxLEAST64
+ "), which has a size of %#" PRIxLEAST64 ".",
+ reinterpret_cast<const uint64_t &>(set_node->set), i,
+ dynOffset, pWDS->pBufferInfo[j].offset,
+ reinterpret_cast<const uint64_t &>(pWDS->pBufferInfo[j].buffer), bufferSize);
+ }
+ } else if ((dynOffset + pWDS->pBufferInfo[j].offset + pWDS->pBufferInfo[j].range) > bufferSize) {
+ result |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ reinterpret_cast<const uint64_t &>(set_node->set), __LINE__, DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW,
+ "DS",
+ "VkDescriptorSet (%#" PRIxLEAST64 ") bound as set #%u has dynamic offset %#" PRIxLEAST32 ". "
+ "Combined with offset %#" PRIxLEAST64 " and range %#" PRIxLEAST64
+ " from its update, this oversteps its buffer "
+ "(%#" PRIxLEAST64 ") which has a size of %#" PRIxLEAST64 ".",
+ reinterpret_cast<const uint64_t &>(set_node->set), i, dynOffset,
+ pWDS->pBufferInfo[j].offset, pWDS->pBufferInfo[j].range,
+ reinterpret_cast<const uint64_t &>(pWDS->pBufferInfo[j].buffer), bufferSize);
+ }
+ dynOffsetIndex++;
+ i += j; // Advance i to end of this set of descriptors (++i at end of for loop will move 1 index past
+ // last of these descriptors)
+ }
+ }
+ break;
+ default: // Currently only shadowing Write update nodes so shouldn't get here
+ assert(0);
+ continue;
+ }
+ }
+ }
+ }
+ return result;
+}
+
+// Validate overall state at the time of a draw call
+static VkBool32 validate_draw_state(layer_data *my_data, GLOBAL_CB_NODE *pCB, VkBool32 indexedDraw) {
+ // First check flag states
+ VkBool32 result = validate_draw_state_flags(my_data, pCB, indexedDraw);
+ PIPELINE_NODE *pPipe = getPipeline(my_data, pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline);
+ // Now complete other state checks
+ // TODO : Currently only performing next check if *something* was bound (non-zero last bound)
+ // There is probably a better way to gate when this check happens, and to know if something *should* have been bound
+ // We should have that check separately and then gate this check based on that check
+ if (pPipe) {
+ auto const &state = pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS];
+ if (state.pipelineLayout) {
+ string errorString;
+ // Need a vector (vs. std::set) of active Sets for dynamicOffset validation in case same set bound w/ different offsets
+ vector<SET_NODE *> activeSetNodes;
+ for (auto setIndex : pPipe->active_sets) {
+ // If valid set is not bound throw an error
+ if ((state.boundDescriptorSets.size() <= setIndex) || (!state.boundDescriptorSets[setIndex])) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND, "DS",
+ "VkPipeline %#" PRIxLEAST64 " uses set #%u but that set is not bound.",
+ (uint64_t)pPipe->pipeline, setIndex);
+ } else if (!verify_set_layout_compatibility(my_data, my_data->setMap[state.boundDescriptorSets[setIndex]],
+ pPipe->graphicsPipelineCI.layout, setIndex, errorString)) {
+ // Set is bound but not compatible w/ overlapping pipelineLayout from PSO
+ VkDescriptorSet setHandle = my_data->setMap[state.boundDescriptorSets[setIndex]]->set;
+ result |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)setHandle, __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS",
+ "VkDescriptorSet (%#" PRIxLEAST64
+ ") bound as set #%u is not compatible with overlapping VkPipelineLayout %#" PRIxLEAST64 " due to: %s",
+ (uint64_t)setHandle, setIndex, (uint64_t)pPipe->graphicsPipelineCI.layout, errorString.c_str());
+ } else { // Valid set is bound and layout compatible, validate that it's updated and verify any dynamic offsets
+ // Pull the set node
+ SET_NODE *pSet = my_data->setMap[state.boundDescriptorSets[setIndex]];
+ // Save vector of all active sets to verify dynamicOffsets below
+ activeSetNodes.push_back(pSet);
+ // Make sure set has been updated
+ if (!pSet->pUpdateStructs) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pSet->set, __LINE__,
+ DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS",
+ "DS %#" PRIxLEAST64 " bound but it was never updated. It is now being used to draw so "
+ "this will result in undefined behavior.",
+ (uint64_t)pSet->set);
+ }
+ }
+ }
+ // For each dynamic descriptor, make sure dynamic offset doesn't overstep buffer
+ if (!state.dynamicOffsets.empty())
+ result |= validate_dynamic_offsets(my_data, pCB, activeSetNodes);
+ }
+ // Verify Vtx binding
+ if (pPipe->vertexBindingDescriptions.size() > 0) {
+ for (size_t i = 0; i < pPipe->vertexBindingDescriptions.size(); i++) {
+ if ((pCB->currentDrawData.buffers.size() < (i + 1)) || (pCB->currentDrawData.buffers[i] == VK_NULL_HANDLE)) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS",
+ "The Pipeline State Object (%#" PRIxLEAST64
+ ") expects that this Command Buffer's vertex binding Index " PRINTF_SIZE_T_SPECIFIER
+ " should be set via vkCmdBindVertexBuffers.",
+ (uint64_t)pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline, i);
+ }
+ }
+ } else {
+ if (!pCB->currentDrawData.buffers.empty()) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0,
+ 0, __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS",
+ "Vertex buffers are bound to command buffer (%#" PRIxLEAST64
+ ") but no vertex buffers are attached to this Pipeline State Object (%#" PRIxLEAST64 ").",
+ (uint64_t)pCB->commandBuffer, (uint64_t)pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline);
+ }
+ }
+ // If Viewport or scissors are dynamic, verify that dynamic count matches PSO count.
+ // Skip check if rasterization is disabled or there is no viewport.
+ if ((!pPipe->graphicsPipelineCI.pRasterizationState ||
+ !pPipe->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) &&
+ pPipe->graphicsPipelineCI.pViewportState) {
+ VkBool32 dynViewport = isDynamic(pPipe, VK_DYNAMIC_STATE_VIEWPORT);
+ VkBool32 dynScissor = isDynamic(pPipe, VK_DYNAMIC_STATE_SCISSOR);
+ if (dynViewport) {
+ if (pCB->viewports.size() != pPipe->graphicsPipelineCI.pViewportState->viewportCount) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
+ "Dynamic viewportCount from vkCmdSetViewport() is " PRINTF_SIZE_T_SPECIFIER
+ ", but PSO viewportCount is %u. These counts must match.",
+ pCB->viewports.size(), pPipe->graphicsPipelineCI.pViewportState->viewportCount);
+ }
+ }
+ if (dynScissor) {
+ if (pCB->scissors.size() != pPipe->graphicsPipelineCI.pViewportState->scissorCount) {
+ result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
+ "Dynamic scissorCount from vkCmdSetScissor() is " PRINTF_SIZE_T_SPECIFIER
+ ", but PSO scissorCount is %u. These counts must match.",
+ pCB->scissors.size(), pPipe->graphicsPipelineCI.pViewportState->scissorCount);
+ }
+ }
+ }
+ }
+ return result;
+}
+
+// Verify that create state for a pipeline is valid
+static VkBool32 verifyPipelineCreateState(layer_data *my_data, const VkDevice device, std::vector<PIPELINE_NODE *> pPipelines,
+ int pipelineIndex) {
+ VkBool32 skipCall = VK_FALSE;
+
+ PIPELINE_NODE *pPipeline = pPipelines[pipelineIndex];
+
+ // If create derivative bit is set, check that we've specified a base
+ // pipeline correctly, and that the base pipeline was created to allow
+ // derivatives.
+ if (pPipeline->graphicsPipelineCI.flags & VK_PIPELINE_CREATE_DERIVATIVE_BIT) {
+ PIPELINE_NODE *pBasePipeline = nullptr;
+ if (!((pPipeline->graphicsPipelineCI.basePipelineHandle != VK_NULL_HANDLE) ^
+ (pPipeline->graphicsPipelineCI.basePipelineIndex != -1))) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
+ "Invalid Pipeline CreateInfo: exactly one of base pipeline index and handle must be specified");
+ } else if (pPipeline->graphicsPipelineCI.basePipelineIndex != -1) {
+ if (pPipeline->graphicsPipelineCI.basePipelineIndex >= pipelineIndex) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
+ "Invalid Pipeline CreateInfo: base pipeline must occur earlier in array than derivative pipeline.");
+ } else {
+ pBasePipeline = pPipelines[pPipeline->graphicsPipelineCI.basePipelineIndex];
+ }
+ } else if (pPipeline->graphicsPipelineCI.basePipelineHandle != VK_NULL_HANDLE) {
+ pBasePipeline = getPipeline(my_data, pPipeline->graphicsPipelineCI.basePipelineHandle);
+ }
+
+ if (pBasePipeline && !(pBasePipeline->graphicsPipelineCI.flags & VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT)) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
+ "Invalid Pipeline CreateInfo: base pipeline does not allow derivatives.");
+ }
+ }
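+
+// Illustrative example (hypothetical create info): a derivative pipeline must
+// specify exactly one of basePipelineHandle / basePipelineIndex, e.g.
+//     ci.flags = VK_PIPELINE_CREATE_DERIVATIVE_BIT;
+//     ci.basePipelineHandle = VK_NULL_HANDLE;
+//     ci.basePipelineIndex = 0; // base must precede this pipeline in the array
+// and the base pipeline must have been created with
+// VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT.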
+
+ if (pPipeline->graphicsPipelineCI.pColorBlendState != NULL) {
+ if (!my_data->physDevProperties.features.independentBlend) {
+ if (pPipeline->attachments.size() > 0) {
+ VkPipelineColorBlendAttachmentState *pAttachments = &pPipeline->attachments[0];
+ for (size_t i = 1; i < pPipeline->attachments.size(); i++) {
+ if ((pAttachments[0].blendEnable != pAttachments[i].blendEnable) ||
+ (pAttachments[0].srcColorBlendFactor != pAttachments[i].srcColorBlendFactor) ||
+ (pAttachments[0].dstColorBlendFactor != pAttachments[i].dstColorBlendFactor) ||
+ (pAttachments[0].colorBlendOp != pAttachments[i].colorBlendOp) ||
+ (pAttachments[0].srcAlphaBlendFactor != pAttachments[i].srcAlphaBlendFactor) ||
+ (pAttachments[0].dstAlphaBlendFactor != pAttachments[i].dstAlphaBlendFactor) ||
+ (pAttachments[0].alphaBlendOp != pAttachments[i].alphaBlendOp) ||
+ (pAttachments[0].colorWriteMask != pAttachments[i].colorWriteMask)) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INDEPENDENT_BLEND, "DS", "Invalid Pipeline CreateInfo: If independent blend feature not "
+ "enabled, all elements of pAttachments must be identical");
+ }
+ }
+ }
+ }
+ if (!my_data->physDevProperties.features.logicOp &&
+ (pPipeline->graphicsPipelineCI.pColorBlendState->logicOpEnable != VK_FALSE)) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_DISABLED_LOGIC_OP, "DS",
+ "Invalid Pipeline CreateInfo: If logic operations feature not enabled, logicOpEnable must be VK_FALSE");
+ }
+ if ((pPipeline->graphicsPipelineCI.pColorBlendState->logicOpEnable == VK_TRUE) &&
+ ((pPipeline->graphicsPipelineCI.pColorBlendState->logicOp < VK_LOGIC_OP_CLEAR) ||
+ (pPipeline->graphicsPipelineCI.pColorBlendState->logicOp > VK_LOGIC_OP_SET))) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_LOGIC_OP, "DS",
+ "Invalid Pipeline CreateInfo: If logicOpEnable is VK_TRUE, logicOp must be a valid VkLogicOp value");
+ }
+ }
+
+ // Ensure the subpass index is valid. If not, then validate_pipeline_shaders
+ // produces nonsense errors that confuse users. Other layers should already
+ // emit errors for renderpass being invalid.
+ auto rp_data = my_data->renderPassMap.find(pPipeline->graphicsPipelineCI.renderPass);
+ if (rp_data != my_data->renderPassMap.end() &&
+ pPipeline->graphicsPipelineCI.subpass >= rp_data->second->pCreateInfo->subpassCount) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: Subpass index %u "
+ "is out of range for this renderpass (0..%u)",
+ pPipeline->graphicsPipelineCI.subpass, rp_data->second->pCreateInfo->subpassCount - 1);
+ }
+
+ if (!validate_pipeline_shaders(my_data, device, pPipeline)) {
+ skipCall = VK_TRUE;
+ }
+ // VS is required
+ if (!(pPipeline->active_shaders & VK_SHADER_STAGE_VERTEX_BIT)) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: Vtx Shader required");
+ }
+ // Either both or neither TC/TE shaders should be defined
+ if (((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) == 0) !=
+ ((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) == 0)) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
+ "Invalid Pipeline CreateInfo State: TE and TC shaders must be included or excluded as a pair");
+ }
+ // Compute shaders should be specified independent of Gfx shaders
+ if ((pPipeline->active_shaders & VK_SHADER_STAGE_COMPUTE_BIT) &&
+ (pPipeline->active_shaders &
+ (VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT |
+ VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_FRAGMENT_BIT))) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
+ "Invalid Pipeline CreateInfo State: Do not specify Compute Shader for Gfx Pipeline");
+ }
+ // VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only valid for tessellation pipelines.
+ // Mismatching primitive topology and tessellation fails graphics pipeline creation.
+ if (pPipeline->active_shaders & (VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) &&
+ (pPipeline->iaStateCI.topology != VK_PRIMITIVE_TOPOLOGY_PATCH_LIST)) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: "
+ "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST must be set as IA "
+ "topology for tessellation pipelines");
+ }
+ if (pPipeline->iaStateCI.topology == VK_PRIMITIVE_TOPOLOGY_PATCH_LIST) {
+ if (~pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: "
+ "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive "
+ "topology is only valid for tessellation pipelines");
+ }
+ if (!pPipeline->tessStateCI.patchControlPoints || (pPipeline->tessStateCI.patchControlPoints > 32)) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: "
+ "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive "
+ "topology used with patchControlPoints value %u."
+ " patchControlPoints should be >0 and <=32.",
+ pPipeline->tessStateCI.patchControlPoints);
+ }
+ }
+ // Viewport state must be included if rasterization is enabled.
+ // If the viewport state is included, the viewport and scissor counts should always match.
+ // NOTE : Even if these are flagged as dynamic, counts need to be set correctly for shader compiler
+ if (!pPipeline->graphicsPipelineCI.pRasterizationState ||
+ !pPipeline->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) {
+ if (!pPipeline->graphicsPipelineCI.pViewportState) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS", "Gfx Pipeline pViewportState is null. Even if viewport "
+ "and scissors are dynamic, the PSO must include "
+ "viewportCount and scissorCount in pViewportState.");
+ } else if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount !=
+ pPipeline->graphicsPipelineCI.pViewportState->viewportCount) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
+ "Gfx Pipeline viewport count (%u) must match scissor count (%u).",
+ pPipeline->vpStateCI.viewportCount, pPipeline->vpStateCI.scissorCount);
+ } else {
+ // If viewport or scissor are not dynamic, then verify that data is appropriate for count
+ VkBool32 dynViewport = isDynamic(pPipeline, VK_DYNAMIC_STATE_VIEWPORT);
+ VkBool32 dynScissor = isDynamic(pPipeline, VK_DYNAMIC_STATE_SCISSOR);
+ if (!dynViewport) {
+ if (pPipeline->graphicsPipelineCI.pViewportState->viewportCount &&
+ !pPipeline->graphicsPipelineCI.pViewportState->pViewports) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
+ "Gfx Pipeline viewportCount is %u, but pViewports is NULL. For non-zero viewportCount, you "
+ "must either include pViewports data, or include viewport in pDynamicState and set it with "
+ "vkCmdSetViewport().",
+ pPipeline->graphicsPipelineCI.pViewportState->viewportCount);
+ }
+ }
+ if (!dynScissor) {
+ if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount &&
+ !pPipeline->graphicsPipelineCI.pViewportState->pScissors) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
+ "Gfx Pipeline scissorCount is %u, but pScissors is NULL. For non-zero scissorCount, you "
+ "must either include pScissors data, or include scissor in pDynamicState and set it with "
+ "vkCmdSetScissor().",
+ pPipeline->graphicsPipelineCI.pViewportState->scissorCount);
+ }
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Init the pipeline mapping info based on the pipeline create info linked-list (pNext) tree
+// Threading note : Calls to this function should be wrapped in a mutex
+// TODO : this should really just be in the constructor for PIPELINE_NODE
+static PIPELINE_NODE *initGraphicsPipeline(layer_data *dev_data, const VkGraphicsPipelineCreateInfo *pCreateInfo) {
+ PIPELINE_NODE *pPipeline = new PIPELINE_NODE;
+
+ // First init create info
+ memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(VkGraphicsPipelineCreateInfo));
+
+ size_t bufferSize = 0;
+ const VkPipelineVertexInputStateCreateInfo *pVICI = NULL;
+ const VkPipelineColorBlendStateCreateInfo *pCBCI = NULL;
+
+ for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) {
+ const VkPipelineShaderStageCreateInfo *pPSSCI = &pCreateInfo->pStages[i];
+
+ switch (pPSSCI->stage) {
+ case VK_SHADER_STAGE_VERTEX_BIT:
+ memcpy(&pPipeline->vsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
+ pPipeline->active_shaders |= VK_SHADER_STAGE_VERTEX_BIT;
+ break;
+ case VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT:
+ memcpy(&pPipeline->tcsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
+ pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT;
+ break;
+ case VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT:
+ memcpy(&pPipeline->tesCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
+ pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT;
+ break;
+ case VK_SHADER_STAGE_GEOMETRY_BIT:
+ memcpy(&pPipeline->gsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
+ pPipeline->active_shaders |= VK_SHADER_STAGE_GEOMETRY_BIT;
+ break;
+ case VK_SHADER_STAGE_FRAGMENT_BIT:
+ memcpy(&pPipeline->fsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
+ pPipeline->active_shaders |= VK_SHADER_STAGE_FRAGMENT_BIT;
+ break;
+ case VK_SHADER_STAGE_COMPUTE_BIT:
+ // TODO : Flag error, CS is specified through VkComputePipelineCreateInfo
+ pPipeline->active_shaders |= VK_SHADER_STAGE_COMPUTE_BIT;
+ break;
+ default:
+ // TODO : Flag error
+ break;
+ }
+ }
+ // Copy over GraphicsPipelineCreateInfo structure embedded pointers
+ if (pCreateInfo->stageCount != 0) {
+ pPipeline->graphicsPipelineCI.pStages = new VkPipelineShaderStageCreateInfo[pCreateInfo->stageCount];
+ bufferSize = pCreateInfo->stageCount * sizeof(VkPipelineShaderStageCreateInfo);
+ memcpy((void *)pPipeline->graphicsPipelineCI.pStages, pCreateInfo->pStages, bufferSize);
+ }
+ if (pCreateInfo->pVertexInputState != NULL) {
+ pPipeline->vertexInputCI = *pCreateInfo->pVertexInputState;
+ // Copy embedded ptrs
+ pVICI = pCreateInfo->pVertexInputState;
+ if (pVICI->vertexBindingDescriptionCount) {
+ pPipeline->vertexBindingDescriptions = std::vector<VkVertexInputBindingDescription>(
+ pVICI->pVertexBindingDescriptions, pVICI->pVertexBindingDescriptions + pVICI->vertexBindingDescriptionCount);
+ }
+ if (pVICI->vertexAttributeDescriptionCount) {
+ pPipeline->vertexAttributeDescriptions = std::vector<VkVertexInputAttributeDescription>(
+ pVICI->pVertexAttributeDescriptions, pVICI->pVertexAttributeDescriptions + pVICI->vertexAttributeDescriptionCount);
+ }
+ pPipeline->graphicsPipelineCI.pVertexInputState = &pPipeline->vertexInputCI;
+ }
+ if (pCreateInfo->pInputAssemblyState != NULL) {
+ pPipeline->iaStateCI = *pCreateInfo->pInputAssemblyState;
+ pPipeline->graphicsPipelineCI.pInputAssemblyState = &pPipeline->iaStateCI;
+ }
+ if (pCreateInfo->pTessellationState != NULL) {
+ pPipeline->tessStateCI = *pCreateInfo->pTessellationState;
+ pPipeline->graphicsPipelineCI.pTessellationState = &pPipeline->tessStateCI;
+ }
+ if (pCreateInfo->pViewportState != NULL) {
+ pPipeline->vpStateCI = *pCreateInfo->pViewportState;
+ pPipeline->graphicsPipelineCI.pViewportState = &pPipeline->vpStateCI;
+ }
+ if (pCreateInfo->pRasterizationState != NULL) {
+ pPipeline->rsStateCI = *pCreateInfo->pRasterizationState;
+ pPipeline->graphicsPipelineCI.pRasterizationState = &pPipeline->rsStateCI;
+ }
+ if (pCreateInfo->pMultisampleState != NULL) {
+ pPipeline->msStateCI = *pCreateInfo->pMultisampleState;
+ pPipeline->graphicsPipelineCI.pMultisampleState = &pPipeline->msStateCI;
+ }
+ if (pCreateInfo->pDepthStencilState != NULL) {
+ pPipeline->dsStateCI = *pCreateInfo->pDepthStencilState;
+ pPipeline->graphicsPipelineCI.pDepthStencilState = &pPipeline->dsStateCI;
+ }
+ if (pCreateInfo->pColorBlendState != NULL) {
+ pPipeline->cbStateCI = *pCreateInfo->pColorBlendState;
+ // Copy embedded ptrs
+ pCBCI = pCreateInfo->pColorBlendState;
+ if (pCBCI->attachmentCount) {
+ pPipeline->attachments = std::vector<VkPipelineColorBlendAttachmentState>(
+ pCBCI->pAttachments, pCBCI->pAttachments + pCBCI->attachmentCount);
+ }
+ pPipeline->graphicsPipelineCI.pColorBlendState = &pPipeline->cbStateCI;
+ }
+ if (pCreateInfo->pDynamicState != NULL) {
+ pPipeline->dynStateCI = *pCreateInfo->pDynamicState;
+ if (pPipeline->dynStateCI.dynamicStateCount) {
+ pPipeline->dynStateCI.pDynamicStates = new VkDynamicState[pPipeline->dynStateCI.dynamicStateCount];
+ bufferSize = pPipeline->dynStateCI.dynamicStateCount * sizeof(VkDynamicState);
+ memcpy((void *)pPipeline->dynStateCI.pDynamicStates, pCreateInfo->pDynamicState->pDynamicStates, bufferSize);
+ }
+ pPipeline->graphicsPipelineCI.pDynamicState = &pPipeline->dynStateCI;
+ }
+ return pPipeline;
+}
+
+// Free the Pipeline nodes
+static void deletePipelines(layer_data *my_data) {
+ if (my_data->pipelineMap.empty())
+ return;
+ for (auto ii = my_data->pipelineMap.begin(); ii != my_data->pipelineMap.end(); ++ii) {
+ if ((*ii).second->graphicsPipelineCI.stageCount != 0) {
+ delete[](*ii).second->graphicsPipelineCI.pStages;
+ }
+ if ((*ii).second->dynStateCI.dynamicStateCount != 0) {
+ delete[](*ii).second->dynStateCI.pDynamicStates;
+ }
+ delete (*ii).second;
+ }
+ my_data->pipelineMap.clear();
+}
+
+// For given pipeline, return number of MSAA samples, or VK_SAMPLE_COUNT_1_BIT if no multisample state was provided
+static VkSampleCountFlagBits getNumSamples(layer_data *my_data, const VkPipeline pipeline) {
+ PIPELINE_NODE *pPipe = my_data->pipelineMap[pipeline];
+ if (VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
+ return pPipe->msStateCI.rasterizationSamples;
+ }
+ return VK_SAMPLE_COUNT_1_BIT;
+}
+
+// Validate state related to the PSO
+static VkBool32 validatePipelineState(layer_data *my_data, const GLOBAL_CB_NODE *pCB, const VkPipelineBindPoint pipelineBindPoint,
+ const VkPipeline pipeline) {
+ if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
+ // Verify that any MSAA request in PSO matches sample# in bound FB
+ // Skip the check if rasterization is disabled.
+ PIPELINE_NODE *pPipeline = my_data->pipelineMap[pipeline];
+ if (!pPipeline->graphicsPipelineCI.pRasterizationState ||
+ !pPipeline->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) {
+ VkSampleCountFlagBits psoNumSamples = getNumSamples(my_data, pipeline);
+ if (pCB->activeRenderPass) {
+ const VkRenderPassCreateInfo *pRPCI = my_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo;
+ const VkSubpassDescription *pSD = &pRPCI->pSubpasses[pCB->activeSubpass];
+ VkSampleCountFlagBits subpassNumSamples = (VkSampleCountFlagBits)0;
+ uint32_t i;
+
+ for (i = 0; i < pSD->colorAttachmentCount; i++) {
+ VkSampleCountFlagBits samples;
+
+ if (pSD->pColorAttachments[i].attachment == VK_ATTACHMENT_UNUSED)
+ continue;
+
+ samples = pRPCI->pAttachments[pSD->pColorAttachments[i].attachment].samples;
+ if (subpassNumSamples == (VkSampleCountFlagBits)0) {
+ subpassNumSamples = samples;
+ } else if (subpassNumSamples != samples) {
+ subpassNumSamples = (VkSampleCountFlagBits)-1;
+ break;
+ }
+ }
+ if (pSD->pDepthStencilAttachment && pSD->pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
+ const VkSampleCountFlagBits samples = pRPCI->pAttachments[pSD->pDepthStencilAttachment->attachment].samples;
+ if (subpassNumSamples == (VkSampleCountFlagBits)0)
+ subpassNumSamples = samples;
+ else if (subpassNumSamples != samples)
+ subpassNumSamples = (VkSampleCountFlagBits)-1;
+ }
+
+ if (psoNumSamples != subpassNumSamples) {
+ return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT,
+ (uint64_t)pipeline, __LINE__, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS",
+ "Num samples mismatch! Binding PSO (%#" PRIxLEAST64
+ ") with %u samples while current RenderPass (%#" PRIxLEAST64 ") w/ %u samples!",
+ (uint64_t)pipeline, psoNumSamples, (uint64_t)pCB->activeRenderPass, subpassNumSamples);
+ }
+ } else {
+ // TODO : I believe it's an error if we reach this point and don't have an activeRenderPass
+ // Verify and flag error as appropriate
+ }
+ }
+ // TODO : Add more checks here
+ } else {
+ // TODO : Validate non-gfx pipeline updates
+ }
+ return VK_FALSE;
+}
+
+// Block of code at start here specifically for managing/tracking DSs
+
+// Return Pool node ptr for specified pool or else NULL
+static DESCRIPTOR_POOL_NODE *getPoolNode(layer_data *my_data, const VkDescriptorPool pool) {
+ if (my_data->descriptorPoolMap.find(pool) == my_data->descriptorPoolMap.end()) {
+ return NULL;
+ }
+ return my_data->descriptorPoolMap[pool];
+}
+
+static LAYOUT_NODE *getLayoutNode(layer_data *my_data, const VkDescriptorSetLayout layout) {
+ if (my_data->descriptorSetLayoutMap.find(layout) == my_data->descriptorSetLayoutMap.end()) {
+ return NULL;
+ }
+ return my_data->descriptorSetLayoutMap[layout];
+}
+
+// Return VK_FALSE if update struct is of valid type, otherwise flag error and return code from callback
+static VkBool32 validUpdateStruct(layer_data *my_data, const VkDevice device, const GENERIC_HEADER *pUpdateStruct) {
+ switch (pUpdateStruct->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
+ return VK_FALSE;
+ default:
+ return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
+ "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree",
+ string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType);
+ }
+}
+
+// Return the descriptor count for the given update struct, or 0 if the struct type is unrecognized
+static uint32_t getUpdateCount(layer_data *my_data, const VkDevice device, const GENERIC_HEADER *pUpdateStruct) {
+ switch (pUpdateStruct->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ return ((VkWriteDescriptorSet *)pUpdateStruct)->descriptorCount;
+ case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
+ // TODO : Need to understand this case better and make sure code is correct
+ return ((VkCopyDescriptorSet *)pUpdateStruct)->descriptorCount;
+ default:
+ return 0;
+ }
+ return 0;
+}
+
+// For given Layout Node and binding, return index where that binding begins
+static uint32_t getBindingStartIndex(const LAYOUT_NODE *pLayout, const uint32_t binding) {
+ uint32_t offsetIndex = 0;
+ for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
+ if (pLayout->createInfo.pBindings[i].binding == binding)
+ break;
+ offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount;
+ }
+ return offsetIndex;
+}
+
+// For given layout node and binding, return the last overall index covered by that binding
+static uint32_t getBindingEndIndex(const LAYOUT_NODE *pLayout, const uint32_t binding) {
+ uint32_t offsetIndex = 0;
+ for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
+ offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount;
+ if (pLayout->createInfo.pBindings[i].binding == binding)
+ break;
+ }
+ return offsetIndex - 1;
+}
+
+// For given layout and update, return the first overall index of the layout that is updated
+static uint32_t getUpdateStartIndex(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout, const uint32_t binding,
+ const uint32_t arrayIndex, const GENERIC_HEADER *pUpdateStruct) {
+ return getBindingStartIndex(pLayout, binding) + arrayIndex;
+}
+
+// For given layout and update, return the last overall index of the layout that is updated
+static uint32_t getUpdateEndIndex(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout, const uint32_t binding,
+ const uint32_t arrayIndex, const GENERIC_HEADER *pUpdateStruct) {
+ uint32_t count = getUpdateCount(my_data, device, pUpdateStruct);
+ return getBindingStartIndex(pLayout, binding) + arrayIndex + count - 1;
+}
+
+// Verify that the descriptor type in the update struct matches what's expected by the layout
+static VkBool32 validateUpdateConsistency(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout,
+ const GENERIC_HEADER *pUpdateStruct, uint32_t startIndex, uint32_t endIndex) {
+ // First get actual type of update
+ VkBool32 skipCall = VK_FALSE;
+ VkDescriptorType actualType;
+ uint32_t i = 0;
+ switch (pUpdateStruct->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ actualType = ((VkWriteDescriptorSet *)pUpdateStruct)->descriptorType;
+ break;
+ case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
+ /* no need to validate */
+ return VK_FALSE;
+ default:
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
+ "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree",
+ string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType);
+ // actualType is not set for an unexpected struct type, so return now rather than compare uninitialized data
+ return skipCall;
+ }
+ if (VK_FALSE == skipCall) {
+ // Set first stageFlags as reference and verify that all other updates match it
+ VkShaderStageFlags refStageFlags = pLayout->stageFlags[startIndex];
+ for (i = startIndex; i <= endIndex; i++) {
+ if (pLayout->descriptorTypes[i] != actualType) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS",
+ "Write descriptor update has descriptor type %s that does not match overlapping binding descriptor type of %s!",
+ string_VkDescriptorType(actualType), string_VkDescriptorType(pLayout->descriptorTypes[i]));
+ }
+ if (pLayout->stageFlags[i] != refStageFlags) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH, "DS",
+ "Write descriptor update has stageFlags %x that do not match overlapping binding descriptor stageFlags of %x!",
+ refStageFlags, pLayout->stageFlags[i]);
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Determine the update type, allocate a new struct of that type, shadow the given pUpdate
+// struct into the pNewNode param. Return VK_TRUE if error condition encountered and callback signals early exit.
+// NOTE : Calls to this function should be wrapped in mutex
+static VkBool32 shadowUpdateNode(layer_data *my_data, const VkDevice device, GENERIC_HEADER *pUpdate, GENERIC_HEADER **pNewNode) {
+ VkBool32 skipCall = VK_FALSE;
+ VkWriteDescriptorSet *pWDS = NULL;
+ VkCopyDescriptorSet *pCDS = NULL;
+ switch (pUpdate->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ pWDS = new VkWriteDescriptorSet;
+ *pNewNode = (GENERIC_HEADER *)pWDS;
+ memcpy(pWDS, pUpdate, sizeof(VkWriteDescriptorSet));
+
+ switch (pWDS->descriptorType) {
+ case VK_DESCRIPTOR_TYPE_SAMPLER:
+ case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
+ case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
+ case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE: {
+ VkDescriptorImageInfo *info = new VkDescriptorImageInfo[pWDS->descriptorCount];
+ memcpy(info, pWDS->pImageInfo, pWDS->descriptorCount * sizeof(VkDescriptorImageInfo));
+ pWDS->pImageInfo = info;
+ } break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER: {
+ VkBufferView *info = new VkBufferView[pWDS->descriptorCount];
+ memcpy(info, pWDS->pTexelBufferView, pWDS->descriptorCount * sizeof(VkBufferView));
+ pWDS->pTexelBufferView = info;
+ } break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC: {
+ VkDescriptorBufferInfo *info = new VkDescriptorBufferInfo[pWDS->descriptorCount];
+ memcpy(info, pWDS->pBufferInfo, pWDS->descriptorCount * sizeof(VkDescriptorBufferInfo));
+ pWDS->pBufferInfo = info;
+ } break;
+ default:
+ // This function returns VkBool32, not VkResult; flag the unexpected descriptor type as an error
+ return VK_TRUE;
+ }
+ break;
+ case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
+ pCDS = new VkCopyDescriptorSet;
+ *pNewNode = (GENERIC_HEADER *)pCDS;
+ memcpy(pCDS, pUpdate, sizeof(VkCopyDescriptorSet));
+ break;
+ default:
+ if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
+ "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree",
+ string_VkStructureType(pUpdate->sType), pUpdate->sType))
+ return VK_TRUE;
+ }
+ // Make sure that pNext for the end of shadow copy is NULL
+ (*pNewNode)->pNext = NULL;
+ return skipCall;
+}
+
+// Verify that given sampler is valid
+static VkBool32 validateSampler(const layer_data *my_data, const VkSampler *pSampler, const VkBool32 immutable) {
+ VkBool32 skipCall = VK_FALSE;
+ auto sampIt = my_data->sampleMap.find(*pSampler);
+ if (sampIt == my_data->sampleMap.end()) {
+ if (!immutable) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT,
+ (uint64_t)*pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor with invalid sampler %#" PRIxLEAST64,
+ (uint64_t)*pSampler);
+ } else { // immutable
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT,
+ (uint64_t)*pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor whose binding has an invalid immutable "
+ "sampler %#" PRIxLEAST64,
+ (uint64_t)*pSampler);
+ }
+ } else {
+ // TODO : Any further checks we want to do on the sampler?
+ }
+ return skipCall;
+}
+
+// find layout(s) on the cmd buf level
+bool FindLayout(const GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range, IMAGE_CMD_BUF_LAYOUT_NODE &node) {
+ ImageSubresourcePair imgpair = {image, true, range};
+ auto imgsubIt = pCB->imageLayoutMap.find(imgpair);
+ if (imgsubIt == pCB->imageLayoutMap.end()) {
+ imgpair = {image, false, VkImageSubresource()};
+ imgsubIt = pCB->imageLayoutMap.find(imgpair);
+ if (imgsubIt == pCB->imageLayoutMap.end())
+ return false;
+ }
+ node = imgsubIt->second;
+ return true;
+}
+
+// find layout(s) on the global level
+bool FindLayout(const layer_data *my_data, ImageSubresourcePair imgpair, VkImageLayout &layout) {
+ auto imgsubIt = my_data->imageLayoutMap.find(imgpair);
+ if (imgsubIt == my_data->imageLayoutMap.end()) {
+ imgpair = {imgpair.image, false, VkImageSubresource()};
+ imgsubIt = my_data->imageLayoutMap.find(imgpair);
+ if (imgsubIt == my_data->imageLayoutMap.end())
+ return false;
+ }
+ layout = imgsubIt->second.layout;
+ return true;
+}
+
+bool FindLayout(const layer_data *my_data, VkImage image, VkImageSubresource range, VkImageLayout &layout) {
+ ImageSubresourcePair imgpair = {image, true, range};
+ return FindLayout(my_data, imgpair, layout);
+}
+
+bool FindLayouts(const layer_data *my_data, VkImage image, std::vector<VkImageLayout> &layouts) {
+ auto sub_data = my_data->imageSubresourceMap.find(image);
+ if (sub_data == my_data->imageSubresourceMap.end())
+ return false;
+ auto imgIt = my_data->imageMap.find(image);
+ if (imgIt == my_data->imageMap.end())
+ return false;
+ bool ignoreGlobal = false;
+ // TODO: Make this robust for >1 aspect mask. Now it will just say ignore
+ // potential errors in this case.
+ if (sub_data->second.size() >= (imgIt->second.createInfo.arrayLayers * imgIt->second.createInfo.mipLevels + 1)) {
+ ignoreGlobal = true;
+ }
+ for (auto imgsubpair : sub_data->second) {
+ if (ignoreGlobal && !imgsubpair.hasSubresource)
+ continue;
+ auto img_data = my_data->imageLayoutMap.find(imgsubpair);
+ if (img_data != my_data->imageLayoutMap.end()) {
+ layouts.push_back(img_data->second.layout);
+ }
+ }
+ return true;
+}
+
+// Set the layout on the global level
+void SetLayout(layer_data *my_data, ImageSubresourcePair imgpair, const VkImageLayout &layout) {
+ VkImage &image = imgpair.image;
+ // TODO (mlentine): Maybe set format if new? Not used atm.
+ my_data->imageLayoutMap[imgpair].layout = layout;
+ // TODO (mlentine): Maybe make vector a set?
+ auto subresource = std::find(my_data->imageSubresourceMap[image].begin(), my_data->imageSubresourceMap[image].end(), imgpair);
+ if (subresource == my_data->imageSubresourceMap[image].end()) {
+ my_data->imageSubresourceMap[image].push_back(imgpair);
+ }
+}
+
+void SetLayout(layer_data *my_data, VkImage image, const VkImageLayout &layout) {
+ ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
+ SetLayout(my_data, imgpair, layout);
+}
+
+void SetLayout(layer_data *my_data, VkImage image, VkImageSubresource range, const VkImageLayout &layout) {
+ ImageSubresourcePair imgpair = {image, true, range};
+ SetLayout(my_data, imgpair, layout);
+}
+
+// Set the layout on the cmdbuf level
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, ImageSubresourcePair imgpair, const IMAGE_CMD_BUF_LAYOUT_NODE &node) {
+ pCB->imageLayoutMap[imgpair] = node;
+ // TODO (mlentine): Maybe make vector a set?
+ auto subresource = std::find(pCB->imageSubresourceMap[image].begin(), pCB->imageSubresourceMap[image].end(), imgpair);
+ if (subresource == pCB->imageSubresourceMap[image].end()) {
+ pCB->imageSubresourceMap[image].push_back(imgpair);
+ }
+}
+
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, ImageSubresourcePair imgpair, const VkImageLayout &layout) {
+ // TODO (mlentine): Maybe make vector a set?
+ if (std::find(pCB->imageSubresourceMap[image].begin(), pCB->imageSubresourceMap[image].end(), imgpair) !=
+ pCB->imageSubresourceMap[image].end()) {
+ pCB->imageLayoutMap[imgpair].layout = layout;
+ } else {
+ // TODO (mlentine): Could be expensive and might need to be removed.
+ assert(imgpair.hasSubresource);
+ IMAGE_CMD_BUF_LAYOUT_NODE node;
+ FindLayout(pCB, image, imgpair.subresource, node);
+ SetLayout(pCB, image, imgpair, {node.initialLayout, layout});
+ }
+}
+
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, const IMAGE_CMD_BUF_LAYOUT_NODE &node) {
+ ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
+ SetLayout(pCB, image, imgpair, node);
+}
+
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range, const IMAGE_CMD_BUF_LAYOUT_NODE &node) {
+ ImageSubresourcePair imgpair = {image, true, range};
+ SetLayout(pCB, image, imgpair, node);
+}
+
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, const VkImageLayout &layout) {
+ ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
+ SetLayout(pCB, image, imgpair, layout);
+}
+
+void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range, const VkImageLayout &layout) {
+ ImageSubresourcePair imgpair = {image, true, range};
+ SetLayout(pCB, image, imgpair, layout);
+}
+
+void SetLayout(const layer_data *dev_data, GLOBAL_CB_NODE *pCB, VkImageView imageView, const VkImageLayout &layout) {
+ auto image_view_data = dev_data->imageViewMap.find(imageView);
+ assert(image_view_data != dev_data->imageViewMap.end());
+ const VkImage &image = image_view_data->second.image;
+ const VkImageSubresourceRange &subRange = image_view_data->second.subresourceRange;
+ // TODO: Do not iterate over every possibility - consolidate where possible
+ for (uint32_t j = 0; j < subRange.levelCount; j++) {
+ uint32_t level = subRange.baseMipLevel + j;
+ for (uint32_t k = 0; k < subRange.layerCount; k++) {
+ uint32_t layer = subRange.baseArrayLayer + k;
+ VkImageSubresource sub = {subRange.aspectMask, level, layer};
+ SetLayout(pCB, image, sub, layout);
+ }
+ }
+}
+
+// Verify that given imageView is valid
+static VkBool32 validateImageView(const layer_data *my_data, const VkImageView *pImageView, const VkImageLayout imageLayout) {
+ VkBool32 skipCall = VK_FALSE;
+ auto ivIt = my_data->imageViewMap.find(*pImageView);
+ if (ivIt == my_data->imageViewMap.end()) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor with invalid imageView %#" PRIxLEAST64,
+ (uint64_t)*pImageView);
+ } else {
+ // Validate that imageLayout is compatible with aspectMask and image format
+ VkImageAspectFlags aspectMask = ivIt->second.subresourceRange.aspectMask;
+ VkImage image = ivIt->second.image;
+ // TODO : Check here in case we have a bad image
+ VkFormat format = VK_FORMAT_MAX_ENUM;
+ auto imgIt = my_data->imageMap.find(image);
+ if (imgIt != my_data->imageMap.end()) {
+ format = (*imgIt).second.createInfo.format;
+ } else {
+ // Also need to check the swapchains.
+ auto swapchainIt = my_data->device_extensions.imageToSwapchainMap.find(image);
+ if (swapchainIt != my_data->device_extensions.imageToSwapchainMap.end()) {
+ VkSwapchainKHR swapchain = swapchainIt->second;
+ auto swapchain_nodeIt = my_data->device_extensions.swapchainMap.find(swapchain);
+ if (swapchain_nodeIt != my_data->device_extensions.swapchainMap.end()) {
+ SWAPCHAIN_NODE *pswapchain_node = swapchain_nodeIt->second;
+ format = pswapchain_node->createInfo.imageFormat;
+ }
+ }
+ }
+ if (format == VK_FORMAT_MAX_ENUM) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)image, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor with invalid image %#" PRIxLEAST64
+ " in imageView %#" PRIxLEAST64,
+ (uint64_t)image, (uint64_t)*pImageView);
+ } else {
+ VkBool32 ds = vk_format_is_depth_or_stencil(format);
+ switch (imageLayout) {
+ case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
+ // Only Color bit must be set
+ if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != VK_IMAGE_ASPECT_COLOR_BIT) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
+ "vkUpdateDescriptorSets: Updating descriptor with layout VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL "
+ "and imageView %#" PRIxLEAST64 ""
+ " that does not have VK_IMAGE_ASPECT_COLOR_BIT set.",
+ (uint64_t)*pImageView);
+ }
+ // format must NOT be DS
+ if (ds) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Updating descriptor with layout VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL "
+ "and imageView %#" PRIxLEAST64 ""
+ " but the image format is %s which is not a color format.",
+ (uint64_t)*pImageView, string_VkFormat(format));
+ }
+ break;
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
+ // Depth or stencil bit must be set, but both must NOT be set
+ if (aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) {
+ if (aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) {
+ // both must NOT be set
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
+ "vkUpdateDescriptorSets: Updating descriptor with imageView %#" PRIxLEAST64 ""
+ " that has both STENCIL and DEPTH aspects set",
+ (uint64_t)*pImageView);
+ }
+ } else if (!(aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT)) {
+ // Neither were set
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
+ "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64 ""
+ " that does not have STENCIL or DEPTH aspect set.",
+ string_VkImageLayout(imageLayout), (uint64_t)*pImageView);
+ }
+ // format must be DS
+ if (!ds) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
+ (uint64_t)*pImageView, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64 ""
+ " but the image format is %s which is not a depth/stencil format.",
+ string_VkImageLayout(imageLayout), (uint64_t)*pImageView, string_VkFormat(format));
+ }
+ break;
+ default:
+ // anything to check for other layouts?
+ break;
+ }
+ }
+ }
+ return skipCall;
+}
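The depth/stencil branch above accepts exactly one of the DEPTH or STENCIL aspect bits: both set is an error, neither set is an error. A minimal standalone sketch of that rule, with a hypothetical helper name and the bit values copied from vulkan.h:

```cpp
#include <cstdint>

// Mirror of the Vulkan aspect bits used by the check above (values from vulkan.h).
constexpr uint32_t ASPECT_DEPTH_BIT = 0x00000002;   // VK_IMAGE_ASPECT_DEPTH_BIT
constexpr uint32_t ASPECT_STENCIL_BIT = 0x00000004; // VK_IMAGE_ASPECT_STENCIL_BIT

// Returns true when the aspect mask is valid for a depth/stencil attachment
// layout: exactly one of DEPTH or STENCIL set, never both, never neither.
bool depth_stencil_aspect_ok(uint32_t aspectMask) {
    bool depth = (aspectMask & ASPECT_DEPTH_BIT) != 0;
    bool stencil = (aspectMask & ASPECT_STENCIL_BIT) != 0;
    return depth != stencil; // XOR: exactly one bit set
}
```

The layer emits two distinct error codes for the two failure modes; this sketch only collapses them into a single pass/fail result.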
+
+// Verify that given bufferView is valid
+static VkBool32 validateBufferView(const layer_data *my_data, const VkBufferView *pBufferView) {
+ VkBool32 skipCall = VK_FALSE;
+ auto bvIt = my_data->bufferViewMap.find(*pBufferView);
+ if (bvIt == my_data->bufferViewMap.end()) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT,
+ (uint64_t)*pBufferView, __LINE__, DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor with invalid bufferView %#" PRIxLEAST64,
+ (uint64_t)*pBufferView);
+ } else {
+ // TODO : Any further checks we want to do on the bufferView?
+ }
+ return skipCall;
+}
+
+// Verify that given bufferInfo is valid
+static VkBool32 validateBufferInfo(const layer_data *my_data, const VkDescriptorBufferInfo *pBufferInfo) {
+ VkBool32 skipCall = VK_FALSE;
+ auto bufIt = my_data->bufferMap.find(pBufferInfo->buffer);
+ if (bufIt == my_data->bufferMap.end()) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
+ (uint64_t)pBufferInfo->buffer, __LINE__, DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, "DS",
+ "vkUpdateDescriptorSets: Attempt to update descriptor where bufferInfo has invalid buffer %#" PRIxLEAST64,
+ (uint64_t)pBufferInfo->buffer);
+ } else {
+ // TODO : Any further checks we want to do on the buffer?
+ }
+ return skipCall;
+}
+
+static VkBool32 validateUpdateContents(const layer_data *my_data, const VkWriteDescriptorSet *pWDS,
+ const VkDescriptorSetLayoutBinding *pLayoutBinding) {
+ VkBool32 skipCall = VK_FALSE;
+ // First verify that for the given Descriptor type, the correct DescriptorInfo data is supplied
+ const VkSampler *pSampler = NULL;
+ VkBool32 immutable = VK_FALSE;
+ uint32_t i = 0;
+ // For given update type, verify that update contents are correct
+ switch (pWDS->descriptorType) {
+ case VK_DESCRIPTOR_TYPE_SAMPLER:
+ for (i = 0; i < pWDS->descriptorCount; ++i) {
+ skipCall |= validateSampler(my_data, &(pWDS->pImageInfo[i].sampler), immutable);
+ }
+ break;
+ case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
+ for (i = 0; i < pWDS->descriptorCount; ++i) {
+ if (NULL == pLayoutBinding->pImmutableSamplers) {
+ pSampler = &(pWDS->pImageInfo[i].sampler);
+ if (immutable) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT,
+ (uint64_t)*pSampler, __LINE__, DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
+ "vkUpdateDescriptorSets: Update #%u is not an immutable sampler %#" PRIxLEAST64
+ ", but previous update(s) from this "
+ "VkWriteDescriptorSet struct used an immutable sampler. All updates from a single struct must either "
+ "use immutable or non-immutable samplers.",
+ i, (uint64_t)*pSampler);
+ }
+ } else {
+ if (i > 0 && !immutable) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT,
+ (uint64_t)*pSampler, __LINE__, DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
+ "vkUpdateDescriptorSets: Update #%u is an immutable sampler, but previous update(s) from this "
+ "VkWriteDescriptorSet struct used a non-immutable sampler. All updates from a single struct must either "
+ "use immutable or non-immutable samplers.",
+ i);
+ }
+ immutable = VK_TRUE;
+ pSampler = &(pLayoutBinding->pImmutableSamplers[i]);
+ }
+ skipCall |= validateSampler(my_data, pSampler, immutable);
+ }
+ // Intentionally fall through here to also validate image stuff
+ case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
+ case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
+ case VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT:
+ for (i = 0; i < pWDS->descriptorCount; ++i) {
+ skipCall |= validateImageView(my_data, &(pWDS->pImageInfo[i].imageView), pWDS->pImageInfo[i].imageLayout);
+ }
+ break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER:
+ for (i = 0; i < pWDS->descriptorCount; ++i) {
+ skipCall |= validateBufferView(my_data, &(pWDS->pTexelBufferView[i]));
+ }
+ break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC:
+ for (i = 0; i < pWDS->descriptorCount; ++i) {
+ skipCall |= validateBufferInfo(my_data, &(pWDS->pBufferInfo[i]));
+ }
+ break;
+ default:
+ break;
+ }
+ return skipCall;
+}
+// Validate that given set is valid and that it's not being used by an in-flight CmdBuffer
+// func_str is the name of the calling function
+// Return VK_FALSE if no errors occur
+// Return VK_TRUE if validation error occurs and callback returns VK_TRUE (to skip upcoming API call down the chain)
+VkBool32 validateIdleDescriptorSet(const layer_data *my_data, VkDescriptorSet set, std::string func_str) {
+ VkBool32 skip_call = VK_FALSE;
+ auto set_node = my_data->setMap.find(set);
+ if (set_node == my_data->setMap.end()) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)(set), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
+ "Cannot call %s() on descriptor set %" PRIxLEAST64 " that has not been allocated.", func_str.c_str(),
+ (uint64_t)(set));
+ } else {
+ if (set_node->second->in_use.load()) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(set), __LINE__, DRAWSTATE_OBJECT_INUSE,
+ "DS", "Cannot call %s() on descriptor set %" PRIxLEAST64 " that is in use by a command buffer.",
+ func_str.c_str(), (uint64_t)(set));
+ }
+ }
+ return skip_call;
+}
+static void invalidateBoundCmdBuffers(layer_data *dev_data, const SET_NODE *pSet) {
+ // Flag any CBs this set is bound to as INVALID
+ for (auto cb : pSet->boundCmdBuffers) {
+ auto cb_node = dev_data->commandBufferMap.find(cb);
+ if (cb_node != dev_data->commandBufferMap.end()) {
+ cb_node->second->state = CB_INVALID;
+ }
+ }
+}
+// update DS mappings based on write and copy update arrays
+static VkBool32 dsUpdate(layer_data *my_data, VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pWDS,
+ uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pCDS) {
+ VkBool32 skipCall = VK_FALSE;
+
+ LAYOUT_NODE *pLayout = NULL;
+ VkDescriptorSetLayoutCreateInfo *pLayoutCI = NULL;
+ // Validate Write updates
+ uint32_t i = 0;
+ for (i = 0; i < descriptorWriteCount; i++) {
+ VkDescriptorSet ds = pWDS[i].dstSet;
+ SET_NODE *pSet = my_data->setMap[ds];
+ // Set being updated cannot be in-flight
+ if ((skipCall = validateIdleDescriptorSet(my_data, ds, "vkUpdateDescriptorSets")) == VK_TRUE)
+ return skipCall;
+ // If set is bound to any cmdBuffers, mark them invalid
+ invalidateBoundCmdBuffers(my_data, pSet);
+ GENERIC_HEADER *pUpdate = (GENERIC_HEADER *)&pWDS[i];
+ pLayout = pSet->pLayout;
+ // First verify valid update struct
+ if ((skipCall = validUpdateStruct(my_data, device, pUpdate)) == VK_TRUE) {
+ break;
+ }
+ uint32_t endIndex = 0;
+ uint32_t binding = pWDS[i].dstBinding;
+ auto bindingToIndex = pLayout->bindingToIndexMap.find(binding);
+ // Make sure that layout being updated has the binding being updated
+ if (bindingToIndex == pLayout->bindingToIndexMap.end()) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)(ds), __LINE__, DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
+ "Descriptor Set %" PRIu64 " does not have binding to match "
+ "update binding %u for update type "
+ "%s!",
+ (uint64_t)(ds), binding, string_VkStructureType(pUpdate->sType));
+ } else {
+ // Next verify that update falls within size of given binding
+ endIndex = getUpdateEndIndex(my_data, device, pLayout, binding, pWDS[i].dstArrayElement, pUpdate);
+ if (getBindingEndIndex(pLayout, binding) < endIndex) {
+ pLayoutCI = &pLayout->createInfo;
+ string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)(ds), __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
+ "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
+ string_VkStructureType(pUpdate->sType), binding, DSstr.c_str());
+ } else { // TODO : should we skip update on a type mismatch or force it?
+ uint32_t startIndex =
+ getUpdateStartIndex(my_data, device, pLayout, binding, pWDS[i].dstArrayElement, pUpdate);
+ // Layout bindings match w/ update, now verify that update type
+ // & stageFlags are the same for entire update
+ if ((skipCall = validateUpdateConsistency(my_data, device, pLayout, pUpdate, startIndex, endIndex)) == VK_FALSE) {
+ // The update is within bounds and consistent, but need to
+ // make sure contents make sense as well
+ if ((skipCall = validateUpdateContents(my_data, &pWDS[i],
+ &pLayout->createInfo.pBindings[bindingToIndex->second])) == VK_FALSE) {
+ // Update is good. Save the update info
+ // Create new update struct for this set's shadow copy
+ GENERIC_HEADER *pNewNode = NULL;
+ skipCall |= shadowUpdateNode(my_data, device, pUpdate, &pNewNode);
+ if (NULL == pNewNode) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)(ds), __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
+ "Out of memory while attempting to allocate UPDATE struct in vkUpdateDescriptorSets()");
+ } else {
+ // Insert shadow node into LL of updates for this set
+ pNewNode->pNext = pSet->pUpdateStructs;
+ pSet->pUpdateStructs = pNewNode;
+ // Now update appropriate descriptor(s) to point to new Update node
+ for (uint32_t j = startIndex; j <= endIndex; j++) {
+ assert(j < pSet->descriptorCount);
+ pSet->ppDescriptors[j] = pNewNode;
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ // Now validate copy updates
+ for (i = 0; i < descriptorCopyCount; ++i) {
+ SET_NODE *pSrcSet = NULL, *pDstSet = NULL;
+ LAYOUT_NODE *pSrcLayout = NULL, *pDstLayout = NULL;
+ uint32_t srcStartIndex = 0, srcEndIndex = 0, dstStartIndex = 0, dstEndIndex = 0;
+ // For each copy make sure that update falls within given layout and that types match
+ pSrcSet = my_data->setMap[pCDS[i].srcSet];
+ pDstSet = my_data->setMap[pCDS[i].dstSet];
+ // Set being updated cannot be in-flight
+ if ((skipCall = validateIdleDescriptorSet(my_data, pDstSet->set, "vkUpdateDescriptorSets")) == VK_TRUE)
+ return skipCall;
+ invalidateBoundCmdBuffers(my_data, pDstSet);
+ pSrcLayout = pSrcSet->pLayout;
+ pDstLayout = pDstSet->pLayout;
+ // Validate that src binding is valid for src set layout
+ if (pSrcLayout->bindingToIndexMap.find(pCDS[i].srcBinding) == pSrcLayout->bindingToIndexMap.end()) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pSrcSet->set, __LINE__, DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
+ "Copy descriptor update %u has srcBinding %u "
+ "which is out of bounds for underlying SetLayout "
+ "%#" PRIxLEAST64 " which only has bindings 0-%u.",
+ i, pCDS[i].srcBinding, (uint64_t)pSrcLayout->layout, pSrcLayout->createInfo.bindingCount - 1);
+ } else if (pDstLayout->bindingToIndexMap.find(pCDS[i].dstBinding) == pDstLayout->bindingToIndexMap.end()) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pDstSet->set, __LINE__, DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
+ "Copy descriptor update %u has dstBinding %u "
+ "which is out of bounds for underlying SetLayout "
+ "%#" PRIxLEAST64 " which only has bindings 0-%u.",
+ i, pCDS[i].dstBinding, (uint64_t)pDstLayout->layout, pDstLayout->createInfo.bindingCount - 1);
+ } else {
+ // Proceed with validation. Bindings are ok, but make sure update is within bounds of given layout
+ srcEndIndex = getUpdateEndIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement,
+ (const GENERIC_HEADER *)&(pCDS[i]));
+ dstEndIndex = getUpdateEndIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement,
+ (const GENERIC_HEADER *)&(pCDS[i]));
+ if (getBindingEndIndex(pSrcLayout, pCDS[i].srcBinding) < srcEndIndex) {
+ pLayoutCI = &pSrcLayout->createInfo;
+ string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pSrcSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
+ "Copy descriptor src update is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
+ pCDS[i].srcBinding, DSstr.c_str());
+ } else if (getBindingEndIndex(pDstLayout, pCDS[i].dstBinding) < dstEndIndex) {
+ pLayoutCI = &pDstLayout->createInfo;
+ string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pDstSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
+ "Copy descriptor dest update is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
+ pCDS[i].dstBinding, DSstr.c_str());
+ } else {
+ srcStartIndex = getUpdateStartIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement,
+ (const GENERIC_HEADER *)&(pCDS[i]));
+ dstStartIndex = getUpdateStartIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement,
+ (const GENERIC_HEADER *)&(pCDS[i]));
+ for (uint32_t j = 0; j < pCDS[i].descriptorCount; ++j) {
+ // For copy just make sure that the types match and then perform the update
+ if (pSrcLayout->descriptorTypes[srcStartIndex + j] != pDstLayout->descriptorTypes[dstStartIndex + j]) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS",
+ "Copy descriptor update index %u, update count #%u, has src update descriptor type %s "
+ "that does not match overlapping dest descriptor type of %s!",
+ i, j + 1, string_VkDescriptorType(pSrcLayout->descriptorTypes[srcStartIndex + j]),
+ string_VkDescriptorType(pDstLayout->descriptorTypes[dstStartIndex + j]));
+ } else {
+ // point dst descriptor at corresponding src descriptor
+ // TODO : This may be a hole. I believe copy should be its own copy,
+ // otherwise a subsequent write update to src will incorrectly affect the copy
+ pDstSet->ppDescriptors[j + dstStartIndex] = pSrcSet->ppDescriptors[j + srcStartIndex];
+ pDstSet->pUpdateStructs = pSrcSet->pUpdateStructs;
+ }
+ }
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Verify that given pool has descriptors that are being requested for allocation.
+// NOTE : Calls to this function should be wrapped in mutex
+static VkBool32 validate_descriptor_availability_in_pool(layer_data *dev_data, DESCRIPTOR_POOL_NODE *pPoolNode, uint32_t count,
+ const VkDescriptorSetLayout *pSetLayouts) {
+ VkBool32 skipCall = VK_FALSE;
+ uint32_t i = 0;
+ uint32_t j = 0;
+
+ // Track number of descriptorSets allowable in this pool
+ if (pPoolNode->availableSets < count) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
+ reinterpret_cast<uint64_t &>(pPoolNode->pool), __LINE__, DRAWSTATE_DESCRIPTOR_POOL_EMPTY, "DS",
+ "Unable to allocate %u descriptorSets from pool %#" PRIxLEAST64
+ ". This pool only has %d descriptorSets remaining.",
+ count, reinterpret_cast<uint64_t &>(pPoolNode->pool), pPoolNode->availableSets);
+ } else {
+ pPoolNode->availableSets -= count;
+ }
+
+ for (i = 0; i < count; ++i) {
+ LAYOUT_NODE *pLayout = getLayoutNode(dev_data, pSetLayouts[i]);
+ if (NULL == pLayout) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT,
+ (uint64_t)pSetLayouts[i], __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS",
+ "Unable to find set layout node for layout %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call",
+ (uint64_t)pSetLayouts[i]);
+ } else {
+ uint32_t typeIndex = 0, poolSizeCount = 0;
+ for (j = 0; j < pLayout->createInfo.bindingCount; ++j) {
+ typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType);
+ poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount;
+ if (poolSizeCount > pPoolNode->availableDescriptorTypeCount[typeIndex]) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)pLayout->layout, __LINE__,
+ DRAWSTATE_DESCRIPTOR_POOL_EMPTY, "DS",
+ "Unable to allocate %u descriptors of type %s from pool %#" PRIxLEAST64
+ ". This pool only has %d descriptors of this type remaining.",
+ poolSizeCount, string_VkDescriptorType(pLayout->createInfo.pBindings[j].descriptorType),
+ (uint64_t)pPoolNode->pool, pPoolNode->availableDescriptorTypeCount[typeIndex]);
+ } else { // Decrement available descriptors of this type
+ pPoolNode->availableDescriptorTypeCount[typeIndex] -= poolSizeCount;
+ }
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Free the shadowed update node for this Set
+// NOTE : Calls to this function should be wrapped in mutex
+static void freeShadowUpdateTree(SET_NODE *pSet) {
+ GENERIC_HEADER *pShadowUpdate = pSet->pUpdateStructs;
+ pSet->pUpdateStructs = NULL;
+ GENERIC_HEADER *pFreeUpdate = pShadowUpdate;
+ // Clear the descriptor mappings as they will now be invalid
+ memset(pSet->ppDescriptors, 0, pSet->descriptorCount * sizeof(GENERIC_HEADER *));
+ while (pShadowUpdate) {
+ pFreeUpdate = pShadowUpdate;
+ pShadowUpdate = (GENERIC_HEADER *)pShadowUpdate->pNext;
+ VkWriteDescriptorSet *pWDS = NULL;
+ switch (pFreeUpdate->sType) {
+ case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
+ pWDS = (VkWriteDescriptorSet *)pFreeUpdate;
+ switch (pWDS->descriptorType) {
+ case VK_DESCRIPTOR_TYPE_SAMPLER:
+ case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
+ case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
+ case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE: {
+ delete[] pWDS->pImageInfo;
+ } break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER: {
+ delete[] pWDS->pTexelBufferView;
+ } break;
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC: {
+ delete[] pWDS->pBufferInfo;
+ } break;
+ default:
+ break;
+ }
+ break;
+ case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
+ break;
+ default:
+ assert(0);
+ break;
+ }
+ delete pFreeUpdate;
+ }
+}
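freeShadowUpdateTree above walks a singly linked chain, always advancing the cursor before deleting the current node so no next pointer is read after its node is freed. The pattern in isolation, with a hypothetical minimal Node type standing in for GENERIC_HEADER:

```cpp
#include <cstddef>

// Minimal singly linked node standing in for the GENERIC_HEADER chain above.
struct Node {
    Node *pNext;
    int payload;
};

// Delete every node in the chain: capture the current node, advance the head,
// then delete — the same ordering the shadow-update free loop uses.
// Returns the number of nodes freed (for testing only).
int free_chain(Node *head) {
    int freed = 0;
    while (head) {
        Node *current = head;
        head = head->pNext;
        delete current;
        ++freed;
    }
    return freed;
}
```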
+
+// Free all DS Pools including their Sets & related sub-structs
+// NOTE : Calls to this function should be wrapped in mutex
+static void deletePools(layer_data *my_data) {
+ if (my_data->descriptorPoolMap.empty())
+ return;
+ for (auto ii = my_data->descriptorPoolMap.begin(); ii != my_data->descriptorPoolMap.end(); ++ii) {
+ SET_NODE *pSet = (*ii).second->pSets;
+ SET_NODE *pFreeSet = pSet;
+ while (pSet) {
+ pFreeSet = pSet;
+ pSet = pSet->pNext;
+ // Freeing layouts handled in deleteLayouts() function
+ // Free Update shadow struct tree
+ freeShadowUpdateTree(pFreeSet);
+ delete[] pFreeSet->ppDescriptors;
+ delete pFreeSet;
+ }
+ delete (*ii).second;
+ }
+ my_data->descriptorPoolMap.clear();
+}
+
+// WARN : Once deleteLayouts() called, any layout ptrs in Pool/Set data structure will be invalid
+// NOTE : Calls to this function should be wrapped in mutex
+static void deleteLayouts(layer_data *my_data) {
+ if (my_data->descriptorSetLayoutMap.empty())
+ return;
+ for (auto ii = my_data->descriptorSetLayoutMap.begin(); ii != my_data->descriptorSetLayoutMap.end(); ++ii) {
+ LAYOUT_NODE *pLayout = (*ii).second;
+ if (pLayout->createInfo.pBindings) {
+ for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
+ delete[] pLayout->createInfo.pBindings[i].pImmutableSamplers;
+ }
+ delete[] pLayout->createInfo.pBindings;
+ }
+ delete pLayout;
+ }
+ my_data->descriptorSetLayoutMap.clear();
+}
+
+// Currently clearing a set is removing all previous updates to that set
+// TODO : Validate if this is correct clearing behavior
+static void clearDescriptorSet(layer_data *my_data, VkDescriptorSet set) {
+ SET_NODE *pSet = getSetNode(my_data, set);
+ if (!pSet) {
+ // TODO : Return error
+ } else {
+ freeShadowUpdateTree(pSet);
+ }
+}
+
+static void clearDescriptorPool(layer_data *my_data, const VkDevice device, const VkDescriptorPool pool,
+ VkDescriptorPoolResetFlags flags) {
+ DESCRIPTOR_POOL_NODE *pPool = getPoolNode(my_data, pool);
+ if (!pPool) {
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
+ (uint64_t)pool, __LINE__, DRAWSTATE_INVALID_POOL, "DS",
+ "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkResetDescriptorPool() call", (uint64_t)pool);
+ } else {
+ // TODO: validate flags
+ // For every set off of this pool, clear it
+ SET_NODE *pSet = pPool->pSets;
+ while (pSet) {
+ clearDescriptorSet(my_data, pSet->set);
+ pSet = pSet->pNext;
+ }
+ // Reset available count to max count for this pool
+ for (uint32_t i = 0; i < pPool->availableDescriptorTypeCount.size(); ++i) {
+ pPool->availableDescriptorTypeCount[i] = pPool->maxDescriptorTypeCount[i];
+ }
+ }
+}
+
+// For given CB object, fetch associated CB Node from map
+static GLOBAL_CB_NODE *getCBNode(layer_data *my_data, const VkCommandBuffer cb) {
+ if (my_data->commandBufferMap.count(cb) == 0) {
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<const uint64_t &>(cb), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "Attempt to use CommandBuffer %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(cb));
+ return NULL;
+ }
+ return my_data->commandBufferMap[cb];
+}
+
+// Free all CB Nodes
+// NOTE : Calls to this function should be wrapped in mutex
+static void deleteCommandBuffers(layer_data *my_data) {
+ if (my_data->commandBufferMap.empty()) {
+ return;
+ }
+ for (auto ii = my_data->commandBufferMap.begin(); ii != my_data->commandBufferMap.end(); ++ii) {
+ delete (*ii).second;
+ }
+ my_data->commandBufferMap.clear();
+}
+
+static VkBool32 report_error_no_cb_begin(const layer_data *dev_data, const VkCommandBuffer cb, const char *caller_name) {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)cb, __LINE__, DRAWSTATE_NO_BEGIN_COMMAND_BUFFER, "DS",
+ "You must call vkBeginCommandBuffer() before this call to %s", caller_name);
+}
+
+VkBool32 validateCmdsInCmdBuffer(const layer_data *dev_data, const GLOBAL_CB_NODE *pCB, const CMD_TYPE cmd_type) {
+ if (!pCB->activeRenderPass)
+ return VK_FALSE;
+ VkBool32 skip_call = VK_FALSE;
+ if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS && cmd_type != CMD_EXECUTECOMMANDS) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "Commands cannot be called in a subpass using secondary command buffers.");
+ } else if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_INLINE && cmd_type == CMD_EXECUTECOMMANDS) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() cannot be called in a subpass using inline commands.");
+ }
+ return skip_call;
+}
+
+static bool checkGraphicsBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
+ if (!(flags & VK_QUEUE_GRAPHICS_BIT))
+ return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "Cannot call %s on a command buffer allocated from a pool without graphics capabilities.", name);
+ return false;
+}
+
+static bool checkComputeBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
+ if (!(flags & VK_QUEUE_COMPUTE_BIT))
+ return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "Cannot call %s on a command buffer allocated from a pool without compute capabilities.", name);
+ return false;
+}
+
+static bool checkGraphicsOrComputeBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
+ if (!((flags & VK_QUEUE_GRAPHICS_BIT) || (flags & VK_QUEUE_COMPUTE_BIT)))
+ return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "Cannot call %s on a command buffer allocated from a pool without graphics or compute capabilities.", name);
+ return false;
+}
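The three check*Bit helpers above all reduce to testing queue-family capability bits against what a command requires. A condensed sketch of that dispatch, using the bit values from vulkan.h and a hypothetical enum in place of the three separate helpers:

```cpp
#include <cstdint>

// Queue capability bits as defined in vulkan.h.
constexpr uint32_t QUEUE_GRAPHICS_BIT = 0x00000001; // VK_QUEUE_GRAPHICS_BIT
constexpr uint32_t QUEUE_COMPUTE_BIT = 0x00000002;  // VK_QUEUE_COMPUTE_BIT

// Hypothetical condensed form: which capability class a command requires.
enum class Requires { Graphics, Compute, GraphicsOrCompute };

// True when the queue family's flags satisfy the requirement.
bool queue_supports(uint32_t flags, Requires req) {
    switch (req) {
    case Requires::Graphics:
        return (flags & QUEUE_GRAPHICS_BIT) != 0;
    case Requires::Compute:
        return (flags & QUEUE_COMPUTE_BIT) != 0;
    case Requires::GraphicsOrCompute:
        return (flags & (QUEUE_GRAPHICS_BIT | QUEUE_COMPUTE_BIT)) != 0;
    }
    return false;
}
```

addCmd below maps each CMD_* value onto one of these three requirement classes before recording the command.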
+
+// Add specified CMD to the CmdBuffer in given pCB, flagging errors if CB is not
+// in the recording state or if there's an issue with the Cmd ordering
+static VkBool32 addCmd(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const CMD_TYPE cmd, const char *caller_name) {
+ VkBool32 skipCall = VK_FALSE;
+ auto pool_data = my_data->commandPoolMap.find(pCB->createInfo.commandPool);
+ if (pool_data != my_data->commandPoolMap.end()) {
+ VkQueueFlags flags = my_data->physDevProperties.queue_family_properties[pool_data->second.queueFamilyIndex].queueFlags;
+ switch (cmd) {
+ case CMD_BINDPIPELINE:
+ case CMD_BINDPIPELINEDELTA:
+ case CMD_BINDDESCRIPTORSETS:
+ case CMD_FILLBUFFER:
+ case CMD_CLEARCOLORIMAGE:
+ case CMD_SETEVENT:
+ case CMD_RESETEVENT:
+ case CMD_WAITEVENTS:
+ case CMD_BEGINQUERY:
+ case CMD_ENDQUERY:
+ case CMD_RESETQUERYPOOL:
+ case CMD_COPYQUERYPOOLRESULTS:
+ case CMD_WRITETIMESTAMP:
+ skipCall |= checkGraphicsOrComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
+ break;
+ case CMD_SETVIEWPORTSTATE:
+ case CMD_SETSCISSORSTATE:
+ case CMD_SETLINEWIDTHSTATE:
+ case CMD_SETDEPTHBIASSTATE:
+ case CMD_SETBLENDSTATE:
+ case CMD_SETDEPTHBOUNDSSTATE:
+ case CMD_SETSTENCILREADMASKSTATE:
+ case CMD_SETSTENCILWRITEMASKSTATE:
+ case CMD_SETSTENCILREFERENCESTATE:
+ case CMD_BINDINDEXBUFFER:
+ case CMD_BINDVERTEXBUFFER:
+ case CMD_DRAW:
+ case CMD_DRAWINDEXED:
+ case CMD_DRAWINDIRECT:
+ case CMD_DRAWINDEXEDINDIRECT:
+ case CMD_BLITIMAGE:
+ case CMD_CLEARATTACHMENTS:
+ case CMD_CLEARDEPTHSTENCILIMAGE:
+ case CMD_RESOLVEIMAGE:
+ case CMD_BEGINRENDERPASS:
+ case CMD_NEXTSUBPASS:
+ case CMD_ENDRENDERPASS:
+ skipCall |= checkGraphicsBit(my_data, flags, cmdTypeToString(cmd).c_str());
+ break;
+ case CMD_DISPATCH:
+ case CMD_DISPATCHINDIRECT:
+ skipCall |= checkComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
+ break;
+ case CMD_COPYBUFFER:
+ case CMD_COPYIMAGE:
+ case CMD_COPYBUFFERTOIMAGE:
+ case CMD_COPYIMAGETOBUFFER:
+ case CMD_CLONEIMAGEDATA:
+ case CMD_UPDATEBUFFER:
+ case CMD_PIPELINEBARRIER:
+ case CMD_EXECUTECOMMANDS:
+ break;
+ default:
+ break;
+ }
+ }
+ if (pCB->state != CB_RECORDING) {
+ skipCall |= report_error_no_cb_begin(my_data, pCB->commandBuffer, caller_name);
+ } else {
+ skipCall |= validateCmdsInCmdBuffer(my_data, pCB, cmd);
+ CMD_NODE cmdNode = {};
+ // init cmd node and append to end of cmd LL
+ cmdNode.cmdNumber = ++pCB->numCmds;
+ cmdNode.type = cmd;
+ pCB->cmds.push_back(cmdNode);
+ }
+ return skipCall;
+}
+// Reset the command buffer state
+// Maintain the createInfo and set state to CB_NEW, but clear all other state
+static void resetCB(layer_data *my_data, const VkCommandBuffer cb) {
+ GLOBAL_CB_NODE *pCB = my_data->commandBufferMap[cb];
+ if (pCB) {
+ pCB->cmds.clear();
+ // Reset CB state (note that createInfo is not cleared)
+ pCB->commandBuffer = cb;
+ memset(&pCB->beginInfo, 0, sizeof(VkCommandBufferBeginInfo));
+ memset(&pCB->inheritanceInfo, 0, sizeof(VkCommandBufferInheritanceInfo));
+ pCB->numCmds = 0;
+ memset(pCB->drawCount, 0, NUM_DRAW_TYPES * sizeof(uint64_t));
+ pCB->state = CB_NEW;
+ pCB->submitCount = 0;
+ pCB->status = 0;
+ pCB->viewports.clear();
+ pCB->scissors.clear();
+ for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
+ // Before clearing lastBoundState, remove any CB bindings from all uniqueBoundSets
+ for (auto set : pCB->lastBound[i].uniqueBoundSets) {
+ auto set_node = my_data->setMap.find(set);
+ if (set_node != my_data->setMap.end()) {
+ set_node->second->boundCmdBuffers.erase(pCB->commandBuffer);
+ }
+ }
+ pCB->lastBound[i].reset();
+ }
+ memset(&pCB->activeRenderPassBeginInfo, 0, sizeof(pCB->activeRenderPassBeginInfo));
+ pCB->activeRenderPass = 0;
+ pCB->activeSubpassContents = VK_SUBPASS_CONTENTS_INLINE;
+ pCB->activeSubpass = 0;
+ pCB->framebuffer = 0;
+ pCB->fenceId = 0;
+ pCB->lastSubmittedFence = VK_NULL_HANDLE;
+ pCB->lastSubmittedQueue = VK_NULL_HANDLE;
+ pCB->destroyedSets.clear();
+ pCB->updatedSets.clear();
+ pCB->destroyedFramebuffers.clear();
+ pCB->waitedEvents.clear();
+ pCB->semaphores.clear();
+ pCB->events.clear();
+ pCB->waitedEventsBeforeQueryReset.clear();
+ pCB->queryToStateMap.clear();
+ pCB->activeQueries.clear();
+ pCB->startedQueries.clear();
+ pCB->imageLayoutMap.clear();
+ pCB->eventToStageMap.clear();
+ pCB->drawData.clear();
+ pCB->currentDrawData.buffers.clear();
+ pCB->primaryCommandBuffer = VK_NULL_HANDLE;
+ pCB->secondaryCommandBuffers.clear();
+ pCB->activeDescriptorSets.clear();
+ pCB->validate_functions.clear();
+ pCB->pMemObjList.clear();
+ pCB->eventUpdates.clear();
+ }
+}
+
+// Set PSO-related status bits for CB, including dynamic state set via PSO
+static void set_cb_pso_status(GLOBAL_CB_NODE *pCB, const PIPELINE_NODE *pPipe) {
+ for (auto const & att : pPipe->attachments) {
+ if (0 != att.colorWriteMask) {
+ pCB->status |= CBSTATUS_COLOR_BLEND_WRITE_ENABLE;
+ }
+ }
+ if (pPipe->dsStateCI.depthWriteEnable) {
+ pCB->status |= CBSTATUS_DEPTH_WRITE_ENABLE;
+ }
+ if (pPipe->dsStateCI.stencilTestEnable) {
+ pCB->status |= CBSTATUS_STENCIL_TEST_ENABLE;
+ }
+ // Account for any dynamic state not set via this PSO
+ if (!pPipe->dynStateCI.dynamicStateCount) { // All state is static
+ pCB->status = CBSTATUS_ALL;
+ } else {
+ // First consider all state on
+ // Then unset any state that's noted as dynamic in PSO
+ // Finally OR that into CB statemask
+ CBStatusFlags psoDynStateMask = CBSTATUS_ALL;
+ for (uint32_t i = 0; i < pPipe->dynStateCI.dynamicStateCount; i++) {
+ switch (pPipe->dynStateCI.pDynamicStates[i]) {
+ case VK_DYNAMIC_STATE_VIEWPORT:
+ psoDynStateMask &= ~CBSTATUS_VIEWPORT_SET;
+ break;
+ case VK_DYNAMIC_STATE_SCISSOR:
+ psoDynStateMask &= ~CBSTATUS_SCISSOR_SET;
+ break;
+ case VK_DYNAMIC_STATE_LINE_WIDTH:
+ psoDynStateMask &= ~CBSTATUS_LINE_WIDTH_SET;
+ break;
+ case VK_DYNAMIC_STATE_DEPTH_BIAS:
+ psoDynStateMask &= ~CBSTATUS_DEPTH_BIAS_SET;
+ break;
+ case VK_DYNAMIC_STATE_BLEND_CONSTANTS:
+ psoDynStateMask &= ~CBSTATUS_BLEND_SET;
+ break;
+ case VK_DYNAMIC_STATE_DEPTH_BOUNDS:
+ psoDynStateMask &= ~CBSTATUS_DEPTH_BOUNDS_SET;
+ break;
+ case VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK:
+ psoDynStateMask &= ~CBSTATUS_STENCIL_READ_MASK_SET;
+ break;
+ case VK_DYNAMIC_STATE_STENCIL_WRITE_MASK:
+ psoDynStateMask &= ~CBSTATUS_STENCIL_WRITE_MASK_SET;
+ break;
+ case VK_DYNAMIC_STATE_STENCIL_REFERENCE:
+ psoDynStateMask &= ~CBSTATUS_STENCIL_REFERENCE_SET;
+ break;
+ default:
+ // TODO : Flag error here
+ break;
+ }
+ }
+ pCB->status |= psoDynStateMask;
+ }
+}
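set_cb_pso_status above uses a mask trick: start from an "all state satisfied" mask, clear the bit for each state the pipeline declares dynamic, then OR the remainder into the command buffer status (dynamic state must instead be set by vkCmdSet* calls). The mask computation in isolation, with hypothetical status bits standing in for the CBSTATUS_* flags:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical status bits standing in for the CBSTATUS_* flags above.
constexpr uint32_t STATUS_VIEWPORT = 0x1;
constexpr uint32_t STATUS_SCISSOR = 0x2;
constexpr uint32_t STATUS_LINE_WIDTH = 0x4;
constexpr uint32_t STATUS_ALL = 0x7;

// Compute the status bits a bound pipeline contributes: everything it sets
// statically, i.e. STATUS_ALL minus the bits it declares dynamic.
uint32_t pso_status_contribution(const uint32_t *dynamicBits, size_t count) {
    uint32_t mask = STATUS_ALL;
    for (size_t i = 0; i < count; ++i)
        mask &= ~dynamicBits[i]; // this state must come from a vkCmdSet* call
    return mask;
}
```

With no dynamic state at all, the contribution is STATUS_ALL, matching the early-out branch in the function above.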
+
+// Print the last bound Gfx Pipeline
+static VkBool32 printPipeline(layer_data *my_data, const VkCommandBuffer cb) {
+ VkBool32 skipCall = VK_FALSE;
+ GLOBAL_CB_NODE *pCB = getCBNode(my_data, cb);
+ if (pCB) {
+ PIPELINE_NODE *pPipeTrav = getPipeline(my_data, pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline);
+ if (!pPipeTrav) {
+ // nothing to print
+ } else {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_NONE, "DS", "%s",
+ vk_print_vkgraphicspipelinecreateinfo(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str());
+ }
+ }
+ return skipCall;
+}
+
+static void printCB(layer_data *my_data, const VkCommandBuffer cb) {
+ GLOBAL_CB_NODE *pCB = getCBNode(my_data, cb);
+    if (pCB && !pCB->cmds.empty()) {
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_NONE, "DS", "Cmds in CB %p", (void *)cb);
+        const vector<CMD_NODE> &cmds = pCB->cmds;
+ for (auto ii = cmds.begin(); ii != cmds.end(); ++ii) {
+ // TODO : Need to pass cb as srcObj here
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_NONE, "DS", " CMD#%" PRIu64 ": %s", (*ii).cmdNumber, cmdTypeToString((*ii).type).c_str());
+ }
+    }
+}
+
+static VkBool32 synchAndPrintDSConfig(layer_data *my_data, const VkCommandBuffer cb) {
+ VkBool32 skipCall = VK_FALSE;
+ if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
+ return skipCall;
+ }
+ skipCall |= printPipeline(my_data, cb);
+ return skipCall;
+}
+
+// Flags a validation error if the associated call is made inside an active render pass.
+// The API routine named by apiName may ONLY be called outside a render pass.
+static VkBool32 insideRenderPass(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const char *apiName) {
+ VkBool32 inside = VK_FALSE;
+ if (pCB->activeRenderPass) {
+ inside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
+ "%s: It is invalid to issue this call inside an active render pass (%#" PRIxLEAST64 ")", apiName,
+ (uint64_t)pCB->activeRenderPass);
+ }
+ return inside;
+}
+
+// Flags a validation error if the associated call is made outside an active render pass.
+// The API routine named by apiName may ONLY be called inside a render pass.
+static VkBool32 outsideRenderPass(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const char *apiName) {
+ VkBool32 outside = VK_FALSE;
+ if (((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_PRIMARY) && (!pCB->activeRenderPass)) ||
+ ((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_SECONDARY) && (!pCB->activeRenderPass) &&
+ !(pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT))) {
+ outside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_NO_ACTIVE_RENDERPASS, "DS",
+ "%s: This call must be issued inside an active render pass.", apiName);
+ }
+ return outside;
+}
+
+static void init_core_validation(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_core_validation");
+
+ if (!globalLockInitialized) {
+ loader_platform_thread_create_mutex(&globalLock);
+ globalLockInitialized = 1;
+ }
+#if MTMERGESOURCE
+ // Zero out memory property data
+ memset(&memProps, 0, sizeof(VkPhysicalDeviceMemoryProperties));
+#endif
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
+ VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+
+ assert(chain_info->u.pLayerInfo);
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ if (fpCreateInstance == NULL)
+ return VK_ERROR_INITIALIZATION_FAILED;
+
+ // Advance the link info for the next element on the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
+ if (result != VK_SUCCESS)
+ return result;
+
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
+ my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
+ layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
+
+ my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
+
+ init_core_validation(my_data, pAllocator);
+
+ ValidateLayerOrdering(*pCreateInfo);
+
+ return result;
+}
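+// Note on the sequence above: this is the standard layer-chaining pattern.
+// Fetch the next layer's GetInstanceProcAddr from the VK_LAYER_LINK_INFO
+// chain, advance pLayerInfo so down-chain layers see their own link, call
+// down the chain to actually create the instance, and only then build this
+// layer's dispatch table against the newly created instance.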
+
+/* hook DestroyInstance to remove tableInstanceMap entry */
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
+ // TODOSC : Shouldn't need any customization here
+ dispatch_key key = get_dispatch_key(instance);
+ // TBD: Need any locking this early, in case this function is called at the
+ // same time by more than one thread?
+ layer_data *my_data = get_my_data_ptr(key, layer_data_map);
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
+ pTable->DestroyInstance(instance, pAllocator);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Clean up logging callback, if any
+ while (my_data->logging_callback.size() > 0) {
+ VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
+ layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
+ my_data->logging_callback.pop_back();
+ }
+
+ layer_debug_report_destroy_instance(my_data->report_data);
+ delete my_data->instance_dispatch_table;
+ layer_data_map.erase(key);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (layer_data_map.empty()) {
+ // Release mutex when destroying last instance.
+ loader_platform_thread_delete_mutex(&globalLock);
+ globalLockInitialized = 0;
+ }
+}
+
+static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
+ uint32_t i;
+ // TBD: Need any locking, in case this function is called at the same time
+ // by more than one thread?
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_extensions.wsi_enabled = false;
+
+ VkLayerDispatchTable *pDisp = dev_data->device_dispatch_table;
+ PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
+ pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
+ pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
+ pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
+ pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
+ pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");
+
+ for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
+ if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
+ dev_data->device_extensions.wsi_enabled = true;
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
+ VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+
+ assert(chain_info->u.pLayerInfo);
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ if (fpCreateDevice == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Advance the link info for the next element on the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
+ if (result != VK_SUCCESS) {
+ return result;
+ }
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
+ layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
+
+ // Setup device dispatch table
+ my_device_data->device_dispatch_table = new VkLayerDispatchTable;
+ layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
+
+ my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
+ createDeviceRegisterExtensions(pCreateInfo, *pDevice);
+ // Get physical device limits for this device
+ my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &(my_device_data->physDevProperties.properties));
+ uint32_t count;
+ my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
+ my_device_data->physDevProperties.queue_family_properties.resize(count);
+    my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(
+        gpu, &count, my_device_data->physDevProperties.queue_family_properties.data());
+ // TODO: device limits should make sure these are compatible
+ if (pCreateInfo->pEnabledFeatures) {
+ my_device_data->physDevProperties.features = *pCreateInfo->pEnabledFeatures;
+ } else {
+ memset(&my_device_data->physDevProperties.features, 0, sizeof(VkPhysicalDeviceFeatures));
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+ ValidateLayerOrdering(*pCreateInfo);
+
+ return result;
+}
+
+// Forward declaration
+static void deleteRenderPasses(layer_data *);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
+ // TODOSC : Shouldn't need any customization here
+ dispatch_key key = get_dispatch_key(device);
+ layer_data *dev_data = get_my_data_ptr(key, layer_data_map);
+ // Free all the memory
+ loader_platform_thread_lock_mutex(&globalLock);
+ deletePipelines(dev_data);
+ deleteRenderPasses(dev_data);
+ deleteCommandBuffers(dev_data);
+ deletePools(dev_data);
+ deleteLayouts(dev_data);
+ dev_data->imageViewMap.clear();
+ dev_data->imageMap.clear();
+ dev_data->imageSubresourceMap.clear();
+ dev_data->imageLayoutMap.clear();
+ dev_data->bufferViewMap.clear();
+ dev_data->bufferMap.clear();
+ loader_platform_thread_unlock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkBool32 skipCall = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ (uint64_t)device, __LINE__, MEMTRACK_NONE, "MEM", "Printing List details prior to vkDestroyDevice()");
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ (uint64_t)device, __LINE__, MEMTRACK_NONE, "MEM", "================================================");
+ print_mem_list(dev_data, device);
+ printCBList(dev_data, device);
+ delete_cmd_buf_info_list(dev_data);
+ // Report any memory leaks
+ DEVICE_MEM_INFO *pInfo = NULL;
+    if (!dev_data->memObjMap.empty()) {
+ for (auto ii = dev_data->memObjMap.begin(); ii != dev_data->memObjMap.end(); ++ii) {
+ pInfo = &(*ii).second;
+ if (pInfo->allocInfo.allocationSize != 0) {
+ // Valid Usage: All child objects created on device must have been destroyed prior to destroying device
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pInfo->mem, __LINE__, MEMTRACK_MEMORY_LEAK,
+ "MEM", "Mem Object %" PRIu64 " has not been freed. You should clean up this memory by calling "
+ "vkFreeMemory(%" PRIu64 ") prior to vkDestroyDevice().",
+ (uint64_t)(pInfo->mem), (uint64_t)(pInfo->mem));
+ }
+ }
+ }
+ // Queues persist until device is destroyed
+ delete_queue_info_list(dev_data);
+ layer_debug_report_destroy_device(device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+#if DISPATCH_MAP_DEBUG
+ fprintf(stderr, "Device: %p, key: %p\n", device, key);
+#endif
+ VkLayerDispatchTable *pDisp = dev_data->device_dispatch_table;
+ if (VK_FALSE == skipCall) {
+ pDisp->DestroyDevice(device, pAllocator);
+ }
+#else
+ dev_data->device_dispatch_table->DestroyDevice(device, pAllocator);
+#endif
+ delete dev_data->device_dispatch_table;
+ layer_data_map.erase(key);
+}
+
+#if MTMERGESOURCE
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ VkLayerInstanceDispatchTable *pInstanceTable = my_data->instance_dispatch_table;
+ pInstanceTable->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
+ memcpy(&memProps, pMemoryProperties, sizeof(VkPhysicalDeviceMemoryProperties));
+}
+#endif
+
+static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
+ return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(cv_global_layers), cv_global_layers, pCount, pProperties);
+}
+
+// TODO: Why does this exist - can we just use global?
+static const VkLayerProperties cv_device_layers[] = {{
+ "VK_LAYER_LUNARG_core_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
+}};
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
+ if (pLayerName == NULL) {
+ dispatch_key key = get_dispatch_key(physicalDevice);
+ layer_data *my_data = get_my_data_ptr(key, layer_data_map);
+ return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
+ } else {
+ return util_GetExtensionProperties(0, NULL, pCount, pProperties);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
+ /* draw_state physical device layers are the same as global */
+ return util_GetLayerProperties(ARRAY_SIZE(cv_device_layers), cv_device_layers, pCount, pProperties);
+}
+
+// Validate that the initial layout recorded in the command buffer for each
+// image matches that image's current global layout
+VkBool32 ValidateCmdBufImageLayouts(VkCommandBuffer cmdBuffer) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ for (auto cb_image_data : pCB->imageLayoutMap) {
+ VkImageLayout imageLayout;
+ if (!FindLayout(dev_data, cb_image_data.first, imageLayout)) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Cannot submit cmd buffer using deleted image %" PRIu64 ".",
+ reinterpret_cast<const uint64_t &>(cb_image_data.first));
+ } else {
+ if (cb_image_data.second.initialLayout == VK_IMAGE_LAYOUT_UNDEFINED) {
+ // TODO: Set memory invalid which is in mem_tracker currently
+ } else if (imageLayout != cb_image_data.second.initialLayout) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT,
+ "DS", "Cannot submit cmd buffer using image with layout %s when "
+ "first use is %s.",
+ string_VkImageLayout(imageLayout), string_VkImageLayout(cb_image_data.second.initialLayout));
+ }
+ SetLayout(dev_data, cb_image_data.first, cb_image_data.second.layout);
+ }
+ }
+ return skip_call;
+}
+// Track which resources are in-flight by atomically incrementing their "in_use" count
+VkBool32 validateAndIncrementResources(layer_data *my_data, GLOBAL_CB_NODE *pCB) {
+ VkBool32 skip_call = VK_FALSE;
+ for (auto drawDataElement : pCB->drawData) {
+ for (auto buffer : drawDataElement.buffers) {
+ auto buffer_data = my_data->bufferMap.find(buffer);
+ if (buffer_data == my_data->bufferMap.end()) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
+ (uint64_t)(buffer), __LINE__, DRAWSTATE_INVALID_BUFFER, "DS",
+ "Cannot submit cmd buffer using deleted buffer %" PRIu64 ".", (uint64_t)(buffer));
+ } else {
+ buffer_data->second.in_use.fetch_add(1);
+ }
+ }
+ }
+ for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
+ for (auto set : pCB->lastBound[i].uniqueBoundSets) {
+ auto setNode = my_data->setMap.find(set);
+ if (setNode == my_data->setMap.end()) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)(set), __LINE__, DRAWSTATE_INVALID_DESCRIPTOR_SET, "DS",
+ "Cannot submit cmd buffer using deleted descriptor set %" PRIu64 ".", (uint64_t)(set));
+ } else {
+ setNode->second->in_use.fetch_add(1);
+ }
+ }
+ }
+ for (auto semaphore : pCB->semaphores) {
+ auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
+ if (semaphoreNode == my_data->semaphoreMap.end()) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ reinterpret_cast<uint64_t &>(semaphore), __LINE__, DRAWSTATE_INVALID_SEMAPHORE, "DS",
+ "Cannot submit cmd buffer using deleted semaphore %" PRIu64 ".", reinterpret_cast<uint64_t &>(semaphore));
+ } else {
+ semaphoreNode->second.in_use.fetch_add(1);
+ }
+ }
+ for (auto event : pCB->events) {
+ auto eventNode = my_data->eventMap.find(event);
+ if (eventNode == my_data->eventMap.end()) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ reinterpret_cast<uint64_t &>(event), __LINE__, DRAWSTATE_INVALID_EVENT, "DS",
+ "Cannot submit cmd buffer using deleted event %" PRIu64 ".", reinterpret_cast<uint64_t &>(event));
+ } else {
+ eventNode->second.in_use.fetch_add(1);
+ }
+ }
+ return skip_call;
+}
+
+void decrementResources(layer_data *my_data, VkCommandBuffer cmdBuffer) {
+ GLOBAL_CB_NODE *pCB = getCBNode(my_data, cmdBuffer);
+ for (auto drawDataElement : pCB->drawData) {
+ for (auto buffer : drawDataElement.buffers) {
+ auto buffer_data = my_data->bufferMap.find(buffer);
+ if (buffer_data != my_data->bufferMap.end()) {
+ buffer_data->second.in_use.fetch_sub(1);
+ }
+ }
+ }
+ for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
+ for (auto set : pCB->lastBound[i].uniqueBoundSets) {
+ auto setNode = my_data->setMap.find(set);
+ if (setNode != my_data->setMap.end()) {
+ setNode->second->in_use.fetch_sub(1);
+ }
+ }
+ }
+ for (auto semaphore : pCB->semaphores) {
+ auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
+ if (semaphoreNode != my_data->semaphoreMap.end()) {
+ semaphoreNode->second.in_use.fetch_sub(1);
+ }
+ }
+ for (auto event : pCB->events) {
+ auto eventNode = my_data->eventMap.find(event);
+ if (eventNode != my_data->eventMap.end()) {
+ eventNode->second.in_use.fetch_sub(1);
+ }
+ }
+ for (auto queryStatePair : pCB->queryToStateMap) {
+ my_data->queryToStateMap[queryStatePair.first] = queryStatePair.second;
+ }
+ for (auto eventStagePair : pCB->eventToStageMap) {
+ my_data->eventMap[eventStagePair.first].stageMask = eventStagePair.second;
+ }
+}
+
+void decrementResources(layer_data *my_data, uint32_t fenceCount, const VkFence *pFences) {
+ for (uint32_t i = 0; i < fenceCount; ++i) {
+ auto fence_data = my_data->fenceMap.find(pFences[i]);
+ if (fence_data == my_data->fenceMap.end() || !fence_data->second.needsSignaled)
+ return;
+ fence_data->second.needsSignaled = false;
+ fence_data->second.in_use.fetch_sub(1);
+ decrementResources(my_data, fence_data->second.priorFences.size(), fence_data->second.priorFences.data());
+ for (auto cmdBuffer : fence_data->second.cmdBuffers) {
+ decrementResources(my_data, cmdBuffer);
+ }
+ }
+}
+
+void decrementResources(layer_data *my_data, VkQueue queue) {
+ auto queue_data = my_data->queueMap.find(queue);
+ if (queue_data != my_data->queueMap.end()) {
+ for (auto cmdBuffer : queue_data->second.untrackedCmdBuffers) {
+ decrementResources(my_data, cmdBuffer);
+ }
+ queue_data->second.untrackedCmdBuffers.clear();
+ decrementResources(my_data, queue_data->second.lastFences.size(), queue_data->second.lastFences.data());
+ }
+}
+
+void updateTrackedCommandBuffers(layer_data *dev_data, VkQueue queue, VkQueue other_queue, VkFence fence) {
+ if (queue == other_queue) {
+ return;
+ }
+ auto queue_data = dev_data->queueMap.find(queue);
+ auto other_queue_data = dev_data->queueMap.find(other_queue);
+ if (queue_data == dev_data->queueMap.end() || other_queue_data == dev_data->queueMap.end()) {
+ return;
+ }
+    // Use a distinct name so the loop variable does not shadow the fence parameter
+    for (auto other_fence : other_queue_data->second.lastFences) {
+        queue_data->second.lastFences.push_back(other_fence);
+    }
+ if (fence != VK_NULL_HANDLE) {
+ auto fence_data = dev_data->fenceMap.find(fence);
+ if (fence_data == dev_data->fenceMap.end()) {
+ return;
+ }
+ for (auto cmdbuffer : other_queue_data->second.untrackedCmdBuffers) {
+ fence_data->second.cmdBuffers.push_back(cmdbuffer);
+ }
+ other_queue_data->second.untrackedCmdBuffers.clear();
+ } else {
+ for (auto cmdbuffer : other_queue_data->second.untrackedCmdBuffers) {
+ queue_data->second.untrackedCmdBuffers.push_back(cmdbuffer);
+ }
+ other_queue_data->second.untrackedCmdBuffers.clear();
+ }
+ for (auto eventStagePair : other_queue_data->second.eventToStageMap) {
+ queue_data->second.eventToStageMap[eventStagePair.first] = eventStagePair.second;
+ }
+}
+
+void trackCommandBuffers(layer_data *my_data, VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
+ auto queue_data = my_data->queueMap.find(queue);
+ if (fence != VK_NULL_HANDLE) {
+ vector<VkFence> prior_fences;
+ auto fence_data = my_data->fenceMap.find(fence);
+ if (fence_data == my_data->fenceMap.end()) {
+ return;
+ }
+ if (queue_data != my_data->queueMap.end()) {
+ prior_fences = queue_data->second.lastFences;
+ queue_data->second.lastFences.clear();
+ queue_data->second.lastFences.push_back(fence);
+ for (auto cmdbuffer : queue_data->second.untrackedCmdBuffers) {
+ fence_data->second.cmdBuffers.push_back(cmdbuffer);
+ }
+ queue_data->second.untrackedCmdBuffers.clear();
+ }
+ fence_data->second.cmdBuffers.clear();
+ fence_data->second.priorFences = prior_fences;
+ fence_data->second.needsSignaled = true;
+ fence_data->second.queue = queue;
+ fence_data->second.in_use.fetch_add(1);
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
+ for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
+ fence_data->second.cmdBuffers.push_back(secondaryCmdBuffer);
+ }
+ fence_data->second.cmdBuffers.push_back(submit->pCommandBuffers[i]);
+ }
+ }
+ } else {
+ if (queue_data != my_data->queueMap.end()) {
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
+ for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
+ queue_data->second.untrackedCmdBuffers.push_back(secondaryCmdBuffer);
+ }
+ queue_data->second.untrackedCmdBuffers.push_back(submit->pCommandBuffers[i]);
+ }
+ }
+ }
+ }
+ if (queue_data != my_data->queueMap.end()) {
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
+ // Add cmdBuffers to both the global set and queue set
+ for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
+ my_data->globalInFlightCmdBuffers.insert(secondaryCmdBuffer);
+ queue_data->second.inFlightCmdBuffers.insert(secondaryCmdBuffer);
+ }
+ my_data->globalInFlightCmdBuffers.insert(submit->pCommandBuffers[i]);
+ queue_data->second.inFlightCmdBuffers.insert(submit->pCommandBuffers[i]);
+ }
+ }
+ }
+}
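+// Summary of the tracking above: when a fence is supplied, this submit's
+// command buffers (primaries plus their secondaries) and any previously
+// untracked buffers on this queue are attached to the fence, and the queue's
+// prior fences are recorded so they can be retired transitively when the
+// fence signals. With no fence, the buffers accumulate as "untracked" until a
+// later fenced submit adopts them. In both cases every submitted buffer is
+// added to the global and per-queue in-flight sets.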
+
+bool validateCommandBufferSimultaneousUse(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
+ bool skip_call = false;
+ if (dev_data->globalInFlightCmdBuffers.count(pCB->commandBuffer) &&
+ !(pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_INVALID_FENCE, "DS", "Command Buffer %#" PRIx64 " is already in use and is not marked "
+ "for simultaneous use.",
+ reinterpret_cast<uint64_t>(pCB->commandBuffer));
+ }
+ return skip_call;
+}
+
+static bool validateCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
+ bool skipCall = false;
+ // Validate that cmd buffers have been updated
+ if (CB_RECORDED != pCB->state) {
+ if (CB_INVALID == pCB->state) {
+ // Inform app of reason CB invalid
+ bool causeReported = false;
+ if (!pCB->destroyedSets.empty()) {
+ std::stringstream set_string;
+ for (auto set : pCB->destroyedSets)
+ set_string << " " << set;
+
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "You are submitting command buffer %#" PRIxLEAST64
+ " that is invalid because it had the following bound descriptor set(s) destroyed: %s",
+ (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
+ causeReported = true;
+ }
+ if (!pCB->updatedSets.empty()) {
+ std::stringstream set_string;
+ for (auto set : pCB->updatedSets)
+ set_string << " " << set;
+
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "You are submitting command buffer %#" PRIxLEAST64
+ " that is invalid because it had the following bound descriptor set(s) updated: %s",
+ (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
+ causeReported = true;
+ }
+ if (!pCB->destroyedFramebuffers.empty()) {
+ std::stringstream fb_string;
+ for (auto fb : pCB->destroyedFramebuffers)
+ fb_string << " " << fb;
+
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t &>(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "You are submitting command buffer %#" PRIxLEAST64 " that is invalid because it had the following "
+ "referenced framebuffers destroyed: %s",
+ reinterpret_cast<uint64_t &>(pCB->commandBuffer), fb_string.str().c_str());
+ causeReported = true;
+ }
+            // TODO : This is defensive programming to make sure an error is
+            // flagged if we hit this INVALID cmd buffer case and none of the
+            // above cases are hit. As the number of INVALID cases grows, this
+            // code should be updated to seamlessly handle all the cases.
+ if (!causeReported) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t &>(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "You are submitting command buffer %#" PRIxLEAST64 " that is invalid due to an unknown cause. Validation "
+ "should "
+ "be improved to report the exact cause.",
+ reinterpret_cast<uint64_t &>(pCB->commandBuffer));
+ }
+ } else { // Flag error for using CB w/o vkEndCommandBuffer() called
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_NO_END_COMMAND_BUFFER, "DS",
+ "You must call vkEndCommandBuffer() on CB %#" PRIxLEAST64 " before this call to vkQueueSubmit()!",
+ (uint64_t)(pCB->commandBuffer));
+ }
+ }
+ return skipCall;
+}
+
+static VkBool32 validatePrimaryCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
+ // Track in-use for resources off of primary and any secondary CBs
+ VkBool32 skipCall = validateAndIncrementResources(dev_data, pCB);
+ if (!pCB->secondaryCommandBuffers.empty()) {
+ for (auto secondaryCmdBuffer : pCB->secondaryCommandBuffers) {
+ skipCall |= validateAndIncrementResources(dev_data, dev_data->commandBufferMap[secondaryCmdBuffer]);
+ GLOBAL_CB_NODE *pSubCB = getCBNode(dev_data, secondaryCmdBuffer);
+ if (pSubCB->primaryCommandBuffer != pCB->commandBuffer) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
+ "CB %#" PRIxLEAST64 " was submitted with secondary buffer %#" PRIxLEAST64
+ " but that buffer has subsequently been bound to "
+ "primary cmd buffer %#" PRIxLEAST64 ".",
+ reinterpret_cast<uint64_t>(pCB->commandBuffer), reinterpret_cast<uint64_t>(secondaryCmdBuffer),
+ reinterpret_cast<uint64_t>(pSubCB->primaryCommandBuffer));
+ }
+ }
+ }
+ // TODO : Verify if this also needs to be checked for secondary command
+ // buffers. If so, this block of code can move to
+ // validateCommandBufferState() function. vulkan GL106 filed to clarify
+ if ((pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT) && (pCB->submitCount > 1)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
+ "CB %#" PRIxLEAST64 " was begun w/ VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT "
+ "set, but has been submitted %#" PRIxLEAST64 " times.",
+ (uint64_t)(pCB->commandBuffer), pCB->submitCount);
+ }
+ skipCall |= validateCommandBufferState(dev_data, pCB);
+ // If USAGE_SIMULTANEOUS_USE_BIT not set then CB cannot already be executing
+ // on device
+ skipCall |= validateCommandBufferSimultaneousUse(dev_data, pCB);
+ return skipCall;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
+ VkBool32 skipCall = VK_FALSE;
+ GLOBAL_CB_NODE *pCBNode = NULL;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // TODO : Need to track fence and clear mem references when fence clears
+ // MTMTODO : Merge this code with code below to avoid duplicating efforts
+ uint64_t fenceId = 0;
+ skipCall = add_fence_info(dev_data, fence, queue, &fenceId);
+
+ print_mem_list(dev_data, queue);
+ printCBList(dev_data, queue);
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ for (uint32_t i = 0; i < submit->commandBufferCount; i++) {
+ pCBNode = getCBNode(dev_data, submit->pCommandBuffers[i]);
+ if (pCBNode) {
+ pCBNode->fenceId = fenceId;
+ pCBNode->lastSubmittedFence = fence;
+ pCBNode->lastSubmittedQueue = queue;
+ for (auto &function : pCBNode->validate_functions) {
+ skipCall |= function();
+ }
+ for (auto &function : pCBNode->eventUpdates) {
+ skipCall |= static_cast<VkBool32>(function(queue));
+ }
+ }
+ }
+
+ for (uint32_t i = 0; i < submit->waitSemaphoreCount; i++) {
+ VkSemaphore sem = submit->pWaitSemaphores[i];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_SIGNALLED) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
+ (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE",
+ "vkQueueSubmit: Semaphore must be in signaled state before passing to pWaitSemaphores");
+ }
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_WAIT;
+ }
+ }
+ for (uint32_t i = 0; i < submit->signalSemaphoreCount; i++) {
+ VkSemaphore sem = submit->pSignalSemaphores[i];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_UNSET) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)sem, __LINE__, MEMTRACK_NONE,
+ "SEMAPHORE", "vkQueueSubmit: Semaphore must not be currently signaled or in a wait state");
+ }
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
+ }
+ }
+ }
+#endif
+ // First verify that fence is not in use
+ if ((fence != VK_NULL_HANDLE) && (submitCount != 0) && dev_data->fenceMap[fence].in_use.load()) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ (uint64_t)(fence), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
+ "Fence %#" PRIx64 " is already in use by another submission.", (uint64_t)(fence));
+ }
+ // Now verify each individual submit
+ std::unordered_set<VkQueue> processed_other_queues;
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ vector<VkSemaphore> semaphoreList;
+ for (uint32_t i = 0; i < submit->waitSemaphoreCount; ++i) {
+ const VkSemaphore &semaphore = submit->pWaitSemaphores[i];
+ semaphoreList.push_back(semaphore);
+ if (dev_data->semaphoreMap[semaphore].signaled) {
+ dev_data->semaphoreMap[semaphore].signaled = 0;
+ } else {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS,
+ "DS", "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
+ reinterpret_cast<uint64_t &>(queue), reinterpret_cast<const uint64_t &>(semaphore));
+ }
+ const VkQueue &other_queue = dev_data->semaphoreMap[semaphore].queue;
+ if (other_queue != VK_NULL_HANDLE && !processed_other_queues.count(other_queue)) {
+ updateTrackedCommandBuffers(dev_data, queue, other_queue, fence);
+ processed_other_queues.insert(other_queue);
+ }
+ }
+ for (uint32_t i = 0; i < submit->signalSemaphoreCount; ++i) {
+ const VkSemaphore &semaphore = submit->pSignalSemaphores[i];
+ semaphoreList.push_back(semaphore);
+ if (dev_data->semaphoreMap[semaphore].signaled) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS,
+ "DS", "Queue %#" PRIx64 " is signaling semaphore %#" PRIx64
+ " that has already been signaled but not waited on by queue %#" PRIx64 ".",
+ reinterpret_cast<uint64_t &>(queue), reinterpret_cast<const uint64_t &>(semaphore),
+ reinterpret_cast<uint64_t &>(dev_data->semaphoreMap[semaphore].queue));
+ } else {
+ dev_data->semaphoreMap[semaphore].signaled = 1;
+ dev_data->semaphoreMap[semaphore].queue = queue;
+ }
+ }
+ for (uint32_t i = 0; i < submit->commandBufferCount; i++) {
+ skipCall |= ValidateCmdBufImageLayouts(submit->pCommandBuffers[i]);
+ pCBNode = getCBNode(dev_data, submit->pCommandBuffers[i]);
+ pCBNode->semaphores = semaphoreList;
+ pCBNode->submitCount++; // increment submit count
+ skipCall |= validatePrimaryCommandBufferState(dev_data, pCBNode);
+ }
+ }
+ // Update cmdBuffer-related data structs and mark fence in-use
+ trackCommandBuffers(dev_data, queue, submitCount, pSubmits, fence);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ result = dev_data->device_dispatch_table->QueueSubmit(queue, submitCount, pSubmits, fence);
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
+ const VkSubmitInfo *submit = &pSubmits[submit_idx];
+ for (uint32_t i = 0; i < submit->waitSemaphoreCount; i++) {
+ VkSemaphore sem = submit->pWaitSemaphores[i];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET;
+ }
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ return result;
+}
+
+#if MTMERGESOURCE
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(VkDevice device, const VkMemoryAllocateInfo *pAllocateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDeviceMemory *pMemory) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = my_data->device_dispatch_table->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory);
+ // TODO : Track allocations and overall size here
+ loader_platform_thread_lock_mutex(&globalLock);
+ add_mem_obj_info(my_data, device, *pMemory, pAllocateInfo);
+ print_mem_list(my_data, device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkFreeMemory(VkDevice device, VkDeviceMemory mem, const VkAllocationCallbacks *pAllocator) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ // From spec : A memory object is freed by calling vkFreeMemory() when it is no longer needed.
+ // Before freeing a memory object, an application must ensure the memory object is no longer
+ // in use by the device—for example by command buffers queued for execution. The memory need
+ // not yet be unbound from all images and buffers, but any further use of those images or
+ // buffers (on host or device) for anything other than destroying those objects will result in
+ // undefined behavior.
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ freeMemObjInfo(my_data, device, mem, VK_FALSE);
+ print_mem_list(my_data, device);
+ printCBList(my_data, device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ my_data->device_dispatch_table->FreeMemory(device, mem, pAllocator);
+}
+
+VkBool32 validateMemRange(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (size == 0) {
+ // TODO: a size of 0 is not listed as an invalid use in the spec, should it be?
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM",
+ "VkMapMemory: Attempting to map memory range of size zero");
+ }
+
+ auto mem_element = my_data->memObjMap.find(mem);
+ if (mem_element != my_data->memObjMap.end()) {
+ // It is an application error to call VkMapMemory on an object that is already mapped
+ if (mem_element->second.memRange.size != 0) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM",
+ "VkMapMemory: Attempting to map memory on an already-mapped object %#" PRIxLEAST64, (uint64_t)mem);
+ }
+
+ // Validate that offset + size is within object's allocationSize
+ if (size == VK_WHOLE_SIZE) {
+ if (offset >= mem_element->second.allocInfo.allocationSize) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP,
+ "MEM", "Mapping memory with VK_WHOLE_SIZE at offset %" PRIu64
+ ", but offset must be less than the total allocation size %" PRIu64, offset,
+ mem_element->second.allocInfo.allocationSize);
+ }
+ } else {
+ if ((offset + size) > mem_element->second.allocInfo.allocationSize) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP,
+ "MEM", "Mapping memory range from %" PRIu64 " to %" PRIu64 " exceeds total allocation size %" PRIu64, offset,
+ size + offset, mem_element->second.allocInfo.allocationSize);
+ }
+ }
+ }
+ return skipCall;
+}
+
+void storeMemRanges(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size) {
+ auto mem_element = my_data->memObjMap.find(mem);
+ if (mem_element != my_data->memObjMap.end()) {
+ MemRange new_range;
+ new_range.offset = offset;
+ new_range.size = size;
+ mem_element->second.memRange = new_range;
+ }
+}
+
+VkBool32 deleteMemRanges(layer_data *my_data, VkDeviceMemory mem) {
+ VkBool32 skipCall = VK_FALSE;
+ auto mem_element = my_data->memObjMap.find(mem);
+ if (mem_element != my_data->memObjMap.end()) {
+ if (!mem_element->second.memRange.size) {
+ // Valid Usage: memory must currently be mapped
+ skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM",
+ "Unmapping Memory without memory being mapped: mem obj %#" PRIxLEAST64, (uint64_t)mem);
+ }
+ mem_element->second.memRange.size = 0;
+ if (mem_element->second.pData) {
+ free(mem_element->second.pData);
+ mem_element->second.pData = 0;
+ }
+ }
+ return skipCall;
+}
+
+// Fill pattern for the shadow copy of non-coherent mapped memory; stray writes into
+// the surrounding guard bands can later be detected because they overwrite this value
+static char NoncoherentMemoryFillValue = 0xb;
+
+void initializeAndTrackMemory(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize size, void **ppData) {
+ auto mem_element = my_data->memObjMap.find(mem);
+ if (mem_element != my_data->memObjMap.end()) {
+ mem_element->second.pDriverData = *ppData;
+ uint32_t index = mem_element->second.allocInfo.memoryTypeIndex;
+ if (memProps.memoryTypes[index].propertyFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) {
+ mem_element->second.pData = 0;
+ } else {
+ if (size == VK_WHOLE_SIZE) {
+ size = mem_element->second.allocInfo.allocationSize;
+ }
+ size_t convSize = (size_t)(size);
+ // Allocate a double-sized shadow buffer so the mapped range is surrounded by
+ // guard bands of convSize/2 bytes on each side
+ mem_element->second.pData = malloc(2 * convSize);
+ memset(mem_element->second.pData, NoncoherentMemoryFillValue, 2 * convSize);
+ *ppData = static_cast<char *>(mem_element->second.pData) + (convSize / 2);
+ }
+ }
+}
+#endif
+// Note: This function assumes that the global lock is held by the calling
+// thread.
+VkBool32 cleanInFlightCmdBuffer(layer_data *my_data, VkCommandBuffer cmdBuffer) {
+ VkBool32 skip_call = VK_FALSE;
+ GLOBAL_CB_NODE *pCB = getCBNode(my_data, cmdBuffer);
+ if (pCB) {
+ for (auto queryEventsPair : pCB->waitedEventsBeforeQueryReset) {
+ for (auto event : queryEventsPair.second) {
+ if (my_data->eventMap[event].needsSignaled) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Cannot get query results on queryPool %" PRIu64
+ " with index %d which was guarded by unsignaled event %" PRIu64 ".",
+ (uint64_t)(queryEventsPair.first.pool), queryEventsPair.first.index, (uint64_t)(event));
+ }
+ }
+ }
+ }
+ return skip_call;
+}
+// Remove the given cmd_buffer from the global inFlight set.
+// Also, if the given queue is valid, remove the cmd_buffer from that queue's
+// inFlightCmdBuffers set. Finally, check all other queues and if the given
+// cmd_buffer is still in flight on another queue, add it back into the global set.
+// Note: This function assumes that the global lock is held by the calling
+// thread.
+static inline void removeInFlightCmdBuffer(layer_data *dev_data, VkCommandBuffer cmd_buffer, VkQueue queue) {
+ // Pull it off of global list initially, but if we find it in any other queue list, add it back in
+ dev_data->globalInFlightCmdBuffers.erase(cmd_buffer);
+ if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) {
+ dev_data->queueMap[queue].inFlightCmdBuffers.erase(cmd_buffer);
+ for (auto q : dev_data->queues) {
+ if ((q != queue) &&
+ (dev_data->queueMap[q].inFlightCmdBuffers.find(cmd_buffer) != dev_data->queueMap[q].inFlightCmdBuffers.end())) {
+ dev_data->globalInFlightCmdBuffers.insert(cmd_buffer);
+ break;
+ }
+ }
+ }
+}
+#if MTMERGESOURCE
+static inline bool verifyFenceStatus(VkDevice device, VkFence fence, const char *apiCall) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skipCall = false;
+ auto pFenceInfo = my_data->fenceMap.find(fence);
+ if (pFenceInfo != my_data->fenceMap.end()) {
+ if (pFenceInfo->second.firstTimeFlag != VK_TRUE) {
+ if (pFenceInfo->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT) {
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ (uint64_t)fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
+ "%s specified fence %#" PRIxLEAST64 " already in SIGNALED state.", apiCall, (uint64_t)fence);
+ }
+ if (!pFenceInfo->second.queue && !pFenceInfo->second.swapchain) { // Checking status of unsubmitted fence
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ reinterpret_cast<uint64_t &>(fence), __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
+ "%s called for fence %#" PRIxLEAST64 " which has not been submitted on a Queue or during "
+ "acquire next image.",
+ apiCall, reinterpret_cast<uint64_t &>(fence));
+ }
+ } else {
+ pFenceInfo->second.firstTimeFlag = VK_FALSE;
+ }
+ }
+ return skipCall;
+}
+#endif
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkWaitForFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences, VkBool32 waitAll, uint64_t timeout) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skip_call = VK_FALSE;
+#if MTMERGESOURCE
+ // Verify fence status of submitted fences
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < fenceCount; i++) {
+ skip_call |= verifyFenceStatus(device, pFences[i], "vkWaitForFences");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+#endif
+ VkResult result = dev_data->device_dispatch_table->WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+
+ if (result == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ // When we know that all fences are complete we can clean/remove their CBs
+ if (waitAll || fenceCount == 1) {
+ for (uint32_t i = 0; i < fenceCount; ++i) {
+#if MTMERGESOURCE
+ update_fence_tracking(dev_data, pFences[i]);
+#endif
+ VkQueue fence_queue = dev_data->fenceMap[pFences[i]].queue;
+ for (auto cmdBuffer : dev_data->fenceMap[pFences[i]].cmdBuffers) {
+ skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
+ removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue);
+ }
+ }
+ decrementResources(dev_data, fenceCount, pFences);
+ }
+ // NOTE : The alternate case, where only some fences have completed, is not handled
+ // here. In that case the app must call vkGetFenceStatus() to determine which fences
+ // completed, at which point we'll clean/remove their CBs.
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ if (VK_FALSE != skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(VkDevice device, VkFence fence) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool skipCall = false;
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ skipCall = verifyFenceStatus(device, fence, "vkGetFenceStatus");
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (skipCall)
+ return result;
+#endif
+ result = dev_data->device_dispatch_table->GetFenceStatus(device, fence);
+ VkBool32 skip_call = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+ if (result == VK_SUCCESS) {
+#if MTMERGESOURCE
+ update_fence_tracking(dev_data, fence);
+#endif
+ auto fence_queue = dev_data->fenceMap[fence].queue;
+ for (auto cmdBuffer : dev_data->fenceMap[fence].cmdBuffers) {
+ skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
+ removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue);
+ }
+ decrementResources(dev_data, 1, &fence);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE != skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->queues.push_back(*pQueue);
+ QUEUE_NODE *pQNode = &dev_data->queueMap[*pQueue];
+ pQNode->device = device;
+#if MTMERGESOURCE
+ pQNode->lastRetiredId = 0;
+ pQNode->lastSubmittedId = 0;
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(VkQueue queue) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ decrementResources(dev_data, queue);
+ VkBool32 skip_call = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Iterate over a local copy of the set, since removeInFlightCmdBuffer() below
+ // erases members from the queue's original set as we go
+ auto local_cb_set = dev_data->queueMap[queue].inFlightCmdBuffers;
+ for (auto cmdBuffer : local_cb_set) {
+ skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
+ removeInFlightCmdBuffer(dev_data, cmdBuffer, queue);
+ }
+ dev_data->queueMap[queue].inFlightCmdBuffers.clear();
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE != skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ VkResult result = dev_data->device_dispatch_table->QueueWaitIdle(queue);
+#if MTMERGESOURCE
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ retire_queue_fences(dev_data, queue);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+#endif
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(VkDevice device) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (auto queue : dev_data->queues) {
+ decrementResources(dev_data, queue);
+ if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) {
+ // Clear all of the queue inFlightCmdBuffers (global set cleared below)
+ dev_data->queueMap[queue].inFlightCmdBuffers.clear();
+ }
+ }
+ for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) {
+ skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
+ }
+ dev_data->globalInFlightCmdBuffers.clear();
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE != skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ VkResult result = dev_data->device_dispatch_table->DeviceWaitIdle(device);
+#if MTMERGESOURCE
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ retire_device_fences(dev_data, device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+#endif
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool skipCall = false;
+ loader_platform_thread_lock_mutex(&globalLock);
+ if (dev_data->fenceMap[fence].in_use.load()) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ (uint64_t)(fence), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
+ "Fence %#" PRIx64 " is in use by a command buffer.", (uint64_t)(fence));
+ }
+#if MTMERGESOURCE
+ delete_fence_info(dev_data, fence);
+ auto item = dev_data->fenceMap.find(fence);
+ if (item != dev_data->fenceMap.end()) {
+ dev_data->fenceMap.erase(item);
+ }
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!skipCall)
+ dev_data->device_dispatch_table->DestroyFence(device, fence, pAllocator);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_dispatch_table->DestroySemaphore(device, semaphore, pAllocator);
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto item = dev_data->semaphoreMap.find(semaphore);
+ if (item != dev_data->semaphoreMap.end()) {
+ if (item->second.in_use.load()) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
+ reinterpret_cast<uint64_t &>(semaphore), __LINE__, DRAWSTATE_INVALID_SEMAPHORE, "DS",
+ "Cannot delete semaphore %" PRIx64 " which is in use.", reinterpret_cast<uint64_t &>(semaphore));
+ }
+ dev_data->semaphoreMap.erase(semaphore);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool skip_call = false;
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto event_data = dev_data->eventMap.find(event);
+ if (event_data != dev_data->eventMap.end()) {
+ if (event_data->second.in_use.load()) {
+ skip_call |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT,
+ reinterpret_cast<uint64_t &>(event), __LINE__, DRAWSTATE_INVALID_EVENT, "DS",
+ "Cannot delete event %" PRIx64 " which is in use by a command buffer.", reinterpret_cast<uint64_t &>(event));
+ }
+ dev_data->eventMap.erase(event_data);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!skip_call)
+ dev_data->device_dispatch_table->DestroyEvent(device, event, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyQueryPool(device, queryPool, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery,
+ uint32_t queryCount, size_t dataSize, void *pData, VkDeviceSize stride,
+ VkQueryResultFlags flags) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ unordered_map<QueryObject, vector<VkCommandBuffer>> queriesInFlight;
+ GLOBAL_CB_NODE *pCB = nullptr;
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) {
+ pCB = getCBNode(dev_data, cmdBuffer);
+ for (auto queryStatePair : pCB->queryToStateMap) {
+ queriesInFlight[queryStatePair.first].push_back(cmdBuffer);
+ }
+ }
+ VkBool32 skip_call = VK_FALSE;
+ for (uint32_t i = 0; i < queryCount; ++i) {
+ QueryObject query = {queryPool, firstQuery + i};
+ auto queryElement = queriesInFlight.find(query);
+ auto queryToStateElement = dev_data->queryToStateMap.find(query);
+ // Available and in flight
+ if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() &&
+ queryToStateElement->second) {
+ for (auto cmdBuffer : queryElement->second) {
+ pCB = getCBNode(dev_data, cmdBuffer);
+ auto queryEventElement = pCB->waitedEventsBeforeQueryReset.find(query);
+ if (queryEventElement == pCB->waitedEventsBeforeQueryReset.end()) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Cannot get query results on queryPool %" PRIu64 " with index %d which is in flight.",
+ (uint64_t)(queryPool), firstQuery + i);
+ } else {
+ for (auto event : queryEventElement->second) {
+ dev_data->eventMap[event].needsSignaled = true;
+ }
+ }
+ }
+ // Unavailable and in flight
+ } else if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() &&
+ !queryToStateElement->second) {
+ // TODO : Can there be the same query in use by multiple command buffers in flight?
+ bool make_available = false;
+ for (auto cmdBuffer : queryElement->second) {
+ pCB = getCBNode(dev_data, cmdBuffer);
+ make_available |= pCB->queryToStateMap[query];
+ }
+ if (!(((flags & VK_QUERY_RESULT_PARTIAL_BIT) || (flags & VK_QUERY_RESULT_WAIT_BIT)) && make_available)) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
+ (uint64_t)(queryPool), firstQuery + i);
+ }
+ // Unavailable
+ } else if (queryToStateElement != dev_data->queryToStateMap.end() && !queryToStateElement->second) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT,
+ 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
+ (uint64_t)(queryPool), firstQuery + i);
+ // Uninitialized
+ } else if (queryToStateElement == dev_data->queryToStateMap.end()) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT,
+ 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Cannot get query results on queryPool %" PRIu64 " with index %d which is uninitialized.",
+ (uint64_t)(queryPool), firstQuery + i);
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ return dev_data->device_dispatch_table->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride,
+ flags);
+}
+
+VkBool32 validateIdleBuffer(const layer_data *my_data, VkBuffer buffer) {
+ VkBool32 skip_call = VK_FALSE;
+ auto buffer_data = my_data->bufferMap.find(buffer);
+ if (buffer_data == my_data->bufferMap.end()) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
+ (uint64_t)(buffer), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
+ "Cannot free buffer %" PRIxLEAST64 " that has not been allocated.", (uint64_t)(buffer));
+ } else {
+ if (buffer_data->second.in_use.load()) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
+ (uint64_t)(buffer), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS",
+ "Cannot free buffer %" PRIxLEAST64 " that is in use by a command buffer.", (uint64_t)(buffer));
+ }
+ }
+ return skip_call;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ auto item = dev_data->bufferBindingMap.find((uint64_t)buffer);
+ if (item != dev_data->bufferBindingMap.end()) {
+ skipCall = clear_object_binding(dev_data, device, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
+ dev_data->bufferBindingMap.erase(item);
+ }
+#endif
+ if (!validateIdleBuffer(dev_data, buffer) && (VK_FALSE == skipCall)) {
+ loader_platform_thread_unlock_mutex(&globalLock);
+ dev_data->device_dispatch_table->DestroyBuffer(device, buffer, pAllocator);
+ loader_platform_thread_lock_mutex(&globalLock);
+ }
+ dev_data->bufferMap.erase(buffer);
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_dispatch_table->DestroyBufferView(device, bufferView, pAllocator);
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto item = dev_data->bufferViewMap.find(bufferView);
+ if (item != dev_data->bufferViewMap.end()) {
+ dev_data->bufferViewMap.erase(item);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto item = dev_data->imageBindingMap.find((uint64_t)image);
+ if (item != dev_data->imageBindingMap.end()) {
+ skipCall = clear_object_binding(dev_data, device, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
+ dev_data->imageBindingMap.erase(item);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->DestroyImage(device, image, pAllocator);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ const auto& entry = dev_data->imageMap.find(image);
+ if (entry != dev_data->imageMap.end()) {
+ // Clear any memory mapping for this image
+ const auto &mem_entry = dev_data->memObjMap.find(entry->second.mem);
+ if (mem_entry != dev_data->memObjMap.end())
+ mem_entry->second.image = VK_NULL_HANDLE;
+
+ // Remove image from imageMap
+ dev_data->imageMap.erase(entry);
+ }
+ const auto& subEntry = dev_data->imageSubresourceMap.find(image);
+ if (subEntry != dev_data->imageSubresourceMap.end()) {
+ for (const auto& pair : subEntry->second) {
+ dev_data->imageLayoutMap.erase(pair);
+ }
+ dev_data->imageSubresourceMap.erase(subEntry);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+#if MTMERGESOURCE
+VkBool32 print_memory_range_error(layer_data *dev_data, const uint64_t object_handle, const uint64_t other_handle,
+ VkDebugReportObjectTypeEXT object_type) {
+ if (object_type == VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT) {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, 0,
+ MEMTRACK_INVALID_ALIASING, "MEM", "Buffer %" PRIx64 " is aliased with image %" PRIx64, object_handle,
+ other_handle);
+ } else {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, 0,
+ MEMTRACK_INVALID_ALIASING, "MEM", "Image %" PRIx64 " is aliased with buffer %" PRIx64, object_handle,
+ other_handle);
+ }
+}
+
+VkBool32 validate_memory_range(layer_data *dev_data, const vector<MEMORY_RANGE> &ranges, const MEMORY_RANGE &new_range,
+ VkDebugReportObjectTypeEXT object_type) {
+ VkBool32 skip_call = false;
+
+ for (auto range : ranges) {
+ if ((range.end & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)) <
+ (new_range.start & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)))
+ continue;
+ if ((range.start & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)) >
+ (new_range.end & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)))
+ continue;
+ skip_call |= print_memory_range_error(dev_data, new_range.handle, range.handle, object_type);
+ }
+ return skip_call;
+}
+
+VkBool32 validate_buffer_image_aliasing(layer_data *dev_data, uint64_t handle, VkDeviceMemory mem, VkDeviceSize memoryOffset,
+ VkMemoryRequirements memRequirements, vector<MEMORY_RANGE> &ranges,
+ const vector<MEMORY_RANGE> &other_ranges, VkDebugReportObjectTypeEXT object_type) {
+ MEMORY_RANGE range;
+ range.handle = handle;
+ range.memory = mem;
+ range.start = memoryOffset;
+ range.end = memoryOffset + memRequirements.size - 1;
+ ranges.push_back(range);
+ return validate_memory_range(dev_data, other_ranges, range, object_type);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Track objects tied to memory
+ uint64_t buffer_handle = (uint64_t)(buffer);
+ VkBool32 skipCall =
+ set_mem_binding(dev_data, device, mem, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "vkBindBufferMemory");
+ add_object_binding_info(dev_data, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, mem);
+ {
+ VkMemoryRequirements memRequirements;
+ // MTMTODO : Shouldn't this call down the chain?
+ vkGetBufferMemoryRequirements(device, buffer, &memRequirements);
+ skipCall |= validate_buffer_image_aliasing(dev_data, buffer_handle, mem, memoryOffset, memRequirements,
+ dev_data->memObjMap[mem].bufferRanges, dev_data->memObjMap[mem].imageRanges,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
+ }
+ print_mem_list(dev_data, device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall) {
+ result = dev_data->device_dispatch_table->BindBufferMemory(device, buffer, mem, memoryOffset);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements *pMemoryRequirements) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ // TODO : What to track here?
+ // Could potentially save returned mem requirements and validate values passed into BindBufferMemory
+ my_data->device_dispatch_table->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements *pMemoryRequirements) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ // TODO : What to track here?
+ // Could potentially save returned mem requirements and validate values passed into BindImageMemory
+ my_data->device_dispatch_table->GetImageMemoryRequirements(device, image, pMemoryRequirements);
+}
+#endif
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyImageView(device, imageView, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks *pAllocator) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+
+ my_data->shaderModuleMap.erase(shaderModule);
+
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+ my_data->device_dispatch_table->DestroyShaderModule(device, shaderModule, pAllocator);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyPipeline(device, pipeline, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyPipelineLayout(device, pipelineLayout, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroySampler(device, sampler, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyDescriptorPool(device, descriptorPool, pAllocator);
+ // TODO : Clean up any internal data structures using this obj.
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer *pCommandBuffers) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ bool skip_call = false;
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < commandBufferCount; i++) {
+#if MTMERGESOURCE
+ clear_cmd_buf_and_mem_references(dev_data, pCommandBuffers[i]);
+#endif
+ if (dev_data->globalInFlightCmdBuffers.count(pCommandBuffers[i])) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
+ "Attempt to free command buffer (%#" PRIxLEAST64 ") which is in use.",
+ reinterpret_cast<uint64_t>(pCommandBuffers[i]));
+ }
+ // Delete CB information structure, and remove from commandBufferMap
+ auto cb = dev_data->commandBufferMap.find(pCommandBuffers[i]);
+ if (cb != dev_data->commandBufferMap.end()) {
+ // reset prior to delete for data clean-up
+ resetCB(dev_data, (*cb).second->commandBuffer);
+ delete (*cb).second;
+ dev_data->commandBufferMap.erase(cb);
+ }
+
+ // Remove commandBuffer reference from commandPoolMap
+ dev_data->commandPoolMap[commandPool].commandBuffers.remove(pCommandBuffers[i]);
+ }
+#if MTMERGESOURCE
+ printCBList(dev_data, device);
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+ if (!skip_call)
+ dev_data->device_dispatch_table->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkCommandPool *pCommandPool) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ VkResult result = dev_data->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
+
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->commandPoolMap[*pCommandPool].createFlags = pCreateInfo->flags;
+ dev_data->commandPoolMap[*pCommandPool].queueFamilyIndex = pCreateInfo->queueFamilyIndex;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkQueryPool *pQueryPool) {
+
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool);
+ if (result == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->queryPoolMap[*pQueryPool].createInfo = *pCreateInfo;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VkBool32 validateCommandBuffersNotInUse(const layer_data *dev_data, VkCommandPool commandPool) {
+ VkBool32 skipCall = VK_FALSE;
+ auto pool_data = dev_data->commandPoolMap.find(commandPool);
+ if (pool_data != dev_data->commandPoolMap.end()) {
+ for (auto cmdBuffer : pool_data->second.commandBuffers) {
+ if (dev_data->globalInFlightCmdBuffers.count(cmdBuffer)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT,
+ (uint64_t)(commandPool), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS",
+ "Cannot reset command pool %" PRIx64 " when allocated command buffer %" PRIx64 " is in use.",
+ (uint64_t)(commandPool), (uint64_t)(cmdBuffer));
+ }
+ }
+ }
+ return skipCall;
+}
+
+// Destroy commandPool along with all of the commandBuffers allocated from that pool
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool commandBufferComplete = false;
+ bool skipCall = false;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // Verify that command buffers in pool are complete (not in-flight)
+ // MTMTODO : Merge this with code below (separate *NotInUse() call)
+ for (auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
+ it != dev_data->commandPoolMap[commandPool].commandBuffers.end(); it++) {
+ commandBufferComplete = VK_FALSE;
+        skipCall |= checkCBCompleted(dev_data, *it, &commandBufferComplete);
+ if (VK_FALSE == commandBufferComplete) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(*it), __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
+ "Destroying Command Pool 0x%" PRIxLEAST64 " before "
+ "its command buffer (0x%" PRIxLEAST64 ") has completed.",
+ (uint64_t)(commandPool), reinterpret_cast<uint64_t>(*it));
+ }
+ }
+#endif
+    // Must remove cmdpool from cmdpoolmap after removing all cmdBuffers in its list from the commandBufferMap
+ if (dev_data->commandPoolMap.find(commandPool) != dev_data->commandPoolMap.end()) {
+ for (auto poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
+ poolCb != dev_data->commandPoolMap[commandPool].commandBuffers.end();) {
+ auto del_cb = dev_data->commandBufferMap.find(*poolCb);
+ delete (*del_cb).second; // delete CB info structure
+ dev_data->commandBufferMap.erase(del_cb); // Remove this command buffer
+ poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.erase(
+ poolCb); // Remove CB reference from commandPoolMap's list
+ }
+ }
+ dev_data->commandPoolMap.erase(commandPool);
+
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+ if (VK_TRUE == validateCommandBuffersNotInUse(dev_data, commandPool))
+ return;
+
+ if (!skipCall)
+ dev_data->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto item = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
+ // Remove command buffers from command buffer map
+ while (item != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
+ auto del_item = item++;
+ delete_cmd_buf_info(dev_data, commandPool, *del_item);
+ }
+ dev_data->commandPoolMap.erase(commandPool);
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool commandBufferComplete = false;
+ bool skipCall = false;
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+#if MTMERGESOURCE
+ // MTMTODO : Merge this with *NotInUse() call below
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
+ // Verify that CB's in pool are complete (not in-flight)
+ while (it != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
+        skipCall |= checkCBCompleted(dev_data, (*it), &commandBufferComplete);
+ if (!commandBufferComplete) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(*it), __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
+ "Resetting CB %p before it has completed. You must check CB "
+                                "flag before calling vkResetCommandPool().",
+ (*it));
+ } else {
+ // Clear memory references at this point.
+ clear_cmd_buf_and_mem_references(dev_data, (*it));
+ }
+ ++it;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ if (VK_TRUE == validateCommandBuffersNotInUse(dev_data, commandPool))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+
+ if (!skipCall)
+ result = dev_data->device_dispatch_table->ResetCommandPool(device, commandPool, flags);
+
+ // Reset all of the CBs allocated from this pool
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
+ while (it != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
+ resetCB(dev_data, (*it));
+ ++it;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ bool skipCall = false;
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < fenceCount; ++i) {
+#if MTMERGESOURCE
+ // Reset fence state in fenceCreateInfo structure
+ // MTMTODO : Merge with code below
+ auto fence_item = dev_data->fenceMap.find(pFences[i]);
+ if (fence_item != dev_data->fenceMap.end()) {
+ // Validate fences in SIGNALED state
+ if (!(fence_item->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT)) {
+ // TODO: I don't see a Valid Usage section for ResetFences. This behavior should be documented there.
+                skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+                                    (uint64_t)pFences[i], __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
+                                    "Fence %#" PRIxLEAST64 " submitted to vkResetFences in UNSIGNALED state", (uint64_t)pFences[i]);
+ } else {
+ fence_item->second.createInfo.flags =
+ static_cast<VkFenceCreateFlags>(fence_item->second.createInfo.flags & ~VK_FENCE_CREATE_SIGNALED_BIT);
+ }
+ }
+#endif
+ if (dev_data->fenceMap[pFences[i]].in_use.load()) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
+ reinterpret_cast<const uint64_t &>(pFences[i]), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
+ "Fence %#" PRIx64 " is in use by a command buffer.", reinterpret_cast<const uint64_t &>(pFences[i]));
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!skipCall)
+ result = dev_data->device_dispatch_table->ResetFences(device, fenceCount, pFences);
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ auto fbNode = dev_data->frameBufferMap.find(framebuffer);
+ if (fbNode != dev_data->frameBufferMap.end()) {
+ for (auto cb : fbNode->second.referencingCmdBuffers) {
+ auto cbNode = dev_data->commandBufferMap.find(cb);
+ if (cbNode != dev_data->commandBufferMap.end()) {
+ // Set CB as invalid and record destroyed framebuffer
+ cbNode->second->state = CB_INVALID;
+ loader_platform_thread_lock_mutex(&globalLock);
+ cbNode->second->destroyedFramebuffers.insert(framebuffer);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ }
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->frameBufferMap.erase(framebuffer);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ dev_data->device_dispatch_table->DestroyFramebuffer(device, framebuffer, pAllocator);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_dispatch_table->DestroyRenderPass(device, renderPass, pAllocator);
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->renderPassMap.erase(renderPass);
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkBuffer *pBuffer) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ VkResult result = dev_data->device_dispatch_table->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
+
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ add_object_create_info(dev_data, (uint64_t)*pBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, pCreateInfo);
+#endif
+        // TODO : This doesn't create a deep copy of pQueueFamilyIndices, so that needs fixing if/when we want that data to be valid
+ dev_data->bufferMap[*pBuffer].create_info = unique_ptr<VkBufferCreateInfo>(new VkBufferCreateInfo(*pCreateInfo));
+ dev_data->bufferMap[*pBuffer].in_use.store(0);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkBufferView *pView) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateBufferView(device, pCreateInfo, pAllocator, pView);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->bufferViewMap[*pView] = VkBufferViewCreateInfo(*pCreateInfo);
+#if MTMERGESOURCE
+ // In order to create a valid buffer view, the buffer must have been created with at least one of the
+ // following flags: UNIFORM_TEXEL_BUFFER_BIT or STORAGE_TEXEL_BUFFER_BIT
+ validate_buffer_usage_flags(dev_data, device, pCreateInfo->buffer,
+ VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT, VK_FALSE,
+ "vkCreateBufferView()", "VK_BUFFER_USAGE_[STORAGE|UNIFORM]_TEXEL_BUFFER_BIT");
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkImage *pImage) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ VkResult result = dev_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
+
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ add_object_create_info(dev_data, (uint64_t)*pImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, pCreateInfo);
+#endif
+ IMAGE_LAYOUT_NODE image_node;
+ image_node.layout = pCreateInfo->initialLayout;
+ image_node.format = pCreateInfo->format;
+ dev_data->imageMap[*pImage].createInfo = *pCreateInfo;
+ ImageSubresourcePair subpair = {*pImage, false, VkImageSubresource()};
+ dev_data->imageSubresourceMap[*pImage].push_back(subpair);
+ dev_data->imageLayoutMap[subpair] = image_node;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+static void ResolveRemainingLevelsLayers(layer_data *dev_data, VkImageSubresourceRange *range, VkImage image) {
+ /* expects globalLock to be held by caller */
+
+ auto image_node_it = dev_data->imageMap.find(image);
+ if (image_node_it != dev_data->imageMap.end()) {
+ /* If the caller used the special values VK_REMAINING_MIP_LEVELS and
+ * VK_REMAINING_ARRAY_LAYERS, resolve them now in our internal state to
+ * the actual values.
+ */
+ if (range->levelCount == VK_REMAINING_MIP_LEVELS) {
+ range->levelCount = image_node_it->second.createInfo.mipLevels - range->baseMipLevel;
+ }
+
+ if (range->layerCount == VK_REMAINING_ARRAY_LAYERS) {
+ range->layerCount = image_node_it->second.createInfo.arrayLayers - range->baseArrayLayer;
+ }
+ }
+}
+
+// Return the correct layer/level counts if the caller used the special
+// values VK_REMAINING_MIP_LEVELS or VK_REMAINING_ARRAY_LAYERS.
+static void ResolveRemainingLevelsLayers(layer_data *dev_data, uint32_t *levels, uint32_t *layers, VkImageSubresourceRange range,
+ VkImage image) {
+ /* expects globalLock to be held by caller */
+
+ *levels = range.levelCount;
+ *layers = range.layerCount;
+ auto image_node_it = dev_data->imageMap.find(image);
+ if (image_node_it != dev_data->imageMap.end()) {
+ if (range.levelCount == VK_REMAINING_MIP_LEVELS) {
+ *levels = image_node_it->second.createInfo.mipLevels - range.baseMipLevel;
+ }
+ if (range.layerCount == VK_REMAINING_ARRAY_LAYERS) {
+ *layers = image_node_it->second.createInfo.arrayLayers - range.baseArrayLayer;
+ }
+ }
+}
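+// Illustrative example (not part of the original change): for an image created
+// with mipLevels = 10 and arrayLayers = 6, a subresource range with
+// baseMipLevel = 3, levelCount = VK_REMAINING_MIP_LEVELS resolves to
+// levelCount = 7 (10 - 3), and baseArrayLayer = 2, layerCount =
+// VK_REMAINING_ARRAY_LAYERS resolves to layerCount = 4 (6 - 2).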
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkImageView *pView) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateImageView(device, pCreateInfo, pAllocator, pView);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ VkImageViewCreateInfo localCI = VkImageViewCreateInfo(*pCreateInfo);
+ ResolveRemainingLevelsLayers(dev_data, &localCI.subresourceRange, pCreateInfo->image);
+ dev_data->imageViewMap[*pView] = localCI;
+#if MTMERGESOURCE
+ // Validate that img has correct usage flags set
+ validate_image_usage_flags(dev_data, device, pCreateInfo->image,
+ VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT |
+ VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
+ VK_FALSE, "vkCreateImageView()", "VK_IMAGE_USAGE_[SAMPLED|STORAGE|COLOR_ATTACHMENT]_BIT");
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateFence(VkDevice device, const VkFenceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkFence *pFence) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateFence(device, pCreateInfo, pAllocator, pFence);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ FENCE_NODE *pFN = &dev_data->fenceMap[*pFence];
+#if MTMERGESOURCE
+ memset(pFN, 0, sizeof(MT_FENCE_INFO));
+ memcpy(&(pFN->createInfo), pCreateInfo, sizeof(VkFenceCreateInfo));
+ if (pCreateInfo->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
+ pFN->firstTimeFlag = VK_TRUE;
+ }
+#endif
+ pFN->in_use.store(0);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+// TODO handle pipeline caches
+VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkPipelineCache *pPipelineCache) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache);
+ return result;
+}
+
+VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ dev_data->device_dispatch_table->DestroyPipelineCache(device, pipelineCache, pAllocator);
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t *pDataSize, void *pData) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache *pSrcCaches) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count,
+ const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ VkResult result = VK_SUCCESS;
+ // TODO What to do with pipelineCache?
+ // The order of operations here is a little convoluted but gets the job done
+ // 1. Pipeline create state is first shadowed into PIPELINE_NODE struct
+ // 2. Create state is then validated (which uses flags setup during shadowing)
+ // 3. If everything looks good, we'll then create the pipeline and add NODE to pipelineMap
+ VkBool32 skipCall = VK_FALSE;
+ // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
+ vector<PIPELINE_NODE *> pPipeNode(count);
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ uint32_t i = 0;
+ loader_platform_thread_lock_mutex(&globalLock);
+
+ for (i = 0; i < count; i++) {
+ pPipeNode[i] = initGraphicsPipeline(dev_data, &pCreateInfos[i]);
+ skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode, i);
+ }
+
+ if (VK_FALSE == skipCall) {
+ loader_platform_thread_unlock_mutex(&globalLock);
+ result = dev_data->device_dispatch_table->CreateGraphicsPipelines(device, pipelineCache, count, pCreateInfos, pAllocator,
+ pPipelines);
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (i = 0; i < count; i++) {
+ pPipeNode[i]->pipeline = pPipelines[i];
+ dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ } else {
+ for (i = 0; i < count; i++) {
+ delete pPipeNode[i];
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count,
+ const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ VkResult result = VK_SUCCESS;
+ VkBool32 skipCall = VK_FALSE;
+
+ // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
+ vector<PIPELINE_NODE *> pPipeNode(count);
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ uint32_t i = 0;
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (i = 0; i < count; i++) {
+ // TODO: Verify compute stage bits
+
+ // Create and initialize internal tracking data structure
+ pPipeNode[i] = new PIPELINE_NODE;
+ memcpy(&pPipeNode[i]->computePipelineCI, (const void *)&pCreateInfos[i], sizeof(VkComputePipelineCreateInfo));
+
+ // TODO: Add Compute Pipeline Verification
+ // skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode[i]);
+ }
+
+ if (VK_FALSE == skipCall) {
+ loader_platform_thread_unlock_mutex(&globalLock);
+ result = dev_data->device_dispatch_table->CreateComputePipelines(device, pipelineCache, count, pCreateInfos, pAllocator,
+ pPipelines);
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (i = 0; i < count; i++) {
+ pPipeNode[i]->pipeline = pPipelines[i];
+ dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ } else {
+ for (i = 0; i < count; i++) {
+ // Clean up any locally allocated data structures
+ delete pPipeNode[i];
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSampler *pSampler) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateSampler(device, pCreateInfo, pAllocator, pSampler);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->sampleMap[*pSampler] = unique_ptr<SAMPLER_NODE>(new SAMPLER_NODE(pSampler, pCreateInfo));
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDescriptorSetLayout *pSetLayout) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
+ if (VK_SUCCESS == result) {
+ // TODOSC : Capture layout bindings set
+ LAYOUT_NODE *pNewNode = new LAYOUT_NODE;
+ if (NULL == pNewNode) {
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT,
+ (uint64_t)*pSetLayout, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
+ "Out of memory while attempting to allocate LAYOUT_NODE in vkCreateDescriptorSetLayout()"))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ memcpy((void *)&pNewNode->createInfo, pCreateInfo, sizeof(VkDescriptorSetLayoutCreateInfo));
+ pNewNode->createInfo.pBindings = new VkDescriptorSetLayoutBinding[pCreateInfo->bindingCount];
+ memcpy((void *)pNewNode->createInfo.pBindings, pCreateInfo->pBindings,
+ sizeof(VkDescriptorSetLayoutBinding) * pCreateInfo->bindingCount);
+ // g++ does not like reserve with size 0
+ if (pCreateInfo->bindingCount)
+ pNewNode->bindingToIndexMap.reserve(pCreateInfo->bindingCount);
+ uint32_t totalCount = 0;
+ for (uint32_t i = 0; i < pCreateInfo->bindingCount; i++) {
+ if (!pNewNode->bindingToIndexMap.emplace(pCreateInfo->pBindings[i].binding, i).second) {
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)*pSetLayout, __LINE__,
+                            DRAWSTATE_INVALID_LAYOUT, "DS", "duplicate binding number in "
+                                                            "VkDescriptorSetLayoutBinding"))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+            } // on success, emplace has already recorded the binding -> index mapping
+ totalCount += pCreateInfo->pBindings[i].descriptorCount;
+ if (pCreateInfo->pBindings[i].pImmutableSamplers) {
+ VkSampler **ppIS = (VkSampler **)&pNewNode->createInfo.pBindings[i].pImmutableSamplers;
+ *ppIS = new VkSampler[pCreateInfo->pBindings[i].descriptorCount];
+ memcpy(*ppIS, pCreateInfo->pBindings[i].pImmutableSamplers,
+ pCreateInfo->pBindings[i].descriptorCount * sizeof(VkSampler));
+ }
+ }
+ pNewNode->layout = *pSetLayout;
+ pNewNode->startIndex = 0;
+ if (totalCount > 0) {
+ pNewNode->descriptorTypes.resize(totalCount);
+ pNewNode->stageFlags.resize(totalCount);
+ uint32_t offset = 0;
+ uint32_t j = 0;
+ VkDescriptorType dType;
+ for (uint32_t i = 0; i < pCreateInfo->bindingCount; i++) {
+ dType = pCreateInfo->pBindings[i].descriptorType;
+ for (j = 0; j < pCreateInfo->pBindings[i].descriptorCount; j++) {
+ pNewNode->descriptorTypes[offset + j] = dType;
+ pNewNode->stageFlags[offset + j] = pCreateInfo->pBindings[i].stageFlags;
+ if ((dType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
+ (dType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
+ pNewNode->dynamicDescriptorCount++;
+ }
+ }
+ offset += j;
+ }
+ pNewNode->endIndex = pNewNode->startIndex + totalCount - 1;
+ } else { // no descriptors
+ pNewNode->endIndex = 0;
+ }
+        // Store the new layout node in the device's descriptorSetLayoutMap
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->descriptorSetLayoutMap[*pSetLayout] = pNewNode;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+static bool validatePushConstantSize(const layer_data *dev_data, const uint32_t offset, const uint32_t size,
+ const char *caller_name) {
+ bool skipCall = false;
+ if ((offset + size) > dev_data->physDevProperties.properties.limits.maxPushConstantsSize) {
+ skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_PUSH_CONSTANTS_ERROR, "DS", "%s call has push constants with offset %u and size %u that "
+                               "exceeds this device's maxPushConstantsSize of %u.",
+ caller_name, offset, size, dev_data->physDevProperties.properties.limits.maxPushConstantsSize);
+ }
+ return skipCall;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkPipelineLayout *pPipelineLayout) {
+ bool skipCall = false;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ uint32_t i = 0;
+ for (i = 0; i < pCreateInfo->pushConstantRangeCount; ++i) {
+ skipCall |= validatePushConstantSize(dev_data, pCreateInfo->pPushConstantRanges[i].offset,
+ pCreateInfo->pPushConstantRanges[i].size, "vkCreatePipelineLayout()");
+ if ((pCreateInfo->pPushConstantRanges[i].size == 0) || ((pCreateInfo->pPushConstantRanges[i].size & 0x3) != 0)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_PUSH_CONSTANTS_ERROR, "DS", "vkCreatePipelineLayout() call has push constant index %u with "
+ "size %u. Size must be greater than zero and a multiple of 4.",
+ i, pCreateInfo->pPushConstantRanges[i].size);
+ }
+ // TODO : Add warning if ranges overlap
+ }
+ VkResult result = dev_data->device_dispatch_table->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ // TODOSC : Merge capture of the setLayouts per pipeline
+ PIPELINE_LAYOUT_NODE &plNode = dev_data->pipelineLayoutMap[*pPipelineLayout];
+ plNode.descriptorSetLayouts.resize(pCreateInfo->setLayoutCount);
+ for (i = 0; i < pCreateInfo->setLayoutCount; ++i) {
+ plNode.descriptorSetLayouts[i] = pCreateInfo->pSetLayouts[i];
+ }
+ plNode.pushConstantRanges.resize(pCreateInfo->pushConstantRangeCount);
+ for (i = 0; i < pCreateInfo->pushConstantRangeCount; ++i) {
+ plNode.pushConstantRanges[i] = pCreateInfo->pPushConstantRanges[i];
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkDescriptorPool *pDescriptorPool) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
+ if (VK_SUCCESS == result) {
+ // Insert this pool into Global Pool LL at head
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
+                    (uint64_t)*pDescriptorPool, __LINE__, DRAWSTATE_NONE, "DS", "Created Descriptor Pool %#" PRIxLEAST64,
+ (uint64_t)*pDescriptorPool))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ DESCRIPTOR_POOL_NODE *pNewNode = new DESCRIPTOR_POOL_NODE(*pDescriptorPool, pCreateInfo);
+ if (NULL == pNewNode) {
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
+ (uint64_t)*pDescriptorPool, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
+ "Out of memory while attempting to allocate DESCRIPTOR_POOL_NODE in vkCreateDescriptorPool()"))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ } else {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->descriptorPoolMap[*pDescriptorPool] = pNewNode;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+    }
+    // TODO : Need to do anything if pool create fails?
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->ResetDescriptorPool(device, descriptorPool, flags);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ clearDescriptorPool(dev_data, device, descriptorPool, flags);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo, VkDescriptorSet *pDescriptorSets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Verify that requested descriptorSets are available in pool
+ DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool);
+ if (!pPoolNode) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
+ (uint64_t)pAllocateInfo->descriptorPool, __LINE__, DRAWSTATE_INVALID_POOL, "DS",
+ "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call",
+ (uint64_t)pAllocateInfo->descriptorPool);
+ } else { // Make sure pool has all the available descriptors before calling down chain
+ skipCall |= validate_descriptor_availability_in_pool(dev_data, pPoolNode, pAllocateInfo->descriptorSetCount,
+ pAllocateInfo->pSetLayouts);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (skipCall)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ VkResult result = dev_data->device_dispatch_table->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool);
+ if (pPoolNode) {
+ if (pAllocateInfo->descriptorSetCount == 0) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ pAllocateInfo->descriptorSetCount, __LINE__, DRAWSTATE_NONE, "DS",
+ "AllocateDescriptorSets called with 0 count");
+ }
+ for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS", "Created Descriptor Set %#" PRIxLEAST64,
+ (uint64_t)pDescriptorSets[i]);
+ // Create new set node and add to head of pool nodes
+ SET_NODE *pNewNode = new SET_NODE;
+ if (NULL == pNewNode) {
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__,
+ DRAWSTATE_OUT_OF_MEMORY, "DS",
+ "Out of memory while attempting to allocate SET_NODE in vkAllocateDescriptorSets()"))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ } else {
+ // TODO : Pool should store a total count of each type of Descriptor available
+ // When descriptors are allocated, decrement the count and validate here
+                    // that the count doesn't go below 0. On reset/free, the count needs to be bumped back up.
+ // Insert set at head of Set LL for this pool
+ pNewNode->pNext = pPoolNode->pSets;
+ pNewNode->in_use.store(0);
+ pPoolNode->pSets = pNewNode;
+ LAYOUT_NODE *pLayout = getLayoutNode(dev_data, pAllocateInfo->pSetLayouts[i]);
+ if (NULL == pLayout) {
+ if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)pAllocateInfo->pSetLayouts[i],
+ __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS",
+ "Unable to find set layout node for layout %#" PRIxLEAST64
+ " specified in vkAllocateDescriptorSets() call",
+ (uint64_t)pAllocateInfo->pSetLayouts[i]))
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ pNewNode->pLayout = pLayout;
+ pNewNode->pool = pAllocateInfo->descriptorPool;
+ pNewNode->set = pDescriptorSets[i];
+ pNewNode->descriptorCount = (pLayout->createInfo.bindingCount != 0) ? pLayout->endIndex + 1 : 0;
+ if (pNewNode->descriptorCount) {
+                        size_t descriptorArraySize = sizeof(GENERIC_HEADER *) * pNewNode->descriptorCount;
+                        // Allocate descriptorCount pointers; descriptorArraySize is a byte count, not an element count
+                        pNewNode->ppDescriptors = new GENERIC_HEADER *[pNewNode->descriptorCount];
+                        memset(pNewNode->ppDescriptors, 0, descriptorArraySize);
+ }
+ dev_data->setMap[pDescriptorSets[i]] = pNewNode;
+ }
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, const VkDescriptorSet *pDescriptorSets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ // Make sure that no sets being destroyed are in-flight
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < count; ++i)
+        skipCall |= validateIdleDescriptorSet(dev_data, pDescriptorSets[i], "vkFreeDescriptorSets");
+ DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, descriptorPool);
+ if (pPoolNode && !(VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT & pPoolNode->createInfo.flags)) {
+ // Can't Free from a NON_FREE pool
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ (uint64_t)device, __LINE__, DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL, "DS",
+ "It is invalid to call vkFreeDescriptorSets() with a pool created without setting "
+ "VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT.");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE != skipCall)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ VkResult result = dev_data->device_dispatch_table->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets);
+ if (VK_SUCCESS == result) {
+        loader_platform_thread_lock_mutex(&globalLock);
+
+        if (pPoolNode) { // pPoolNode may be NULL for an unknown pool; guard before dereferencing
+            // Update available descriptor sets in pool
+            pPoolNode->availableSets += count;
+
+            // For each freed descriptor add it back into the pool as available
+            for (uint32_t i = 0; i < count; ++i) {
+                SET_NODE *pSet = dev_data->setMap[pDescriptorSets[i]]; // getSetNode() without locking
+                invalidateBoundCmdBuffers(dev_data, pSet);
+                LAYOUT_NODE *pLayout = pSet->pLayout;
+                uint32_t typeIndex = 0, poolSizeCount = 0;
+                for (uint32_t j = 0; j < pLayout->createInfo.bindingCount; ++j) {
+                    typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType);
+                    poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount;
+                    pPoolNode->availableDescriptorTypeCount[typeIndex] += poolSizeCount;
+                }
+            }
+        }
+        loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ // TODO : Any other clean-up or book-keeping to do here?
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pDescriptorWrites,
+ uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pDescriptorCopies) {
+ // dsUpdate will return VK_TRUE only if a bailout error occurs, so we want to call down tree when update returns VK_FALSE
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // MTMTODO : Merge this in with existing update code below and handle descriptor copies case
+ uint32_t j = 0;
+ for (uint32_t i = 0; i < descriptorWriteCount; ++i) {
+ if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE) {
+ for (j = 0; j < pDescriptorWrites[i].descriptorCount; ++j) {
+ dev_data->descriptorSetMap[pDescriptorWrites[i].dstSet].images.push_back(
+ pDescriptorWrites[i].pImageInfo[j].imageView);
+ }
+ } else if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER) {
+ for (j = 0; j < pDescriptorWrites[i].descriptorCount; ++j) {
+ dev_data->descriptorSetMap[pDescriptorWrites[i].dstSet].buffers.push_back(
+ dev_data->bufferViewMap[pDescriptorWrites[i].pTexelBufferView[j]].buffer);
+ }
+ } else if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
+ pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
+ for (j = 0; j < pDescriptorWrites[i].descriptorCount; ++j) {
+ dev_data->descriptorSetMap[pDescriptorWrites[i].dstSet].buffers.push_back(
+ pDescriptorWrites[i].pBufferInfo[j].buffer);
+ }
+ }
+ }
+#endif
+ VkBool32 rtn = dsUpdate(dev_data, device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!rtn) {
+ dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount,
+ pDescriptorCopies);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pCreateInfo, VkCommandBuffer *pCommandBuffer) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto const &cp_it = dev_data->commandPoolMap.find(pCreateInfo->commandPool);
+ if (cp_it != dev_data->commandPoolMap.end()) {
+ for (uint32_t i = 0; i < pCreateInfo->commandBufferCount; i++) {
+ // Add command buffer to its commandPool map
+ cp_it->second.commandBuffers.push_back(pCommandBuffer[i]);
+ GLOBAL_CB_NODE *pCB = new GLOBAL_CB_NODE;
+ // Add command buffer to map
+ dev_data->commandBufferMap[pCommandBuffer[i]] = pCB;
+ resetCB(dev_data, pCommandBuffer[i]);
+ pCB->createInfo = *pCreateInfo;
+ pCB->device = device;
+ }
+ }
+#if MTMERGESOURCE
+ printCBList(dev_data, device);
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Validate command buffer level
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+#if MTMERGESOURCE
+ bool commandBufferComplete = false;
+ // MTMTODO : Merge this with code below
+ // This implicitly resets the Cmd Buffer so make sure any fence is done and then clear memory references
+ skipCall = checkCBCompleted(dev_data, commandBuffer, &commandBufferComplete);
+
+ if (!commandBufferComplete) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
+ "Calling vkBeginCommandBuffer() on active CB %p before it has completed. "
+ "You must check CB flag before this call.",
+ commandBuffer);
+ }
+#endif
+ if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
+ // Secondary Command Buffer
+ const VkCommandBufferInheritanceInfo *pInfo = pBeginInfo->pInheritanceInfo;
+ if (!pInfo) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+ "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must have inheritance info.",
+ reinterpret_cast<void *>(commandBuffer));
+ } else {
+ if (pBeginInfo->flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) {
+                    if (!pInfo->renderPass) { // renderpass should NOT be null for a Secondary CB
+                        skipCall |= log_msg(
+                            dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+                            reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+                            "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must specify a valid renderpass parameter.",
+                            reinterpret_cast<void *>(commandBuffer));
+ }
+                    if (!pInfo->framebuffer) { // framebuffer may be null for a Secondary CB, but this affects perf
+                        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
+                                            VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+                                            reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE,
+                                            "DS", "vkBeginCommandBuffer(): Secondary Command Buffer (%p) may perform better if a "
+                                                  "valid framebuffer parameter is specified.",
+                                            reinterpret_cast<void *>(commandBuffer));
+ } else {
+ string errorString = "";
+ auto fbNode = dev_data->frameBufferMap.find(pInfo->framebuffer);
+ if (fbNode != dev_data->frameBufferMap.end()) {
+ VkRenderPass fbRP = fbNode->second.createInfo.renderPass;
+ if (!verify_renderpass_compatibility(dev_data, fbRP, pInfo->renderPass, errorString)) {
+                                // The renderPass that the framebuffer was created with must be
+                                // compatible with the local renderPass
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE,
+ "DS", "vkBeginCommandBuffer(): Secondary Command "
+ "Buffer (%p) renderPass (%#" PRIxLEAST64 ") is incompatible w/ framebuffer "
+ "(%#" PRIxLEAST64 ") w/ render pass (%#" PRIxLEAST64 ") due to: %s",
+ reinterpret_cast<void *>(commandBuffer), (uint64_t)(pInfo->renderPass),
+ (uint64_t)(pInfo->framebuffer), (uint64_t)(fbRP), errorString.c_str());
+ }
+ // Connect this framebuffer to this cmdBuffer
+ fbNode->second.referencingCmdBuffers.insert(pCB->commandBuffer);
+ }
+ }
+ }
+ if ((pInfo->occlusionQueryEnable == VK_FALSE ||
+ dev_data->physDevProperties.features.occlusionQueryPrecise == VK_FALSE) &&
+ (pInfo->queryFlags & VK_QUERY_CONTROL_PRECISE_BIT)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
+ __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+ "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must not have "
+                                    "VK_QUERY_CONTROL_PRECISE_BIT if occlusionQuery is disabled or the device does not "
+ "support precise occlusion queries.",
+ reinterpret_cast<void *>(commandBuffer));
+ }
+ }
+ if (pInfo && pInfo->renderPass != VK_NULL_HANDLE) {
+ auto rp_data = dev_data->renderPassMap.find(pInfo->renderPass);
+ if (rp_data != dev_data->renderPassMap.end() && rp_data->second && rp_data->second->pCreateInfo) {
+ if (pInfo->subpass >= rp_data->second->pCreateInfo->subpassCount) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+                                                "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must have a subpass index (%d) "
+ "that is less than the number of subpasses (%d).",
+ (void *)commandBuffer, pInfo->subpass, rp_data->second->pCreateInfo->subpassCount);
+ }
+ }
+ }
+ }
+ if (CB_RECORDING == pCB->state) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+ "vkBeginCommandBuffer(): Cannot call Begin on CB (%#" PRIxLEAST64
+ ") in the RECORDING state. Must first call vkEndCommandBuffer().",
+ (uint64_t)commandBuffer);
+ } else if (CB_RECORDED == pCB->state) {
+ VkCommandPool cmdPool = pCB->createInfo.commandPool;
+ if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
+ "Call to vkBeginCommandBuffer() on command buffer (%#" PRIxLEAST64
+ ") attempts to implicitly reset cmdBuffer created from command pool (%#" PRIxLEAST64
+ ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.",
+ (uint64_t)commandBuffer, (uint64_t)cmdPool);
+ }
+ resetCB(dev_data, commandBuffer);
+ }
+ // Set updated state here in case implicit reset occurs above
+ pCB->state = CB_RECORDING;
+ pCB->beginInfo = *pBeginInfo;
+ if (pCB->beginInfo.pInheritanceInfo) {
+ pCB->inheritanceInfo = *(pCB->beginInfo.pInheritanceInfo);
+ pCB->beginInfo.pInheritanceInfo = &pCB->inheritanceInfo;
+ }
+ } else {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+                            "vkBeginCommandBuffer(): Unable to find CommandBuffer Node for CB %p!", (void *)commandBuffer);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE != skipCall) {
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ VkResult result = dev_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo);
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ clear_cmd_buf_and_mem_references(dev_data, commandBuffer);
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer) {
+ VkBool32 skipCall = VK_FALSE;
+ VkResult result = VK_SUCCESS;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ if (pCB->state != CB_RECORDING) {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkEndCommandBuffer()");
+ }
+ for (auto query : pCB->activeQueries) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_QUERY, "DS",
+ "Ending command buffer with in progress query: queryPool %" PRIu64 ", index %d",
+ (uint64_t)(query.pool), query.index);
+ }
+ }
+ if (VK_FALSE == skipCall) {
+ loader_platform_thread_unlock_mutex(&globalLock);
+ result = dev_data->device_dispatch_table->EndCommandBuffer(commandBuffer);
+ loader_platform_thread_lock_mutex(&globalLock);
+ if (VK_SUCCESS == result) {
+ pCB->state = CB_RECORDED;
+ // Reset CB status flags
+ pCB->status = 0;
+ printCB(dev_data, commandBuffer);
+ }
+ } else {
+ result = VK_ERROR_VALIDATION_FAILED_EXT;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ bool commandBufferComplete = false;
+ // Verify that CB is complete (not in-flight)
+ skipCall = checkCBCompleted(dev_data, commandBuffer, &commandBufferComplete);
+ if (!commandBufferComplete) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
+ "Resetting CB %p before it has completed. You must check CB "
+ "flag before calling vkResetCommandBuffer().",
+ commandBuffer);
+ }
+ // Clear memory references as this point.
+ clear_cmd_buf_and_mem_references(dev_data, commandBuffer);
+#endif
+    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+    if (pCB) { // pCB may be NULL for an unknown command buffer; guard before dereferencing
+        VkCommandPool cmdPool = pCB->createInfo.commandPool;
+        if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) {
+            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+                                (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
+                                "Attempt to reset command buffer (%#" PRIxLEAST64 ") created from command pool (%#" PRIxLEAST64
+                                ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.",
+                                (uint64_t)commandBuffer, (uint64_t)cmdPool);
+        }
+    }
+ if (dev_data->globalInFlightCmdBuffers.count(commandBuffer)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
+ "Attempt to reset command buffer (%#" PRIxLEAST64 ") which is in use.",
+ reinterpret_cast<uint64_t>(commandBuffer));
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (skipCall != VK_FALSE)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+ VkResult result = dev_data->device_dispatch_table->ResetCommandBuffer(commandBuffer, flags);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ resetCB(dev_data, commandBuffer);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+#if MTMERGESOURCE
+// TODO : For any vkCmdBind* calls that include an object which has mem bound to it,
+// need to account for that mem now having binding to given commandBuffer
+#endif
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_BINDPIPELINE, "vkCmdBindPipeline()");
+ if ((VK_PIPELINE_BIND_POINT_COMPUTE == pipelineBindPoint) && (pCB->activeRenderPass)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT,
+ (uint64_t)pipeline, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
+ "Incorrectly binding compute pipeline (%#" PRIxLEAST64 ") during active RenderPass (%#" PRIxLEAST64 ")",
+ (uint64_t)pipeline, (uint64_t)pCB->activeRenderPass);
+ }
+
+ PIPELINE_NODE *pPN = getPipeline(dev_data, pipeline);
+ if (pPN) {
+ pCB->lastBound[pipelineBindPoint].pipeline = pipeline;
+ set_cb_pso_status(pCB, pPN);
+ skipCall |= validatePipelineState(dev_data, pCB, pipelineBindPoint, pipeline);
+ } else {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT,
+ (uint64_t)pipeline, __LINE__, DRAWSTATE_INVALID_PIPELINE, "DS",
+ "Attempt to bind Pipeline %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(pipeline));
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport *pViewports) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETVIEWPORTSTATE, "vkCmdSetViewport()");
+ pCB->status |= CBSTATUS_VIEWPORT_SET;
+ pCB->viewports.resize(viewportCount);
+ memcpy(pCB->viewports.data(), pViewports, viewportCount * sizeof(VkViewport));
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETSCISSORSTATE, "vkCmdSetScissor()");
+ pCB->status |= CBSTATUS_SCISSOR_SET;
+ pCB->scissors.resize(scissorCount);
+ memcpy(pCB->scissors.data(), pScissors, scissorCount * sizeof(VkRect2D));
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETLINEWIDTHSTATE, "vkCmdSetLineWidth()");
+ pCB->status |= CBSTATUS_LINE_WIDTH_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetLineWidth(commandBuffer, lineWidth);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETDEPTHBIASSTATE, "vkCmdSetDepthBias()");
+ pCB->status |= CBSTATUS_DEPTH_BIAS_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp,
+ depthBiasSlopeFactor);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETBLENDSTATE, "vkCmdSetBlendConstants()");
+ pCB->status |= CBSTATUS_BLEND_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetBlendConstants(commandBuffer, blendConstants);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETDEPTHBOUNDSSTATE, "vkCmdSetDepthBounds()");
+ pCB->status |= CBSTATUS_DEPTH_BOUNDS_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREADMASKSTATE, "vkCmdSetStencilCompareMask()");
+ pCB->status |= CBSTATUS_STENCIL_READ_MASK_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILWRITEMASKSTATE, "vkCmdSetStencilWriteMask()");
+ pCB->status |= CBSTATUS_STENCIL_WRITE_MASK_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREFERENCESTATE, "vkCmdSetStencilReference()");
+ pCB->status |= CBSTATUS_STENCIL_REFERENCE_SET;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetStencilReference(commandBuffer, faceMask, reference);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout,
+ uint32_t firstSet, uint32_t setCount, const VkDescriptorSet *pDescriptorSets, uint32_t dynamicOffsetCount,
+ const uint32_t *pDynamicOffsets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // MTMTODO : Merge this with code below
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ // MTMTODO : activeDescriptorSets should be merged with lastBound.boundDescriptorSets
+ std::vector<VkDescriptorSet> &activeDescriptorSets = cb_data->second->activeDescriptorSets;
+ if (activeDescriptorSets.size() < (setCount + firstSet)) {
+ activeDescriptorSets.resize(setCount + firstSet);
+ }
+ for (uint32_t i = 0; i < setCount; ++i) {
+ activeDescriptorSets[i + firstSet] = pDescriptorSets[i];
+ }
+ }
+ // TODO : Somewhere need to verify that all textures referenced by shaders in DS are in some type of *SHADER_READ* state
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ if (pCB->state == CB_RECORDING) {
+ // Track total count of dynamic descriptor types to make sure we have an offset for each one
+ uint32_t totalDynamicDescriptors = 0;
+ string errorString = "";
+ uint32_t lastSetIndex = firstSet + setCount - 1;
+ if (lastSetIndex >= pCB->lastBound[pipelineBindPoint].boundDescriptorSets.size())
+ pCB->lastBound[pipelineBindPoint].boundDescriptorSets.resize(lastSetIndex + 1);
+ VkDescriptorSet oldFinalBoundSet = pCB->lastBound[pipelineBindPoint].boundDescriptorSets[lastSetIndex];
+ for (uint32_t i = 0; i < setCount; i++) {
+ SET_NODE *pSet = getSetNode(dev_data, pDescriptorSets[i]);
+ if (pSet) {
+ pCB->lastBound[pipelineBindPoint].uniqueBoundSets.insert(pDescriptorSets[i]);
+ pSet->boundCmdBuffers.insert(commandBuffer);
+ pCB->lastBound[pipelineBindPoint].pipelineLayout = layout;
+ pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i + firstSet] = pDescriptorSets[i];
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__,
+ DRAWSTATE_NONE, "DS", "DS %#" PRIxLEAST64 " bound on pipeline %s",
+ (uint64_t)pDescriptorSets[i], string_VkPipelineBindPoint(pipelineBindPoint));
+ if (!pSet->pUpdateStructs && (pSet->descriptorCount != 0)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i],
+ __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS",
+ "DS %#" PRIxLEAST64
+ " bound but it was never updated. You may want to either update it or not bind it.",
+ (uint64_t)pDescriptorSets[i]);
+ }
+ // Verify that set being bound is compatible with overlapping setLayout of pipelineLayout
+ if (!verify_set_layout_compatibility(dev_data, pSet, layout, i + firstSet, errorString)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i],
+ __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS",
+ "descriptorSet #%u being bound is not compatible with overlapping layout in "
+ "pipelineLayout due to: %s",
+ i, errorString.c_str());
+ }
+ if (pSet->pLayout->dynamicDescriptorCount) {
+ // First make sure we won't overstep bounds of pDynamicOffsets array
+ if ((totalDynamicDescriptors + pSet->pLayout->dynamicDescriptorCount) > dynamicOffsetCount) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__,
+ DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS",
+ "descriptorSet #%u (%#" PRIxLEAST64
+ ") requires %u dynamicOffsets, but only %u dynamicOffsets are left in pDynamicOffsets "
+ "array. There must be one dynamic offset for each dynamic descriptor being bound.",
+ i, (uint64_t)pDescriptorSets[i], pSet->pLayout->dynamicDescriptorCount,
+ (dynamicOffsetCount - totalDynamicDescriptors));
+ } else { // Validate and store dynamic offsets with the set
+ // Validate Dynamic Offset Minimums
+ uint32_t cur_dyn_offset = totalDynamicDescriptors;
+ for (uint32_t d = 0; d < pSet->descriptorCount; d++) {
+ if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) {
+ if (vk_safe_modulo(
+ pDynamicOffsets[cur_dyn_offset],
+ dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment) !=
+ 0) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
+ DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, "DS",
+ "vkCmdBindDescriptorSets(): pDynamicOffsets[%u] is %u but must be a multiple of "
+ "device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
+ cur_dyn_offset, pDynamicOffsets[cur_dyn_offset],
+ dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment);
+ }
+ cur_dyn_offset++;
+ } else if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
+ if (vk_safe_modulo(
+ pDynamicOffsets[cur_dyn_offset],
+ dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment) !=
+ 0) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
+ DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, "DS",
+ "vkCmdBindDescriptorSets(): pDynamicOffsets[%u] is %u but must be a multiple of "
+ "device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64,
+ cur_dyn_offset, pDynamicOffsets[cur_dyn_offset],
+ dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment);
+ }
+ cur_dyn_offset++;
+ }
+ }
+ // Keep running total of dynamic descriptor count to verify at the end
+ totalDynamicDescriptors += pSet->pLayout->dynamicDescriptorCount;
+ }
+ }
+ } else {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__,
+ DRAWSTATE_INVALID_SET, "DS", "Attempt to bind DS %#" PRIxLEAST64 " that doesn't exist!",
+ (uint64_t)pDescriptorSets[i]);
+ }
+ }
+ skipCall |= addCmd(dev_data, pCB, CMD_BINDDESCRIPTORSETS, "vkCmdBindDescriptorSets()");
+ // For any previously bound sets, need to set them to "invalid" if they were disturbed by this update
+ if (firstSet > 0) { // Check set #s below the first bound set
+ for (uint32_t i = 0; i < firstSet; ++i) {
+ if (pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i] &&
+ !verify_set_layout_compatibility(
+ dev_data, dev_data->setMap[pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i]], layout, i,
+ errorString)) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
+ (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS",
+ "DescriptorSet %#" PRIxLEAST64
+ " previously bound as set #%u was disturbed by newly bound pipelineLayout (%#" PRIxLEAST64 ")",
+ (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i], i, (uint64_t)layout);
+ pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i] = VK_NULL_HANDLE;
+ }
+ }
+ }
+ // Check if the newly bound final set invalidates any remaining bound sets
+ if ((pCB->lastBound[pipelineBindPoint].boundDescriptorSets.size() - 1) > (lastSetIndex)) {
+ if (oldFinalBoundSet &&
+ !verify_set_layout_compatibility(dev_data, dev_data->setMap[oldFinalBoundSet], layout, lastSetIndex,
+ errorString)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)oldFinalBoundSet, __LINE__,
+ DRAWSTATE_NONE, "DS", "DescriptorSet %#" PRIxLEAST64
+ " previously bound as set #%u is incompatible with set %#" PRIxLEAST64
+ " newly bound as set #%u so set #%u and any subsequent sets were "
+ "disturbed by newly bound pipelineLayout (%#" PRIxLEAST64 ")",
+ (uint64_t)oldFinalBoundSet, lastSetIndex,
+ (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[lastSetIndex], lastSetIndex,
+ lastSetIndex + 1, (uint64_t)layout);
+ pCB->lastBound[pipelineBindPoint].boundDescriptorSets.resize(lastSetIndex + 1);
+ }
+ }
+ // dynamicOffsetCount must equal the total number of dynamic descriptors in the sets being bound
+ if (totalDynamicDescriptors != dynamicOffsetCount) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS",
+ "Attempting to bind %u descriptorSets with %u dynamic descriptors, but dynamicOffsetCount "
+ "is %u. It should exactly match the number of dynamic descriptors.",
+ setCount, totalDynamicDescriptors, dynamicOffsetCount);
+ }
+ // Save dynamicOffsets bound to this CB
+ for (uint32_t i = 0; i < dynamicOffsetCount; i++) {
+ pCB->lastBound[pipelineBindPoint].dynamicOffsets.push_back(pDynamicOffsets[i]);
+ }
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindDescriptorSets()");
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, setCount,
+ pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
+}
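The dynamic-offset bookkeeping in vkCmdBindDescriptorSets() reduces to two rules: every dynamic descriptor in the bound sets consumes exactly one entry of pDynamicOffsets, and every consumed offset must be a multiple of the relevant device limit. A minimal standalone sketch of that accounting — `SetInfo` and `ValidateDynamicOffsets` are illustrative stand-ins, not layer code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical per-set summary: how many dynamic descriptors the set's
// layout declares. The real layer walks SET_NODE/layout structures instead.
struct SetInfo { uint32_t dynamicDescriptorCount; };

// Returns true when the supplied offsets exactly cover every dynamic
// descriptor in the bound sets and each offset honors `minAlignment`.
bool ValidateDynamicOffsets(const std::vector<SetInfo> &sets,
                            const std::vector<uint32_t> &offsets,
                            uint32_t minAlignment) {
    uint32_t total = 0;
    for (const SetInfo &s : sets) {
        // Overstepping the offsets array is an error in the layer, too.
        if (total + s.dynamicDescriptorCount > offsets.size()) return false;
        total += s.dynamicDescriptorCount;
    }
    if (total != offsets.size()) return false; // counts must match exactly
    for (uint32_t off : offsets)
        if (off % minAlignment != 0) return false; // misaligned offset
    return true;
}
```

In the layer itself the per-descriptor limit differs by descriptor type (minUniformBufferOffsetAlignment vs. minStorageBufferOffsetAlignment); a single alignment is used here for brevity.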
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)(buffer), VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdBindIndexBuffer()"); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ // TODO : Somewhere need to verify that IBs have correct usage state flagged
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_BINDINDEXBUFFER, "vkCmdBindIndexBuffer()");
+ VkDeviceSize offset_align = 0;
+ switch (indexType) {
+ case VK_INDEX_TYPE_UINT16:
+ offset_align = 2;
+ break;
+ case VK_INDEX_TYPE_UINT32:
+ offset_align = 4;
+ break;
+ default:
+ // ParamChecker should catch bad enum, we'll also throw alignment error below if offset_align stays 0
+ break;
+ }
+ if (!offset_align || (offset % offset_align)) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR, "DS",
+ "vkCmdBindIndexBuffer() offset (%#" PRIxLEAST64 ") does not fall on alignment (%s) boundary.",
+ offset, string_VkIndexType(indexType));
+ }
+ pCB->status |= CBSTATUS_INDEX_BUFFER_BOUND;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
+}
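The alignment rule enforced above — a 2-byte boundary for 16-bit indices, a 4-byte boundary for 32-bit indices — can be sketched as a standalone helper. The enum here is an illustrative stand-in for VkIndexType, not the real header:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for VkIndexType; only the mapping from index
// type to required alignment matters for this sketch.
enum class IndexType { Uint16, Uint32, Unknown };

// Returns the required offset alignment for an index type, or 0 when the
// type is unrecognized (mirroring the layer's "alignment error" fallback).
uint64_t RequiredIndexAlignment(IndexType type) {
    switch (type) {
    case IndexType::Uint16: return 2;
    case IndexType::Uint32: return 4;
    default: return 0;
    }
}

// An offset is valid only if the alignment is known and divides it evenly,
// matching the `!offset_align || (offset % offset_align)` check above.
bool IndexOffsetValid(uint64_t offset, IndexType type) {
    uint64_t align = RequiredIndexAlignment(type);
    return align != 0 && (offset % align) == 0;
}
```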
+
+void updateResourceTracking(GLOBAL_CB_NODE *pCB, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer *pBuffers) {
+ uint32_t end = firstBinding + bindingCount;
+ if (pCB->currentDrawData.buffers.size() < end) {
+ pCB->currentDrawData.buffers.resize(end);
+ }
+ for (uint32_t i = 0; i < bindingCount; ++i) {
+ pCB->currentDrawData.buffers[i + firstBinding] = pBuffers[i];
+ }
+}
+
+void updateResourceTrackingOnDraw(GLOBAL_CB_NODE *pCB) { pCB->drawData.push_back(pCB->currentDrawData); }
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding,
+ uint32_t bindingCount, const VkBuffer *pBuffers,
+ const VkDeviceSize *pOffsets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ for (uint32_t i = 0; i < bindingCount; ++i) {
+ VkDeviceMemory mem;
+ skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)(pBuffers[i]),
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function =
+ [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdBindVertexBuffers()"); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ }
+ // TODO : Somewhere need to verify that VBs have correct usage state flagged
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_BINDVERTEXBUFFER, "vkCmdBindVertexBuffers()");
+ updateResourceTracking(pCB, firstBinding, bindingCount, pBuffers);
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindVertexBuffers()");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
+}
+
+#if MTMERGESOURCE
+/* expects globalLock to be held by caller */
+bool markStoreImagesAndBuffersAsWritten(VkCommandBuffer commandBuffer) {
+ bool skip_call = false;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ auto cb_data = my_data->commandBufferMap.find(commandBuffer);
+ if (cb_data == my_data->commandBufferMap.end())
+ return skip_call;
+ std::vector<VkDescriptorSet> &activeDescriptorSets = cb_data->second->activeDescriptorSets;
+ for (auto descriptorSet : activeDescriptorSets) {
+ auto ds_data = my_data->descriptorSetMap.find(descriptorSet);
+ if (ds_data == my_data->descriptorSetMap.end())
+ continue;
+ std::vector<VkImageView> images = ds_data->second.images;
+ std::vector<VkBuffer> buffers = ds_data->second.buffers;
+ for (auto imageView : images) {
+ auto iv_data = my_data->imageViewMap.find(imageView);
+ if (iv_data == my_data->imageViewMap.end())
+ continue;
+ VkImage image = iv_data->second.image;
+ VkDeviceMemory mem;
+ skip_call |=
+ get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(my_data, mem, true, image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ for (auto buffer : buffers) {
+ VkDeviceMemory mem;
+ skip_call |=
+ get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(my_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ }
+ return skip_call;
+}
+#endif
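The MTMERGESOURCE paths above all follow one pattern: while the command buffer is being recorded, cheap std::function closures are queued on the per-CB node, and they are executed later (e.g. at queue submit) when memory validity can actually be judged. A self-contained sketch of that deferred-check pattern — `Recorder` and `RunDeferredChecks` are illustrative names, not the layer's types:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Deferred-check pattern: record closures during command-buffer build,
// run them all later, accumulating a "skip" flag in the layer's style.
struct Recorder {
    std::vector<std::function<bool()>> validate_functions;
};

// Runs every queued check; returns true if any check flagged a problem.
bool RunDeferredChecks(Recorder &rec) {
    bool skip = false;
    for (auto &fn : rec.validate_functions)
        if (fn()) skip = true; // keep running even after a failure
    return skip;
}
```

Capturing by value (`[=]` in the layer code) is what makes this safe: the memory handle and image are copied into the closure at record time, so the check does not depend on locals that are gone by submit time.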
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount,
+ uint32_t firstVertex, uint32_t firstInstance) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // MTMTODO : merge with code below
+ skipCall = markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DRAW, "vkCmdDraw()");
+ pCB->drawCount[DRAW]++;
+ skipCall |= validate_draw_state(dev_data, pCB, VK_FALSE);
+ // TODO : Need to pass commandBuffer as srcObj here
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_NONE, "DS", "vkCmdDraw() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW]++);
+ skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
+ if (VK_FALSE == skipCall) {
+ updateResourceTrackingOnDraw(pCB);
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDraw");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount,
+ uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset,
+ uint32_t firstInstance) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // MTMTODO : merge with code below
+ skipCall = markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXED, "vkCmdDrawIndexed()");
+ pCB->drawCount[DRAW_INDEXED]++;
+ skipCall |= validate_draw_state(dev_data, pCB, VK_TRUE);
+ // TODO : Need to pass commandBuffer as srcObj here
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
+ "vkCmdDrawIndexed() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDEXED]++);
+ skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
+ if (VK_FALSE == skipCall) {
+ updateResourceTrackingOnDraw(pCB);
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexed");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset,
+ firstInstance);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ // MTMTODO : merge with code below
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDrawIndirect");
+ skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDIRECT, "vkCmdDrawIndirect()");
+ pCB->drawCount[DRAW_INDIRECT]++;
+ skipCall |= validate_draw_state(dev_data, pCB, VK_FALSE);
+ // TODO : Need to pass commandBuffer as srcObj here
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
+ "vkCmdDrawIndirect() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
+ skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
+ if (VK_FALSE == skipCall) {
+ updateResourceTrackingOnDraw(pCB);
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndirect");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ // MTMTODO : merge with code below
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDrawIndexedIndirect");
+ skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXEDINDIRECT, "vkCmdDrawIndexedIndirect()");
+ pCB->drawCount[DRAW_INDEXED_INDIRECT]++;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ skipCall |= validate_draw_state(dev_data, pCB, VK_TRUE);
+ loader_platform_thread_lock_mutex(&globalLock);
+ // TODO : Need to pass commandBuffer as srcObj here
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_NONE, "DS", "vkCmdDrawIndexedIndirect() call #%" PRIu64 ", reporting DS state:",
+ g_drawCount[DRAW_INDEXED_INDIRECT]++);
+ skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
+ if (VK_FALSE == skipCall) {
+ updateResourceTrackingOnDraw(pCB);
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexedIndirect");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ skipCall = markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DISPATCH, "vkCmdDispatch()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatch");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDispatch(commandBuffer, x, y, z);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDispatchIndirect");
+ skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_DISPATCHINDIRECT, "vkCmdDispatchIndirect()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatchIndirect");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdDispatchIndirect(commandBuffer, buffer, offset);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer,
+ uint32_t regionCount, const VkBufferCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyBuffer()"); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBuffer");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBuffer");
+ // Validate that SRC & DST buffers have correct usage flags set
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true,
+ "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFER, "vkCmdCopyBuffer()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBuffer");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
+}
+
+VkBool32 VerifySourceImageLayout(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageSubresourceLayers subLayers,
+ VkImageLayout srcImageLayout) {
+ VkBool32 skip_call = VK_FALSE;
+
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
+ uint32_t layer = i + subLayers.baseArrayLayer;
+ VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
+ IMAGE_CMD_BUF_LAYOUT_NODE node;
+ if (!FindLayout(pCB, srcImage, sub, node)) {
+ SetLayout(pCB, srcImage, sub, {srcImageLayout, srcImageLayout});
+ continue;
+ }
+ if (node.layout != srcImageLayout) {
+ // TODO: Improve log message in the next pass
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Cannot copy from an image whose srcImageLayout (%s) "
+ "does not match its current layout (%s).",
+ string_VkImageLayout(srcImageLayout), string_VkImageLayout(node.layout));
+ }
+ }
+ if (srcImageLayout != VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL) {
+ if (srcImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
+ // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0,
+ 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for input image should be TRANSFER_SRC_OPTIMAL instead of GENERAL.");
+ } else {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Layout for input image is %s but can only be "
+ "TRANSFER_SRC_OPTIMAL or GENERAL.",
+ string_VkImageLayout(srcImageLayout));
+ }
+ }
+ return skip_call;
+}
+
+VkBool32 VerifyDestImageLayout(VkCommandBuffer cmdBuffer, VkImage destImage, VkImageSubresourceLayers subLayers,
+ VkImageLayout destImageLayout) {
+ VkBool32 skip_call = VK_FALSE;
+
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
+ uint32_t layer = i + subLayers.baseArrayLayer;
+ VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
+ IMAGE_CMD_BUF_LAYOUT_NODE node;
+ if (!FindLayout(pCB, destImage, sub, node)) {
+ SetLayout(pCB, destImage, sub, {destImageLayout, destImageLayout});
+ continue;
+ }
+ if (node.layout != destImageLayout) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Cannot copy to an image whose destImageLayout (%s) "
+ "does not match its current layout (%s).",
+ string_VkImageLayout(destImageLayout), string_VkImageLayout(node.layout));
+ }
+ }
+ if (destImageLayout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
+ if (destImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
+ // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0,
+ 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for output image should be TRANSFER_DST_OPTIMAL instead of GENERAL.");
+ } else {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Layout for output image is %s but can only be "
+ "TRANSFER_DST_OPTIMAL or GENERAL.",
+ string_VkImageLayout(destImageLayout));
+ }
+ }
+ return skip_call;
+}
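The Verify*ImageLayout helpers above share one bookkeeping pattern: look up each sub-resource's last known layout in the command buffer's map, seed it on first use, and flag any mismatch. A driver-free sketch of that pattern, assuming an (aspect, mip, layer) tuple keys the map — the `LayoutTracker` type is illustrative, not a layer type:

```cpp
#include <cstdint>
#include <map>
#include <tuple>

// Illustrative stand-in for the layer's FindLayout/SetLayout bookkeeping.
using Subresource = std::tuple<uint32_t /*aspect*/, uint32_t /*mip*/, uint32_t /*layer*/>;

struct LayoutTracker {
    std::map<Subresource, int> layouts; // int stands in for VkImageLayout

    // Returns true on a layout mismatch; seeds the entry on first sighting,
    // as the layer does when FindLayout fails.
    bool expect(const Subresource &sub, int expected) {
        auto it = layouts.find(sub);
        if (it == layouts.end()) {
            layouts[sub] = expected;
            return false;
        }
        return it->second != expected;
    }
};
```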
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ // Validate that src & dst images have correct usage flags set
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyImage()", srcImage); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImage");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, dstImage);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImage");
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
+ "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGE, "vkCmdCopyImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImage");
+ for (uint32_t i = 0; i < regionCount; ++i) {
+ skipCall |= VerifySourceImageLayout(commandBuffer, srcImage, pRegions[i].srcSubresource, srcImageLayout);
+ skipCall |= VerifyDestImageLayout(commandBuffer, dstImage, pRegions[i].dstSubresource, dstImageLayout);
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
+ regionCount, pRegions);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit *pRegions, VkFilter filter) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ // Validate that src & dst images have correct usage flags set
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdBlitImage()", srcImage); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdBlitImage");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, dstImage);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdBlitImage");
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
+ "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_BLITIMAGE, "vkCmdBlitImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBlitImage");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
+ regionCount, pRegions, filter);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer,
+ VkImage dstImage, VkImageLayout dstImageLayout,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, dstImage);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyBufferToImage()"); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
+ // Validate that src buff & dst image have correct usage flags set
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true,
+ "vkCmdCopyBufferToImage()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdCopyBufferToImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFERTOIMAGE, "vkCmdCopyBufferToImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBufferToImage");
+ for (uint32_t i = 0; i < regionCount; ++i) {
+ skipCall |= VerifyDestImageLayout(commandBuffer, dstImage, pRegions[i].imageSubresource, dstImageLayout);
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount,
+ pRegions);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage,
+ VkImageLayout srcImageLayout, VkBuffer dstBuffer,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function =
+ [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyImageToBuffer()", srcImage); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
+ // Validate that dst buff & src image have correct usage flags set
+ skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
+ "vkCmdCopyImageToBuffer()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdCopyImageToBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGETOBUFFER, "vkCmdCopyImageToBuffer()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImageToBuffer");
+ for (uint32_t i = 0; i < regionCount; ++i) {
+ skipCall |= VerifySourceImageLayout(commandBuffer, srcImage, pRegions[i].imageSubresource, srcImageLayout);
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount,
+ pRegions);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
+ VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t *pData) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdUpdateBuffer");
+ // Validate that dst buff has correct usage flags set
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdUpdateBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_UPDATEBUFFER, "vkCmdUpdateBuffer()");
+        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdUpdateBuffer");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall =
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdFillBuffer");
+ // Validate that dst buff has correct usage flags set
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdFillBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_FILLBUFFER, "vkCmdFillBuffer()");
+        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdFillBuffer");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount,
+ const VkClearAttachment *pAttachments, uint32_t rectCount,
+ const VkClearRect *pRects) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_CLEARATTACHMENTS, "vkCmdClearAttachments()");
+ // Warn if this is issued prior to Draw Cmd and clearing the entire attachment
+        if (!hasDrawCmd(pCB) && (rectCount > 0) &&
+            (pCB->activeRenderPassBeginInfo.renderArea.extent.width == pRects[0].rect.extent.width) &&
+            (pCB->activeRenderPassBeginInfo.renderArea.extent.height == pRects[0].rect.extent.height)) {
+            // TODO : commandBuffer should be srcObj
+            // There are times when an app must use ClearAttachments (generally when reusing a buffer inside of a render pass).
+            // Can we make this warning more specific? We'd like to avoid triggering it when we can tell the use genuinely
+            // requires CmdClearAttachments; otherwise this reads more like a performance warning.
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_CLEAR_CMD_BEFORE_DRAW, "DS",
+ "vkCmdClearAttachments() issued on CB object 0x%" PRIxLEAST64 " prior to any Draw Cmds."
+ " It is recommended you use RenderPass LOAD_OP_CLEAR on Attachments prior to any Draw.",
+ (uint64_t)(commandBuffer));
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdClearAttachments");
+ }
+
+ // Validate that attachment is in reference list of active subpass
+    if (pCB && pCB->activeRenderPass) {
+ const VkRenderPassCreateInfo *pRPCI = dev_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo;
+ const VkSubpassDescription *pSD = &pRPCI->pSubpasses[pCB->activeSubpass];
+
+ for (uint32_t attachment_idx = 0; attachment_idx < attachmentCount; attachment_idx++) {
+ const VkClearAttachment *attachment = &pAttachments[attachment_idx];
+ if (attachment->aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) {
+ VkBool32 found = VK_FALSE;
+ for (uint32_t i = 0; i < pSD->colorAttachmentCount; i++) {
+ if (attachment->colorAttachment == pSD->pColorAttachments[i].attachment) {
+ found = VK_TRUE;
+ break;
+ }
+ }
+ if (VK_FALSE == found) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS",
+ "vkCmdClearAttachments() attachment index %d not found in attachment reference array of active subpass %d",
+ attachment->colorAttachment, pCB->activeSubpass);
+ }
+ } else if (attachment->aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) {
+                if (!pSD->pDepthStencilAttachment ||
+                    (pSD->pDepthStencilAttachment->attachment ==
+                     VK_ATTACHMENT_UNUSED)) { // Either case means no DS will be used in active subpass
+
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS",
+ "vkCmdClearAttachments() attachment index %d does not match depthStencilAttachment.attachment (%d) found "
+ "in active subpass %d",
+ attachment->colorAttachment,
+ (pSD->pDepthStencilAttachment) ? pSD->pDepthStencilAttachment->attachment : VK_ATTACHMENT_UNUSED,
+ pCB->activeSubpass);
+ }
+ }
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
+}
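The color-attachment branch above reduces to a membership test: the index named in VkClearAttachment must appear among the active subpass's color attachment references. That search as a standalone helper — the plain vector stands in for pSubpasses[active].pColorAttachments:

```cpp
#include <cstdint>
#include <vector>

// Illustrative reduction of the subpass reference check: 'refs' stands in for
// the attachment members of the active subpass's pColorAttachments array.
bool ColorAttachmentReferenced(const std::vector<uint32_t> &refs, uint32_t clear_attachment) {
    for (uint32_t r : refs)
        if (r == clear_attachment)
            return true; // found in the active subpass's reference list
    return false;        // layer would emit DRAWSTATE_MISSING_ATTACHMENT_REFERENCE
}
```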
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image,
+ VkImageLayout imageLayout, const VkClearColorValue *pColor,
+ uint32_t rangeCount, const VkImageSubresourceRange *pRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdClearColorImage");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_CLEARCOLORIMAGE, "vkCmdClearColorImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearColorImage");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout,
+ const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount,
+ const VkImageSubresourceRange *pRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdClearDepthStencilImage");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_CLEARDEPTHSTENCILIMAGE, "vkCmdClearDepthStencilImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearDepthStencilImage");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount,
+ pRanges);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ VkDeviceMemory mem;
+ skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function =
+ [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdResolveImage()", srcImage); };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdResolveImage");
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true, dstImage);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdResolveImage");
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_RESOLVEIMAGE, "vkCmdResolveImage()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResolveImage");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
+ regionCount, pRegions);
+}
+
+bool setEventStageMask(VkQueue queue, VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ pCB->eventToStageMap[event] = stageMask;
+ }
+ auto queue_data = dev_data->queueMap.find(queue);
+ if (queue_data != dev_data->queueMap.end()) {
+ queue_data->second.eventToStageMap[event] = stageMask;
+ }
+ return false;
+}
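setEventStageMask is used two ways: called directly, and deferred via std::bind from vkCmdSetEvent below, where the queue is unknown at record time and only supplied at submit. A minimal sketch of that record-now/replay-at-submit pattern — integer handles and the `Recorder` type are illustrative:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

using Queue = int; // illustrative stand-ins for VkQueue/VkEvent handles
using Event = int;

struct Recorder {
    std::map<Queue, std::map<Event, uint32_t>> queueMaps; // mirrors the per-queue eventToStageMap
    std::map<Event, uint32_t> cbEventToStage;             // mirrors pCB->eventToStageMap
    std::vector<std::function<bool(Queue)>> eventUpdates; // mirrors pCB->eventUpdates

    void CmdSetEvent(Event event, uint32_t stageMask) {
        cbEventToStage[event] = stageMask;
        // Queue is unknown at record time; capture everything else and defer.
        eventUpdates.push_back([this, event, stageMask](Queue q) {
            queueMaps[q][event] = stageMask;
            return false;
        });
    }

    void Submit(Queue q) {
        for (auto &fn : eventUpdates) // replay once the queue is known
            fn(q);
    }
};
```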
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_SETEVENT, "vkCmdSetEvent()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdSetEvent");
+ pCB->events.push_back(event);
+ std::function<bool(VkQueue)> eventUpdate =
+ std::bind(setEventStageMask, std::placeholders::_1, commandBuffer, event, stageMask);
+ pCB->eventUpdates.push_back(eventUpdate);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdSetEvent(commandBuffer, event, stageMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_RESETEVENT, "vkCmdResetEvent()");
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetEvent");
+ pCB->events.push_back(event);
+ std::function<bool(VkQueue)> eventUpdate =
+ std::bind(setEventStageMask, std::placeholders::_1, commandBuffer, event, VkPipelineStageFlags(0));
+ pCB->eventUpdates.push_back(eventUpdate);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdResetEvent(commandBuffer, event, stageMask);
+}
+
+VkBool32 TransitionImageLayouts(VkCommandBuffer cmdBuffer, uint32_t memBarrierCount, const VkImageMemoryBarrier *pImgMemBarriers) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ VkBool32 skip = VK_FALSE;
+ uint32_t levelCount = 0;
+ uint32_t layerCount = 0;
+
+ for (uint32_t i = 0; i < memBarrierCount; ++i) {
+ auto mem_barrier = &pImgMemBarriers[i];
+ if (!mem_barrier)
+ continue;
+ // TODO: Do not iterate over every possibility - consolidate where
+ // possible
+ ResolveRemainingLevelsLayers(dev_data, &levelCount, &layerCount, mem_barrier->subresourceRange, mem_barrier->image);
+
+ for (uint32_t j = 0; j < levelCount; j++) {
+ uint32_t level = mem_barrier->subresourceRange.baseMipLevel + j;
+ for (uint32_t k = 0; k < layerCount; k++) {
+ uint32_t layer = mem_barrier->subresourceRange.baseArrayLayer + k;
+ VkImageSubresource sub = {mem_barrier->subresourceRange.aspectMask, level, layer};
+ IMAGE_CMD_BUF_LAYOUT_NODE node;
+ if (!FindLayout(pCB, mem_barrier->image, sub, node)) {
+ SetLayout(pCB, mem_barrier->image, sub, {mem_barrier->oldLayout, mem_barrier->newLayout});
+ continue;
+ }
+ if (mem_barrier->oldLayout == VK_IMAGE_LAYOUT_UNDEFINED) {
+ // TODO: Set memory invalid which is in mem_tracker currently
+ } else if (node.layout != mem_barrier->oldLayout) {
+ skip |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "You cannot transition the layout from %s "
+ "when current layout is %s.",
+ string_VkImageLayout(mem_barrier->oldLayout), string_VkImageLayout(node.layout));
+ }
+ SetLayout(pCB, mem_barrier->image, sub, mem_barrier->newLayout);
+ }
+ }
+ }
+ return skip;
+}
+
+// Print readable FlagBits in FlagMask
+std::string string_VkAccessFlags(VkAccessFlags accessMask) {
+ std::string result;
+ std::string separator;
+
+ if (accessMask == 0) {
+ result = "[None]";
+ } else {
+ result = "[";
+        for (uint32_t i = 0; i < 32; i++) {
+            if (accessMask & (1u << i)) {
+                result = result + separator + string_VkAccessFlagBits((VkAccessFlagBits)(1u << i));
+ separator = " | ";
+ }
+ }
+ result = result + "]";
+ }
+ return result;
+}
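string_VkAccessFlags above walks all 32 bit positions and joins the names of set bits with " | ". The same decoder as a generic, testable helper — the name callback stands in for string_VkAccessFlagBits:

```cpp
#include <cstdint>
#include <string>

// Generic version of the decoder above: 'name' stands in for string_VkAccessFlagBits.
template <typename NameFn>
std::string DecodeFlags(uint32_t mask, NameFn name) {
    if (mask == 0)
        return "[None]";
    std::string result = "[";
    const char *separator = "";
    for (int i = 0; i < 32; i++) {
        if (mask & (1u << i)) {
            result += separator;
            result += name(1u << i); // append the name of this set bit
            separator = " | ";
        }
    }
    return result + "]";
}
```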
+
+// AccessFlags MUST have 'required_bit' set, and may have one or more of 'optional_bits' set.
+// If required_bit is zero, accessMask must have at least one of 'optional_bits' set
+// TODO: Add tracking to ensure that at least one barrier has been set for these layout transitions
+VkBool32 ValidateMaskBits(const layer_data *my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags &accessMask,
+ const VkImageLayout &layout, VkAccessFlags required_bit, VkAccessFlags optional_bits, const char *type) {
+ VkBool32 skip_call = VK_FALSE;
+
+ if ((accessMask & required_bit) || (!required_bit && (accessMask & optional_bits))) {
+        if (accessMask & ~(required_bit | optional_bits)) {
+ // TODO: Verify against Valid Use
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "Additional bits in %s accessMask %d %s are specified when layout is %s.",
+ type, accessMask, string_VkAccessFlags(accessMask).c_str(), string_VkImageLayout(layout));
+ }
+ } else {
+ if (!required_bit) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "%s AccessMask %d %s must contain at least one of access bits %d "
+ "%s when layout is %s, unless the app has previously added a "
+ "barrier for this transition.",
+ type, accessMask, string_VkAccessFlags(accessMask).c_str(), optional_bits,
+ string_VkAccessFlags(optional_bits).c_str(), string_VkImageLayout(layout));
+ } else {
+ std::string opt_bits;
+ if (optional_bits != 0) {
+ std::stringstream ss;
+ ss << optional_bits;
+ opt_bits = "and may have optional bits " + ss.str() + ' ' + string_VkAccessFlags(optional_bits);
+ }
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "%s AccessMask %d %s must have required access bit %d %s %s when "
+ "layout is %s, unless the app has previously added a barrier for "
+ "this transition.",
+ type, accessMask, string_VkAccessFlags(accessMask).c_str(), required_bit,
+ string_VkAccessFlags(required_bit).c_str(), opt_bits.c_str(), string_VkImageLayout(layout));
+ }
+ }
+ return skip_call;
+}
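ValidateMaskBits above boils down to three outcomes: the mask satisfies the requirement, it satisfies it but carries stray bits (note the bitwise complement `~(required | optional)` needed to detect them), or it is missing the required/optional bits entirely. That decision as a pure function, with an illustrative outcome enum:

```cpp
#include <cstdint>

enum class MaskCheck { Ok, ExtraBits, MissingBits }; // illustrative outcomes

// Mirrors the ValidateMaskBits logic: 'required' must be set (or, when zero,
// at least one of 'optional'); any bit outside required|optional is flagged.
MaskCheck CheckAccessMask(uint32_t accessMask, uint32_t required, uint32_t optional) {
    if ((accessMask & required) || (!required && (accessMask & optional))) {
        if (accessMask & ~(required | optional)) // bitwise complement, not logical NOT
            return MaskCheck::ExtraBits;
        return MaskCheck::Ok;
    }
    return MaskCheck::MissingBits;
}
```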
+
+VkBool32 ValidateMaskBitsFromLayouts(const layer_data *my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags &accessMask,
+ const VkImageLayout &layout, const char *type) {
+ VkBool32 skip_call = VK_FALSE;
+ switch (layout) {
+ case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
+ VK_ACCESS_COLOR_ATTACHMENT_READ_BIT, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_WRITE_BIT, 0, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_PREINITIALIZED: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_HOST_WRITE_BIT, 0, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0,
+ VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL: {
+ skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_READ_BIT, 0, type);
+ break;
+ }
+ case VK_IMAGE_LAYOUT_UNDEFINED: {
+ if (accessMask != 0) {
+ // TODO: Verify against Valid Use section spec
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "Additional bits in %s accessMask %d %s are specified when layout is %s.",
+ type, accessMask, string_VkAccessFlags(accessMask).c_str(), string_VkImageLayout(layout));
+ }
+ break;
+ }
+ case VK_IMAGE_LAYOUT_GENERAL:
+ default: { break; }
+ }
+ return skip_call;
+}
+
+VkBool32 ValidateBarriers(const char *funcName, VkCommandBuffer cmdBuffer, uint32_t memBarrierCount,
+ const VkMemoryBarrier *pMemBarriers, uint32_t bufferBarrierCount,
+ const VkBufferMemoryBarrier *pBufferMemBarriers, uint32_t imageMemBarrierCount,
+ const VkImageMemoryBarrier *pImageMemBarriers) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ if (pCB->activeRenderPass && memBarrierCount) {
+ if (!dev_data->renderPassMap[pCB->activeRenderPass]->hasSelfDependency[pCB->activeSubpass]) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "%s: Barriers cannot be set during subpass %d "
+ "with no self dependency specified.",
+ funcName, pCB->activeSubpass);
+ }
+ }
+ for (uint32_t i = 0; i < imageMemBarrierCount; ++i) {
+ auto mem_barrier = &pImageMemBarriers[i];
+ auto image_data = dev_data->imageMap.find(mem_barrier->image);
+ if (image_data != dev_data->imageMap.end()) {
+ uint32_t src_q_f_index = mem_barrier->srcQueueFamilyIndex;
+ uint32_t dst_q_f_index = mem_barrier->dstQueueFamilyIndex;
+ if (image_data->second.createInfo.sharingMode == VK_SHARING_MODE_CONCURRENT) {
+ // srcQueueFamilyIndex and dstQueueFamilyIndex must both
+ // be VK_QUEUE_FAMILY_IGNORED
+ if ((src_q_f_index != VK_QUEUE_FAMILY_IGNORED) || (dst_q_f_index != VK_QUEUE_FAMILY_IGNORED)) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS",
+ "%s: Image Barrier for image 0x%" PRIx64 " was created with sharingMode of "
+                                     "VK_SHARING_MODE_CONCURRENT. Src and dst "
+                                     "queueFamilyIndices must be VK_QUEUE_FAMILY_IGNORED.",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image));
+ }
+ } else {
+ // Sharing mode is VK_SHARING_MODE_EXCLUSIVE. srcQueueFamilyIndex and
+ // dstQueueFamilyIndex must either both be VK_QUEUE_FAMILY_IGNORED,
+ // or both be a valid queue family
+ if (((src_q_f_index == VK_QUEUE_FAMILY_IGNORED) || (dst_q_f_index == VK_QUEUE_FAMILY_IGNORED)) &&
+ (src_q_f_index != dst_q_f_index)) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_QUEUE_INDEX, "DS", "%s: Image 0x%" PRIx64 " was created with sharingMode "
+ "of VK_SHARING_MODE_EXCLUSIVE. If one of src- or "
+ "dstQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, both "
+ "must be.",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image));
+ } else if (((src_q_f_index != VK_QUEUE_FAMILY_IGNORED) && (dst_q_f_index != VK_QUEUE_FAMILY_IGNORED)) &&
+ ((src_q_f_index >= dev_data->physDevProperties.queue_family_properties.size()) ||
+ (dst_q_f_index >= dev_data->physDevProperties.queue_family_properties.size()))) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS",
+ "%s: Image 0x%" PRIx64 " was created with sharingMode "
+ "of VK_SHARING_MODE_EXCLUSIVE, but srcQueueFamilyIndex %d"
+                                     " or dstQueueFamilyIndex %d is greater than the number of "
+                                     "queueFamilies (" PRINTF_SIZE_T_SPECIFIER ") created for this device.",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image), src_q_f_index,
+ dst_q_f_index, dev_data->physDevProperties.queue_family_properties.size());
+ }
+ }
+ }
+
+ if (mem_barrier) {
+ skip_call |=
+ ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->srcAccessMask, mem_barrier->oldLayout, "Source");
+ skip_call |=
+ ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->dstAccessMask, mem_barrier->newLayout, "Dest");
+            if (mem_barrier->newLayout == VK_IMAGE_LAYOUT_UNDEFINED || mem_barrier->newLayout == VK_IMAGE_LAYOUT_PREINITIALIZED) {
+                skip_call |=
+                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+                            DRAWSTATE_INVALID_BARRIER, "DS",
+                            "%s: Image Layout cannot be transitioned to UNDEFINED or PREINITIALIZED.", funcName);
+            }
+ auto image_data = dev_data->imageMap.find(mem_barrier->image);
+ VkFormat format;
+ uint32_t arrayLayers, mipLevels;
+ bool imageFound = false;
+ if (image_data != dev_data->imageMap.end()) {
+ format = image_data->second.createInfo.format;
+ arrayLayers = image_data->second.createInfo.arrayLayers;
+ mipLevels = image_data->second.createInfo.mipLevels;
+ imageFound = true;
+ } else if (dev_data->device_extensions.wsi_enabled) {
+ auto imageswap_data = dev_data->device_extensions.imageToSwapchainMap.find(mem_barrier->image);
+ if (imageswap_data != dev_data->device_extensions.imageToSwapchainMap.end()) {
+ auto swapchain_data = dev_data->device_extensions.swapchainMap.find(imageswap_data->second);
+ if (swapchain_data != dev_data->device_extensions.swapchainMap.end()) {
+ format = swapchain_data->second->createInfo.imageFormat;
+ arrayLayers = swapchain_data->second->createInfo.imageArrayLayers;
+ mipLevels = 1;
+ imageFound = true;
+ }
+ }
+ }
+ if (imageFound) {
+ if (vk_format_is_depth_and_stencil(format) &&
+ (!(mem_barrier->subresourceRange.aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) ||
+ !(mem_barrier->subresourceRange.aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT))) {
+                skip_call |=
+                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+                            DRAWSTATE_INVALID_BARRIER, "DS",
+                            "%s: Image is a depth and stencil format and thus must "
+                            "have both VK_IMAGE_ASPECT_DEPTH_BIT and "
+                            "VK_IMAGE_ASPECT_STENCIL_BIT set.",
+                            funcName);
+ int layerCount = (mem_barrier->subresourceRange.layerCount == VK_REMAINING_ARRAY_LAYERS)
+ ? 1
+ : mem_barrier->subresourceRange.layerCount;
+                if ((mem_barrier->subresourceRange.baseArrayLayer + layerCount) > arrayLayers) {
+                    skip_call |=
+                        log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+                                __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
+                                "%s: Subresource must have the sum of the baseArrayLayer (%d) and layerCount (%d) be less "
+                                "than or equal to the total number of layers (%d).",
+                                funcName, mem_barrier->subresourceRange.baseArrayLayer, layerCount, arrayLayers);
+                }
+ int levelCount = (mem_barrier->subresourceRange.levelCount == VK_REMAINING_MIP_LEVELS)
+ ? 1
+ : mem_barrier->subresourceRange.levelCount;
+                if ((mem_barrier->subresourceRange.baseMipLevel + levelCount) > mipLevels) {
+                    skip_call |=
+                        log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+                                __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
+                                "%s: Subresource must have the sum of the baseMipLevel (%d) and levelCount (%d) be less "
+                                "than or equal to the total number of levels (%d).",
+                                funcName, mem_barrier->subresourceRange.baseMipLevel, levelCount, mipLevels);
+                }
+ }
+ }
+ }
+ for (uint32_t i = 0; i < bufferBarrierCount; ++i) {
+ auto mem_barrier = &pBufferMemBarriers[i];
+ if (pCB->activeRenderPass) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_BARRIER, "DS", "%s: Buffer Barriers cannot be used during a render pass.", funcName);
+ }
+ if (!mem_barrier)
+ continue;
+
+ // Validate buffer barrier queue family indices
+ if ((mem_barrier->srcQueueFamilyIndex != VK_QUEUE_FAMILY_IGNORED &&
+ mem_barrier->srcQueueFamilyIndex >= dev_data->physDevProperties.queue_family_properties.size()) ||
+ (mem_barrier->dstQueueFamilyIndex != VK_QUEUE_FAMILY_IGNORED &&
+ mem_barrier->dstQueueFamilyIndex >= dev_data->physDevProperties.queue_family_properties.size())) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_QUEUE_INDEX, "DS",
+ "%s: Buffer Barrier 0x%" PRIx64 " has QueueFamilyIndex greater "
+ "than the number of QueueFamilies (" PRINTF_SIZE_T_SPECIFIER ") for this device.",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer),
+ dev_data->physDevProperties.queue_family_properties.size());
+ }
+
+        auto buffer_data = dev_data->bufferMap.find(mem_barrier->buffer);
+        if (buffer_data != dev_data->bufferMap.end()) {
+            // Only read the size after confirming the buffer is tracked, to avoid dereferencing an invalid iterator
+            uint64_t buffer_size =
+                buffer_data->second.create_info ? reinterpret_cast<uint64_t &>(buffer_data->second.create_info->size) : 0;
+ if (mem_barrier->offset >= buffer_size) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+                            DRAWSTATE_INVALID_BARRIER, "DS", "%s: Buffer Barrier 0x%" PRIx64 " has offset %" PRIu64
+                            " which is not less than total size %" PRIu64 ".",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer),
+ reinterpret_cast<const uint64_t &>(mem_barrier->offset), buffer_size);
+ } else if (mem_barrier->size != VK_WHOLE_SIZE && (mem_barrier->offset + mem_barrier->size > buffer_size)) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
+ "%s: Buffer Barrier 0x%" PRIx64 " has offset %" PRIu64 " and size %" PRIu64
+ " whose sum is greater than total size %" PRIu64 ".",
+ funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer),
+ reinterpret_cast<const uint64_t &>(mem_barrier->offset),
+ reinterpret_cast<const uint64_t &>(mem_barrier->size), buffer_size);
+ }
+ }
+ }
+ return skip_call;
+}
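The queue-family checks in the image-barrier loop above reduce to two rules: a VK_SHARING_MODE_CONCURRENT image must use VK_QUEUE_FAMILY_IGNORED on both sides, and a VK_SHARING_MODE_EXCLUSIVE image must either ignore both indices or name two in-range families. A minimal standalone sketch of that decision (the function name and the stand-in constant are illustrative):

```cpp
#include <cassert>
#include <cstdint>

const uint32_t kQueueFamilyIgnored = ~0u; // stand-in for VK_QUEUE_FAMILY_IGNORED

// Illustrative restatement of the barrier queue-family rule validated above.
bool BarrierQueueFamiliesValid(bool concurrent_sharing, uint32_t src, uint32_t dst,
                               uint32_t queue_family_count) {
    if (concurrent_sharing) {
        // CONCURRENT: both indices must be ignored.
        return src == kQueueFamilyIgnored && dst == kQueueFamilyIgnored;
    }
    // EXCLUSIVE: either both indices are ignored, or both name valid families.
    if (src == kQueueFamilyIgnored && dst == kQueueFamilyIgnored)
        return true;
    return src != kQueueFamilyIgnored && dst != kQueueFamilyIgnored &&
           src < queue_family_count && dst < queue_family_count;
}
```

The layer itself emits a distinct DRAWSTATE_INVALID_QUEUE_INDEX message for each failing branch rather than a single boolean.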
+
+bool validateEventStageMask(VkQueue queue, uint32_t eventCount, const VkEvent *pEvents, VkPipelineStageFlags sourceStageMask) {
+ bool skip_call = false;
+ VkPipelineStageFlags stageMask = 0;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ for (uint32_t i = 0; i < eventCount; ++i) {
+ auto queue_data = dev_data->queueMap.find(queue);
+ if (queue_data == dev_data->queueMap.end())
+ return false;
+ auto event_data = queue_data->second.eventToStageMap.find(pEvents[i]);
+ if (event_data != queue_data->second.eventToStageMap.end()) {
+ stageMask |= event_data->second;
+ } else {
+ auto global_event_data = dev_data->eventMap.find(pEvents[i]);
+ if (global_event_data == dev_data->eventMap.end()) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT,
+ reinterpret_cast<const uint64_t &>(pEvents[i]), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
+                                 "Event 0x%" PRIx64 " cannot be waited on if it has never been set.",
+ reinterpret_cast<const uint64_t &>(pEvents[i]));
+ } else {
+ stageMask |= global_event_data->second.stageMask;
+ }
+ }
+ }
+ if (sourceStageMask != stageMask) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_FENCE, "DS",
+                "Submitting cmdbuffer with call to vkCmdWaitEvents using srcStageMask 0x%x which must be the bitwise OR of the "
+ "stageMask parameters used in calls to vkCmdSetEvent and VK_PIPELINE_STAGE_HOST_BIT if used with vkSetEvent.",
+ sourceStageMask);
+ }
+ return skip_call;
+}
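validateEventStageMask above ORs together, for each waited event, the stageMask recorded when that event was set, and requires the wait's srcStageMask to equal the accumulated value. A toy model of that accumulation (container and names are illustrative; the layer's fallback to the global event map is omitted):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Toy model: each event id maps to the stageMask it was last set with; the
// wait's sourceStageMask must equal the OR over all waited events.
bool WaitStageMaskMatches(const std::map<int, uint32_t> &eventToStageMap,
                          const std::vector<int> &events, uint32_t sourceStageMask) {
    uint32_t accumulated = 0;
    for (int event : events) {
        auto it = eventToStageMap.find(event);
        if (it != eventToStageMap.end())
            accumulated |= it->second;
    }
    return accumulated == sourceStageMask;
}
```

Because the real check runs at queue-submit time (via the deferred `eventUpdates` functors), it sees the stage masks from vkCmdSetEvent calls recorded in other command buffers as well.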
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent *pEvents, VkPipelineStageFlags sourceStageMask,
+ VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ for (uint32_t i = 0; i < eventCount; ++i) {
+ pCB->waitedEvents.push_back(pEvents[i]);
+ pCB->events.push_back(pEvents[i]);
+ }
+ std::function<bool(VkQueue)> eventUpdate =
+ std::bind(validateEventStageMask, std::placeholders::_1, eventCount, pEvents, sourceStageMask);
+ pCB->eventUpdates.push_back(eventUpdate);
+ if (pCB->state == CB_RECORDING) {
+ skipCall |= addCmd(dev_data, pCB, CMD_WAITEVENTS, "vkCmdWaitEvents()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWaitEvents()");
+ }
+ skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers);
+ skipCall |=
+ ValidateBarriers("vkCmdWaitEvents", commandBuffer, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdWaitEvents(commandBuffer, eventCount, pEvents, sourceStageMask, dstStageMask,
+ memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask,
+ VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ skipCall |= addCmd(dev_data, pCB, CMD_PIPELINEBARRIER, "vkCmdPipelineBarrier()");
+ skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers);
+ skipCall |=
+ ValidateBarriers("vkCmdPipelineBarrier", commandBuffer, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags,
+ memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot, VkFlags flags) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ QueryObject query = {queryPool, slot};
+ pCB->activeQueries.insert(query);
+ if (!pCB->startedQueries.count(query)) {
+ pCB->startedQueries.insert(query);
+ }
+ skipCall |= addCmd(dev_data, pCB, CMD_BEGINQUERY, "vkCmdBeginQuery()");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ QueryObject query = {queryPool, slot};
+ if (!pCB->activeQueries.count(query)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_QUERY, "DS", "Ending a query before it was started: queryPool %" PRIu64 ", index %d",
+ (uint64_t)(queryPool), slot);
+ } else {
+ pCB->activeQueries.erase(query);
+ }
+ pCB->queryToStateMap[query] = 1;
+ if (pCB->state == CB_RECORDING) {
+            skipCall |= addCmd(dev_data, pCB, CMD_ENDQUERY, "vkCmdEndQuery()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdEndQuery()");
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdEndQuery(commandBuffer, queryPool, slot);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ for (uint32_t i = 0; i < queryCount; i++) {
+ QueryObject query = {queryPool, firstQuery + i};
+ pCB->waitedEventsBeforeQueryReset[query] = pCB->waitedEvents;
+ pCB->queryToStateMap[query] = 0;
+ }
+ if (pCB->state == CB_RECORDING) {
+            skipCall |= addCmd(dev_data, pCB, CMD_RESETQUERYPOOL, "vkCmdResetQueryPool()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdResetQueryPool()");
+ }
+        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetQueryPool");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount,
+ VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+#if MTMERGESOURCE
+ VkDeviceMemory mem;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ skipCall |=
+ get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, mem, true);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyQueryPoolResults");
+ // Validate that DST buffer has correct usage flags set
+ skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
+ "vkCmdCopyQueryPoolResults()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+#endif
+ if (pCB) {
+ for (uint32_t i = 0; i < queryCount; i++) {
+ QueryObject query = {queryPool, firstQuery + i};
+ if (!pCB->queryToStateMap[query]) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
+ "Requesting a copy from query to buffer with invalid query: queryPool %" PRIu64 ", index %d",
+ (uint64_t)(queryPool), firstQuery + i);
+ }
+ }
+ if (pCB->state == CB_RECORDING) {
+ skipCall |= addCmd(dev_data, pCB, CMD_COPYQUERYPOOLRESULTS, "vkCmdCopyQueryPoolResults()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdCopyQueryPoolResults()");
+ }
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyQueryPoolResults");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer,
+ dstOffset, stride, flags);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout,
+ VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size,
+ const void *pValues) {
+ bool skipCall = false;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ if (pCB->state == CB_RECORDING) {
+ skipCall |= addCmd(dev_data, pCB, CMD_PUSHCONSTANTS, "vkCmdPushConstants()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdPushConstants()");
+ }
+ }
+ if ((offset + size) > dev_data->physDevProperties.properties.limits.maxPushConstantsSize) {
+ skipCall |= validatePushConstantSize(dev_data, offset, size, "vkCmdPushConstants()");
+ }
+ // TODO : Add warning if push constant update doesn't align with range
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!skipCall)
+ dev_data->device_dispatch_table->CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ QueryObject query = {queryPool, slot};
+ pCB->queryToStateMap[query] = 1;
+ if (pCB->state == CB_RECORDING) {
+ skipCall |= addCmd(dev_data, pCB, CMD_WRITETIMESTAMP, "vkCmdWriteTimestamp()");
+ } else {
+ skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWriteTimestamp()");
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkFramebuffer *pFramebuffer) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
+ if (VK_SUCCESS == result) {
+ // Shadow create info and store in map
+ VkFramebufferCreateInfo *localFBCI = new VkFramebufferCreateInfo(*pCreateInfo);
+ if (pCreateInfo->pAttachments) {
+ localFBCI->pAttachments = new VkImageView[localFBCI->attachmentCount];
+ memcpy((void *)localFBCI->pAttachments, pCreateInfo->pAttachments, localFBCI->attachmentCount * sizeof(VkImageView));
+ }
+ FRAMEBUFFER_NODE fbNode = {};
+ fbNode.createInfo = *localFBCI;
+ std::pair<VkFramebuffer, FRAMEBUFFER_NODE> fbPair(*pFramebuffer, fbNode);
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ VkImageView view = pCreateInfo->pAttachments[i];
+ auto view_data = dev_data->imageViewMap.find(view);
+ if (view_data == dev_data->imageViewMap.end()) {
+ continue;
+ }
+ MT_FB_ATTACHMENT_INFO fb_info;
+ get_mem_binding_from_object(dev_data, device, (uint64_t)(view_data->second.image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ &fb_info.mem);
+ fb_info.image = view_data->second.image;
+ fbPair.second.attachments.push_back(fb_info);
+ }
+ dev_data->frameBufferMap.insert(fbPair);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VkBool32 FindDependency(const int index, const int dependent, const std::vector<DAGNode> &subpass_to_node,
+ std::unordered_set<uint32_t> &processed_nodes) {
+    // If we have already checked this node, no dependency path was found through it, so return false.
+ if (processed_nodes.count(index))
+ return VK_FALSE;
+ processed_nodes.insert(index);
+ const DAGNode &node = subpass_to_node[index];
+ // Look for a dependency path. If one exists return true else recurse on the previous nodes.
+ if (std::find(node.prev.begin(), node.prev.end(), dependent) == node.prev.end()) {
+ for (auto elem : node.prev) {
+ if (FindDependency(elem, dependent, subpass_to_node, processed_nodes))
+ return VK_TRUE;
+ }
+ } else {
+ return VK_TRUE;
+ }
+ return VK_FALSE;
+}
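FindDependency above is a depth-first search over each node's prev edges, looking for `dependent` among the ancestors of `index`. A compact standalone DFS of the same shape (types simplified, names illustrative):

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Simplified DAG node: only the predecessor edges matter for this search.
struct Node {
    std::vector<int> prev;
};

// Depth-first search over prev edges: returns true when `target` is reachable
// from `index`, i.e. `index` transitively depends on `target`. The visited set
// prevents revisiting shared ancestors.
bool Reaches(int index, int target, const std::vector<Node> &nodes,
             std::unordered_set<int> &visited) {
    if (visited.count(index))
        return false;
    visited.insert(index);
    for (int p : nodes[index].prev) {
        if (p == target || Reaches(p, target, nodes, visited))
            return true;
    }
    return false;
}
```

CheckDependencyExists calls the search in both directions, since either subpass ordering satisfies the implicit-dependency question.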
+
+VkBool32 CheckDependencyExists(const layer_data *my_data, const int subpass, const std::vector<uint32_t> &dependent_subpasses,
+ const std::vector<DAGNode> &subpass_to_node, VkBool32 &skip_call) {
+ VkBool32 result = VK_TRUE;
+ // Loop through all subpasses that share the same attachment and make sure a dependency exists
+ for (uint32_t k = 0; k < dependent_subpasses.size(); ++k) {
+ if (subpass == dependent_subpasses[k])
+ continue;
+ const DAGNode &node = subpass_to_node[subpass];
+ // Check for a specified dependency between the two nodes. If one exists we are done.
+ auto prev_elem = std::find(node.prev.begin(), node.prev.end(), dependent_subpasses[k]);
+ auto next_elem = std::find(node.next.begin(), node.next.end(), dependent_subpasses[k]);
+ if (prev_elem == node.prev.end() && next_elem == node.next.end()) {
+            // If no dependency exists, an implicit one still might. If so, report that only an implicit dependency is
+            // specified; if not, report the missing dependency as an error.
+ std::unordered_set<uint32_t> processed_nodes;
+ if (FindDependency(subpass, dependent_subpasses[k], subpass_to_node, processed_nodes) ||
+ FindDependency(dependent_subpasses[k], subpass, subpass_to_node, processed_nodes)) {
+ // TODO: Verify against Valid Use section of spec
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
+ "A dependency between subpasses %d and %d must exist but only an implicit one is specified.",
+ subpass, dependent_subpasses[k]);
+ } else {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
+ "A dependency between subpasses %d and %d must exist but one is not specified.", subpass,
+ dependent_subpasses[k]);
+ result = VK_FALSE;
+ }
+ }
+ }
+ return result;
+}
+
+VkBool32 CheckPreserved(const layer_data *my_data, const VkRenderPassCreateInfo *pCreateInfo, const int index,
+ const uint32_t attachment, const std::vector<DAGNode> &subpass_to_node, int depth, VkBool32 &skip_call) {
+ const DAGNode &node = subpass_to_node[index];
+ // If this node writes to the attachment return true as next nodes need to preserve the attachment.
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[index];
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ if (attachment == subpass.pColorAttachments[j].attachment)
+ return VK_TRUE;
+ }
+ if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
+ if (attachment == subpass.pDepthStencilAttachment->attachment)
+ return VK_TRUE;
+ }
+ VkBool32 result = VK_FALSE;
+ // Loop through previous nodes and see if any of them write to the attachment.
+ for (auto elem : node.prev) {
+ result |= CheckPreserved(my_data, pCreateInfo, elem, attachment, subpass_to_node, depth + 1, skip_call);
+ }
+    // If the attachment was written to by a previous node, then this node needs to preserve it.
+ if (result && depth > 0) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[index];
+ VkBool32 has_preserved = VK_FALSE;
+ for (uint32_t j = 0; j < subpass.preserveAttachmentCount; ++j) {
+ if (subpass.pPreserveAttachments[j] == attachment) {
+ has_preserved = VK_TRUE;
+ break;
+ }
+ }
+ if (has_preserved == VK_FALSE) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS",
+ "Attachment %d is used by a later subpass and must be preserved in subpass %d.", attachment, index);
+ }
+ }
+ return result;
+}
+
+template <class T> bool isRangeOverlapping(T offset1, T size1, T offset2, T size2) {
+    // Half-open ranges [offset, offset + size) overlap iff each range starts before the other ends.
+    return (offset1 < (offset2 + size2)) && (offset2 < (offset1 + size1));
+}
+
+bool isRegionOverlapping(VkImageSubresourceRange range1, VkImageSubresourceRange range2) {
+ return (isRangeOverlapping(range1.baseMipLevel, range1.levelCount, range2.baseMipLevel, range2.levelCount) &&
+ isRangeOverlapping(range1.baseArrayLayer, range1.layerCount, range2.baseArrayLayer, range2.layerCount));
+}
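The overlap helpers above compare half-open ranges [offset, offset + size). The conventional interval-overlap test, shown standalone below for comparison (the name is illustrative), is that each range starts before the other ends; note that it correctly reports identical ranges as overlapping:

```cpp
#include <cassert>
#include <cstdint>

// Conventional half-open interval overlap test: [o1, o1 + s1) and [o2, o2 + s2)
// intersect exactly when each interval starts before the other one ends.
bool RangesOverlap(uint32_t o1, uint32_t s1, uint32_t o2, uint32_t s2) {
    return (o1 < o2 + s2) && (o2 < o1 + s1);
}
```

This is the test to check a candidate predicate against: any formulation that rejects identical ranges, or ranges sharing an endpoint interior to the other, under-reports aliasing.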
+
+VkBool32 ValidateDependencies(const layer_data *my_data, const VkRenderPassBeginInfo *pRenderPassBegin,
+ const std::vector<DAGNode> &subpass_to_node) {
+ VkBool32 skip_call = VK_FALSE;
+ const VkFramebufferCreateInfo *pFramebufferInfo = &my_data->frameBufferMap.at(pRenderPassBegin->framebuffer).createInfo;
+ const VkRenderPassCreateInfo *pCreateInfo = my_data->renderPassMap.at(pRenderPassBegin->renderPass)->pCreateInfo;
+ std::vector<std::vector<uint32_t>> output_attachment_to_subpass(pCreateInfo->attachmentCount);
+ std::vector<std::vector<uint32_t>> input_attachment_to_subpass(pCreateInfo->attachmentCount);
+ std::vector<std::vector<uint32_t>> overlapping_attachments(pCreateInfo->attachmentCount);
+ // Find overlapping attachments
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ for (uint32_t j = i + 1; j < pCreateInfo->attachmentCount; ++j) {
+ VkImageView viewi = pFramebufferInfo->pAttachments[i];
+ VkImageView viewj = pFramebufferInfo->pAttachments[j];
+ if (viewi == viewj) {
+ overlapping_attachments[i].push_back(j);
+ overlapping_attachments[j].push_back(i);
+ continue;
+ }
+ auto view_data_i = my_data->imageViewMap.find(viewi);
+ auto view_data_j = my_data->imageViewMap.find(viewj);
+ if (view_data_i == my_data->imageViewMap.end() || view_data_j == my_data->imageViewMap.end()) {
+ continue;
+ }
+ if (view_data_i->second.image == view_data_j->second.image &&
+ isRegionOverlapping(view_data_i->second.subresourceRange, view_data_j->second.subresourceRange)) {
+ overlapping_attachments[i].push_back(j);
+ overlapping_attachments[j].push_back(i);
+ continue;
+ }
+ auto image_data_i = my_data->imageMap.find(view_data_i->second.image);
+ auto image_data_j = my_data->imageMap.find(view_data_j->second.image);
+ if (image_data_i == my_data->imageMap.end() || image_data_j == my_data->imageMap.end()) {
+ continue;
+ }
+ if (image_data_i->second.mem == image_data_j->second.mem &&
+ isRangeOverlapping(image_data_i->second.memOffset, image_data_i->second.memSize, image_data_j->second.memOffset,
+ image_data_j->second.memSize)) {
+ overlapping_attachments[i].push_back(j);
+ overlapping_attachments[j].push_back(i);
+ }
+ }
+ }
+ for (uint32_t i = 0; i < overlapping_attachments.size(); ++i) {
+ uint32_t attachment = i;
+ for (auto other_attachment : overlapping_attachments[i]) {
+ if (!(pCreateInfo->pAttachments[attachment].flags & VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT)) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS", "Attachment %d aliases attachment %d but doesn't "
+ "set VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT.",
+ attachment, other_attachment);
+ }
+ if (!(pCreateInfo->pAttachments[other_attachment].flags & VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT)) {
+ skip_call |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS", "Attachment %d aliases attachment %d but doesn't "
+ "set VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT.",
+ other_attachment, attachment);
+ }
+ }
+ }
+    // For each attachment, find the subpasses that use it.
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ uint32_t attachment = subpass.pInputAttachments[j].attachment;
+ input_attachment_to_subpass[attachment].push_back(i);
+ for (auto overlapping_attachment : overlapping_attachments[attachment]) {
+ input_attachment_to_subpass[overlapping_attachment].push_back(i);
+ }
+ }
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ uint32_t attachment = subpass.pColorAttachments[j].attachment;
+ output_attachment_to_subpass[attachment].push_back(i);
+ for (auto overlapping_attachment : overlapping_attachments[attachment]) {
+ output_attachment_to_subpass[overlapping_attachment].push_back(i);
+ }
+ }
+ if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
+ uint32_t attachment = subpass.pDepthStencilAttachment->attachment;
+ output_attachment_to_subpass[attachment].push_back(i);
+ for (auto overlapping_attachment : overlapping_attachments[attachment]) {
+ output_attachment_to_subpass[overlapping_attachment].push_back(i);
+ }
+ }
+ }
+    // If a dependency is needed, make sure one exists
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
+        // If the attachment is an input, then every subpass that writes to it must have a dependency relationship
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ const uint32_t &attachment = subpass.pInputAttachments[j].attachment;
+ CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
+ }
+ // If the attachment is an output then all subpasses that use the attachment must have a dependency relationship
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ const uint32_t &attachment = subpass.pColorAttachments[j].attachment;
+ CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
+ CheckDependencyExists(my_data, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
+ }
+ if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
+ const uint32_t &attachment = subpass.pDepthStencilAttachment->attachment;
+ CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
+ CheckDependencyExists(my_data, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
+ }
+ }
+    // Loop through implicit dependencies: if this pass reads an attachment, make sure the attachment is preserved
+    // by every pass between the one that wrote it and this one.
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ CheckPreserved(my_data, pCreateInfo, i, subpass.pInputAttachments[j].attachment, subpass_to_node, 0, skip_call);
+ }
+ }
+ return skip_call;
+}
+
+VkBool32 ValidateLayouts(const layer_data *my_data, VkDevice device, const VkRenderPassCreateInfo *pCreateInfo) {
+ VkBool32 skip = VK_FALSE;
+
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ if (subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL &&
+ subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ if (subpass.pInputAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
+ // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
+ skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+ (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for input attachment is GENERAL but should be READ_ONLY_OPTIMAL.");
+ } else {
+ skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for input attachment is %s but can only be READ_ONLY_OPTIMAL or GENERAL.",
+ string_VkImageLayout(subpass.pInputAttachments[j].layout));
+ }
+ }
+ }
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ if (subpass.pColorAttachments[j].layout != VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL) {
+ if (subpass.pColorAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
+ // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
+ skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+ (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for color attachment is GENERAL but should be COLOR_ATTACHMENT_OPTIMAL.");
+ } else {
+ skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for color attachment is %s but can only be COLOR_ATTACHMENT_OPTIMAL or GENERAL.",
+ string_VkImageLayout(subpass.pColorAttachments[j].layout));
+ }
+ }
+ }
+ if ((subpass.pDepthStencilAttachment != NULL) && (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
+ if (subpass.pDepthStencilAttachment->layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL) {
+ if (subpass.pDepthStencilAttachment->layout == VK_IMAGE_LAYOUT_GENERAL) {
+ // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
+ skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
+ (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for depth attachment is GENERAL but should be DEPTH_STENCIL_ATTACHMENT_OPTIMAL.");
+ } else {
+ skip |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+ "Layout for depth attachment is %s but can only be DEPTH_STENCIL_ATTACHMENT_OPTIMAL or GENERAL.",
+ string_VkImageLayout(subpass.pDepthStencilAttachment->layout));
+ }
+ }
+ }
+ }
+ return skip;
+}
+
+VkBool32 CreatePassDAG(const layer_data *my_data, VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
+ std::vector<DAGNode> &subpass_to_node, std::vector<bool> &has_self_dependency) {
+ VkBool32 skip_call = VK_FALSE;
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ DAGNode &subpass_node = subpass_to_node[i];
+ subpass_node.pass = i;
+ }
+ for (uint32_t i = 0; i < pCreateInfo->dependencyCount; ++i) {
+ const VkSubpassDependency &dependency = pCreateInfo->pDependencies[i];
+ if (dependency.srcSubpass > dependency.dstSubpass && dependency.srcSubpass != VK_SUBPASS_EXTERNAL &&
+ dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS",
+                                 "Dependency graph must be specified such that an earlier pass cannot depend on a later pass.");
+ } else if (dependency.srcSubpass == VK_SUBPASS_EXTERNAL && dependency.dstSubpass == VK_SUBPASS_EXTERNAL) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS", "The src and dest subpasses cannot both be external.");
+ } else if (dependency.srcSubpass == dependency.dstSubpass) {
+ has_self_dependency[dependency.srcSubpass] = true;
+ }
+ if (dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
+ subpass_to_node[dependency.dstSubpass].prev.push_back(dependency.srcSubpass);
+ }
+ if (dependency.srcSubpass != VK_SUBPASS_EXTERNAL) {
+ subpass_to_node[dependency.srcSubpass].next.push_back(dependency.dstSubpass);
+ }
+ }
+ return skip_call;
+}
+
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkShaderModule *pShaderModule) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skip_call = VK_FALSE;
+ if (!shader_is_spirv(pCreateInfo)) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
+ /* dev */ 0, __LINE__, SHADER_CHECKER_NON_SPIRV_SHADER, "SC", "Shader is not SPIR-V");
+ }
+
+ if (VK_FALSE != skip_call)
+ return VK_ERROR_VALIDATION_FAILED_EXT;
+
+ VkResult res = my_data->device_dispatch_table->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);
+
+ if (res == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ my_data->shaderModuleMap[*pShaderModule] = unique_ptr<shader_module>(new shader_module(pCreateInfo));
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return res;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkRenderPass *pRenderPass) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Create DAG
+ std::vector<bool> has_self_dependency(pCreateInfo->subpassCount);
+ std::vector<DAGNode> subpass_to_node(pCreateInfo->subpassCount);
+ skip_call |= CreatePassDAG(dev_data, device, pCreateInfo, subpass_to_node, has_self_dependency);
+ // Validate
+ skip_call |= ValidateLayouts(dev_data, device, pCreateInfo);
+    if (VK_FALSE != skip_call) {
+        loader_platform_thread_unlock_mutex(&globalLock);
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ VkResult result = dev_data->device_dispatch_table->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
+ if (VK_SUCCESS == result) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ // TODOSC : Merge in tracking of renderpass from shader_checker
+ // Shadow create info and store in map
+ VkRenderPassCreateInfo *localRPCI = new VkRenderPassCreateInfo(*pCreateInfo);
+ if (pCreateInfo->pAttachments) {
+ localRPCI->pAttachments = new VkAttachmentDescription[localRPCI->attachmentCount];
+ memcpy((void *)localRPCI->pAttachments, pCreateInfo->pAttachments,
+ localRPCI->attachmentCount * sizeof(VkAttachmentDescription));
+ }
+ if (pCreateInfo->pSubpasses) {
+ localRPCI->pSubpasses = new VkSubpassDescription[localRPCI->subpassCount];
+ memcpy((void *)localRPCI->pSubpasses, pCreateInfo->pSubpasses, localRPCI->subpassCount * sizeof(VkSubpassDescription));
+
+ for (uint32_t i = 0; i < localRPCI->subpassCount; i++) {
+ VkSubpassDescription *subpass = (VkSubpassDescription *)&localRPCI->pSubpasses[i];
+ const uint32_t attachmentCount = subpass->inputAttachmentCount +
+ subpass->colorAttachmentCount * (1 + (subpass->pResolveAttachments ? 1 : 0)) +
+ ((subpass->pDepthStencilAttachment) ? 1 : 0) + subpass->preserveAttachmentCount;
+ VkAttachmentReference *attachments = new VkAttachmentReference[attachmentCount];
+
+ memcpy(attachments, subpass->pInputAttachments, sizeof(attachments[0]) * subpass->inputAttachmentCount);
+ subpass->pInputAttachments = attachments;
+ attachments += subpass->inputAttachmentCount;
+
+ memcpy(attachments, subpass->pColorAttachments, sizeof(attachments[0]) * subpass->colorAttachmentCount);
+ subpass->pColorAttachments = attachments;
+ attachments += subpass->colorAttachmentCount;
+
+ if (subpass->pResolveAttachments) {
+ memcpy(attachments, subpass->pResolveAttachments, sizeof(attachments[0]) * subpass->colorAttachmentCount);
+ subpass->pResolveAttachments = attachments;
+ attachments += subpass->colorAttachmentCount;
+ }
+
+ if (subpass->pDepthStencilAttachment) {
+ memcpy(attachments, subpass->pDepthStencilAttachment, sizeof(attachments[0]) * 1);
+ subpass->pDepthStencilAttachment = attachments;
+ attachments += 1;
+ }
+
+ memcpy(attachments, subpass->pPreserveAttachments, sizeof(attachments[0]) * subpass->preserveAttachmentCount);
+ subpass->pPreserveAttachments = &attachments->attachment;
+ }
+ }
+ if (pCreateInfo->pDependencies) {
+ localRPCI->pDependencies = new VkSubpassDependency[localRPCI->dependencyCount];
+ memcpy((void *)localRPCI->pDependencies, pCreateInfo->pDependencies,
+ localRPCI->dependencyCount * sizeof(VkSubpassDependency));
+ }
+ dev_data->renderPassMap[*pRenderPass] = new RENDER_PASS_NODE(localRPCI);
+ dev_data->renderPassMap[*pRenderPass]->hasSelfDependency = has_self_dependency;
+ dev_data->renderPassMap[*pRenderPass]->subpassToNode = subpass_to_node;
+#if MTMERGESOURCE
+ // MTMTODO : Merge with code from above to eliminate duplication
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ VkAttachmentDescription desc = pCreateInfo->pAttachments[i];
+ MT_PASS_ATTACHMENT_INFO pass_info;
+ pass_info.load_op = desc.loadOp;
+ pass_info.store_op = desc.storeOp;
+ pass_info.attachment = i;
+ dev_data->renderPassMap[*pRenderPass]->attachments.push_back(pass_info);
+ }
+ // TODO: Maybe fill list and then copy instead of locking
+ std::unordered_map<uint32_t, bool> &attachment_first_read = dev_data->renderPassMap[*pRenderPass]->attachment_first_read;
+ std::unordered_map<uint32_t, VkImageLayout> &attachment_first_layout =
+ dev_data->renderPassMap[*pRenderPass]->attachment_first_layout;
+ for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
+ const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ uint32_t attachment = subpass.pInputAttachments[j].attachment;
+ if (attachment_first_read.count(attachment))
+ continue;
+ attachment_first_read.insert(std::make_pair(attachment, true));
+ attachment_first_layout.insert(std::make_pair(attachment, subpass.pInputAttachments[j].layout));
+ }
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ uint32_t attachment = subpass.pColorAttachments[j].attachment;
+ if (attachment_first_read.count(attachment))
+ continue;
+ attachment_first_read.insert(std::make_pair(attachment, false));
+ attachment_first_layout.insert(std::make_pair(attachment, subpass.pColorAttachments[j].layout));
+ }
+ if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
+ uint32_t attachment = subpass.pDepthStencilAttachment->attachment;
+ if (attachment_first_read.count(attachment))
+ continue;
+ attachment_first_read.insert(std::make_pair(attachment, false));
+ attachment_first_layout.insert(std::make_pair(attachment, subpass.pDepthStencilAttachment->layout));
+ }
+ }
+#endif
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+// Free the renderpass shadow
+static void deleteRenderPasses(layer_data *my_data) {
+    if (my_data->renderPassMap.empty())
+ return;
+ for (auto ii = my_data->renderPassMap.begin(); ii != my_data->renderPassMap.end(); ++ii) {
+ const VkRenderPassCreateInfo *pRenderPassInfo = (*ii).second->pCreateInfo;
+ delete[] pRenderPassInfo->pAttachments;
+ if (pRenderPassInfo->pSubpasses) {
+ for (uint32_t i = 0; i < pRenderPassInfo->subpassCount; ++i) {
+                // Attachments are all allocated in one block, so just need to
+                // find the first non-null one to delete
+ if (pRenderPassInfo->pSubpasses[i].pInputAttachments) {
+ delete[] pRenderPassInfo->pSubpasses[i].pInputAttachments;
+ } else if (pRenderPassInfo->pSubpasses[i].pColorAttachments) {
+ delete[] pRenderPassInfo->pSubpasses[i].pColorAttachments;
+ } else if (pRenderPassInfo->pSubpasses[i].pResolveAttachments) {
+ delete[] pRenderPassInfo->pSubpasses[i].pResolveAttachments;
+ } else if (pRenderPassInfo->pSubpasses[i].pPreserveAttachments) {
+ delete[] pRenderPassInfo->pSubpasses[i].pPreserveAttachments;
+ }
+ }
+ delete[] pRenderPassInfo->pSubpasses;
+ }
+ delete[] pRenderPassInfo->pDependencies;
+ delete pRenderPassInfo;
+ delete (*ii).second;
+ }
+ my_data->renderPassMap.clear();
+}
+
+VkBool32 VerifyFramebufferAndRenderPassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ const VkRenderPassCreateInfo *pRenderPassInfo = dev_data->renderPassMap[pRenderPassBegin->renderPass]->pCreateInfo;
+ const VkFramebufferCreateInfo framebufferInfo = dev_data->frameBufferMap[pRenderPassBegin->framebuffer].createInfo;
+ if (pRenderPassInfo->attachmentCount != framebufferInfo.attachmentCount) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS", "You cannot start a render pass using a framebuffer "
+ "with a different number of attachments.");
+ }
+ for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
+ const VkImageView &image_view = framebufferInfo.pAttachments[i];
+ auto image_data = dev_data->imageViewMap.find(image_view);
+ assert(image_data != dev_data->imageViewMap.end());
+ const VkImage &image = image_data->second.image;
+ const VkImageSubresourceRange &subRange = image_data->second.subresourceRange;
+ IMAGE_CMD_BUF_LAYOUT_NODE newNode = {pRenderPassInfo->pAttachments[i].initialLayout,
+ pRenderPassInfo->pAttachments[i].initialLayout};
+ // TODO: Do not iterate over every possibility - consolidate where possible
+ for (uint32_t j = 0; j < subRange.levelCount; j++) {
+ uint32_t level = subRange.baseMipLevel + j;
+ for (uint32_t k = 0; k < subRange.layerCount; k++) {
+ uint32_t layer = subRange.baseArrayLayer + k;
+ VkImageSubresource sub = {subRange.aspectMask, level, layer};
+ IMAGE_CMD_BUF_LAYOUT_NODE node;
+ if (!FindLayout(pCB, image, sub, node)) {
+ SetLayout(pCB, image, sub, newNode);
+ continue;
+ }
+ if (newNode.layout != node.layout) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+                            DRAWSTATE_INVALID_RENDERPASS, "DS", "You cannot start a render pass using attachment %i "
+                                                                "whose initial layout differs from the starting layout.",
+ i);
+ }
+ }
+ }
+ }
+ return skip_call;
+}
+
+void TransitionSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, const int subpass_index) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
+ if (render_pass_data == dev_data->renderPassMap.end()) {
+ return;
+ }
+ const VkRenderPassCreateInfo *pRenderPassInfo = render_pass_data->second->pCreateInfo;
+ auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
+ if (framebuffer_data == dev_data->frameBufferMap.end()) {
+ return;
+ }
+ const VkFramebufferCreateInfo framebufferInfo = framebuffer_data->second.createInfo;
+ const VkSubpassDescription &subpass = pRenderPassInfo->pSubpasses[subpass_index];
+ for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
+ const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pInputAttachments[j].attachment];
+ SetLayout(dev_data, pCB, image_view, subpass.pInputAttachments[j].layout);
+ }
+ for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
+ const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pColorAttachments[j].attachment];
+ SetLayout(dev_data, pCB, image_view, subpass.pColorAttachments[j].layout);
+ }
+ if ((subpass.pDepthStencilAttachment != NULL) && (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
+ const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pDepthStencilAttachment->attachment];
+ SetLayout(dev_data, pCB, image_view, subpass.pDepthStencilAttachment->layout);
+ }
+}
+
+VkBool32 validatePrimaryCommandBuffer(const layer_data *my_data, const GLOBAL_CB_NODE *pCB, const std::string &cmd_name) {
+ VkBool32 skip_call = VK_FALSE;
+ if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
+ skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Cannot execute command %s on a secondary command buffer.",
+ cmd_name.c_str());
+ }
+ return skip_call;
+}
+
+void TransitionFinalSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
+ auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
+ if (render_pass_data == dev_data->renderPassMap.end()) {
+ return;
+ }
+ const VkRenderPassCreateInfo *pRenderPassInfo = render_pass_data->second->pCreateInfo;
+ auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
+ if (framebuffer_data == dev_data->frameBufferMap.end()) {
+ return;
+ }
+ const VkFramebufferCreateInfo framebufferInfo = framebuffer_data->second.createInfo;
+ for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
+ const VkImageView &image_view = framebufferInfo.pAttachments[i];
+ SetLayout(dev_data, pCB, image_view, pRenderPassInfo->pAttachments[i].finalLayout);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, VkSubpassContents contents) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ if (pRenderPassBegin && pRenderPassBegin->renderPass) {
+#if MTMERGE
+ auto pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
+ if (pass_data != dev_data->renderPassMap.end()) {
+ RENDER_PASS_NODE* pRPNode = pass_data->second;
+ pRPNode->fb = pRenderPassBegin->framebuffer;
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ for (size_t i = 0; i < pRPNode->attachments.size(); ++i) {
+ MT_FB_ATTACHMENT_INFO &fb_info = dev_data->frameBufferMap[pRPNode->fb].attachments[i];
+ if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_CLEAR) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, fb_info.mem, true, fb_info.image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ VkImageLayout &attachment_layout = pRPNode->attachment_first_layout[pRPNode->attachments[i].attachment];
+ if (attachment_layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL ||
+ attachment_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT, (uint64_t)(pRenderPassBegin->renderPass), __LINE__,
+ MEMTRACK_INVALID_LAYOUT, "MEM", "Cannot clear attachment %d with invalid first layout %d.",
+ pRPNode->attachments[i].attachment, attachment_layout);
+ }
+ } else if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_DONT_CARE) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, fb_info.mem, false, fb_info.image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ } else if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_LOAD) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ return validate_memory_is_valid(dev_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image);
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ }
+ if (pRPNode->attachment_first_read[pRPNode->attachments[i].attachment]) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ return validate_memory_is_valid(dev_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image);
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ }
+ }
+ }
+#endif
+ skipCall |= VerifyFramebufferAndRenderPassLayouts(commandBuffer, pRenderPassBegin);
+ auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
+ if (render_pass_data != dev_data->renderPassMap.end()) {
+ skipCall |= ValidateDependencies(dev_data, pRenderPassBegin, render_pass_data->second->subpassToNode);
+ }
+ skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBeginRenderPass");
+ skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdBeginRenderPass");
+ skipCall |= addCmd(dev_data, pCB, CMD_BEGINRENDERPASS, "vkCmdBeginRenderPass()");
+ pCB->activeRenderPass = pRenderPassBegin->renderPass;
+ // This is a shallow copy as that is all that is needed for now
+ pCB->activeRenderPassBeginInfo = *pRenderPassBegin;
+ pCB->activeSubpass = 0;
+ pCB->activeSubpassContents = contents;
+ pCB->framebuffer = pRenderPassBegin->framebuffer;
+ // Connect this framebuffer to this cmdBuffer
+ dev_data->frameBufferMap[pCB->framebuffer].referencingCmdBuffers.insert(pCB->commandBuffer);
+ } else {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_RENDERPASS, "DS", "You cannot use a NULL RenderPass object in vkCmdBeginRenderPass()");
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall) {
+ dev_data->device_dispatch_table->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents);
+ loader_platform_thread_lock_mutex(&globalLock);
+ // This is a shallow copy as that is all that is needed for now
+ dev_data->renderPassBeginInfo = *pRenderPassBegin;
+ dev_data->currentSubpass = 0;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ TransitionSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo, ++dev_data->currentSubpass);
+ if (pCB) {
+ skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdNextSubpass");
+ skipCall |= addCmd(dev_data, pCB, CMD_NEXTSUBPASS, "vkCmdNextSubpass()");
+ pCB->activeSubpass++;
+ pCB->activeSubpassContents = contents;
+ TransitionSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo, pCB->activeSubpass);
+ if (pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline) {
+ skipCall |= validatePipelineState(dev_data, pCB, VK_PIPELINE_BIND_POINT_GRAPHICS,
+ pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline);
+ }
+ skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdNextSubpass");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdNextSubpass(commandBuffer, contents);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ auto pass_data = dev_data->renderPassMap.find(cb_data->second->activeRenderPass);
+ if (pass_data != dev_data->renderPassMap.end()) {
+ RENDER_PASS_NODE* pRPNode = pass_data->second;
+ for (size_t i = 0; i < pRPNode->attachments.size(); ++i) {
+ MT_FB_ATTACHMENT_INFO &fb_info = dev_data->frameBufferMap[pRPNode->fb].attachments[i];
+ if (pRPNode->attachments[i].store_op == VK_ATTACHMENT_STORE_OP_STORE) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, fb_info.mem, true, fb_info.image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ } else if (pRPNode->attachments[i].store_op == VK_ATTACHMENT_STORE_OP_DONT_CARE) {
+ if (cb_data != dev_data->commandBufferMap.end()) {
+ std::function<VkBool32()> function = [=]() {
+ set_memory_valid(dev_data, fb_info.mem, false, fb_info.image);
+ return VK_FALSE;
+ };
+ cb_data->second->validate_functions.push_back(function);
+ }
+ }
+ }
+ }
+ }
+#endif
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ TransitionFinalSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo);
+ if (pCB) {
+        skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdEndRenderPass");
+ skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdEndRenderPass");
+ skipCall |= addCmd(dev_data, pCB, CMD_ENDRENDERPASS, "vkCmdEndRenderPass()");
+ TransitionFinalSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo);
+ pCB->activeRenderPass = 0;
+ pCB->activeSubpass = 0;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdEndRenderPass(commandBuffer);
+}
+
+bool logInvalidAttachmentMessage(layer_data *dev_data, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass,
+ VkRenderPass primaryPass, uint32_t primaryAttach, uint32_t secondaryAttach, const char *msg) {
+ return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+                   "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a render pass %" PRIx64
+                   " that is not compatible with the current render pass %" PRIx64 ". "
+                   "Attachment %" PRIu32 " is not compatible with %" PRIu32 ". %s",
+ (void *)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass), primaryAttach, secondaryAttach,
+ msg);
+}
+
+bool validateAttachmentCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass,
+ uint32_t primaryAttach, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass,
+ uint32_t secondaryAttach, bool is_multi) {
+ bool skip_call = false;
+ auto primary_data = dev_data->renderPassMap.find(primaryPass);
+ auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
+ if (primary_data->second->pCreateInfo->attachmentCount <= primaryAttach) {
+ primaryAttach = VK_ATTACHMENT_UNUSED;
+ }
+ if (secondary_data->second->pCreateInfo->attachmentCount <= secondaryAttach) {
+ secondaryAttach = VK_ATTACHMENT_UNUSED;
+ }
+ if (primaryAttach == VK_ATTACHMENT_UNUSED && secondaryAttach == VK_ATTACHMENT_UNUSED) {
+ return skip_call;
+ }
+ if (primaryAttach == VK_ATTACHMENT_UNUSED) {
+ skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
+ secondaryAttach, "The first is unused while the second is not.");
+ return skip_call;
+ }
+ if (secondaryAttach == VK_ATTACHMENT_UNUSED) {
+ skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
+ secondaryAttach, "The second is unused while the first is not.");
+ return skip_call;
+ }
+ if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].format !=
+ secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].format) {
+ skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
+ secondaryAttach, "They have different formats.");
+ }
+ if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].samples !=
+ secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].samples) {
+ skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
+ secondaryAttach, "They have different samples.");
+ }
+ if (is_multi &&
+ primary_data->second->pCreateInfo->pAttachments[primaryAttach].flags !=
+ secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].flags) {
+ skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
+ secondaryAttach, "They have different flags.");
+ }
+ return skip_call;
+}
+
+bool validateSubpassCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass,
+ VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass, const int subpass, bool is_multi) {
+ bool skip_call = false;
+ auto primary_data = dev_data->renderPassMap.find(primaryPass);
+ auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
+ const VkSubpassDescription &primary_desc = primary_data->second->pCreateInfo->pSubpasses[subpass];
+ const VkSubpassDescription &secondary_desc = secondary_data->second->pCreateInfo->pSubpasses[subpass];
+ uint32_t maxInputAttachmentCount = std::max(primary_desc.inputAttachmentCount, secondary_desc.inputAttachmentCount);
+ for (uint32_t i = 0; i < maxInputAttachmentCount; ++i) {
+ uint32_t primary_input_attach = VK_ATTACHMENT_UNUSED, secondary_input_attach = VK_ATTACHMENT_UNUSED;
+ if (i < primary_desc.inputAttachmentCount) {
+ primary_input_attach = primary_desc.pInputAttachments[i].attachment;
+ }
+ if (i < secondary_desc.inputAttachmentCount) {
+ secondary_input_attach = secondary_desc.pInputAttachments[i].attachment;
+ }
+ skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_input_attach, secondaryBuffer,
+ secondaryPass, secondary_input_attach, is_multi);
+ }
+ uint32_t maxColorAttachmentCount = std::max(primary_desc.colorAttachmentCount, secondary_desc.colorAttachmentCount);
+ for (uint32_t i = 0; i < maxColorAttachmentCount; ++i) {
+ uint32_t primary_color_attach = VK_ATTACHMENT_UNUSED, secondary_color_attach = VK_ATTACHMENT_UNUSED;
+ if (i < primary_desc.colorAttachmentCount) {
+ primary_color_attach = primary_desc.pColorAttachments[i].attachment;
+ }
+ if (i < secondary_desc.colorAttachmentCount) {
+ secondary_color_attach = secondary_desc.pColorAttachments[i].attachment;
+ }
+ skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_color_attach, secondaryBuffer,
+ secondaryPass, secondary_color_attach, is_multi);
+ uint32_t primary_resolve_attach = VK_ATTACHMENT_UNUSED, secondary_resolve_attach = VK_ATTACHMENT_UNUSED;
+ if (i < primary_desc.colorAttachmentCount && primary_desc.pResolveAttachments) {
+ primary_resolve_attach = primary_desc.pResolveAttachments[i].attachment;
+ }
+ if (i < secondary_desc.colorAttachmentCount && secondary_desc.pResolveAttachments) {
+ secondary_resolve_attach = secondary_desc.pResolveAttachments[i].attachment;
+ }
+ skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_resolve_attach, secondaryBuffer,
+ secondaryPass, secondary_resolve_attach, is_multi);
+ }
+ uint32_t primary_depthstencil_attach = VK_ATTACHMENT_UNUSED, secondary_depthstencil_attach = VK_ATTACHMENT_UNUSED;
+ if (primary_desc.pDepthStencilAttachment) {
+ primary_depthstencil_attach = primary_desc.pDepthStencilAttachment[0].attachment;
+ }
+ if (secondary_desc.pDepthStencilAttachment) {
+ secondary_depthstencil_attach = secondary_desc.pDepthStencilAttachment[0].attachment;
+ }
+ skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_depthstencil_attach, secondaryBuffer,
+ secondaryPass, secondary_depthstencil_attach, is_multi);
+ return skip_call;
+}
+
+bool validateRenderPassCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass,
+ VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass) {
+ bool skip_call = false;
+ // Early exit if renderPass objects are identical (and therefore compatible)
+ if (primaryPass == secondaryPass)
+ return skip_call;
+ auto primary_data = dev_data->renderPassMap.find(primaryPass);
+ auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
+ if (primary_data == dev_data->renderPassMap.end() || primary_data->second == nullptr) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid current Cmd Buffer %p which has invalid render pass %" PRIx64 ".",
+ (void *)primaryBuffer, (uint64_t)(primaryPass));
+ return skip_call;
+ }
+ if (secondary_data == dev_data->renderPassMap.end() || secondary_data->second == nullptr) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid secondary Cmd Buffer %p which has invalid render pass %" PRIx64 ".",
+ (void *)secondaryBuffer, (uint64_t)(secondaryPass));
+ return skip_call;
+ }
+ if (primary_data->second->pCreateInfo->subpassCount != secondary_data->second->pCreateInfo->subpassCount) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+                             "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a render pass %" PRIx64
+                             " that is not compatible with the current render pass %" PRIx64 ". "
+                             "They have a different number of subpasses.",
+ (void *)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass));
+ return skip_call;
+ }
+ bool is_multi = primary_data->second->pCreateInfo->subpassCount > 1;
+ for (uint32_t i = 0; i < primary_data->second->pCreateInfo->subpassCount; ++i) {
+ skip_call |=
+ validateSubpassCompatibility(dev_data, primaryBuffer, primaryPass, secondaryBuffer, secondaryPass, i, is_multi);
+ }
+ return skip_call;
+}
+
+bool validateFramebuffer(layer_data *dev_data, VkCommandBuffer primaryBuffer, const GLOBAL_CB_NODE *pCB,
+ VkCommandBuffer secondaryBuffer, const GLOBAL_CB_NODE *pSubCB) {
+ bool skip_call = false;
+ if (!pSubCB->beginInfo.pInheritanceInfo) {
+ return skip_call;
+ }
+ VkFramebuffer primary_fb = pCB->framebuffer;
+ VkFramebuffer secondary_fb = pSubCB->beginInfo.pInheritanceInfo->framebuffer;
+ if (secondary_fb != VK_NULL_HANDLE) {
+ if (primary_fb != secondary_fb) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a framebuffer %" PRIx64
+ " that is not compatible with the current framebuffer %" PRIx64 ".",
+ (void *)secondaryBuffer, (uint64_t)(secondary_fb), (uint64_t)(primary_fb));
+ }
+ auto fb_data = dev_data->frameBufferMap.find(secondary_fb);
+ if (fb_data == dev_data->frameBufferMap.end()) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
+ "which has invalid framebuffer %" PRIx64 ".",
+ (void *)secondaryBuffer, (uint64_t)(secondary_fb));
+ return skip_call;
+ }
+ skip_call |= validateRenderPassCompatibility(dev_data, secondaryBuffer, fb_data->second.createInfo.renderPass,
+ secondaryBuffer, pSubCB->beginInfo.pInheritanceInfo->renderPass);
+ }
+ return skip_call;
+}
+
+bool validateSecondaryCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB, GLOBAL_CB_NODE *pSubCB) {
+ bool skipCall = false;
+ unordered_set<int> activeTypes;
+ for (auto queryObject : pCB->activeQueries) {
+ auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
+ if (queryPoolData != dev_data->queryPoolMap.end()) {
+ if (queryPoolData->second.createInfo.queryType == VK_QUERY_TYPE_PIPELINE_STATISTICS &&
+ pSubCB->beginInfo.pInheritanceInfo) {
+ VkQueryPipelineStatisticFlags cmdBufStatistics = pSubCB->beginInfo.pInheritanceInfo->pipelineStatistics;
+ if ((cmdBufStatistics & queryPoolData->second.createInfo.pipelineStatistics) != cmdBufStatistics) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
+                        "which has invalid active query pool %" PRIx64 ". Pipeline statistics are being queried so the command "
+ "buffer must have all bits set on the queryPool.",
+ reinterpret_cast<void *>(pCB->commandBuffer), reinterpret_cast<const uint64_t &>(queryPoolData->first));
+ }
+ }
+ activeTypes.insert(queryPoolData->second.createInfo.queryType);
+ }
+ }
+ for (auto queryObject : pSubCB->startedQueries) {
+ auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
+ if (queryPoolData != dev_data->queryPoolMap.end() && activeTypes.count(queryPoolData->second.createInfo.queryType)) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
+                        "which has invalid active query pool %" PRIx64 " of type %d, but a query of that type has been started on "
+ "secondary Cmd Buffer %p.",
+ reinterpret_cast<void *>(pCB->commandBuffer), reinterpret_cast<const uint64_t &>(queryPoolData->first),
+ queryPoolData->second.createInfo.queryType, reinterpret_cast<void *>(pSubCB->commandBuffer));
+ }
+ }
+ return skipCall;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBuffersCount, const VkCommandBuffer *pCommandBuffers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
+ if (pCB) {
+ GLOBAL_CB_NODE *pSubCB = NULL;
+ for (uint32_t i = 0; i < commandBuffersCount; i++) {
+ pSubCB = getCBNode(dev_data, pCommandBuffers[i]);
+ if (!pSubCB) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p in element %u of pCommandBuffers array.",
+ (void *)pCommandBuffers[i], i);
+ } else if (VK_COMMAND_BUFFER_LEVEL_PRIMARY == pSubCB->createInfo.level) {
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands() called w/ Primary Cmd Buffer %p in element %u of pCommandBuffers "
+ "array. All cmd buffers in pCommandBuffers array must be secondary.",
+ (void *)pCommandBuffers[i], i);
+ } else if (pCB->activeRenderPass) { // Secondary CB w/i RenderPass must have *CONTINUE_BIT set
+ if (!(pSubCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT)) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
+ "vkCmdExecuteCommands(): Secondary Command Buffer (%p) executed within render pass (%#" PRIxLEAST64
+ ") must have had vkBeginCommandBuffer() called w/ VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT set.",
+ (void *)pCommandBuffers[i], (uint64_t)pCB->activeRenderPass);
+ } else {
+                    // Make sure the render pass is compatible with the parent command buffer's pass since the continue bit is set
+ skipCall |= validateRenderPassCompatibility(dev_data, commandBuffer, pCB->activeRenderPass, pCommandBuffers[i],
+ pSubCB->beginInfo.pInheritanceInfo->renderPass);
+ skipCall |= validateFramebuffer(dev_data, commandBuffer, pCB, pCommandBuffers[i], pSubCB);
+ }
+ string errorString = "";
+ if (!verify_renderpass_compatibility(dev_data, pCB->activeRenderPass,
+ pSubCB->beginInfo.pInheritanceInfo->renderPass, errorString)) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE, "DS",
+ "vkCmdExecuteCommands(): Secondary Command Buffer (%p) w/ render pass (%#" PRIxLEAST64
+ ") is incompatible w/ primary command buffer (%p) w/ render pass (%#" PRIxLEAST64 ") due to: %s",
+ (void *)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->renderPass, (void *)commandBuffer,
+ (uint64_t)pCB->activeRenderPass, errorString.c_str());
+ }
+ // If framebuffer for secondary CB is not NULL, then it must match FB from vkCmdBeginRenderPass()
+ // that this CB will be executed in AND framebuffer must have been created w/ RP compatible w/ renderpass
+ if (pSubCB->beginInfo.pInheritanceInfo->framebuffer) {
+ if (pSubCB->beginInfo.pInheritanceInfo->framebuffer != pCB->activeRenderPassBeginInfo.framebuffer) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE, "DS",
+ "vkCmdExecuteCommands(): Secondary Command Buffer (%p) references framebuffer (%#" PRIxLEAST64
+ ") that does not match framebuffer (%#" PRIxLEAST64 ") in active renderpass (%#" PRIxLEAST64 ").",
+ (void *)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->framebuffer,
+ (uint64_t)pCB->activeRenderPassBeginInfo.framebuffer, (uint64_t)pCB->activeRenderPass);
+ }
+ }
+ }
+ // TODO(mlentine): Move more logic into this method
+ skipCall |= validateSecondaryCommandBufferState(dev_data, pCB, pSubCB);
+ skipCall |= validateCommandBufferState(dev_data, pSubCB);
+ // Secondary cmdBuffers are considered pending execution starting w/
+ // being recorded
+ if (!(pSubCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) {
+ if (dev_data->globalInFlightCmdBuffers.find(pSubCB->commandBuffer) != dev_data->globalInFlightCmdBuffers.end()) {
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
+ "Attempt to simultaneously execute CB %#" PRIxLEAST64 " w/o VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT "
+ "set!",
+ (uint64_t)(pCB->commandBuffer));
+ }
+ if (pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT) {
+ // Warn that non-simultaneous secondary cmd buffer renders primary non-simultaneous
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
+ "vkCmdExecuteCommands(): Secondary Command Buffer (%#" PRIxLEAST64
+ ") does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT set and will cause primary command buffer "
+ "(%#" PRIxLEAST64 ") to be treated as if it does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT "
+ "set, even though it does.",
+ (uint64_t)(pCommandBuffers[i]), (uint64_t)(pCB->commandBuffer));
+ pCB->beginInfo.flags &= ~VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT;
+ }
+ }
+ if (!pCB->activeQueries.empty() && !dev_data->physDevProperties.features.inheritedQueries) {
+ skipCall |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
+ "vkCmdExecuteCommands(): Secondary Command Buffer "
+ "(%#" PRIxLEAST64 ") cannot be submitted with a query in "
+ "flight and inherited queries not "
+ "supported on this device.",
+ reinterpret_cast<uint64_t>(pCommandBuffers[i]));
+ }
+ pSubCB->primaryCommandBuffer = pCB->commandBuffer;
+ pCB->secondaryCommandBuffers.insert(pSubCB->commandBuffer);
+ dev_data->globalInFlightCmdBuffers.insert(pSubCB->commandBuffer);
+ }
+        skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdExecuteCommands");
+        skipCall |= addCmd(dev_data, pCB, CMD_EXECUTECOMMANDS, "vkCmdExecuteCommands()");
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall)
+ dev_data->device_dispatch_table->CmdExecuteCommands(commandBuffer, commandBuffersCount, pCommandBuffers);
+}
+
+VkBool32 ValidateMapImageLayouts(VkDevice device, VkDeviceMemory mem) {
+ VkBool32 skip_call = VK_FALSE;
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ auto mem_data = dev_data->memObjMap.find(mem);
+ if ((mem_data != dev_data->memObjMap.end()) && (mem_data->second.image != VK_NULL_HANDLE)) {
+ std::vector<VkImageLayout> layouts;
+ if (FindLayouts(dev_data, mem_data->second.image, layouts)) {
+ for (auto layout : layouts) {
+ if (layout != VK_IMAGE_LAYOUT_PREINITIALIZED && layout != VK_IMAGE_LAYOUT_GENERAL) {
+ skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Cannot map an image with layout %s. Only "
+ "GENERAL or PREINITIALIZED are supported.",
+ string_VkImageLayout(layout));
+ }
+ }
+ }
+ }
+ return skip_call;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkMapMemory(VkDevice device, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size, VkFlags flags, void **ppData) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ VkBool32 skip_call = VK_FALSE;
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ loader_platform_thread_lock_mutex(&globalLock);
+#if MTMERGESOURCE
+ DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
+ if (pMemObj) {
+ pMemObj->valid = true;
+ if ((memProps.memoryTypes[pMemObj->allocInfo.memoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0) {
+ skip_call =
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)mem, __LINE__, MEMTRACK_INVALID_STATE, "MEM",
+ "Mapping Memory without VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set: mem obj %#" PRIxLEAST64, (uint64_t)mem);
+ }
+ }
+ skip_call |= validateMemRange(dev_data, mem, offset, size);
+ storeMemRanges(dev_data, mem, offset, size);
+#endif
+ skip_call |= ValidateMapImageLayouts(device, mem);
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+ if (VK_FALSE == skip_call) {
+ result = dev_data->device_dispatch_table->MapMemory(device, mem, offset, size, flags, ppData);
+#if MTMERGESOURCE
+ initializeAndTrackMemory(dev_data, mem, size, ppData);
+#endif
+ }
+ return result;
+}
+
+#if MTMERGESOURCE
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUnmapMemory(VkDevice device, VkDeviceMemory mem) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ skipCall |= deleteMemRanges(my_data, mem);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall) {
+ my_data->device_dispatch_table->UnmapMemory(device, mem);
+ }
+}
+
+VkBool32 validateMemoryIsMapped(layer_data *my_data, const char *funcName, uint32_t memRangeCount,
+ const VkMappedMemoryRange *pMemRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ for (uint32_t i = 0; i < memRangeCount; ++i) {
+ auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory);
+ if (mem_element != my_data->memObjMap.end()) {
+ if (mem_element->second.memRange.offset > pMemRanges[i].offset) {
+ skipCall |= log_msg(
+ my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
+ (uint64_t)pMemRanges[i].memory, __LINE__, MEMTRACK_INVALID_MAP, "MEM",
+ "%s: Flush/Invalidate offset (" PRINTF_SIZE_T_SPECIFIER ") is less than Memory Object's offset "
+ "(" PRINTF_SIZE_T_SPECIFIER ").",
+ funcName, static_cast<size_t>(pMemRanges[i].offset), static_cast<size_t>(mem_element->second.memRange.offset));
+ }
+ if ((mem_element->second.memRange.size != VK_WHOLE_SIZE) &&
+ ((mem_element->second.memRange.offset + mem_element->second.memRange.size) <
+ (pMemRanges[i].offset + pMemRanges[i].size))) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__,
+ MEMTRACK_INVALID_MAP, "MEM", "%s: Flush/Invalidate upper-bound (" PRINTF_SIZE_T_SPECIFIER
+ ") exceeds the Memory Object's upper-bound "
+ "(" PRINTF_SIZE_T_SPECIFIER ").",
+ funcName, static_cast<size_t>(pMemRanges[i].offset + pMemRanges[i].size),
+ static_cast<size_t>(mem_element->second.memRange.offset + mem_element->second.memRange.size));
+ }
+ }
+ }
+ return skipCall;
+}
+
+VkBool32 validateAndCopyNoncoherentMemoryToDriver(layer_data *my_data, uint32_t memRangeCount,
+ const VkMappedMemoryRange *pMemRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ for (uint32_t i = 0; i < memRangeCount; ++i) {
+ auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory);
+ if (mem_element != my_data->memObjMap.end()) {
+            if (mem_element->second.pData) {
+                // pData is a shadow allocation of 2*size bytes: the application's mapping starts at
+                // offset size/2, leaving fill-value guard bands of size/2 bytes on either side. A guard
+                // byte that no longer holds NoncoherentMemoryFillValue indicates an out-of-bounds write.
+                VkDeviceSize size = mem_element->second.memRange.size;
+                VkDeviceSize half_size = (size / 2);
+                char *data = static_cast<char *>(mem_element->second.pData);
+                for (VkDeviceSize j = 0; j < half_size; ++j) {
+ if (data[j] != NoncoherentMemoryFillValue) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__,
+ MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64,
+ (uint64_t)pMemRanges[i].memory);
+ }
+ }
+ for (auto j = size + half_size; j < 2 * size; ++j) {
+ if (data[j] != NoncoherentMemoryFillValue) {
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__,
+ MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64,
+ (uint64_t)pMemRanges[i].memory);
+ }
+ }
+ memcpy(mem_element->second.pDriverData, static_cast<void *>(data + (size_t)(half_size)), (size_t)(size));
+ }
+ }
+ }
+ return skipCall;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkFlushMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ skipCall |= validateAndCopyNoncoherentMemoryToDriver(my_data, memRangeCount, pMemRanges);
+ skipCall |= validateMemoryIsMapped(my_data, "vkFlushMappedMemoryRanges", memRangeCount, pMemRanges);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall) {
+ result = my_data->device_dispatch_table->FlushMappedMemoryRanges(device, memRangeCount, pMemRanges);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkInvalidateMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ skipCall |= validateMemoryIsMapped(my_data, "vkInvalidateMappedMemoryRanges", memRangeCount, pMemRanges);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (VK_FALSE == skipCall) {
+ result = my_data->device_dispatch_table->InvalidateMappedMemoryRanges(device, memRangeCount, pMemRanges);
+ }
+ return result;
+}
+#endif
+
+VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ // Track objects tied to memory
+ uint64_t image_handle = (uint64_t)(image);
+ skipCall =
+ set_mem_binding(dev_data, device, mem, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkBindImageMemory");
+ add_object_binding_info(dev_data, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, mem);
+ {
+        VkMemoryRequirements memRequirements;
+        // Use the dispatch table rather than the exported entry point to avoid re-entering this layer
+        // while globalLock is held.
+        dev_data->device_dispatch_table->GetImageMemoryRequirements(device, image, &memRequirements);
+ skipCall |= validate_buffer_image_aliasing(dev_data, image_handle, mem, memoryOffset, memRequirements,
+ dev_data->memObjMap[mem].imageRanges, dev_data->memObjMap[mem].bufferRanges,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
+ }
+ print_mem_list(dev_data, device);
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ if (VK_FALSE == skipCall) {
+ result = dev_data->device_dispatch_table->BindImageMemory(device, image, mem, memoryOffset);
+ VkMemoryRequirements memRequirements;
+ dev_data->device_dispatch_table->GetImageMemoryRequirements(device, image, &memRequirements);
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->memObjMap[mem].image = image;
+ dev_data->imageMap[image].mem = mem;
+ dev_data->imageMap[image].memOffset = memoryOffset;
+ dev_data->imageMap[image].memSize = memRequirements.size;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(VkDevice device, VkEvent event) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->eventMap[event].needsSignaled = false;
+ dev_data->eventMap[event].stageMask = VK_PIPELINE_STAGE_HOST_BIT;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ VkResult result = dev_data->device_dispatch_table->SetEvent(device, event);
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+vkQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skip_call = VK_FALSE;
+#if MTMERGESOURCE
+    // MTMTODO: Merge this code with the checks below
+ loader_platform_thread_lock_mutex(&globalLock);
+
+ for (uint32_t i = 0; i < bindInfoCount; i++) {
+ const VkBindSparseInfo *bindInfo = &pBindInfo[i];
+ // Track objects tied to memory
+ for (uint32_t j = 0; j < bindInfo->bufferBindCount; j++) {
+ for (uint32_t k = 0; k < bindInfo->pBufferBinds[j].bindCount; k++) {
+ if (set_sparse_mem_binding(dev_data, queue, bindInfo->pBufferBinds[j].pBinds[k].memory,
+ (uint64_t)bindInfo->pBufferBinds[j].buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
+ "vkQueueBindSparse"))
+ skip_call = VK_TRUE;
+ }
+ }
+ for (uint32_t j = 0; j < bindInfo->imageOpaqueBindCount; j++) {
+ for (uint32_t k = 0; k < bindInfo->pImageOpaqueBinds[j].bindCount; k++) {
+ if (set_sparse_mem_binding(dev_data, queue, bindInfo->pImageOpaqueBinds[j].pBinds[k].memory,
+ (uint64_t)bindInfo->pImageOpaqueBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ "vkQueueBindSparse"))
+ skip_call = VK_TRUE;
+ }
+ }
+ for (uint32_t j = 0; j < bindInfo->imageBindCount; j++) {
+ for (uint32_t k = 0; k < bindInfo->pImageBinds[j].bindCount; k++) {
+ if (set_sparse_mem_binding(dev_data, queue, bindInfo->pImageBinds[j].pBinds[k].memory,
+ (uint64_t)bindInfo->pImageBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ "vkQueueBindSparse"))
+ skip_call = VK_TRUE;
+ }
+ }
+ // Validate semaphore state
+        for (uint32_t j = 0; j < bindInfo->waitSemaphoreCount; j++) {
+            VkSemaphore sem = bindInfo->pWaitSemaphores[j];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_SIGNALLED) {
+ skip_call =
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
+ (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE",
+ "vkQueueBindSparse: Semaphore must be in signaled state before passing to pWaitSemaphores");
+ }
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_WAIT;
+ }
+ }
+        for (uint32_t j = 0; j < bindInfo->signalSemaphoreCount; j++) {
+            VkSemaphore sem = bindInfo->pSignalSemaphores[j];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_UNSET) {
+ skip_call =
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
+ (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE",
+ "vkQueueBindSparse: Semaphore must not be currently signaled or in a wait state");
+ }
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
+ }
+ }
+ }
+
+ print_mem_list(dev_data, queue);
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t bindIdx = 0; bindIdx < bindInfoCount; ++bindIdx) {
+ const VkBindSparseInfo &bindInfo = pBindInfo[bindIdx];
+ for (uint32_t i = 0; i < bindInfo.waitSemaphoreCount; ++i) {
+ if (dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled) {
+ dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled = 0;
+ } else {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
+ "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
+ (uint64_t)(queue), (uint64_t)(bindInfo.pWaitSemaphores[i]));
+ }
+ }
+ for (uint32_t i = 0; i < bindInfo.signalSemaphoreCount; ++i) {
+ dev_data->semaphoreMap[bindInfo.pSignalSemaphores[i]].signaled = 1;
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+
+    if (VK_FALSE == skip_call)
+        result = dev_data->device_dispatch_table->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
+#if MTMERGESOURCE
+ // Update semaphore state
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t bind_info_idx = 0; bind_info_idx < bindInfoCount; bind_info_idx++) {
+ const VkBindSparseInfo *bindInfo = &pBindInfo[bind_info_idx];
+ for (uint32_t i = 0; i < bindInfo->waitSemaphoreCount; i++) {
+ VkSemaphore sem = bindInfo->pWaitSemaphores[i];
+
+ if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+ dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET;
+ }
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSemaphore *pSemaphore) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
+ if (result == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ SEMAPHORE_NODE* sNode = &dev_data->semaphoreMap[*pSemaphore];
+ sNode->signaled = 0;
+ sNode->queue = VK_NULL_HANDLE;
+ sNode->in_use.store(0);
+ sNode->state = MEMTRACK_SEMAPHORE_STATE_UNSET;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateEvent(VkDevice device, const VkEventCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkEvent *pEvent) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateEvent(device, pCreateInfo, pAllocator, pEvent);
+ if (result == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->eventMap[*pEvent].needsSignaled = false;
+ dev_data->eventMap[*pEvent].in_use.store(0);
+ dev_data->eventMap[*pEvent].stageMask = VkPipelineStageFlags(0);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSwapchainKHR *pSwapchain) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
+
+ if (VK_SUCCESS == result) {
+ SWAPCHAIN_NODE *psc_node = new SWAPCHAIN_NODE(pCreateInfo);
+ loader_platform_thread_lock_mutex(&globalLock);
+ dev_data->device_extensions.swapchainMap[*pSwapchain] = psc_node;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks *pAllocator) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ bool skipCall = false;
+
+ loader_platform_thread_lock_mutex(&globalLock);
+ auto swapchain_data = dev_data->device_extensions.swapchainMap.find(swapchain);
+ if (swapchain_data != dev_data->device_extensions.swapchainMap.end()) {
+ if (swapchain_data->second->images.size() > 0) {
+ for (auto swapchain_image : swapchain_data->second->images) {
+ auto image_sub = dev_data->imageSubresourceMap.find(swapchain_image);
+ if (image_sub != dev_data->imageSubresourceMap.end()) {
+ for (auto imgsubpair : image_sub->second) {
+ auto image_item = dev_data->imageLayoutMap.find(imgsubpair);
+ if (image_item != dev_data->imageLayoutMap.end()) {
+ dev_data->imageLayoutMap.erase(image_item);
+ }
+ }
+ dev_data->imageSubresourceMap.erase(image_sub);
+ }
+#if MTMERGESOURCE
+ skipCall = clear_object_binding(dev_data, device, (uint64_t)swapchain_image,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT);
+ dev_data->imageBindingMap.erase((uint64_t)swapchain_image);
+#endif
+ }
+ }
+ delete swapchain_data->second;
+ dev_data->device_extensions.swapchainMap.erase(swapchain);
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ if (!skipCall)
+ dev_data->device_dispatch_table->DestroySwapchainKHR(device, swapchain, pAllocator);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pCount, VkImage *pSwapchainImages) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = dev_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);
+
+ if (result == VK_SUCCESS && pSwapchainImages != NULL) {
+ // This should never happen and is checked by param checker.
+ if (!pCount)
+ return result;
+ loader_platform_thread_lock_mutex(&globalLock);
+ const size_t count = *pCount;
+ auto swapchain_node = dev_data->device_extensions.swapchainMap[swapchain];
+ if (!swapchain_node->images.empty()) {
+ // TODO : Not sure I like the memcmp here, but it works
+ const bool mismatch = (swapchain_node->images.size() != count ||
+ memcmp(&swapchain_node->images[0], pSwapchainImages, sizeof(swapchain_node->images[0]) * count));
+ if (mismatch) {
+ // TODO: Verify against Valid Usage section of extension
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
+ (uint64_t)swapchain, __LINE__, MEMTRACK_NONE, "SWAP_CHAIN",
+                        "vkGetSwapchainImagesKHR(%" PRIu64
+                        ") returned mismatching data",
+ (uint64_t)(swapchain));
+ }
+ }
+ for (uint32_t i = 0; i < *pCount; ++i) {
+ IMAGE_LAYOUT_NODE image_layout_node;
+ image_layout_node.layout = VK_IMAGE_LAYOUT_UNDEFINED;
+ image_layout_node.format = swapchain_node->createInfo.imageFormat;
+ dev_data->imageMap[pSwapchainImages[i]].createInfo.mipLevels = 1;
+ dev_data->imageMap[pSwapchainImages[i]].createInfo.arrayLayers = swapchain_node->createInfo.imageArrayLayers;
+ swapchain_node->images.push_back(pSwapchainImages[i]);
+ ImageSubresourcePair subpair = {pSwapchainImages[i], false, VkImageSubresource()};
+ dev_data->imageSubresourceMap[pSwapchainImages[i]].push_back(subpair);
+ dev_data->imageLayoutMap[subpair] = image_layout_node;
+ dev_data->device_extensions.imageToSwapchainMap[pSwapchainImages[i]] = swapchain;
+ }
+ if (!swapchain_node->images.empty()) {
+ for (auto image : swapchain_node->images) {
+                // Record create info for each presentable image so later memory bindings can be validated
+#if MTMERGESOURCE
+ add_object_create_info(dev_data, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
+ &swapchain_node->createInfo);
+#endif
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return result;
+}
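The mismatch warning above caches the image handles returned by the first successful query and compares later results byte-for-byte against that cache. A minimal sketch of the same idea, using hypothetical names and plain `uint64_t` handles in place of `VkImage`:

```cpp
#include <cassert>
#include <cstring>
#include <vector>
#include <cstdint>

// First successful query populates the cache; later queries are compared
// byte-for-byte against it, mirroring the layer's memcmp-based check.
static std::vector<uint64_t> cached_images;

// Returns true if the newly returned handle array disagrees with the cache.
bool images_mismatch(const uint64_t *images, size_t count) {
    if (cached_images.empty()) {
        cached_images.assign(images, images + count);
        return false;
    }
    return cached_images.size() != count ||
           memcmp(cached_images.data(), images, sizeof(uint64_t) * count) != 0;
}
```

In the layer itself the comparison guards against a driver returning a different image set on a repeated vkGetSwapchainImagesKHR call for the same swapchain.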
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR *pPresentInfo) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ bool skip_call = false;
+
+ if (pPresentInfo) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ for (uint32_t i = 0; i < pPresentInfo->waitSemaphoreCount; ++i) {
+ if (dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]].signaled) {
+ dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]].signaled = 0;
+ } else {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
+ __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
+ "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
+ (uint64_t)(queue), (uint64_t)(pPresentInfo->pWaitSemaphores[i]));
+ }
+ }
+        VkDeviceMemory mem = VK_NULL_HANDLE;
+ for (uint32_t i = 0; i < pPresentInfo->swapchainCount; ++i) {
+ auto swapchain_data = dev_data->device_extensions.swapchainMap.find(pPresentInfo->pSwapchains[i]);
+ if (swapchain_data != dev_data->device_extensions.swapchainMap.end() &&
+ pPresentInfo->pImageIndices[i] < swapchain_data->second->images.size()) {
+ VkImage image = swapchain_data->second->images[pPresentInfo->pImageIndices[i]];
+#if MTMERGESOURCE
+ skip_call |=
+ get_mem_binding_from_object(dev_data, queue, (uint64_t)(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
+ skip_call |= validate_memory_is_valid(dev_data, mem, "vkQueuePresentKHR()", image);
+#endif
+ vector<VkImageLayout> layouts;
+ if (FindLayouts(dev_data, image, layouts)) {
+ for (auto layout : layouts) {
+ if (layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR) {
+ skip_call |=
+ log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT,
+ reinterpret_cast<uint64_t &>(queue), __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
+                                        "Images passed to present must be in layout "
+                                        "VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, but layout %s was found",
+                                        string_VkImageLayout(layout));
+ }
+ }
+ }
+ }
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+
+ if (!skip_call)
+ result = dev_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo);
+#if MTMERGESOURCE
+    if (pPresentInfo) {
+        loader_platform_thread_lock_mutex(&globalLock);
+        for (uint32_t i = 0; i < pPresentInfo->waitSemaphoreCount; i++) {
+            VkSemaphore sem = pPresentInfo->pWaitSemaphores[i];
+            if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
+                dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET;
+            }
+        }
+        loader_platform_thread_unlock_mutex(&globalLock);
+    }
+#endif
+ return result;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout,
+ VkSemaphore semaphore, VkFence fence, uint32_t *pImageIndex) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ bool skipCall = false;
+#if MTMERGESOURCE
+ loader_platform_thread_lock_mutex(&globalLock);
+ if (dev_data->semaphoreMap.find(semaphore) != dev_data->semaphoreMap.end()) {
+ if (dev_data->semaphoreMap[semaphore].state != MEMTRACK_SEMAPHORE_STATE_UNSET) {
+ skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
+ (uint64_t)semaphore, __LINE__, MEMTRACK_NONE, "SEMAPHORE",
+ "vkAcquireNextImageKHR: Semaphore must not be currently signaled or in a wait state");
+ }
+ dev_data->semaphoreMap[semaphore].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
+ }
+ auto fence_data = dev_data->fenceMap.find(fence);
+ if (fence_data != dev_data->fenceMap.end()) {
+ fence_data->second.swapchain = swapchain;
+ }
+ loader_platform_thread_unlock_mutex(&globalLock);
+#endif
+ if (!skipCall) {
+ result =
+ dev_data->device_dispatch_table->AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex);
+ }
+ loader_platform_thread_lock_mutex(&globalLock);
+    // FIXME/TODO: Add similar state tracking for the "fence" parameter
+ dev_data->semaphoreMap[semaphore].signaled = 1;
+ loader_platform_thread_unlock_mutex(&globalLock);
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
+ VkResult res = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
+ if (VK_SUCCESS == res) {
+ loader_platform_thread_lock_mutex(&globalLock);
+ res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
+ loader_platform_thread_unlock_mutex(&globalLock);
+ }
+ return res;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
+ VkDebugReportCallbackEXT msgCallback,
+ const VkAllocationCallbacks *pAllocator) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
+ pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
+ loader_platform_thread_lock_mutex(&globalLock);
+ layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
+ loader_platform_thread_unlock_mutex(&globalLock);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
+ size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
+ pMsg);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char *funcName) {
+ if (!strcmp(funcName, "vkGetDeviceProcAddr"))
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (PFN_vkVoidFunction)vkDestroyDevice;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (PFN_vkVoidFunction)vkQueueSubmit;
+ if (!strcmp(funcName, "vkWaitForFences"))
+ return (PFN_vkVoidFunction)vkWaitForFences;
+ if (!strcmp(funcName, "vkGetFenceStatus"))
+ return (PFN_vkVoidFunction)vkGetFenceStatus;
+ if (!strcmp(funcName, "vkQueueWaitIdle"))
+ return (PFN_vkVoidFunction)vkQueueWaitIdle;
+ if (!strcmp(funcName, "vkDeviceWaitIdle"))
+ return (PFN_vkVoidFunction)vkDeviceWaitIdle;
+ if (!strcmp(funcName, "vkGetDeviceQueue"))
+ return (PFN_vkVoidFunction)vkGetDeviceQueue;
+ if (!strcmp(funcName, "vkDestroyInstance"))
+ return (PFN_vkVoidFunction)vkDestroyInstance;
+ if (!strcmp(funcName, "vkDestroyFence"))
+ return (PFN_vkVoidFunction)vkDestroyFence;
+ if (!strcmp(funcName, "vkResetFences"))
+ return (PFN_vkVoidFunction)vkResetFences;
+ if (!strcmp(funcName, "vkDestroySemaphore"))
+ return (PFN_vkVoidFunction)vkDestroySemaphore;
+ if (!strcmp(funcName, "vkDestroyEvent"))
+ return (PFN_vkVoidFunction)vkDestroyEvent;
+ if (!strcmp(funcName, "vkDestroyQueryPool"))
+ return (PFN_vkVoidFunction)vkDestroyQueryPool;
+ if (!strcmp(funcName, "vkDestroyBuffer"))
+ return (PFN_vkVoidFunction)vkDestroyBuffer;
+ if (!strcmp(funcName, "vkDestroyBufferView"))
+ return (PFN_vkVoidFunction)vkDestroyBufferView;
+ if (!strcmp(funcName, "vkDestroyImage"))
+ return (PFN_vkVoidFunction)vkDestroyImage;
+ if (!strcmp(funcName, "vkDestroyImageView"))
+ return (PFN_vkVoidFunction)vkDestroyImageView;
+ if (!strcmp(funcName, "vkDestroyShaderModule"))
+ return (PFN_vkVoidFunction)vkDestroyShaderModule;
+ if (!strcmp(funcName, "vkDestroyPipeline"))
+ return (PFN_vkVoidFunction)vkDestroyPipeline;
+ if (!strcmp(funcName, "vkDestroyPipelineLayout"))
+ return (PFN_vkVoidFunction)vkDestroyPipelineLayout;
+ if (!strcmp(funcName, "vkDestroySampler"))
+ return (PFN_vkVoidFunction)vkDestroySampler;
+ if (!strcmp(funcName, "vkDestroyDescriptorSetLayout"))
+ return (PFN_vkVoidFunction)vkDestroyDescriptorSetLayout;
+ if (!strcmp(funcName, "vkDestroyDescriptorPool"))
+ return (PFN_vkVoidFunction)vkDestroyDescriptorPool;
+ if (!strcmp(funcName, "vkDestroyFramebuffer"))
+ return (PFN_vkVoidFunction)vkDestroyFramebuffer;
+ if (!strcmp(funcName, "vkDestroyRenderPass"))
+ return (PFN_vkVoidFunction)vkDestroyRenderPass;
+ if (!strcmp(funcName, "vkCreateBuffer"))
+ return (PFN_vkVoidFunction)vkCreateBuffer;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (PFN_vkVoidFunction)vkCreateBufferView;
+ if (!strcmp(funcName, "vkCreateImage"))
+ return (PFN_vkVoidFunction)vkCreateImage;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (PFN_vkVoidFunction)vkCreateImageView;
+ if (!strcmp(funcName, "vkCreateFence"))
+ return (PFN_vkVoidFunction)vkCreateFence;
+    if (!strcmp(funcName, "vkCreatePipelineCache"))
+        return (PFN_vkVoidFunction)vkCreatePipelineCache;
+    if (!strcmp(funcName, "vkDestroyPipelineCache"))
+        return (PFN_vkVoidFunction)vkDestroyPipelineCache;
+    if (!strcmp(funcName, "vkGetPipelineCacheData"))
+        return (PFN_vkVoidFunction)vkGetPipelineCacheData;
+    if (!strcmp(funcName, "vkMergePipelineCaches"))
+        return (PFN_vkVoidFunction)vkMergePipelineCaches;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelines"))
+ return (PFN_vkVoidFunction)vkCreateGraphicsPipelines;
+ if (!strcmp(funcName, "vkCreateComputePipelines"))
+ return (PFN_vkVoidFunction)vkCreateComputePipelines;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (PFN_vkVoidFunction)vkCreateSampler;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
+ return (PFN_vkVoidFunction)vkCreateDescriptorSetLayout;
+ if (!strcmp(funcName, "vkCreatePipelineLayout"))
+ return (PFN_vkVoidFunction)vkCreatePipelineLayout;
+ if (!strcmp(funcName, "vkCreateDescriptorPool"))
+ return (PFN_vkVoidFunction)vkCreateDescriptorPool;
+ if (!strcmp(funcName, "vkResetDescriptorPool"))
+ return (PFN_vkVoidFunction)vkResetDescriptorPool;
+ if (!strcmp(funcName, "vkAllocateDescriptorSets"))
+ return (PFN_vkVoidFunction)vkAllocateDescriptorSets;
+ if (!strcmp(funcName, "vkFreeDescriptorSets"))
+ return (PFN_vkVoidFunction)vkFreeDescriptorSets;
+ if (!strcmp(funcName, "vkUpdateDescriptorSets"))
+ return (PFN_vkVoidFunction)vkUpdateDescriptorSets;
+ if (!strcmp(funcName, "vkCreateCommandPool"))
+ return (PFN_vkVoidFunction)vkCreateCommandPool;
+ if (!strcmp(funcName, "vkDestroyCommandPool"))
+ return (PFN_vkVoidFunction)vkDestroyCommandPool;
+ if (!strcmp(funcName, "vkResetCommandPool"))
+ return (PFN_vkVoidFunction)vkResetCommandPool;
+ if (!strcmp(funcName, "vkCreateQueryPool"))
+ return (PFN_vkVoidFunction)vkCreateQueryPool;
+ if (!strcmp(funcName, "vkAllocateCommandBuffers"))
+ return (PFN_vkVoidFunction)vkAllocateCommandBuffers;
+ if (!strcmp(funcName, "vkFreeCommandBuffers"))
+ return (PFN_vkVoidFunction)vkFreeCommandBuffers;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (PFN_vkVoidFunction)vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (PFN_vkVoidFunction)vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (PFN_vkVoidFunction)vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (PFN_vkVoidFunction)vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdSetViewport"))
+ return (PFN_vkVoidFunction)vkCmdSetViewport;
+ if (!strcmp(funcName, "vkCmdSetScissor"))
+ return (PFN_vkVoidFunction)vkCmdSetScissor;
+ if (!strcmp(funcName, "vkCmdSetLineWidth"))
+ return (PFN_vkVoidFunction)vkCmdSetLineWidth;
+ if (!strcmp(funcName, "vkCmdSetDepthBias"))
+ return (PFN_vkVoidFunction)vkCmdSetDepthBias;
+ if (!strcmp(funcName, "vkCmdSetBlendConstants"))
+ return (PFN_vkVoidFunction)vkCmdSetBlendConstants;
+ if (!strcmp(funcName, "vkCmdSetDepthBounds"))
+ return (PFN_vkVoidFunction)vkCmdSetDepthBounds;
+ if (!strcmp(funcName, "vkCmdSetStencilCompareMask"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilCompareMask;
+ if (!strcmp(funcName, "vkCmdSetStencilWriteMask"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilWriteMask;
+ if (!strcmp(funcName, "vkCmdSetStencilReference"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilReference;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (PFN_vkVoidFunction)vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffers"))
+ return (PFN_vkVoidFunction)vkCmdBindVertexBuffers;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (PFN_vkVoidFunction)vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDraw"))
+ return (PFN_vkVoidFunction)vkCmdDraw;
+ if (!strcmp(funcName, "vkCmdDrawIndexed"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndexed;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatch"))
+ return (PFN_vkVoidFunction)vkCmdDispatch;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (PFN_vkVoidFunction)vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (PFN_vkVoidFunction)vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdBlitImage"))
+ return (PFN_vkVoidFunction)vkCmdBlitImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (PFN_vkVoidFunction)vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (PFN_vkVoidFunction)vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (PFN_vkVoidFunction)vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (PFN_vkVoidFunction)vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdClearDepthStencilImage"))
+ return (PFN_vkVoidFunction)vkCmdClearDepthStencilImage;
+ if (!strcmp(funcName, "vkCmdClearAttachments"))
+ return (PFN_vkVoidFunction)vkCmdClearAttachments;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (PFN_vkVoidFunction)vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdSetEvent"))
+ return (PFN_vkVoidFunction)vkCmdSetEvent;
+ if (!strcmp(funcName, "vkCmdResetEvent"))
+ return (PFN_vkVoidFunction)vkCmdResetEvent;
+ if (!strcmp(funcName, "vkCmdWaitEvents"))
+ return (PFN_vkVoidFunction)vkCmdWaitEvents;
+ if (!strcmp(funcName, "vkCmdPipelineBarrier"))
+ return (PFN_vkVoidFunction)vkCmdPipelineBarrier;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (PFN_vkVoidFunction)vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (PFN_vkVoidFunction)vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (PFN_vkVoidFunction)vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkCmdCopyQueryPoolResults"))
+ return (PFN_vkVoidFunction)vkCmdCopyQueryPoolResults;
+ if (!strcmp(funcName, "vkCmdPushConstants"))
+ return (PFN_vkVoidFunction)vkCmdPushConstants;
+ if (!strcmp(funcName, "vkCmdWriteTimestamp"))
+ return (PFN_vkVoidFunction)vkCmdWriteTimestamp;
+ if (!strcmp(funcName, "vkCreateFramebuffer"))
+ return (PFN_vkVoidFunction)vkCreateFramebuffer;
+ if (!strcmp(funcName, "vkCreateShaderModule"))
+ return (PFN_vkVoidFunction)vkCreateShaderModule;
+ if (!strcmp(funcName, "vkCreateRenderPass"))
+ return (PFN_vkVoidFunction)vkCreateRenderPass;
+ if (!strcmp(funcName, "vkCmdBeginRenderPass"))
+ return (PFN_vkVoidFunction)vkCmdBeginRenderPass;
+ if (!strcmp(funcName, "vkCmdNextSubpass"))
+ return (PFN_vkVoidFunction)vkCmdNextSubpass;
+ if (!strcmp(funcName, "vkCmdEndRenderPass"))
+ return (PFN_vkVoidFunction)vkCmdEndRenderPass;
+ if (!strcmp(funcName, "vkCmdExecuteCommands"))
+ return (PFN_vkVoidFunction)vkCmdExecuteCommands;
+ if (!strcmp(funcName, "vkSetEvent"))
+ return (PFN_vkVoidFunction)vkSetEvent;
+ if (!strcmp(funcName, "vkMapMemory"))
+ return (PFN_vkVoidFunction)vkMapMemory;
+#if MTMERGESOURCE
+ if (!strcmp(funcName, "vkUnmapMemory"))
+ return (PFN_vkVoidFunction)vkUnmapMemory;
+ if (!strcmp(funcName, "vkAllocateMemory"))
+ return (PFN_vkVoidFunction)vkAllocateMemory;
+ if (!strcmp(funcName, "vkFreeMemory"))
+ return (PFN_vkVoidFunction)vkFreeMemory;
+ if (!strcmp(funcName, "vkFlushMappedMemoryRanges"))
+ return (PFN_vkVoidFunction)vkFlushMappedMemoryRanges;
+ if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges"))
+ return (PFN_vkVoidFunction)vkInvalidateMappedMemoryRanges;
+ if (!strcmp(funcName, "vkBindBufferMemory"))
+ return (PFN_vkVoidFunction)vkBindBufferMemory;
+ if (!strcmp(funcName, "vkGetBufferMemoryRequirements"))
+ return (PFN_vkVoidFunction)vkGetBufferMemoryRequirements;
+ if (!strcmp(funcName, "vkGetImageMemoryRequirements"))
+ return (PFN_vkVoidFunction)vkGetImageMemoryRequirements;
+#endif
+ if (!strcmp(funcName, "vkGetQueryPoolResults"))
+ return (PFN_vkVoidFunction)vkGetQueryPoolResults;
+ if (!strcmp(funcName, "vkBindImageMemory"))
+ return (PFN_vkVoidFunction)vkBindImageMemory;
+ if (!strcmp(funcName, "vkQueueBindSparse"))
+ return (PFN_vkVoidFunction)vkQueueBindSparse;
+ if (!strcmp(funcName, "vkCreateSemaphore"))
+ return (PFN_vkVoidFunction)vkCreateSemaphore;
+ if (!strcmp(funcName, "vkCreateEvent"))
+ return (PFN_vkVoidFunction)vkCreateEvent;
+
+ if (dev == NULL)
+ return NULL;
+
+    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
+
+ if (dev_data->device_extensions.wsi_enabled) {
+ if (!strcmp(funcName, "vkCreateSwapchainKHR"))
+ return (PFN_vkVoidFunction)vkCreateSwapchainKHR;
+ if (!strcmp(funcName, "vkDestroySwapchainKHR"))
+ return (PFN_vkVoidFunction)vkDestroySwapchainKHR;
+ if (!strcmp(funcName, "vkGetSwapchainImagesKHR"))
+ return (PFN_vkVoidFunction)vkGetSwapchainImagesKHR;
+ if (!strcmp(funcName, "vkAcquireNextImageKHR"))
+ return (PFN_vkVoidFunction)vkAcquireNextImageKHR;
+ if (!strcmp(funcName, "vkQueuePresentKHR"))
+ return (PFN_vkVoidFunction)vkQueuePresentKHR;
+ }
+
+    VkLayerDispatchTable *pTable = dev_data->device_dispatch_table;
+    if (pTable->GetDeviceProcAddr == NULL)
+        return NULL;
+    return pTable->GetDeviceProcAddr(dev, funcName);
+}
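The long strcmp chain above implements the standard layer dispatch pattern: return the layer's own hook for every intercepted entry point, and fall through to the next resolver in the chain for everything else. A compile-time sketch of that pattern, with hypothetical stand-in functions for the hook and the downstream resolver:

```cpp
#include <cstring>

using VoidFn = void (*)();

static void hook_destroy_device() {}  // the layer's intercept
static void driver_fn() {}            // what the driver would return

// Stand-in for the next layer / ICD resolver in the dispatch chain.
static VoidFn next_get_proc_addr(const char *name) {
    (void)name;
    return driver_fn;
}

// Intercept-or-forward: matched names get the layer hook, all others
// are passed down the chain unchanged.
static VoidFn layer_get_proc_addr(const char *name) {
    if (!strcmp(name, "vkDestroyDevice"))
        return hook_destroy_device;
    return next_get_proc_addr(name);
}
```

The real function also gates the WSI entry points on `wsi_enabled`, so a layer never hands out swapchain hooks for a device that did not enable the extension.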
+
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
+ if (!strcmp(funcName, "vkGetInstanceProcAddr"))
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
+ if (!strcmp(funcName, "vkGetDeviceProcAddr"))
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
+ if (!strcmp(funcName, "vkCreateInstance"))
+ return (PFN_vkVoidFunction)vkCreateInstance;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (PFN_vkVoidFunction)vkCreateDevice;
+ if (!strcmp(funcName, "vkDestroyInstance"))
+ return (PFN_vkVoidFunction)vkDestroyInstance;
+#if MTMERGESOURCE
+ if (!strcmp(funcName, "vkGetPhysicalDeviceMemoryProperties"))
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceMemoryProperties;
+#endif
+ if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
+ if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
+ if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
+ if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
+
+ if (instance == NULL)
+ return NULL;
+
+ PFN_vkVoidFunction fptr;
+
+ layer_data *my_data;
+ my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
+ if (fptr)
+ return fptr;
+
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
+ if (pTable->GetInstanceProcAddr == NULL)
+ return NULL;
+ return pTable->GetInstanceProcAddr(instance, funcName);
+}
diff --git a/layers/core_validation.h b/layers/core_validation.h
new file mode 100644
index 000000000..69b4f8ff8
--- /dev/null
+++ b/layers/core_validation.h
@@ -0,0 +1,896 @@
+/* Copyright (c) 2015-2016 The Khronos Group Inc.
+ * Copyright (c) 2015-2016 Valve Corporation
+ * Copyright (c) 2015-2016 LunarG, Inc.
+ * Copyright (C) 2015-2016 Google Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and/or associated documentation files (the "Materials"), to
+ * deal in the Materials without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Materials, and to permit persons to whom the Materials
+ * are furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ *
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
+ * USE OR OTHER DEALINGS IN THE MATERIALS
+ *
+ * Author: Courtney Goeltzenleuchter <courtneygo@google.com>
+ * Author: Tobin Ehlis <tobine@google.com>
+ * Author: Chris Forbes <chrisf@ijw.co.nz>
+ * Author: Mark Lobodzinski <mark@lunarg.com>
+ */
+
+// Enable mem_tracker merged code
+#define MTMERGE 1
+
+#pragma once
+#include "vulkan/vk_layer.h"
+#include <atomic>
+#include <vector>
+#include <unordered_map>
+#include <memory>
+#include <functional>
+
+using std::vector;
+
+//#ifdef __cplusplus
+//extern "C" {
+//#endif
+
+#if MTMERGE
+// Mem Tracker ERROR codes
+typedef enum _MEM_TRACK_ERROR {
+ MEMTRACK_NONE, // Used for INFO & other non-error messages
+ MEMTRACK_INVALID_CB, // Cmd Buffer invalid
+ MEMTRACK_INVALID_MEM_OBJ, // Invalid Memory Object
+ MEMTRACK_INVALID_ALIASING, // Invalid Memory Aliasing
+ MEMTRACK_INVALID_LAYOUT, // Invalid Layout
+ MEMTRACK_INTERNAL_ERROR, // Bug in Mem Track Layer internal data structures
+ MEMTRACK_FREED_MEM_REF, // MEM Obj freed while it still has obj and/or CB refs
+ MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, // Clearing bindings on mem obj that doesn't have any bindings
+ MEMTRACK_MISSING_MEM_BINDINGS, // Trying to retrieve mem bindings, but none found (may be internal error)
+ MEMTRACK_INVALID_OBJECT, // Attempting to reference generic VK Object that is invalid
+ MEMTRACK_MEMORY_BINDING_ERROR, // Error during one of many calls that bind memory to object or CB
+ MEMTRACK_MEMORY_LEAK, // Failure to call vkFreeMemory on Mem Obj prior to DestroyDevice
+ MEMTRACK_INVALID_STATE, // Memory not in the correct state
+ MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, // vkResetCommandBuffer() called on a CB that hasn't completed
+ MEMTRACK_INVALID_FENCE_STATE, // Invalid Fence State signaled or used
+ MEMTRACK_REBIND_OBJECT, // Non-sparse object bindings are immutable
+ MEMTRACK_INVALID_USAGE_FLAG, // Usage flags specified at image/buffer create conflict w/ use of object
+ MEMTRACK_INVALID_MAP, // Size flag specified at alloc is too small for mapping range
+} MEM_TRACK_ERROR;
+
+// MemTracker Semaphore states
+typedef enum SemaphoreState {
+ MEMTRACK_SEMAPHORE_STATE_UNSET, // Semaphore is in an undefined state
+    MEMTRACK_SEMAPHORE_STATE_SIGNALLED, // Semaphore is in signalled state
+ MEMTRACK_SEMAPHORE_STATE_WAIT, // Semaphore is in wait state
+} SemaphoreState;
+
+struct MemRange {
+ VkDeviceSize offset;
+ VkDeviceSize size;
+};
+
+/*
+ * MTMTODO : Update this comment
+ * Data Structure overview
+ * There are three global STL maps
+ * cbMap -- map of command Buffer (CB) objects to MT_CB_INFO structures
+ * Each MT_CB_INFO struct has an stl list container with
+ * memory objects that are referenced by this CB
+ * memObjMap -- map of Memory Objects to MT_MEM_OBJ_INFO structures
+ * Each MT_MEM_OBJ_INFO has two stl list containers with:
+ * -- all CBs referencing this mem obj
+ * -- all VK Objects that are bound to this memory
+ * objectMap -- map of objects to MT_OBJ_INFO structures
+ *
+ * Algorithm overview
+ * These are the primary events that should happen related to different objects
+ * 1. Command buffers
+ * CREATION - Add object,structure to map
+ * CMD BIND - If mem associated, add mem reference to list container
+ * DESTROY - Remove from map, decrement (and report) mem references
+ * 2. Mem Objects
+ * CREATION - Add object,structure to map
+ * OBJ BIND - Add obj structure to list container for that mem node
+ * CMD BIND - If mem-related, add CB structure to list container for that mem node
+ * DESTROY - Flag as errors any remaining refs and remove from map
+ * 3. Generic Objects
+ * MEM BIND - DESTROY any previous binding, Add obj node w/ ref to map, add obj ref to list container for that mem node
+ * DESTROY - If mem bound, remove reference list container for that memInfo, remove object ref from map
+ */
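The algorithm overview above boils down to reference counting through maps: each memory object records which command buffers reference it, and destroying an object with live references is flagged as an error. A minimal sketch under that reading, with hypothetical simplified types (raw `uint64_t` handles instead of Vulkan handles):

```cpp
#include <list>
#include <unordered_map>
#include <cstdint>

// Simplified stand-in for MT_MEM_OBJ_INFO: a ref count plus the list of
// command buffers that reference this memory object.
struct MemObjInfo {
    uint32_t refCount = 0;
    std::list<uint64_t> cbBindings;
};

static std::unordered_map<uint64_t, MemObjInfo> memObjMap;

// CREATION - add object/structure to map
void create_mem(uint64_t mem) { memObjMap[mem] = MemObjInfo{}; }

// CMD BIND - add CB reference to the mem node's list container
void bind_cb(uint64_t mem, uint64_t cb) {
    auto &info = memObjMap[mem];
    info.cbBindings.push_back(cb);
    ++info.refCount;
}

// DESTROY - flag remaining refs as errors (returns false) and remove from map
bool destroy_mem(uint64_t mem) {
    auto it = memObjMap.find(mem);
    if (it == memObjMap.end())
        return false;
    bool ok = (it->second.refCount == 0);
    memObjMap.erase(it);
    return ok;
}
```

The real layer additionally tracks the reverse direction (command buffers listing the memory objects they touch), which is what lets it report *which* in-flight buffer still holds a freed allocation.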
+// TODO : Is there a way to track when Cmd Buffer finishes & remove mem references at that point?
+// TODO : Could potentially store a list of freed mem allocs to flag when they're incorrectly used
+
+// Simple struct to hold handle and type of object so they can be uniquely identified and looked up in appropriate map
+struct MT_OBJ_HANDLE_TYPE {
+ uint64_t handle;
+ VkDebugReportObjectTypeEXT type;
+};
+
+struct MEMORY_RANGE {
+ uint64_t handle;
+ VkDeviceMemory memory;
+ VkDeviceSize start;
+ VkDeviceSize end;
+};
+
+// Data struct for tracking memory object
+struct DEVICE_MEM_INFO {
+    void *object; // Dispatchable object used to create this memory (device or swapchain)
+ uint32_t refCount; // Count of references (obj bindings or CB use)
+ bool valid; // Stores if the memory has valid data or not
+ VkDeviceMemory mem;
+ VkMemoryAllocateInfo allocInfo;
+ list<MT_OBJ_HANDLE_TYPE> pObjBindings; // list container of objects bound to this memory
+ list<VkCommandBuffer> pCommandBufferBindings; // list container of cmd buffers that reference this mem object
+ vector<MEMORY_RANGE> bufferRanges;
+ vector<MEMORY_RANGE> imageRanges;
+ VkImage image; // If memory is bound to image, this will have VkImage handle, else VK_NULL_HANDLE
+ MemRange memRange;
+ void *pData, *pDriverData;
+};
+
+// This only applies to Buffers and Images, which can have memory bound to them
+struct MT_OBJ_BINDING_INFO {
+ VkDeviceMemory mem;
+    bool valid; // Swapchain images have no backing MT_MEM_OBJ_INFO, so track data validity here
+ union create_info {
+ VkImageCreateInfo image;
+ VkBufferCreateInfo buffer;
+ } create_info;
+};
+
+struct MT_FB_ATTACHMENT_INFO {
+ VkImage image;
+ VkDeviceMemory mem;
+};
+
+struct MT_PASS_ATTACHMENT_INFO {
+ uint32_t attachment;
+ VkAttachmentLoadOp load_op;
+ VkAttachmentStoreOp store_op;
+};
+
+// Associate fenceId with a fence object
+struct MT_FENCE_INFO {
+ uint64_t fenceId; // Sequence number for fence at last submit
+ VkQueue queue; // Queue that this fence is submitted against or NULL
+ VkSwapchainKHR swapchain; // Swapchain that this fence is submitted against or NULL
+ VkBool32 firstTimeFlag; // Fence was created in signaled state, avoid warnings for first use
+ VkFenceCreateInfo createInfo;
+};
+
+// Track Queue information
+struct MT_QUEUE_INFO {
+ uint64_t lastRetiredId;
+ uint64_t lastSubmittedId;
+ list<VkCommandBuffer> pQueueCommandBuffers;
+ list<VkDeviceMemory> pMemRefList;
+};
+
+struct MT_DESCRIPTOR_SET_INFO {
+ std::vector<VkImageView> images;
+ std::vector<VkBuffer> buffers;
+};
+
+// Track Swapchain Information
+struct MT_SWAP_CHAIN_INFO {
+ VkSwapchainCreateInfoKHR createInfo;
+ std::vector<VkImage> images;
+};
+
+#endif
+// Draw State ERROR codes
+typedef enum _DRAW_STATE_ERROR {
+ DRAWSTATE_NONE, // Used for INFO & other non-error messages
+ DRAWSTATE_INTERNAL_ERROR, // Error with DrawState internal data structures
+ DRAWSTATE_NO_PIPELINE_BOUND, // Unable to identify a bound pipeline
+ DRAWSTATE_INVALID_POOL, // Invalid DS pool
+ DRAWSTATE_INVALID_SET, // Invalid DS
+ DRAWSTATE_INVALID_LAYOUT, // Invalid DS layout
+ DRAWSTATE_INVALID_IMAGE_LAYOUT, // Invalid Image layout
+ DRAWSTATE_INVALID_PIPELINE, // Invalid Pipeline handle referenced
+ DRAWSTATE_INVALID_PIPELINE_LAYOUT, // Invalid PipelineLayout
+ DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, // Attempt to create a pipeline
+ // with invalid state
+ DRAWSTATE_INVALID_COMMAND_BUFFER, // Invalid CommandBuffer referenced
+ DRAWSTATE_INVALID_BARRIER, // Invalid Barrier
+ DRAWSTATE_INVALID_BUFFER, // Invalid Buffer
+ DRAWSTATE_INVALID_QUERY, // Invalid Query
+ DRAWSTATE_INVALID_FENCE, // Invalid Fence
+ DRAWSTATE_INVALID_SEMAPHORE, // Invalid Semaphore
+ DRAWSTATE_INVALID_EVENT, // Invalid Event
+ DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in vkCmdBindVertexBuffers() too
+ // large for PSO's
+ // pVertexBindingDescriptions array
+ DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR, // binding offset in
+ // vkCmdBindIndexBuffer() out of
+ // alignment based on indexType
+ // DRAWSTATE_MISSING_DOT_PROGRAM, // No "dot" program in order
+ // to generate png image
+ DRAWSTATE_OUT_OF_MEMORY, // malloc failed
+ DRAWSTATE_INVALID_DESCRIPTOR_SET, // Descriptor Set handle is unknown
+ DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, // Type in layout vs. update are not the
+ // same
+ DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH, // StageFlags in layout are not
+ // the same throughout a single
+ // VkWriteDescriptorSet update
+ DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, // Descriptors set for update out
+ // of bounds for corresponding
+ // layout section
+ DRAWSTATE_DESCRIPTOR_POOL_EMPTY, // Attempt to allocate descriptor from a
+ // pool with no more descriptors of that
+ // type available
+ DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL, // Invalid to call
+ // vkFreeDescriptorSets on Sets
+ // allocated from a NON_FREE Pool
+ DRAWSTATE_INVALID_UPDATE_INDEX, // Index of requested update is invalid for
+ // specified descriptors set
+ DRAWSTATE_INVALID_UPDATE_STRUCT, // Struct in DS Update tree is of invalid
+ // type
+ DRAWSTATE_NUM_SAMPLES_MISMATCH, // Number of samples in bound PSO does not
+ // match number in FB of current RenderPass
+ DRAWSTATE_NO_END_COMMAND_BUFFER, // Must call vkEndCommandBuffer() before
+ // QueueSubmit on that commandBuffer
+ DRAWSTATE_NO_BEGIN_COMMAND_BUFFER, // Binding cmds or calling End on CB that
+ // never had vkBeginCommandBuffer()
+ // called on it
+ DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, // Cmd Buffer created with
+ // VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT
+ // flag is submitted
+ // multiple times
+ DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, // vkCmdExecuteCommands() called
+ // with a primary commandBuffer
+ // in pCommandBuffers array
+ DRAWSTATE_VIEWPORT_NOT_BOUND, // Draw submitted with no viewport state bound
+ DRAWSTATE_SCISSOR_NOT_BOUND, // Draw submitted with no scissor state bound
+ DRAWSTATE_LINE_WIDTH_NOT_BOUND, // Draw submitted with no line width state
+ // bound
+ DRAWSTATE_DEPTH_BIAS_NOT_BOUND, // Draw submitted with no depth bias state
+ // bound
+ DRAWSTATE_BLEND_NOT_BOUND, // Draw submitted with no blend state bound when
+ // color write enabled
+ DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND, // Draw submitted with no depth bounds
+ // state bound when depth enabled
+ DRAWSTATE_STENCIL_NOT_BOUND, // Draw submitted with no stencil state bound
+ // when stencil enabled
+ DRAWSTATE_INDEX_BUFFER_NOT_BOUND, // Indexed draw submitted with no index
+ // buffer bound
+ DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, // Draw submitted PSO Pipeline
+ // layout that's not compatible
+ // with layout from
+ // BindDescriptorSets
+ DRAWSTATE_RENDERPASS_INCOMPATIBLE, // Incompatible renderpasses between
+ // secondary cmdBuffer and primary
+ // cmdBuffer or framebuffer
+ DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE, // Incompatible framebuffer between
+ // secondary cmdBuffer and active
+ // renderPass
+ DRAWSTATE_INVALID_RENDERPASS, // Use of a NULL or otherwise invalid
+ // RenderPass object
+ DRAWSTATE_INVALID_RENDERPASS_CMD, // Invalid cmd submitted while a
+ // RenderPass is active
+ DRAWSTATE_NO_ACTIVE_RENDERPASS, // Rendering cmd submitted without an active
+ // RenderPass
+ DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, // DescriptorSet bound but it was
+ // never updated. This is a warning
+ // code.
+ DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND, // DescriptorSet used by pipeline at
+ // draw time is not bound, or has been
+ // disturbed (which would have flagged
+ // previous warning)
+ DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, // Count of dynamic descriptors
+ // in the bound DescriptorSets does
+ // not match dynamicOffsetCount
+ DRAWSTATE_CLEAR_CMD_BEFORE_DRAW, // Clear cmd issued before any Draw in
+ // CommandBuffer, should use RenderPass Ops
+ // instead
+ DRAWSTATE_BEGIN_CB_INVALID_STATE, // CB state at Begin call is bad. Can be
+ // Primary/Secondary CB created with
+ // mismatched FB/RP information or CB in
+ // RECORDING state
+ DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, // CmdBuffer is being used in
+ // violation of
+ // VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT
+ // rules (i.e. simultaneous use w/o
+ // that bit set)
+ DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, // Attempting to call Reset (or
+ // Begin on recorded cmdBuffer) that
+ // was allocated from Pool w/o
+ // VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT
+ // bit set
+ DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, // Count for viewports and scissors
+ // mismatch and/or state doesn't match
+ // count
+ DRAWSTATE_INVALID_IMAGE_ASPECT, // Image aspect is invalid for the current
+ // operation
+ DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, // Attachment reference must be
+ // present in active subpass
+ DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, // A Descriptor of *_SAMPLER type is
+ // being updated with an invalid or bad
+ // Sampler
+ DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, // Descriptors of
+ // *COMBINED_IMAGE_SAMPLER
+ // type are being updated
+ // where some, but not all,
+ // of the updates use
+ // immutable samplers
+ DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, // A Descriptor of *_IMAGE or
+ // *_ATTACHMENT type is being updated
+ // with an invalid or bad ImageView
+ DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, // A Descriptor of *_TEXEL_BUFFER
+ // type is being updated with an
+ // invalid or bad BufferView
+ DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, // A Descriptor of
+ // *_[UNIFORM|STORAGE]_BUFFER[_DYNAMIC]
+ // type is being updated with an
+ // invalid or bad VkDescriptorBufferInfo
+ DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, // At draw time the dynamic offset
+ // combined with buffer offset and range
+ // oversteps size of buffer
+ DRAWSTATE_DOUBLE_DESTROY, // Destroying an object twice
+ DRAWSTATE_OBJECT_INUSE, // Destroying or modifying an object in use by a
+ // command buffer
+ DRAWSTATE_QUEUE_FORWARD_PROGRESS, // Queue cannot guarantee forward progress
+ DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, // Dynamic Uniform Buffer Offsets
+ // violate device limit
+ DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, // Dynamic Storage Buffer Offsets
+ // violate device limit
+ DRAWSTATE_INDEPENDENT_BLEND, // If independent blending is not enabled, all
+ // elements of pAttachments must be identical
+ DRAWSTATE_DISABLED_LOGIC_OP, // If the logic operations feature is not
+ // enabled, logicOpEnable must be VK_FALSE
+ DRAWSTATE_INVALID_LOGIC_OP, // If logicOpEnable is VK_TRUE, logicOp must
+ // be a valid VkLogicOp value
+ DRAWSTATE_INVALID_QUEUE_INDEX, // Specified queue index exceeds number
+ // of queried queue families
+ DRAWSTATE_PUSH_CONSTANTS_ERROR, // Push constants exceed maxPushConstantsSize
+} DRAW_STATE_ERROR;
+
+typedef enum _SHADER_CHECKER_ERROR {
+ SHADER_CHECKER_NONE,
+ SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, // Type mismatch between shader stages or shader and pipeline
+ SHADER_CHECKER_OUTPUT_NOT_CONSUMED, // Entry appears in output interface, but missing in input
+ SHADER_CHECKER_INPUT_NOT_PRODUCED, // Entry appears in input interface, but missing in output
+ SHADER_CHECKER_NON_SPIRV_SHADER, // Shader module is not SPIR-V
+ SHADER_CHECKER_INCONSISTENT_SPIRV, // General inconsistency within a SPIR-V module
+ SHADER_CHECKER_UNKNOWN_STAGE, // Stage is not supported by analysis
+ SHADER_CHECKER_INCONSISTENT_VI, // VI state contains conflicting binding or attrib descriptions
+ SHADER_CHECKER_MISSING_DESCRIPTOR, // Shader attempts to use a descriptor binding not declared in the layout
+ SHADER_CHECKER_BAD_SPECIALIZATION, // Specialization map entry points outside specialization data block
+ SHADER_CHECKER_MISSING_ENTRYPOINT, // Shader module does not contain the requested entrypoint
+ SHADER_CHECKER_PUSH_CONSTANT_OUT_OF_RANGE, // Push constant variable is not in a push constant range
+ SHADER_CHECKER_PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE, // Push constant range exists, but not accessible from stage
+ SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH, // Descriptor type does not match shader resource type
+ SHADER_CHECKER_DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE, // Descriptor used by shader, but not accessible from stage
+ SHADER_CHECKER_FEATURE_NOT_ENABLED, // Shader uses capability requiring a feature not enabled on device
+ SHADER_CHECKER_BAD_CAPABILITY, // Shader uses capability not supported by Vulkan (OpenCL features)
+} SHADER_CHECKER_ERROR;
+
+typedef enum _DRAW_TYPE {
+ DRAW = 0,
+ DRAW_INDEXED = 1,
+ DRAW_INDIRECT = 2,
+ DRAW_INDEXED_INDIRECT = 3,
+ DRAW_BEGIN_RANGE = DRAW,
+ DRAW_END_RANGE = DRAW_INDEXED_INDIRECT,
+ NUM_DRAW_TYPES = (DRAW_END_RANGE - DRAW_BEGIN_RANGE + 1),
+} DRAW_TYPE;
+
+typedef struct _SHADER_DS_MAPPING {
+ uint32_t slotCount;
+ VkDescriptorSetLayoutCreateInfo *pShaderMappingSlot;
+} SHADER_DS_MAPPING;
+
+typedef struct _GENERIC_HEADER {
+ VkStructureType sType;
+ const void *pNext;
+} GENERIC_HEADER;
+
+typedef struct _PIPELINE_NODE {
+ VkPipeline pipeline;
+ VkGraphicsPipelineCreateInfo graphicsPipelineCI;
+ VkPipelineVertexInputStateCreateInfo vertexInputCI;
+ VkPipelineInputAssemblyStateCreateInfo iaStateCI;
+ VkPipelineTessellationStateCreateInfo tessStateCI;
+ VkPipelineViewportStateCreateInfo vpStateCI;
+ VkPipelineRasterizationStateCreateInfo rsStateCI;
+ VkPipelineMultisampleStateCreateInfo msStateCI;
+ VkPipelineColorBlendStateCreateInfo cbStateCI;
+ VkPipelineDepthStencilStateCreateInfo dsStateCI;
+ VkPipelineDynamicStateCreateInfo dynStateCI;
+ VkPipelineShaderStageCreateInfo vsCI;
+ VkPipelineShaderStageCreateInfo tcsCI;
+ VkPipelineShaderStageCreateInfo tesCI;
+ VkPipelineShaderStageCreateInfo gsCI;
+ VkPipelineShaderStageCreateInfo fsCI;
+ // Compute shader is included in VkComputePipelineCreateInfo
+ VkComputePipelineCreateInfo computePipelineCI;
+ // Flag of which shader stages are active for this pipeline
+ uint32_t active_shaders;
+ // Capture which sets are actually used by the shaders of this pipeline
+ std::set<unsigned> active_sets;
+ // Vtx input info (if any)
+ std::vector<VkVertexInputBindingDescription> vertexBindingDescriptions;
+ std::vector<VkVertexInputAttributeDescription> vertexAttributeDescriptions;
+ std::vector<VkPipelineColorBlendAttachmentState> attachments;
+ // Default constructor
+ _PIPELINE_NODE()
+ : pipeline{}, graphicsPipelineCI{}, vertexInputCI{}, iaStateCI{}, tessStateCI{}, vpStateCI{}, rsStateCI{}, msStateCI{},
+ cbStateCI{}, dsStateCI{}, dynStateCI{}, vsCI{}, tcsCI{}, tesCI{}, gsCI{}, fsCI{}, computePipelineCI{}, active_shaders(0),
+ active_sets(),
+ vertexBindingDescriptions(), vertexAttributeDescriptions(), attachments()
+ {}
+} PIPELINE_NODE;
+
+class BASE_NODE {
+ public:
+ std::atomic_int in_use;
+};
+
+typedef struct _SAMPLER_NODE {
+ VkSampler sampler;
+ VkSamplerCreateInfo createInfo;
+
+ _SAMPLER_NODE(const VkSampler *ps, const VkSamplerCreateInfo *pci) : sampler(*ps), createInfo(*pci){};
+} SAMPLER_NODE;
+
+class IMAGE_NODE : public BASE_NODE {
+ public:
+ VkImageCreateInfo createInfo;
+ VkDeviceMemory mem;
+ VkDeviceSize memOffset;
+ VkDeviceSize memSize;
+};
+
+typedef struct _IMAGE_LAYOUT_NODE {
+ VkImageLayout layout;
+ VkFormat format;
+} IMAGE_LAYOUT_NODE;
+
+typedef struct _IMAGE_CMD_BUF_LAYOUT_NODE {
+ VkImageLayout initialLayout;
+ VkImageLayout layout;
+} IMAGE_CMD_BUF_LAYOUT_NODE;
+
+class BUFFER_NODE : public BASE_NODE {
+ public:
+ using BASE_NODE::in_use;
+ unique_ptr<VkBufferCreateInfo> create_info;
+};
+
+// Store the DAG.
+struct DAGNode {
+ uint32_t pass;
+ std::vector<uint32_t> prev;
+ std::vector<uint32_t> next;
+};
+
+struct RENDER_PASS_NODE {
+ VkRenderPassCreateInfo const *pCreateInfo;
+ VkFramebuffer fb;
+ vector<bool> hasSelfDependency;
+ vector<DAGNode> subpassToNode;
+ vector<vector<VkFormat>> subpassColorFormats;
+ vector<MT_PASS_ATTACHMENT_INFO> attachments;
+ unordered_map<uint32_t, bool> attachment_first_read;
+ unordered_map<uint32_t, VkImageLayout> attachment_first_layout;
+
+ RENDER_PASS_NODE(VkRenderPassCreateInfo const *pCreateInfo) : pCreateInfo(pCreateInfo), fb(VK_NULL_HANDLE) {
+ uint32_t i;
+
+ subpassColorFormats.reserve(pCreateInfo->subpassCount);
+ for (i = 0; i < pCreateInfo->subpassCount; i++) {
+ const VkSubpassDescription *subpass = &pCreateInfo->pSubpasses[i];
+ vector<VkFormat> color_formats;
+ uint32_t j;
+
+ color_formats.reserve(subpass->colorAttachmentCount);
+ for (j = 0; j < subpass->colorAttachmentCount; j++) {
+ const uint32_t att = subpass->pColorAttachments[j].attachment;
+ const VkFormat format = pCreateInfo->pAttachments[att].format;
+
+ color_formats.push_back(format);
+ }
+
+ subpassColorFormats.push_back(color_formats);
+ }
+ }
+};
+
+class PHYS_DEV_PROPERTIES_NODE {
+ public:
+ VkPhysicalDeviceProperties properties;
+ VkPhysicalDeviceFeatures features;
+ vector<VkQueueFamilyProperties> queue_family_properties;
+};
+
+class FENCE_NODE : public BASE_NODE {
+ public:
+ using BASE_NODE::in_use;
+#if MTMERGE
+ uint64_t fenceId; // Sequence number for fence at last submit
+ VkSwapchainKHR swapchain; // Swapchain that this fence is submitted against or NULL
+ VkBool32 firstTimeFlag; // Fence was created in signaled state, avoid warnings for first use
+ VkFenceCreateInfo createInfo;
+#endif
+ VkQueue queue;
+ vector<VkCommandBuffer> cmdBuffers;
+ bool needsSignaled;
+ vector<VkFence> priorFences;
+
+ // Default constructor
+ FENCE_NODE() : queue(NULL), needsSignaled(false){};
+};
+
+class SEMAPHORE_NODE : public BASE_NODE {
+ public:
+ using BASE_NODE::in_use;
+ uint32_t signaled;
+ SemaphoreState state;
+ VkQueue queue;
+};
+
+class EVENT_NODE : public BASE_NODE {
+ public:
+ using BASE_NODE::in_use;
+ bool needsSignaled;
+ VkPipelineStageFlags stageMask;
+};
+
+class QUEUE_NODE {
+ public:
+ VkDevice device;
+ vector<VkFence> lastFences;
+#if MTMERGE
+ uint64_t lastRetiredId;
+ uint64_t lastSubmittedId;
+ // MTMTODO : merge cmd_buffer data structs here
+ list<VkCommandBuffer> pQueueCommandBuffers;
+ list<VkDeviceMemory> pMemRefList;
+#endif
+ vector<VkCommandBuffer> untrackedCmdBuffers;
+ unordered_set<VkCommandBuffer> inFlightCmdBuffers;
+ unordered_map<VkEvent, VkPipelineStageFlags> eventToStageMap;
+};
+
+class QUERY_POOL_NODE : public BASE_NODE {
+ public:
+ VkQueryPoolCreateInfo createInfo;
+};
+
+class FRAMEBUFFER_NODE {
+ public:
+ VkFramebufferCreateInfo createInfo;
+ unordered_set<VkCommandBuffer> referencingCmdBuffers;
+ vector<MT_FB_ATTACHMENT_INFO> attachments;
+};
+
+// Descriptor Data structures
+// Layout Node has the core layout data
+typedef struct _LAYOUT_NODE {
+ VkDescriptorSetLayout layout;
+ VkDescriptorSetLayoutCreateInfo createInfo;
+ uint32_t startIndex; // 1st index of this layout
+ uint32_t endIndex; // last index of this layout
+ uint32_t dynamicDescriptorCount; // Total count of dynamic descriptors used
+ // by this layout
+ vector<VkDescriptorType> descriptorTypes; // Type per descriptor in this
+ // layout to verify correct
+ // updates
+ vector<VkShaderStageFlags> stageFlags; // stageFlags per descriptor in this
+ // layout to verify correct updates
+ unordered_map<uint32_t, uint32_t> bindingToIndexMap; // map set binding # to
+ // pBindings index
+ // Default constructor
+ _LAYOUT_NODE() : layout{}, createInfo{}, startIndex(0), endIndex(0), dynamicDescriptorCount(0){};
+} LAYOUT_NODE;
+
+// Store layouts and pushconstants for PipelineLayout
+struct PIPELINE_LAYOUT_NODE {
+ vector<VkDescriptorSetLayout> descriptorSetLayouts;
+ vector<VkPushConstantRange> pushConstantRanges;
+};
+
+class SET_NODE : public BASE_NODE {
+ public:
+ using BASE_NODE::in_use;
+ VkDescriptorSet set;
+ VkDescriptorPool pool;
+ // Head of LL of all Update structs for this set
+ GENERIC_HEADER *pUpdateStructs;
+ // Total num of descriptors in this set (count of its layout plus all prior layouts)
+ uint32_t descriptorCount;
+ GENERIC_HEADER **ppDescriptors; // Array where each index points to update node for its slot
+ LAYOUT_NODE *pLayout; // Layout for this set
+ SET_NODE *pNext;
+ unordered_set<VkCommandBuffer> boundCmdBuffers; // Cmd buffers that this set has been bound to
+ SET_NODE() : pUpdateStructs(NULL), ppDescriptors(NULL), pLayout(NULL), pNext(NULL){};
+};
+
+typedef struct _DESCRIPTOR_POOL_NODE {
+ VkDescriptorPool pool;
+ uint32_t maxSets; // Max descriptor sets allowed in this pool
+ uint32_t availableSets; // Available descriptor sets in this pool
+
+ VkDescriptorPoolCreateInfo createInfo;
+ SET_NODE *pSets; // Head of LL of sets for this Pool
+ vector<uint32_t> maxDescriptorTypeCount; // Max # of descriptors of each type in this pool
+ vector<uint32_t> availableDescriptorTypeCount; // Available # of descriptors of each type in this pool
+
+ _DESCRIPTOR_POOL_NODE(const VkDescriptorPool pool, const VkDescriptorPoolCreateInfo *pCreateInfo)
+ : pool(pool), maxSets(pCreateInfo->maxSets), availableSets(pCreateInfo->maxSets), createInfo(*pCreateInfo), pSets(NULL),
+ maxDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE), availableDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE) {
+ if (createInfo.poolSizeCount) { // Shadow type struct from ptr into local struct
+ size_t poolSizeBytes = createInfo.poolSizeCount * sizeof(VkDescriptorPoolSize);
+ createInfo.pPoolSizes = new VkDescriptorPoolSize[createInfo.poolSizeCount];
+ memcpy((void *)createInfo.pPoolSizes, pCreateInfo->pPoolSizes, poolSizeBytes);
+ // Now set the max count for each descriptor type to the count of that type requested for the pool
+ uint32_t i = 0;
+ for (i = 0; i < createInfo.poolSizeCount; ++i) {
+ uint32_t typeIndex = static_cast<uint32_t>(createInfo.pPoolSizes[i].type);
+ maxDescriptorTypeCount[typeIndex] = createInfo.pPoolSizes[i].descriptorCount;
+ availableDescriptorTypeCount[typeIndex] = maxDescriptorTypeCount[typeIndex];
+ }
+ } else {
+ createInfo.pPoolSizes = NULL; // Make sure this is NULL so we don't try to clean it up
+ }
+ }
+ ~_DESCRIPTOR_POOL_NODE() {
+ delete[] createInfo.pPoolSizes;
+ // TODO : pSets are currently freed in deletePools function which uses freeShadowUpdateTree function
+ // need to migrate that struct to smart ptrs for auto-cleanup
+ }
+} DESCRIPTOR_POOL_NODE;
+
+// Cmd Buffer Tracking
+typedef enum _CMD_TYPE {
+ CMD_BINDPIPELINE,
+ CMD_BINDPIPELINEDELTA,
+ CMD_SETVIEWPORTSTATE,
+ CMD_SETSCISSORSTATE,
+ CMD_SETLINEWIDTHSTATE,
+ CMD_SETDEPTHBIASSTATE,
+ CMD_SETBLENDSTATE,
+ CMD_SETDEPTHBOUNDSSTATE,
+ CMD_SETSTENCILREADMASKSTATE,
+ CMD_SETSTENCILWRITEMASKSTATE,
+ CMD_SETSTENCILREFERENCESTATE,
+ CMD_BINDDESCRIPTORSETS,
+ CMD_BINDINDEXBUFFER,
+ CMD_BINDVERTEXBUFFER,
+ CMD_DRAW,
+ CMD_DRAWINDEXED,
+ CMD_DRAWINDIRECT,
+ CMD_DRAWINDEXEDINDIRECT,
+ CMD_DISPATCH,
+ CMD_DISPATCHINDIRECT,
+ CMD_COPYBUFFER,
+ CMD_COPYIMAGE,
+ CMD_BLITIMAGE,
+ CMD_COPYBUFFERTOIMAGE,
+ CMD_COPYIMAGETOBUFFER,
+ CMD_CLONEIMAGEDATA,
+ CMD_UPDATEBUFFER,
+ CMD_FILLBUFFER,
+ CMD_CLEARCOLORIMAGE,
+ CMD_CLEARATTACHMENTS,
+ CMD_CLEARDEPTHSTENCILIMAGE,
+ CMD_RESOLVEIMAGE,
+ CMD_SETEVENT,
+ CMD_RESETEVENT,
+ CMD_WAITEVENTS,
+ CMD_PIPELINEBARRIER,
+ CMD_BEGINQUERY,
+ CMD_ENDQUERY,
+ CMD_RESETQUERYPOOL,
+ CMD_COPYQUERYPOOLRESULTS,
+ CMD_WRITETIMESTAMP,
+ CMD_PUSHCONSTANTS,
+ CMD_INITATOMICCOUNTERS,
+ CMD_LOADATOMICCOUNTERS,
+ CMD_SAVEATOMICCOUNTERS,
+ CMD_BEGINRENDERPASS,
+ CMD_NEXTSUBPASS,
+ CMD_ENDRENDERPASS,
+ CMD_EXECUTECOMMANDS,
+} CMD_TYPE;
+// Data structure for holding sequence of cmds in cmd buffer
+typedef struct _CMD_NODE {
+ CMD_TYPE type;
+ uint64_t cmdNumber;
+} CMD_NODE;
+
+typedef enum _CB_STATE {
+ CB_NEW, // Newly created CB w/o any cmds
+ CB_RECORDING, // BeginCB has been called on this CB
+ CB_RECORDED, // EndCB has been called on this CB
+ CB_INVALID // CB had a bound descriptor set destroyed or updated
+} CB_STATE;
+// CB Status -- used to track status of various bindings on cmd buffer objects
+typedef VkFlags CBStatusFlags;
+typedef enum _CBStatusFlagBits {
+ CBSTATUS_NONE = 0x00000000, // No status is set
+ CBSTATUS_VIEWPORT_SET = 0x00000001, // Viewport has been set
+ CBSTATUS_LINE_WIDTH_SET = 0x00000002, // Line width has been set
+ CBSTATUS_DEPTH_BIAS_SET = 0x00000004, // Depth bias has been set
+ CBSTATUS_COLOR_BLEND_WRITE_ENABLE = 0x00000008, // PSO w/ CB Enable set has been set
+ CBSTATUS_BLEND_SET = 0x00000010, // Blend state object has been set
+ CBSTATUS_DEPTH_WRITE_ENABLE = 0x00000020, // PSO w/ Depth Enable set has been set
+ CBSTATUS_STENCIL_TEST_ENABLE = 0x00000040, // PSO w/ Stencil Enable set has been set
+ CBSTATUS_DEPTH_BOUNDS_SET = 0x00000080, // Depth bounds state object has been set
+ CBSTATUS_STENCIL_READ_MASK_SET = 0x00000100, // Stencil read mask has been set
+ CBSTATUS_STENCIL_WRITE_MASK_SET = 0x00000200, // Stencil write mask has been set
+ CBSTATUS_STENCIL_REFERENCE_SET = 0x00000400, // Stencil reference has been set
+ CBSTATUS_INDEX_BUFFER_BOUND = 0x00000800, // Index buffer has been set
+ CBSTATUS_SCISSOR_SET = 0x00001000, // Scissor has been set
+ CBSTATUS_ALL = 0x00001FFF, // All dynamic state set
+} CBStatusFlagBits;
+
+typedef struct stencil_data {
+ uint32_t compareMask;
+ uint32_t writeMask;
+ uint32_t reference;
+} CBStencilData;
+
+typedef struct _DRAW_DATA { vector<VkBuffer> buffers; } DRAW_DATA;
+
+struct ImageSubresourcePair {
+ VkImage image;
+ bool hasSubresource;
+ VkImageSubresource subresource;
+};
+
+bool operator==(const ImageSubresourcePair &img1, const ImageSubresourcePair &img2) {
+ if (img1.image != img2.image || img1.hasSubresource != img2.hasSubresource)
+ return false;
+ return !img1.hasSubresource ||
+ (img1.subresource.aspectMask == img2.subresource.aspectMask && img1.subresource.mipLevel == img2.subresource.mipLevel &&
+ img1.subresource.arrayLayer == img2.subresource.arrayLayer);
+}
+
+namespace std {
+template <> struct hash<ImageSubresourcePair> {
+ size_t operator()(ImageSubresourcePair img) const throw() {
+ size_t hashVal = hash<uint64_t>()(reinterpret_cast<uint64_t &>(img.image));
+ hashVal ^= hash<bool>()(img.hasSubresource);
+ if (img.hasSubresource) {
+ hashVal ^= hash<uint32_t>()(reinterpret_cast<uint32_t &>(img.subresource.aspectMask));
+ hashVal ^= hash<uint32_t>()(img.subresource.mipLevel);
+ hashVal ^= hash<uint32_t>()(img.subresource.arrayLayer);
+ }
+ return hashVal;
+ }
+};
+}
+
+struct QueryObject {
+ VkQueryPool pool;
+ uint32_t index;
+};
+
+bool operator==(const QueryObject &query1, const QueryObject &query2) {
+ return (query1.pool == query2.pool && query1.index == query2.index);
+}
+
+namespace std {
+template <> struct hash<QueryObject> {
+ size_t operator()(QueryObject query) const throw() {
+ return hash<uint64_t>()((uint64_t)(query.pool)) ^ hash<uint32_t>()(query.index);
+ }
+};
+}
+// Track last states that are bound per pipeline bind point (Gfx & Compute)
+struct LAST_BOUND_STATE {
+ VkPipeline pipeline;
+ VkPipelineLayout pipelineLayout;
+ // Track each set that has been bound
+ // TODO : can unique be global per CB? (do we care about Gfx vs. Compute?)
+ unordered_set<VkDescriptorSet> uniqueBoundSets;
+ // Ordered bound set tracking where index is set# that given set is bound to
+ vector<VkDescriptorSet> boundDescriptorSets;
+ // one dynamic offset per dynamic descriptor bound to this CB
+ vector<uint32_t> dynamicOffsets;
+ void reset() {
+ pipeline = VK_NULL_HANDLE;
+ pipelineLayout = VK_NULL_HANDLE;
+ uniqueBoundSets.clear();
+ boundDescriptorSets.clear();
+ dynamicOffsets.clear();
+ }
+};
+// Cmd Buffer Wrapper Struct
+struct GLOBAL_CB_NODE {
+ VkCommandBuffer commandBuffer;
+ VkCommandBufferAllocateInfo createInfo;
+ VkCommandBufferBeginInfo beginInfo;
+ VkCommandBufferInheritanceInfo inheritanceInfo;
+ // VkFence fence; // fence tracking this cmd buffer
+ VkDevice device; // device this CB belongs to
+ uint64_t numCmds; // number of cmds in this CB
+ uint64_t drawCount[NUM_DRAW_TYPES]; // Count of each type of draw in this CB
+ CB_STATE state; // Track cmd buffer update state
+ uint64_t submitCount; // Number of times CB has been submitted
+ CBStatusFlags status; // Track status of various bindings on cmd buffer
+ vector<CMD_NODE> cmds; // vector of commands bound to this command buffer
+ // Currently storing "lastBound" objects on per-CB basis
+ // long-term may want to create caches of "lastBound" states and could have
+ // each individual CMD_NODE referencing its own "lastBound" state
+ // VkPipeline lastBoundPipeline;
+ // VkPipelineLayout lastBoundPipelineLayout;
+ // // Capture unique std::set of descriptorSets that are bound to this CB.
+ // std::set<VkDescriptorSet> uniqueBoundSets;
+ // vector<VkDescriptorSet> boundDescriptorSets; // Index is set# that given set is bound to
+ // Store last bound state for Gfx & Compute pipeline bind points
+ LAST_BOUND_STATE lastBound[VK_PIPELINE_BIND_POINT_RANGE_SIZE];
+
+ vector<uint32_t> dynamicOffsets;
+ vector<VkViewport> viewports;
+ vector<VkRect2D> scissors;
+ VkRenderPassBeginInfo activeRenderPassBeginInfo;
+ uint64_t fenceId;
+ VkFence lastSubmittedFence;
+ VkQueue lastSubmittedQueue;
+ VkRenderPass activeRenderPass;
+ VkSubpassContents activeSubpassContents;
+ uint32_t activeSubpass;
+ VkFramebuffer framebuffer;
+ // Track descriptor sets that are destroyed or updated while bound to CB
+ // TODO : These data structures relate to tracking resources that invalidate
+ // a cmd buffer that references them. Need to unify how we handle these
+ // cases so we don't have different tracking data for each type.
+ std::set<VkDescriptorSet> destroyedSets;
+ std::set<VkDescriptorSet> updatedSets;
+ unordered_set<VkFramebuffer> destroyedFramebuffers;
+ vector<VkEvent> waitedEvents;
+ vector<VkSemaphore> semaphores;
+ vector<VkEvent> events;
+ unordered_map<QueryObject, vector<VkEvent>> waitedEventsBeforeQueryReset;
+ unordered_map<QueryObject, bool> queryToStateMap; // false is unavailable, true is available
+ unordered_set<QueryObject> activeQueries;
+ unordered_set<QueryObject> startedQueries;
+ unordered_map<ImageSubresourcePair, IMAGE_CMD_BUF_LAYOUT_NODE> imageLayoutMap;
+ unordered_map<VkImage, vector<ImageSubresourcePair>> imageSubresourceMap;
+ unordered_map<VkEvent, VkPipelineStageFlags> eventToStageMap;
+ vector<DRAW_DATA> drawData;
+ DRAW_DATA currentDrawData;
+ VkCommandBuffer primaryCommandBuffer;
+ // If cmd buffer is primary, track secondary command buffers pending
+ // execution
+ std::unordered_set<VkCommandBuffer> secondaryCommandBuffers;
+ // MTMTODO : Scrub these data fields and merge active sets w/ lastBound as appropriate
+ vector<VkDescriptorSet> activeDescriptorSets;
+ vector<std::function<VkBool32()>> validate_functions;
+ list<VkDeviceMemory> pMemObjList; // List container of Mem objs referenced by this CB
+ vector<std::function<bool(VkQueue)>> eventUpdates;
+};
+
+class SWAPCHAIN_NODE {
+ public:
+ VkSwapchainCreateInfoKHR createInfo;
+ uint32_t *pQueueFamilyIndices;
+ std::vector<VkImage> images;
+ SWAPCHAIN_NODE(const VkSwapchainCreateInfoKHR *pCreateInfo) : createInfo(*pCreateInfo), pQueueFamilyIndices(NULL) {
+ if (pCreateInfo->queueFamilyIndexCount && pCreateInfo->imageSharingMode == VK_SHARING_MODE_CONCURRENT) {
+ pQueueFamilyIndices = new uint32_t[pCreateInfo->queueFamilyIndexCount];
+ memcpy(pQueueFamilyIndices, pCreateInfo->pQueueFamilyIndices, pCreateInfo->queueFamilyIndexCount * sizeof(uint32_t));
+ createInfo.pQueueFamilyIndices = pQueueFamilyIndices;
+ }
+ }
+ ~SWAPCHAIN_NODE() { delete[] pQueueFamilyIndices; }
+};
+
+//#ifdef __cplusplus
+//}
+//#endif
diff --git a/layers/device_limits.cpp b/layers/device_limits.cpp
index 690a16171..ff761f6c8 100644
--- a/layers/device_limits.cpp
+++ b/layers/device_limits.cpp
@@ -45,31 +45,24 @@
#include "device_limits.h"
#include "vulkan/vk_layer.h"
#include "vk_layer_config.h"
-#include "vulkan/vk_debug_marker_layer.h"
#include "vk_enum_validate_helper.h"
#include "vk_layer_table.h"
-#include "vk_layer_debug_marker_table.h"
#include "vk_layer_data.h"
#include "vk_layer_logging.h"
#include "vk_layer_extension_utils.h"
#include "vk_layer_utils.h"
-struct devExts {
- bool debug_marker_enabled;
-};
-
// This struct will be stored in a map hashed by the dispatchable object
struct layer_data {
- debug_report_data *report_data;
- std::vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable *device_dispatch_table;
- VkLayerInstanceDispatchTable *instance_dispatch_table;
- devExts device_extensions;
+ debug_report_data *report_data;
+ std::vector<VkDebugReportCallbackEXT> logging_callback;
+ VkLayerDispatchTable *device_dispatch_table;
+ VkLayerInstanceDispatchTable *instance_dispatch_table;
// Track state of each instance
- unique_ptr<INSTANCE_STATE> instanceState;
- unique_ptr<PHYSICAL_DEVICE_STATE> physicalDeviceState;
- VkPhysicalDeviceFeatures actualPhysicalDeviceFeatures;
- VkPhysicalDeviceFeatures requestedPhysicalDeviceFeatures;
+ unique_ptr<INSTANCE_STATE> instanceState;
+ unique_ptr<PHYSICAL_DEVICE_STATE> physicalDeviceState;
+ VkPhysicalDeviceFeatures actualPhysicalDeviceFeatures;
+ VkPhysicalDeviceFeatures requestedPhysicalDeviceFeatures;
unordered_map<VkDevice, VkPhysicalDeviceProperties> physDevPropertyMap;
// Track physical device per logical device
@@ -77,17 +70,9 @@ struct layer_data {
// Vector indices correspond to queueFamilyIndex
vector<unique_ptr<VkQueueFamilyProperties>> queueFamilyProperties;
- layer_data() :
- report_data(nullptr),
- device_dispatch_table(nullptr),
- instance_dispatch_table(nullptr),
- device_extensions(),
- instanceState(nullptr),
- physicalDeviceState(nullptr),
- actualPhysicalDeviceFeatures(),
- requestedPhysicalDeviceFeatures(),
- physicalDevice()
- {};
+ layer_data()
+ : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr), instanceState(nullptr),
+ physicalDeviceState(nullptr), actualPhysicalDeviceFeatures(), requestedPhysicalDeviceFeatures(), physicalDevice(){};
};
static unordered_map<void *, layer_data *> layer_data_map;
@@ -96,48 +81,13 @@ static unordered_map<void *, layer_data *> layer_data_map;
static int globalLockInitialized = 0;
static loader_platform_thread_mutex globalLock;
-template layer_data *get_my_data_ptr<layer_data>(
- void *data_key,
- std::unordered_map<void *, layer_data *> &data_map);
-
-static void init_device_limits(layer_data *my_data, const VkAllocationCallbacks *pAllocator)
-{
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *option_str;
- VkDebugReportCallbackEXT callback;
- // initialize DeviceLimits options
- report_flags = getLayerOptionFlags("DeviceLimitsReportFlags", 0);
- getLayerOptionEnum("DeviceLimitsDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- option_str = getLayerOption("DeviceLimitsLogFilename");
- log_output = getLayerLogOutput(option_str, "DeviceLimits");
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = log_callback;
- dbgCreateInfo.pUserData = (void *) log_output;
- layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
+template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = win32_debug_output_msg;
- dbgCreateInfo.pUserData = NULL;
- layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
+static void init_device_limits(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
- if (!globalLockInitialized)
- {
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_device_limits");
+
+ if (!globalLockInitialized) {
// TODO/TBD: Need to delete this mutex sometime. How??? One
// suggestion is to call this during vkCreateInstance(), and then we
// can clean it up during vkDestroyInstance(). However, that requires
@@ -148,68 +98,46 @@ static void init_device_limits(layer_data *my_data, const VkAllocationCallbacks
}
}
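Aside from the patch itself: the `globalLockInitialized` guard above (with its TODO about never deleting the mutex) is a hand-rolled once-init. A minimal sketch of the same idea using standard C++ `std::call_once`, with hypothetical names (`ensure_initialized`, `g_init_calls` are illustrative only, not layer code):

```cpp
#include <mutex>

static std::once_flag g_lock_init;
static int g_init_calls = 0;

// Runs the one-time initializer at most once, even if called from many
// threads, and returns how many times the initializer has actually run.
int ensure_initialized() {
    std::call_once(g_lock_init, [] { ++g_init_calls; });
    return g_init_calls;
}
```

Repeated calls keep returning 1, which is exactly the guarantee the `globalLockInitialized` flag is trying to provide by hand.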
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
+static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
- const char *pLayerName, uint32_t *pCount,
- VkExtensionProperties *pProperties) {
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
if (pLayerName == NULL) {
dispatch_key key = get_dispatch_key(physicalDevice);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- return my_data->instance_dispatch_table
- ->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount,
- pProperties);
+ return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
} else {
return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
}
}
-static const VkLayerProperties dl_global_layers[] = {
- {
- "VK_LAYER_LUNARG_device_limits",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
+static const VkLayerProperties dl_global_layers[] = {{
+ "VK_LAYER_LUNARG_device_limits", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
+}};
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties* pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers),
- dl_global_layers,
- pCount, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers), dl_global_layers, pCount, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice, uint32_t *pCount,
- VkLayerProperties *pProperties) {
- return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers),
- dl_global_layers, pCount, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers), dl_global_layers, pCount, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -225,11 +153,8 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstance
my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
+ my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
init_device_limits(my_data, pAllocator);
my_data->instanceState = unique_ptr<INSTANCE_STATE>(new INSTANCE_STATE());
@@ -238,8 +163,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstance
}
/* hook DestroyInstance to remove tableInstanceMap entry */
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(instance);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
@@ -262,8 +186,8 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
}
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) {
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
if (my_data->instanceState) {
@@ -273,117 +197,143 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInst
} else {
if (UNCALLED == my_data->instanceState->vkEnumeratePhysicalDevicesState) {
// Flag error here, shouldn't be calling this without having queried count
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
- "Invalid call sequence to vkEnumeratePhysicalDevices() w/ non-NULL pPhysicalDevices. You should first call vkEnumeratePhysicalDevices() w/ NULL pPhysicalDevices to query pPhysicalDeviceCount.");
+ skipCall |=
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0,
+ __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
+ "Invalid call sequence to vkEnumeratePhysicalDevices() w/ non-NULL pPhysicalDevices. You should first "
+ "call vkEnumeratePhysicalDevices() w/ NULL pPhysicalDevices to query pPhysicalDeviceCount.");
} // TODO : Could also flag a warning if re-calling this function in QUERY_DETAILS state
else if (my_data->instanceState->physicalDevicesCount != *pPhysicalDeviceCount) {
// TODO: Having actual count match count from app is not a requirement, so this can be a warning
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL",
- "Call to vkEnumeratePhysicalDevices() w/ pPhysicalDeviceCount value %u, but actual count supported by this instance is %u.", *pPhysicalDeviceCount, my_data->instanceState->physicalDevicesCount);
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL",
+ "Call to vkEnumeratePhysicalDevices() w/ pPhysicalDeviceCount value %u, but actual count "
+ "supported by this instance is %u.",
+ *pPhysicalDeviceCount, my_data->instanceState->physicalDevicesCount);
}
my_data->instanceState->vkEnumeratePhysicalDevicesState = QUERY_DETAILS;
}
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = my_data->instance_dispatch_table->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
+ VkResult result =
+ my_data->instance_dispatch_table->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
if (NULL == pPhysicalDevices) {
my_data->instanceState->physicalDevicesCount = *pPhysicalDeviceCount;
} else { // Save physical devices
- for (uint32_t i=0; i < *pPhysicalDeviceCount; i++) {
+ for (uint32_t i = 0; i < *pPhysicalDeviceCount; i++) {
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(pPhysicalDevices[i]), layer_data_map);
phy_dev_data->physicalDeviceState = unique_ptr<PHYSICAL_DEVICE_STATE>(new PHYSICAL_DEVICE_STATE());
// Init actual features for each physical device
- my_data->instance_dispatch_table->GetPhysicalDeviceFeatures(pPhysicalDevices[i], &(phy_dev_data->actualPhysicalDeviceFeatures));
+ my_data->instance_dispatch_table->GetPhysicalDeviceFeatures(pPhysicalDevices[i],
+ &(phy_dev_data->actualPhysicalDeviceFeatures));
}
}
return result;
} else {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0, __LINE__, DEVLIMITS_INVALID_INSTANCE, "DL",
- "Invalid instance (%#" PRIxLEAST64 ") passed into vkEnumeratePhysicalDevices().", (uint64_t)instance);
+ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0, __LINE__,
+ DEVLIMITS_INVALID_INSTANCE, "DL", "Invalid instance (%#" PRIxLEAST64 ") passed into vkEnumeratePhysicalDevices().",
+ (uint64_t)instance);
}
return VK_ERROR_VALIDATION_FAILED_EXT;
}
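The validation above enforces Vulkan's two-call enumeration idiom: query the count with a NULL array first, then fetch details. A self-contained sketch of that call sequence from the caller's side, using a stand-in enumerator (`enumerate_widgets` and `query_all_widgets` are hypothetical, not Vulkan API):

```cpp
#include <cstdint>
#include <vector>

// Stand-in for a Vulkan-style enumerator: writes the available count when
// the output pointer is null, fills the array otherwise.
static void enumerate_widgets(uint32_t *count, int *out) {
    static const int widgets[] = {10, 20, 30};
    if (out == nullptr) {
        *count = 3;  // first call: report how many items exist
    } else {
        for (uint32_t i = 0; i < *count && i < 3; i++)
            out[i] = widgets[i];
    }
}

std::vector<int> query_all_widgets() {
    uint32_t count = 0;
    enumerate_widgets(&count, nullptr);        // QUERY_COUNT step
    std::vector<int> result(count);
    enumerate_widgets(&count, result.data());  // QUERY_DETAILS step
    return result;
}
```

Skipping the first call is the misuse that triggers the DEVLIMITS_MUST_QUERY_COUNT error above; passing a stale count triggers the DEVLIMITS_COUNT_MISMATCH warning.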
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures *pFeatures) {
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceFeaturesState = QUERY_DETAILS;
phy_dev_data->instance_dispatch_table->GetPhysicalDeviceFeatures(physicalDevice, pFeatures);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties)
-{
- get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)->instance_dispatch_table->GetPhysicalDeviceFormatProperties(
- physicalDevice, format, pFormatProperties);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties *pFormatProperties) {
+ get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)
+ ->instance_dispatch_table->GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties)
-{
- return get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)->instance_dispatch_table->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling,
+ VkImageUsageFlags usage, VkImageCreateFlags flags,
+ VkImageFormatProperties *pImageFormatProperties) {
+ return get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)
+ ->instance_dispatch_table->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags,
+ pImageFormatProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) {
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t* pCount, VkQueueFamilyProperties* pQueueFamilyProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount,
+ VkQueueFamilyProperties *pQueueFamilyProperties) {
VkBool32 skipCall = VK_FALSE;
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
if (phy_dev_data->physicalDeviceState) {
if (NULL == pQueueFamilyProperties) {
phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState = QUERY_COUNT;
} else {
- // Verify that for each physical device, this function is called first with NULL pQueueFamilyProperties ptr in order to get count
+ // Verify that for each physical device, this function is called first with NULL pQueueFamilyProperties ptr in order to
+ // get count
if (UNCALLED == phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
- "Invalid call sequence to vkGetPhysicalDeviceQueueFamilyProperties() w/ non-NULL pQueueFamilyProperties. You should first call vkGetPhysicalDeviceQueueFamilyProperties() w/ NULL pQueueFamilyProperties to query pCount.");
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
+ "Invalid call sequence to vkGetPhysicalDeviceQueueFamilyProperties() w/ non-NULL "
+ "pQueueFamilyProperties. You should first call vkGetPhysicalDeviceQueueFamilyProperties() w/ "
+ "NULL pQueueFamilyProperties to query pCount.");
}
// Then verify that pCount that is passed in on second call matches what was returned
if (phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount != *pCount) {
- // TODO: this is not a requirement of the Valid Usage section for vkGetPhysicalDeviceQueueFamilyProperties, so provide as warning
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL",
- "Call to vkGetPhysicalDeviceQueueFamilyProperties() w/ pCount value %u, but actual count supported by this physicalDevice is %u.", *pCount, phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount);
+ // TODO: this is not a requirement of the Valid Usage section for vkGetPhysicalDeviceQueueFamilyProperties, so
+ // provide as warning
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL",
+ "Call to vkGetPhysicalDeviceQueueFamilyProperties() w/ pCount value %u, but actual count "
+ "supported by this physicalDevice is %u.",
+ *pCount, phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount);
}
phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState = QUERY_DETAILS;
}
if (skipCall)
return;
- phy_dev_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pCount, pQueueFamilyProperties);
+ phy_dev_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pCount,
+ pQueueFamilyProperties);
if (NULL == pQueueFamilyProperties) {
phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount = *pCount;
} else { // Save queue family properties
phy_dev_data->queueFamilyProperties.reserve(*pCount);
- for (uint32_t i=0; i < *pCount; i++) {
+ for (uint32_t i = 0; i < *pCount; i++) {
phy_dev_data->queueFamilyProperties.emplace_back(new VkQueueFamilyProperties(pQueueFamilyProperties[i]));
}
}
return;
} else {
- log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_PHYSICAL_DEVICE, "DL",
- "Invalid physicalDevice (%#" PRIxLEAST64 ") passed into vkGetPhysicalDeviceQueueFamilyProperties().", (uint64_t)physicalDevice);
+ log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
+ __LINE__, DEVLIMITS_INVALID_PHYSICAL_DEVICE, "DL",
+ "Invalid physicalDevice (%#" PRIxLEAST64 ") passed into vkGetPhysicalDeviceQueueFamilyProperties().",
+ (uint64_t)physicalDevice);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties)
-{
- get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)->instance_dispatch_table->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
+ get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)
+ ->instance_dispatch_table->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pNumProperties, VkSparseImageFormatProperties* pProperties)
-{
- get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)->instance_dispatch_table->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pNumProperties, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+ VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling,
+ uint32_t *pNumProperties, VkSparseImageFormatProperties *pProperties) {
+ get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map)
+ ->instance_dispatch_table->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage,
+ tiling, pNumProperties, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(
- VkCommandBuffer commandBuffer,
- uint32_t firstViewport,
- uint32_t viewportCount,
- const VkViewport* pViewports)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport *pViewports) {
VkBool32 skipCall = VK_FALSE;
/* TODO: Verify viewportCount < maxViewports from VkPhysicalDeviceLimits */
if (VK_FALSE == skipCall) {
@@ -392,12 +342,8 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(
- VkCommandBuffer commandBuffer,
- uint32_t firstScissor,
- uint32_t scissorCount,
- const VkRect2D* pScissors)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) {
VkBool32 skipCall = VK_FALSE;
/* TODO: Verify scissorCount < maxViewports from VkPhysicalDeviceLimits */
/* TODO: viewportCount and scissorCount must match at draw time */
@@ -407,73 +353,72 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(
}
}
-static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo, VkDevice device)
-{
- uint32_t i;
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- my_data->device_extensions.debug_marker_enabled = false;
-
- for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
- if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], DEBUG_MARKER_EXTENSION_NAME) == 0) {
- /* Found a matching extension name, mark it enabled and init dispatch table*/
- initDebugMarkerTable(device);
- my_data->device_extensions.debug_marker_enabled = true;
- }
-
- }
-}
-
// Verify that features have been queried and verify that requested features are available
-static VkBool32 validate_features_request(layer_data *phy_dev_data)
-{
+static VkBool32 validate_features_request(layer_data *phy_dev_data) {
VkBool32 skipCall = VK_FALSE;
// Verify that all of the requested features are available
// Get ptrs into actual and requested structs and if requested is 1 but actual is 0, request is invalid
- VkBool32* actual = (VkBool32*)&(phy_dev_data->actualPhysicalDeviceFeatures);
- VkBool32* requested = (VkBool32*)&(phy_dev_data->requestedPhysicalDeviceFeatures);
+ VkBool32 *actual = (VkBool32 *)&(phy_dev_data->actualPhysicalDeviceFeatures);
+ VkBool32 *requested = (VkBool32 *)&(phy_dev_data->requestedPhysicalDeviceFeatures);
// TODO : This is a nice, compact way to loop through struct, but a bad way to report issues
// Need to provide the struct member name with the issue. To do that seems like we'll
    // have to loop through each struct member, which should be done w/ codegen to keep in sync.
uint32_t errors = 0;
- uint32_t totalBools = sizeof(VkPhysicalDeviceFeatures)/sizeof(VkBool32);
+ uint32_t totalBools = sizeof(VkPhysicalDeviceFeatures) / sizeof(VkBool32);
for (uint32_t i = 0; i < totalBools; i++) {
if (requested[i] > actual[i]) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED, "DL",
- "While calling vkCreateDevice(), requesting feature #%u in VkPhysicalDeviceFeatures struct, which is not available on this device.", i);
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED,
+ "DL", "While calling vkCreateDevice(), requesting feature #%u in VkPhysicalDeviceFeatures struct, "
+ "which is not available on this device.",
+ i);
errors++;
}
}
if (errors && (UNCALLED == phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceFeaturesState)) {
// If user didn't request features, notify them that they should
// TODO: Verify this against the spec. I believe this is an invalid use of the API and should return an error
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED, "DL",
- "You requested features that are unavailable on this device. You should first query feature availability by calling vkGetPhysicalDeviceFeatures().");
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED, "DL",
+ "You requested features that are unavailable on this device. You should first query feature "
+ "availability by calling vkGetPhysicalDeviceFeatures().");
}
return skipCall;
}
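The feature check above relies on `VkPhysicalDeviceFeatures` being a struct made entirely of 32-bit `VkBool32` members, so it can be walked as a flat array. A minimal sketch of that technique with a hypothetical three-member struct (`Features` and `count_unsupported` are illustrative names, not Vulkan types):

```cpp
#include <cstdint>

struct Features {  // stand-in for VkPhysicalDeviceFeatures: all uint32_t bools
    uint32_t robustBufferAccess;
    uint32_t geometryShader;
    uint32_t tessellationShader;
};

// Counts features that were requested (1) but are not supported (0),
// mirroring the requested[i] > actual[i] loop in validate_features_request.
int count_unsupported(const Features &actual, const Features &requested) {
    const uint32_t *a = reinterpret_cast<const uint32_t *>(&actual);
    const uint32_t *r = reinterpret_cast<const uint32_t *>(&requested);
    const uint32_t total = sizeof(Features) / sizeof(uint32_t);
    int errors = 0;
    for (uint32_t i = 0; i < total; i++)
        if (r[i] > a[i]) errors++;
    return errors;
}
```

As the TODO in the patch notes, this is compact but can only report a member *index*, not a member *name*; mapping index to name would need generated code kept in sync with the struct.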
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
VkBool32 skipCall = VK_FALSE;
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
    // First check if app has actually requested queueFamilyProperties
if (!phy_dev_data->physicalDeviceState) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
- "Invalid call to vkCreateDevice() w/o first calling vkEnumeratePhysicalDevices().");
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL",
+ "Invalid call to vkCreateDevice() w/o first calling vkEnumeratePhysicalDevices().");
} else if (QUERY_DETAILS != phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState) {
// TODO: This is not called out as an invalid use in the spec so make more informative recommendation.
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
- "Call to vkCreateDevice() w/o first calling vkGetPhysicalDeviceQueueFamilyProperties().");
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST,
+ "DL", "Call to vkCreateDevice() w/o first calling vkGetPhysicalDeviceQueueFamilyProperties().");
} else {
// Check that the requested queue properties are valid
- for (uint32_t i=0; i<pCreateInfo->queueCreateInfoCount; i++) {
+ for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; i++) {
uint32_t requestedIndex = pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex;
- if (phy_dev_data->queueFamilyProperties.size() <= requestedIndex) { // requested index is out of bounds for this physical device
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
+ if (phy_dev_data->queueFamilyProperties.size() <=
+ requestedIndex) { // requested index is out of bounds for this physical device
+ skipCall |= log_msg(
+ phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
+ __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
"Invalid queue create request in vkCreateDevice(). Invalid queueFamilyIndex %u requested.", requestedIndex);
- } else if (pCreateInfo->pQueueCreateInfos[i].queueCount > phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
- "Invalid queue create request in vkCreateDevice(). QueueFamilyIndex %u only has %u queues, but requested queueCount is %u.", requestedIndex, phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount, pCreateInfo->pQueueCreateInfos[i].queueCount);
+ } else if (pCreateInfo->pQueueCreateInfos[i].queueCount >
+ phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount) {
+ skipCall |=
+ log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST,
+ "DL", "Invalid queue create request in vkCreateDevice(). QueueFamilyIndex %u only has %u queues, but "
+ "requested queueCount is %u.",
+ requestedIndex, phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount,
+ pCreateInfo->pQueueCreateInfos[i].queueCount);
}
}
}
@@ -490,7 +435,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice g
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -509,111 +454,117 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice g
layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
my_device_data->physicalDevice = gpu;
- createDeviceRegisterExtensions(pCreateInfo, *pDevice);
// Get physical device properties for this device
phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &(phy_dev_data->physDevPropertyMap[*pDevice]));
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
// Free device lifetime allocations
dispatch_key key = get_dispatch_key(device);
layer_data *my_device_data = get_my_data_ptr(key, layer_data_map);
my_device_data->device_dispatch_table->DestroyDevice(device, pAllocator);
- tableDebugMarkerMap.erase(key);
delete my_device_data->device_dispatch_table;
layer_data_map.erase(key);
}
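`vkDestroyDevice` above shows the layer's bookkeeping pattern: every dispatchable handle keys a heap-allocated `layer_data` that must be both deleted and erased from the map on destruction. A reduced sketch of that lifecycle with hypothetical names (`LayerData`, `get_data`, `destroy_data` are illustrative, not the layer's real helpers):

```cpp
#include <unordered_map>

struct LayerData { int table = 42; };  // stand-in for per-device layer state
static std::unordered_map<void *, LayerData *> g_data_map;

// Lazily creates the per-key data block on first lookup, like get_my_data_ptr.
LayerData *get_data(void *key) {
    auto it = g_data_map.find(key);
    if (it != g_data_map.end()) return it->second;
    LayerData *d = new LayerData;
    g_data_map[key] = d;
    return d;
}

// Frees the data block and drops the map entry, as vkDestroyDevice does with
// the dispatch table and layer_data_map.erase(key).
void destroy_data(void *key) {
    auto it = g_data_map.find(key);
    if (it != g_data_map.end()) {
        delete it->second;
        g_data_map.erase(it);
    }
}
```

Forgetting the `erase` leaks the entry and leaves a dangling pointer keyed by a dead handle, which is the class of bug the "Remove device from layer_data_map at destroy" commit in this merge fixes elsewhere.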
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkCommandPool *pCommandPool) {
// TODO : Verify that requested QueueFamilyIndex for this pool exists
- VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
+ VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags)
-{
- VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->ResetCommandPool(device, commandPool, flags);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) {
+ VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->ResetCommandPool(device, commandPool, flags);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo* pCreateInfo, VkCommandBuffer* pCommandBuffer)
-{
- VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pCreateInfo, VkCommandBuffer *pCommandBuffer) {
+ VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t count, const VkCommandBuffer* pCommandBuffers)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->FreeCommandBuffers(device, commandPool, count, pCommandBuffers);
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t count, const VkCommandBuffer *pCommandBuffers) {
+ get_my_data_ptr(get_dispatch_key(device), layer_data_map)
+ ->device_dispatch_table->FreeCommandBuffers(device, commandPool, count, pCommandBuffers);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) {
bool skipCall = false;
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
const VkCommandBufferInheritanceInfo *pInfo = pBeginInfo->pInheritanceInfo;
if (dev_data->actualPhysicalDeviceFeatures.inheritedQueries == VK_FALSE && pInfo && pInfo->occlusionQueryEnable != VK_FALSE) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__,
- DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
- "Cannot set inherited occlusionQueryEnable in vkBeginCommandBuffer() when device does not support inheritedQueries.");
+ skipCall |= log_msg(
+ dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
+ "Cannot set inherited occlusionQueryEnable in vkBeginCommandBuffer() when device does not support inheritedQueries.");
}
if (dev_data->actualPhysicalDeviceFeatures.inheritedQueries != VK_FALSE && pInfo && pInfo->occlusionQueryEnable != VK_FALSE &&
!validate_VkQueryControlFlagBits(VkQueryControlFlagBits(pInfo->queryFlags))) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__,
- DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
- "Cannot enable in occlusion queries in vkBeginCommandBuffer() and set queryFlags to %d which is not a valid combination of VkQueryControlFlagBits.",
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
+                            "Cannot enable occlusion queries in vkBeginCommandBuffer() and set queryFlags to %d, which is not a "
+ "valid combination of VkQueryControlFlagBits.",
pInfo->queryFlags);
}
VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
if (!skipCall)
- result = dev_data->device_dispatch_table->BeginCommandBuffer(
- commandBuffer, pBeginInfo);
+ result = dev_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) {
VkBool32 skipCall = VK_FALSE;
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
VkPhysicalDevice gpu = dev_data->physicalDevice;
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
- if (queueFamilyIndex >= phy_dev_data->queueFamilyProperties.size()) { // requested index is out of bounds for this physical device
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
- "Invalid queueFamilyIndex %u requested in vkGetDeviceQueue().", queueFamilyIndex);
+ if (queueFamilyIndex >=
+ phy_dev_data->queueFamilyProperties.size()) { // requested index is out of bounds for this physical device
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST,
+ "DL", "Invalid queueFamilyIndex %u requested in vkGetDeviceQueue().", queueFamilyIndex);
} else if (queueIndex >= phy_dev_data->queueFamilyProperties[queueFamilyIndex]->queueCount) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
- "Invalid queue request in vkGetDeviceQueue(). QueueFamilyIndex %u only has %u queues, but requested queueIndex is %u.", queueFamilyIndex, phy_dev_data->queueFamilyProperties[queueFamilyIndex]->queueCount, queueIndex);
+ skipCall |= log_msg(
+ phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
+ DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL",
+ "Invalid queue request in vkGetDeviceQueue(). QueueFamilyIndex %u only has %u queues, but requested queueIndex is %u.",
+ queueFamilyIndex, phy_dev_data->queueFamilyProperties[queueFamilyIndex]->queueCount, queueIndex);
}
if (skipCall)
return;
dev_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(
- VkDevice device,
- VkBuffer buffer,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
VkDeviceSize uniformAlignment = dev_data->physDevPropertyMap[device].limits.minUniformBufferOffsetAlignment;
if (vk_safe_modulo(memoryOffset, uniformAlignment) != 0) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
- __LINE__, DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, "DL",
- "vkBindBufferMemory(): memoryOffset %#" PRIxLEAST64 " must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
- memoryOffset, uniformAlignment);
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
+ 0, __LINE__, DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, "DL",
+ "vkBindBufferMemory(): memoryOffset %#" PRIxLEAST64
+ " must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
+ memoryOffset, uniformAlignment);
}
if (VK_FALSE == skipCall) {
@@ -622,60 +573,56 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(
- VkDevice device,
- uint32_t descriptorWriteCount,
- const VkWriteDescriptorSet *pDescriptorWrites,
- uint32_t descriptorCopyCount,
- const VkCopyDescriptorSet *pDescriptorCopies)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pDescriptorWrites,
+ uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pDescriptorCopies) {
+ layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkBool32 skipCall = VK_FALSE;
for (uint32_t i = 0; i < descriptorWriteCount; i++) {
- if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER) ||
- (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC)) {
+ if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER) ||
+ (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC)) {
VkDeviceSize uniformAlignment = dev_data->physDevPropertyMap[device].limits.minUniformBufferOffsetAlignment;
for (uint32_t j = 0; j < pDescriptorWrites[i].descriptorCount; j++) {
if (vk_safe_modulo(pDescriptorWrites[i].pBufferInfo[j].offset, uniformAlignment) != 0) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
- __LINE__, DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, "DL",
- "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64 ") must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
- i, j, pDescriptorWrites[i].pBufferInfo[j].offset, uniformAlignment);
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
+ DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, "DL",
+ "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64
+ ") must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
+ i, j, pDescriptorWrites[i].pBufferInfo[j].offset, uniformAlignment);
}
}
- } else if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER) ||
- (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
+ } else if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER) ||
+ (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
VkDeviceSize storageAlignment = dev_data->physDevPropertyMap[device].limits.minStorageBufferOffsetAlignment;
for (uint32_t j = 0; j < pDescriptorWrites[i].descriptorCount; j++) {
if (vk_safe_modulo(pDescriptorWrites[i].pBufferInfo[j].offset, storageAlignment) != 0) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
- __LINE__, DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, "DL",
- "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64 ") must be a multiple of device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64,
- i, j, pDescriptorWrites[i].pBufferInfo[j].offset, storageAlignment);
+ skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
+ DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, "DL",
+ "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64
+ ") must be a multiple of device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64,
+ i, j, pDescriptorWrites[i].pBufferInfo[j].offset, storageAlignment);
}
}
}
}
if (skipCall == VK_FALSE) {
- dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
+ dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount,
+ pDescriptorCopies);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize dataSize,
- const uint32_t* pData)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
+ VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t *pData) {
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
// dstOffset is the byte offset into the buffer to start updating and must be a multiple of 4.
if (dstOffset & 3) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
- "vkCmdUpdateBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
+ "vkCmdUpdateBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
return;
}
}
@@ -684,7 +631,7 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
if (dataSize & 3) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
- "vkCmdUpdateBuffer parameter, VkDeviceSize dataSize, is not a multiple of 4")) {
+ "vkCmdUpdateBuffer parameter, VkDeviceSize dataSize, is not a multiple of 4")) {
return;
}
}
@@ -692,20 +639,15 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
dev_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize size,
- uint32_t data)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
// dstOffset is the byte offset into the buffer to start filling and must be a multiple of 4.
if (dstOffset & 3) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
- "vkCmdFillBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
+ "vkCmdFillBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
return;
}
}
@@ -714,7 +656,7 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
if (size & 3) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
- "vkCmdFillBuffer parameter, VkDeviceSize size, is not a multiple of 4")) {
+ "vkCmdFillBuffer parameter, VkDeviceSize size, is not a multiple of 4")) {
return;
}
}
@@ -722,12 +664,9 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
dev_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
VkResult res = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
if (VK_SUCCESS == res) {
@@ -736,64 +675,55 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
return res;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT msgCallback,
- const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
+ VkDebugReportCallbackEXT msgCallback,
+ const VkAllocationCallbacks *pAllocator) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
+ size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
+ my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
+ pMsg);
}
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char *funcName) {
if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
+ return (PFN_vkVoidFunction)vkDestroyDevice;
if (!strcmp(funcName, "vkGetDeviceQueue"))
- return (PFN_vkVoidFunction) vkGetDeviceQueue;
+ return (PFN_vkVoidFunction)vkGetDeviceQueue;
    if (!strcmp(funcName, "vkCreateCommandPool"))
- return (PFN_vkVoidFunction) vkCreateCommandPool;
+ return (PFN_vkVoidFunction)vkCreateCommandPool;
    if (!strcmp(funcName, "vkDestroyCommandPool"))
- return (PFN_vkVoidFunction) vkDestroyCommandPool;
+ return (PFN_vkVoidFunction)vkDestroyCommandPool;
    if (!strcmp(funcName, "vkResetCommandPool"))
- return (PFN_vkVoidFunction) vkResetCommandPool;
+ return (PFN_vkVoidFunction)vkResetCommandPool;
if (!strcmp(funcName, "vkAllocateCommandBuffers"))
- return (PFN_vkVoidFunction) vkAllocateCommandBuffers;
+ return (PFN_vkVoidFunction)vkAllocateCommandBuffers;
if (!strcmp(funcName, "vkFreeCommandBuffers"))
- return (PFN_vkVoidFunction) vkFreeCommandBuffers;
+ return (PFN_vkVoidFunction)vkFreeCommandBuffers;
if (!strcmp(funcName, "vkBeginCommandBuffer"))
- return (PFN_vkVoidFunction) vkBeginCommandBuffer;
+ return (PFN_vkVoidFunction)vkBeginCommandBuffer;
if (!strcmp(funcName, "vkCmdUpdateBuffer"))
- return (PFN_vkVoidFunction) vkCmdUpdateBuffer;
+ return (PFN_vkVoidFunction)vkCmdUpdateBuffer;
if (!strcmp(funcName, "vkBindBufferMemory"))
- return (PFN_vkVoidFunction) vkBindBufferMemory;
+ return (PFN_vkVoidFunction)vkBindBufferMemory;
if (!strcmp(funcName, "vkUpdateDescriptorSets"))
- return (PFN_vkVoidFunction) vkUpdateDescriptorSets;
+ return (PFN_vkVoidFunction)vkUpdateDescriptorSets;
if (!strcmp(funcName, "vkCmdFillBuffer"))
- return (PFN_vkVoidFunction) vkCmdFillBuffer;
+ return (PFN_vkVoidFunction)vkCmdFillBuffer;
if (dev == NULL)
return NULL;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
- VkLayerDispatchTable* pTable = my_data->device_dispatch_table;
+ VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
{
if (pTable->GetDeviceProcAddr == NULL)
return NULL;
@@ -801,47 +731,47 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkD
}
}
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
PFN_vkVoidFunction fptr;
layer_data *my_data;
if (!strcmp(funcName, "vkGetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
+ return (PFN_vkVoidFunction)vkCreateInstance;
if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
+ return (PFN_vkVoidFunction)vkDestroyInstance;
if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
+ return (PFN_vkVoidFunction)vkCreateDevice;
if (!strcmp(funcName, "vkEnumeratePhysicalDevices"))
- return (PFN_vkVoidFunction) vkEnumeratePhysicalDevices;
+ return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices;
if (!strcmp(funcName, "vkGetPhysicalDeviceFeatures"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceFeatures;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceFeatures;
if (!strcmp(funcName, "vkGetPhysicalDeviceFormatProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceFormatProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceFormatProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceImageFormatProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceImageFormatProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceImageFormatProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceQueueFamilyProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceQueueFamilyProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceQueueFamilyProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceMemoryProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceMemoryProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceMemoryProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceSparseImageFormatProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceSparseImageFormatProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceSparseImageFormatProperties;
if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
    if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
- if (!instance) return NULL;
+ if (!instance)
+ return NULL;
my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -850,7 +780,7 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(V
return fptr;
{
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
if (pTable->GetInstanceProcAddr == NULL)
return NULL;
return pTable->GetInstanceProcAddr(instance, funcName);
diff --git a/layers/device_limits.h b/layers/device_limits.h
index 0fa3b90dc..c7dfbfe2e 100644
--- a/layers/device_limits.h
+++ b/layers/device_limits.h
@@ -31,40 +31,36 @@
using namespace std;
// Device Limits ERROR codes
-typedef enum _DEV_LIMITS_ERROR
-{
- DEVLIMITS_NONE, // Used for INFO & other non-error messages
- DEVLIMITS_INVALID_INSTANCE, // Invalid instance used
- DEVLIMITS_INVALID_PHYSICAL_DEVICE, // Invalid physical device used
- DEVLIMITS_INVALID_INHERITED_QUERY, // Invalid use of inherited query
- DEVLIMITS_MUST_QUERY_COUNT, // Failed to make initial call to an API to query the count
- DEVLIMITS_MUST_QUERY_PROPERTIES, // Failed to make initial call to an API to query properties
- DEVLIMITS_INVALID_CALL_SEQUENCE, // Flag generic case of an invalid call sequence by the app
- DEVLIMITS_INVALID_FEATURE_REQUESTED, // App requested a feature not supported by physical device
- DEVLIMITS_COUNT_MISMATCH, // App requesting a count value different than actual value
- DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, // Invalid queue requested based on queue family properties
- DEVLIMITS_LIMITS_VIOLATION, // Driver-specified limits/properties were exceeded
- DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, // Uniform buffer offset violates device limit granularity
- DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, // Storage buffer offset violates device limit granularity
+typedef enum _DEV_LIMITS_ERROR {
+ DEVLIMITS_NONE, // Used for INFO & other non-error messages
+ DEVLIMITS_INVALID_INSTANCE, // Invalid instance used
+ DEVLIMITS_INVALID_PHYSICAL_DEVICE, // Invalid physical device used
+ DEVLIMITS_INVALID_INHERITED_QUERY, // Invalid use of inherited query
+ DEVLIMITS_MUST_QUERY_COUNT, // Failed to make initial call to an API to query the count
+ DEVLIMITS_MUST_QUERY_PROPERTIES, // Failed to make initial call to an API to query properties
+ DEVLIMITS_INVALID_CALL_SEQUENCE, // Flag generic case of an invalid call sequence by the app
+ DEVLIMITS_INVALID_FEATURE_REQUESTED, // App requested a feature not supported by physical device
+ DEVLIMITS_COUNT_MISMATCH, // App requesting a count value different than actual value
+ DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, // Invalid queue requested based on queue family properties
+ DEVLIMITS_LIMITS_VIOLATION, // Driver-specified limits/properties were exceeded
+ DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, // Uniform buffer offset violates device limit granularity
+ DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, // Storage buffer offset violates device limit granularity
} DEV_LIMITS_ERROR;
-typedef enum _CALL_STATE
-{
- UNCALLED, // Function has not been called
- QUERY_COUNT, // Function called once to query a count
- QUERY_DETAILS, // Function called w/ a count to query details
+typedef enum _CALL_STATE {
+ UNCALLED, // Function has not been called
+ QUERY_COUNT, // Function called once to query a count
+ QUERY_DETAILS, // Function called w/ a count to query details
} CALL_STATE;
-typedef struct _INSTANCE_STATE
-{
+typedef struct _INSTANCE_STATE {
// Track the call state and array size for physical devices
CALL_STATE vkEnumeratePhysicalDevicesState;
uint32_t physicalDevicesCount;
- _INSTANCE_STATE():vkEnumeratePhysicalDevicesState(UNCALLED), physicalDevicesCount(0) {};
+ _INSTANCE_STATE() : vkEnumeratePhysicalDevicesState(UNCALLED), physicalDevicesCount(0){};
} INSTANCE_STATE;
-typedef struct _PHYSICAL_DEVICE_STATE
-{
+typedef struct _PHYSICAL_DEVICE_STATE {
// Track the call state and array sizes for various query functions
CALL_STATE vkGetPhysicalDeviceQueueFamilyPropertiesState;
uint32_t queueFamilyPropertiesCount;
@@ -73,9 +69,9 @@ typedef struct _PHYSICAL_DEVICE_STATE
CALL_STATE vkGetPhysicalDeviceExtensionPropertiesState;
uint32_t deviceExtensionCount;
CALL_STATE vkGetPhysicalDeviceFeaturesState;
- _PHYSICAL_DEVICE_STATE():vkGetPhysicalDeviceQueueFamilyPropertiesState(UNCALLED), queueFamilyPropertiesCount(0),
- vkGetPhysicalDeviceLayerPropertiesState(UNCALLED), deviceLayerCount(0),
- vkGetPhysicalDeviceExtensionPropertiesState(UNCALLED), deviceExtensionCount(0),
- vkGetPhysicalDeviceFeaturesState(UNCALLED) {};
+ _PHYSICAL_DEVICE_STATE()
+ : vkGetPhysicalDeviceQueueFamilyPropertiesState(UNCALLED), queueFamilyPropertiesCount(0),
+ vkGetPhysicalDeviceLayerPropertiesState(UNCALLED), deviceLayerCount(0),
+ vkGetPhysicalDeviceExtensionPropertiesState(UNCALLED), deviceExtensionCount(0),
+ vkGetPhysicalDeviceFeaturesState(UNCALLED){};
} PHYSICAL_DEVICE_STATE;
-
diff --git a/layers/draw_state.cpp b/layers/draw_state.cpp
deleted file mode 100644
index 15abb1179..000000000
--- a/layers/draw_state.cpp
+++ /dev/null
@@ -1,8046 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Cody Northrop <cnorthrop@google.com>
- * Author: Michael Lentine <mlentine@google.com>
- * Author: Tobin Ehlis <tobine@google.com>
- * Author: Chia-I Wu <olv@google.com>
- * Author: Chris Forbes <chrisf@ijw.co.nz>
- * Author: Mark Lobodzinski <mark@lunarg.com>
- * Author: Ian Elliott <ianelliott@google.com>
- */
-
-// Allow use of STL min and max functions in Windows
-#define NOMINMAX
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <assert.h>
-#include <unordered_map>
-#include <unordered_set>
-#include <map>
-#include <string>
-#include <iostream>
-#include <algorithm>
-#include <list>
-#include <spirv.hpp>
-#include <set>
-
-#include "vk_loader_platform.h"
-#include "vk_dispatch_table_helper.h"
-#include "vk_struct_string_helper_cpp.h"
-#if defined(__GNUC__)
-#pragma GCC diagnostic ignored "-Wwrite-strings"
-#endif
-#if defined(__GNUC__)
-#pragma GCC diagnostic warning "-Wwrite-strings"
-#endif
-#include "vk_struct_size_helper.h"
-#include "draw_state.h"
-#include "vk_layer_config.h"
-#include "vulkan/vk_debug_marker_layer.h"
-#include "vk_layer_table.h"
-#include "vk_layer_debug_marker_table.h"
-#include "vk_layer_data.h"
-#include "vk_layer_logging.h"
-#include "vk_layer_extension_utils.h"
-#include "vk_layer_utils.h"
-
-using std::unordered_map;
-using std::unordered_set;
-
-// Track command pools and their command buffers
-struct CMD_POOL_INFO {
- VkCommandPoolCreateFlags createFlags;
- uint32_t queueFamilyIndex;
- list<VkCommandBuffer> commandBuffers; // list container of cmd buffers allocated from this pool
-};
-
-struct devExts {
- VkBool32 debug_marker_enabled;
- VkBool32 wsi_enabled;
- unordered_map<VkSwapchainKHR, SWAPCHAIN_NODE*> swapchainMap;
-};
-
-// fwd decls
-struct shader_module;
-struct render_pass;
-
-struct layer_data {
- debug_report_data* report_data;
- std::vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable* device_dispatch_table;
- VkLayerInstanceDispatchTable* instance_dispatch_table;
- devExts device_extensions;
- vector<VkQueue> queues; // all queues under given device
- // Global set of all cmdBuffers that are inFlight on this device
- unordered_set<VkCommandBuffer> globalInFlightCmdBuffers;
- // Layer specific data
- unordered_map<VkSampler, unique_ptr<SAMPLER_NODE>> sampleMap;
- unordered_map<VkImageView, unique_ptr<VkImageViewCreateInfo>> imageViewMap;
- unordered_map<VkImage, unique_ptr<VkImageCreateInfo>> imageMap;
- unordered_map<VkBufferView, unique_ptr<VkBufferViewCreateInfo>> bufferViewMap;
- unordered_map<VkBuffer, BUFFER_NODE> bufferMap;
- unordered_map<VkPipeline, PIPELINE_NODE*> pipelineMap;
- unordered_map<VkCommandPool, CMD_POOL_INFO> commandPoolMap;
- unordered_map<VkDescriptorPool, DESCRIPTOR_POOL_NODE*> descriptorPoolMap;
- unordered_map<VkDescriptorSet, SET_NODE*> setMap;
- unordered_map<VkDescriptorSetLayout, LAYOUT_NODE*> descriptorSetLayoutMap;
- unordered_map<VkPipelineLayout, PIPELINE_LAYOUT_NODE> pipelineLayoutMap;
- unordered_map<VkDeviceMemory, VkImage> memImageMap;
- unordered_map<VkFence, FENCE_NODE> fenceMap;
- unordered_map<VkQueue, QUEUE_NODE> queueMap;
- unordered_map<VkEvent, EVENT_NODE> eventMap;
- unordered_map<QueryObject, bool> queryToStateMap;
- unordered_map<VkQueryPool, QUERY_POOL_NODE> queryPoolMap;
- unordered_map<VkSemaphore, SEMAPHORE_NODE> semaphoreMap;
- unordered_map<void*, GLOBAL_CB_NODE*> commandBufferMap;
- unordered_map<VkFramebuffer, VkFramebufferCreateInfo*> frameBufferMap;
- unordered_map<VkImage, vector<ImageSubresourcePair>> imageSubresourceMap;
- unordered_map<ImageSubresourcePair, IMAGE_NODE> imageLayoutMap;
- unordered_map<VkRenderPass, RENDER_PASS_NODE*> renderPassMap;
- unordered_map<VkShaderModule, shader_module*> shaderModuleMap;
- // Current render pass
- VkRenderPassBeginInfo renderPassBeginInfo;
- uint32_t currentSubpass;
-
- // Device specific data
- PHYS_DEV_PROPERTIES_NODE physDevProperties;
-
- layer_data() :
- report_data(nullptr),
- device_dispatch_table(nullptr),
- instance_dispatch_table(nullptr),
- device_extensions()
- {};
-};
-
-// Code imported from ShaderChecker
-static void
-build_def_index(shader_module *);
-
-// A forward iterator over spirv instructions. Provides easy access to len, opcode, and content words
-// without the caller needing to care too much about the physical SPIRV module layout.
-struct spirv_inst_iter {
- std::vector<uint32_t>::const_iterator zero;
- std::vector<uint32_t>::const_iterator it;
-
- uint32_t len() { return *it >> 16; }
- uint32_t opcode() { return *it & 0x0ffffu; }
- uint32_t const & word(unsigned n) { return it[n]; }
- uint32_t offset() { return (uint32_t)(it - zero); }
-
- spirv_inst_iter() {}
-
- spirv_inst_iter(std::vector<uint32_t>::const_iterator zero,
- std::vector<uint32_t>::const_iterator it) : zero(zero), it(it) {}
-
- bool operator== (spirv_inst_iter const & other) {
- return it == other.it;
- }
-
- bool operator!= (spirv_inst_iter const & other) {
- return it != other.it;
- }
-
- spirv_inst_iter operator++ (int) { /* x++ */
- spirv_inst_iter ii = *this;
- it += len();
- return ii;
- }
-
- spirv_inst_iter operator++ () { /* ++x; */
- it += len();
- return *this;
- }
-
- /* The iterator and the value are the same thing. */
- spirv_inst_iter & operator* () { return *this; }
- spirv_inst_iter const & operator* () const { return *this; }
-};
-
-struct shader_module {
- /* the spirv image itself */
- vector<uint32_t> words;
- /* a mapping of <id> to the first word of its def. this is useful because walking type
- * trees, constant expressions, etc requires jumping all over the instruction stream.
- */
- unordered_map<unsigned, unsigned> def_index;
-
- shader_module(VkShaderModuleCreateInfo const *pCreateInfo) :
- words((uint32_t *)pCreateInfo->pCode, (uint32_t *)pCreateInfo->pCode + pCreateInfo->codeSize / sizeof(uint32_t)),
- def_index() {
-
- build_def_index(this);
- }
-
- /* expose begin() / end() to enable range-based for */
- spirv_inst_iter begin() const { return spirv_inst_iter(words.begin(), words.begin() + 5); } /* first insn */
- spirv_inst_iter end() const { return spirv_inst_iter(words.begin(), words.end()); } /* just past last insn */
- /* given an offset into the module, produce an iterator there. */
- spirv_inst_iter at(unsigned offset) const { return spirv_inst_iter(words.begin(), words.begin() + offset); }
-
- /* gets an iterator to the definition of an id */
- spirv_inst_iter get_def(unsigned id) const {
- auto it = def_index.find(id);
- if (it == def_index.end()) {
- return end();
- }
- return at(it->second);
- }
-};
-
-// TODO : Do we need to guard access to layer_data_map w/ lock?
-static unordered_map<void*, layer_data*> layer_data_map;
-
-// TODO : This can be much smarter, using separate locks for separate global data
-static int globalLockInitialized = 0;
-static loader_platform_thread_mutex globalLock;
-#define MAX_TID 513
-static loader_platform_thread_id g_tidMapping[MAX_TID] = {0};
-static uint32_t g_maxTID = 0;
-
-template layer_data *get_my_data_ptr<layer_data>(
- void *data_key,
- std::unordered_map<void *, layer_data *> &data_map);
-
-// Map actual TID to an index value and return that index
-// This keeps TIDs in range from 0-MAX_TID and simplifies compares between runs
-static uint32_t getTIDIndex() {
- loader_platform_thread_id tid = loader_platform_get_thread_id();
- for (uint32_t i = 0; i < g_maxTID; i++) {
- if (tid == g_tidMapping[i])
- return i;
- }
- // Don't yet have a mapping; check capacity before writing, then set it and return the new index
- assert(g_maxTID < MAX_TID);
- uint32_t retVal = (uint32_t) g_maxTID;
- g_tidMapping[g_maxTID++] = tid;
- return retVal;
-}
-
-// Return a string representation of CMD_TYPE enum
-static string cmdTypeToString(CMD_TYPE cmd)
-{
- switch (cmd)
- {
- case CMD_BINDPIPELINE:
- return "CMD_BINDPIPELINE";
- case CMD_BINDPIPELINEDELTA:
- return "CMD_BINDPIPELINEDELTA";
- case CMD_SETVIEWPORTSTATE:
- return "CMD_SETVIEWPORTSTATE";
- case CMD_SETLINEWIDTHSTATE:
- return "CMD_SETLINEWIDTHSTATE";
- case CMD_SETDEPTHBIASSTATE:
- return "CMD_SETDEPTHBIASSTATE";
- case CMD_SETBLENDSTATE:
- return "CMD_SETBLENDSTATE";
- case CMD_SETDEPTHBOUNDSSTATE:
- return "CMD_SETDEPTHBOUNDSSTATE";
- case CMD_SETSTENCILREADMASKSTATE:
- return "CMD_SETSTENCILREADMASKSTATE";
- case CMD_SETSTENCILWRITEMASKSTATE:
- return "CMD_SETSTENCILWRITEMASKSTATE";
- case CMD_SETSTENCILREFERENCESTATE:
- return "CMD_SETSTENCILREFERENCESTATE";
- case CMD_BINDDESCRIPTORSETS:
- return "CMD_BINDDESCRIPTORSETS";
- case CMD_BINDINDEXBUFFER:
- return "CMD_BINDINDEXBUFFER";
- case CMD_BINDVERTEXBUFFER:
- return "CMD_BINDVERTEXBUFFER";
- case CMD_DRAW:
- return "CMD_DRAW";
- case CMD_DRAWINDEXED:
- return "CMD_DRAWINDEXED";
- case CMD_DRAWINDIRECT:
- return "CMD_DRAWINDIRECT";
- case CMD_DRAWINDEXEDINDIRECT:
- return "CMD_DRAWINDEXEDINDIRECT";
- case CMD_DISPATCH:
- return "CMD_DISPATCH";
- case CMD_DISPATCHINDIRECT:
- return "CMD_DISPATCHINDIRECT";
- case CMD_COPYBUFFER:
- return "CMD_COPYBUFFER";
- case CMD_COPYIMAGE:
- return "CMD_COPYIMAGE";
- case CMD_BLITIMAGE:
- return "CMD_BLITIMAGE";
- case CMD_COPYBUFFERTOIMAGE:
- return "CMD_COPYBUFFERTOIMAGE";
- case CMD_COPYIMAGETOBUFFER:
- return "CMD_COPYIMAGETOBUFFER";
- case CMD_CLONEIMAGEDATA:
- return "CMD_CLONEIMAGEDATA";
- case CMD_UPDATEBUFFER:
- return "CMD_UPDATEBUFFER";
- case CMD_FILLBUFFER:
- return "CMD_FILLBUFFER";
- case CMD_CLEARCOLORIMAGE:
- return "CMD_CLEARCOLORIMAGE";
- case CMD_CLEARATTACHMENTS:
- return "CMD_CLEARATTACHMENTS";
- case CMD_CLEARDEPTHSTENCILIMAGE:
- return "CMD_CLEARDEPTHSTENCILIMAGE";
- case CMD_RESOLVEIMAGE:
- return "CMD_RESOLVEIMAGE";
- case CMD_SETEVENT:
- return "CMD_SETEVENT";
- case CMD_RESETEVENT:
- return "CMD_RESETEVENT";
- case CMD_WAITEVENTS:
- return "CMD_WAITEVENTS";
- case CMD_PIPELINEBARRIER:
- return "CMD_PIPELINEBARRIER";
- case CMD_BEGINQUERY:
- return "CMD_BEGINQUERY";
- case CMD_ENDQUERY:
- return "CMD_ENDQUERY";
- case CMD_RESETQUERYPOOL:
- return "CMD_RESETQUERYPOOL";
- case CMD_COPYQUERYPOOLRESULTS:
- return "CMD_COPYQUERYPOOLRESULTS";
- case CMD_WRITETIMESTAMP:
- return "CMD_WRITETIMESTAMP";
- case CMD_INITATOMICCOUNTERS:
- return "CMD_INITATOMICCOUNTERS";
- case CMD_LOADATOMICCOUNTERS:
- return "CMD_LOADATOMICCOUNTERS";
- case CMD_SAVEATOMICCOUNTERS:
- return "CMD_SAVEATOMICCOUNTERS";
- case CMD_BEGINRENDERPASS:
- return "CMD_BEGINRENDERPASS";
- case CMD_ENDRENDERPASS:
- return "CMD_ENDRENDERPASS";
- case CMD_DBGMARKERBEGIN:
- return "CMD_DBGMARKERBEGIN";
- case CMD_DBGMARKEREND:
- return "CMD_DBGMARKEREND";
- default:
- return "UNKNOWN";
- }
-}
-
-// SPIRV utility functions
-static void
-build_def_index(shader_module *module)
-{
- for (auto insn : *module) {
- switch (insn.opcode()) {
- /* Types */
- case spv::OpTypeVoid:
- case spv::OpTypeBool:
- case spv::OpTypeInt:
- case spv::OpTypeFloat:
- case spv::OpTypeVector:
- case spv::OpTypeMatrix:
- case spv::OpTypeImage:
- case spv::OpTypeSampler:
- case spv::OpTypeSampledImage:
- case spv::OpTypeArray:
- case spv::OpTypeRuntimeArray:
- case spv::OpTypeStruct:
- case spv::OpTypeOpaque:
- case spv::OpTypePointer:
- case spv::OpTypeFunction:
- case spv::OpTypeEvent:
- case spv::OpTypeDeviceEvent:
- case spv::OpTypeReserveId:
- case spv::OpTypeQueue:
- case spv::OpTypePipe:
- module->def_index[insn.word(1)] = insn.offset();
- break;
-
- /* Fixed constants */
- case spv::OpConstantTrue:
- case spv::OpConstantFalse:
- case spv::OpConstant:
- case spv::OpConstantComposite:
- case spv::OpConstantSampler:
- case spv::OpConstantNull:
- module->def_index[insn.word(2)] = insn.offset();
- break;
-
- /* Specialization constants */
- case spv::OpSpecConstantTrue:
- case spv::OpSpecConstantFalse:
- case spv::OpSpecConstant:
- case spv::OpSpecConstantComposite:
- case spv::OpSpecConstantOp:
- module->def_index[insn.word(2)] = insn.offset();
- break;
-
- /* Variables */
- case spv::OpVariable:
- module->def_index[insn.word(2)] = insn.offset();
- break;
-
- /* Functions */
- case spv::OpFunction:
- module->def_index[insn.word(2)] = insn.offset();
- break;
-
- default:
- /* We don't care about any other defs for now. */
- break;
- }
- }
-}
-
-
-static spirv_inst_iter
-find_entrypoint(shader_module *src, char const *name, VkShaderStageFlagBits stageBits)
-{
- for (auto insn : *src) {
- if (insn.opcode() == spv::OpEntryPoint) {
- auto entrypointName = (char const *) &insn.word(3);
- auto entrypointStageBits = 1u << insn.word(1);
-
- if (!strcmp(entrypointName, name) && (entrypointStageBits & stageBits)) {
- return insn;
- }
- }
- }
-
- return src->end();
-}
-
-
-bool
-shader_is_spirv(VkShaderModuleCreateInfo const *pCreateInfo)
-{
- uint32_t *words = (uint32_t *)pCreateInfo->pCode;
- size_t sizeInWords = pCreateInfo->codeSize / sizeof(uint32_t);
-
- /* Just validate that the header makes sense. */
- return sizeInWords >= 5 && words[0] == spv::MagicNumber && words[1] == spv::Version;
-}
-
-static char const *
-storage_class_name(unsigned sc)
-{
- switch (sc) {
- case spv::StorageClassInput: return "input";
- case spv::StorageClassOutput: return "output";
- case spv::StorageClassUniformConstant: return "const uniform";
- case spv::StorageClassUniform: return "uniform";
- case spv::StorageClassWorkgroup: return "workgroup local";
- case spv::StorageClassCrossWorkgroup: return "workgroup global";
- case spv::StorageClassPrivate: return "private global";
- case spv::StorageClassFunction: return "function";
- case spv::StorageClassGeneric: return "generic";
- case spv::StorageClassAtomicCounter: return "atomic counter";
- case spv::StorageClassImage: return "image";
- default: return "unknown";
- }
-}
-
-/* get the value of an integral constant */
-unsigned
-get_constant_value(shader_module const *src, unsigned id)
-{
- auto value = src->get_def(id);
- assert(value != src->end());
-
- if (value.opcode() != spv::OpConstant) {
- /* TODO: Either ensure that the specialization transform is already performed on a module we're
- considering here, OR -- specialize on the fly now.
- */
- return 1;
- }
-
- return value.word(3);
-}
-
-/* returns ptr to null terminator */
-static char *
-describe_type(char *dst, shader_module const *src, unsigned type)
-{
- auto insn = src->get_def(type);
- assert(insn != src->end());
-
- switch (insn.opcode()) {
- case spv::OpTypeBool:
- return dst + sprintf(dst, "bool");
- case spv::OpTypeInt:
- return dst + sprintf(dst, "%cint%d", insn.word(3) ? 's' : 'u', insn.word(2));
- case spv::OpTypeFloat:
- return dst + sprintf(dst, "float%d", insn.word(2));
- case spv::OpTypeVector:
- dst += sprintf(dst, "vec%d of ", insn.word(3));
- return describe_type(dst, src, insn.word(2));
- case spv::OpTypeMatrix:
- dst += sprintf(dst, "mat%d of ", insn.word(3));
- return describe_type(dst, src, insn.word(2));
- case spv::OpTypeArray:
- dst += sprintf(dst, "arr[%d] of ", get_constant_value(src, insn.word(3)));
- return describe_type(dst, src, insn.word(2));
- case spv::OpTypePointer:
- dst += sprintf(dst, "ptr to %s ", storage_class_name(insn.word(2)));
- return describe_type(dst, src, insn.word(3));
- case spv::OpTypeStruct:
- {
- dst += sprintf(dst, "struct of (");
- for (unsigned i = 2; i < insn.len(); i++) {
- dst = describe_type(dst, src, insn.word(i));
- dst += sprintf(dst, i == insn.len()-1 ? ")" : ", ");
- }
- return dst;
- }
- case spv::OpTypeSampler:
- return dst + sprintf(dst, "sampler");
- default:
- return dst + sprintf(dst, "oddtype");
- }
-}
-
-static bool
-types_match(shader_module const *a, shader_module const *b, unsigned a_type, unsigned b_type, bool b_arrayed)
-{
- /* walk two type trees together and report whether they match */
- auto a_insn = a->get_def(a_type);
- auto b_insn = b->get_def(b_type);
- assert(a_insn != a->end());
- assert(b_insn != b->end());
-
- if (b_arrayed && b_insn.opcode() == spv::OpTypeArray) {
- /* we probably just found the extra level of arrayness in b_type: compare the type inside it to a_type */
- return types_match(a, b, a_type, b_insn.word(2), false);
- }
-
- if (a_insn.opcode() != b_insn.opcode()) {
- return false;
- }
-
- switch (a_insn.opcode()) {
- /* if b_arrayed and we hit a leaf type, then we can't match -- there's nowhere for the extra OpTypeArray to be! */
- case spv::OpTypeBool:
- return !b_arrayed;
- case spv::OpTypeInt:
- /* match on width, signedness */
- return a_insn.word(2) == b_insn.word(2) && a_insn.word(3) == b_insn.word(3) && !b_arrayed;
- case spv::OpTypeFloat:
- /* match on width */
- return a_insn.word(2) == b_insn.word(2) && !b_arrayed;
- case spv::OpTypeVector:
- case spv::OpTypeMatrix:
- /* match on element type, count. these all have the same layout. we don't get here if
- * b_arrayed -- that is handled above. */
- return !b_arrayed &&
- types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) &&
- a_insn.word(3) == b_insn.word(3);
- case spv::OpTypeArray:
- /* match on element type, count. these all have the same layout. we don't get here if
- * b_arrayed. This differs from vector & matrix types in that the array size is the id of a constant instruction,
- * not a literal within OpTypeArray */
- return !b_arrayed &&
- types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) &&
- get_constant_value(a, a_insn.word(3)) == get_constant_value(b, b_insn.word(3));
- case spv::OpTypeStruct:
- /* match on all element types */
- {
- if (b_arrayed) {
- /* for the purposes of matching different levels of arrayness, structs are leaves. */
- return false;
- }
-
- if (a_insn.len() != b_insn.len()) {
- return false; /* structs cannot match if member counts differ */
- }
-
- for (unsigned i = 2; i < a_insn.len(); i++) {
- if (!types_match(a, b, a_insn.word(i), b_insn.word(i), b_arrayed)) {
- return false;
- }
- }
-
- return true;
- }
- case spv::OpTypePointer:
- /* match on pointee type. storage class is expected to differ */
- return types_match(a, b, a_insn.word(3), b_insn.word(3), b_arrayed);
-
- default:
- /* remaining types are CLisms, or may not appear in the interfaces we
- * are interested in. Just claim no match.
- */
- return false;
-
- }
-}
-
-static int
-value_or_default(std::unordered_map<unsigned, unsigned> const &map, unsigned id, int def)
-{
- auto it = map.find(id);
- if (it == map.end())
- return def;
- else
- return it->second;
-}
-
-
-static unsigned
-get_locations_consumed_by_type(shader_module const *src, unsigned type, bool strip_array_level)
-{
- auto insn = src->get_def(type);
- assert(insn != src->end());
-
- switch (insn.opcode()) {
- case spv::OpTypePointer:
- /* see through the ptr -- this is only ever at the toplevel for graphics shaders;
- * we're never actually passing pointers around. */
- return get_locations_consumed_by_type(src, insn.word(3), strip_array_level);
- case spv::OpTypeArray:
- if (strip_array_level) {
- return get_locations_consumed_by_type(src, insn.word(2), false);
- }
- else {
- return get_constant_value(src, insn.word(3)) * get_locations_consumed_by_type(src, insn.word(2), false);
- }
- case spv::OpTypeMatrix:
- /* num locations is the dimension * element size */
- return insn.word(3) * get_locations_consumed_by_type(src, insn.word(2), false);
- default:
- /* everything else is just 1. */
- return 1;
-
- /* TODO: extend to handle 64bit scalar types, whose vectors may need
- * multiple locations. */
- }
-}
-
-
-struct interface_var {
- uint32_t id;
- uint32_t type_id;
- uint32_t offset;
- /* TODO: collect the name, too? Isn't required to be present. */
-};
-
-
-static void
-collect_interface_block_members(layer_data *my_data, VkDevice dev,
- shader_module const *src,
- std::map<uint32_t, interface_var> &out,
- std::map<uint32_t, interface_var> &builtins_out,
- std::unordered_map<unsigned, unsigned> const &blocks,
- bool is_array_of_verts,
- uint32_t id,
- uint32_t type_id)
-{
- /* Walk down the type_id presented, trying to determine whether it's actually an interface block. */
- auto type = src->get_def(type_id);
-
- while (true) {
-
- if (type.opcode() == spv::OpTypePointer) {
- type = src->get_def(type.word(3));
- }
- else if (type.opcode() == spv::OpTypeArray && is_array_of_verts) {
- type = src->get_def(type.word(2));
- is_array_of_verts = false;
- }
- else if (type.opcode() == spv::OpTypeStruct) {
- if (blocks.find(type.word(1)) == blocks.end()) {
- /* This isn't an interface block. */
- return;
- }
- else {
- /* We have found the correct type. Walk its members. */
- break;
- }
- }
- else {
- /* not an interface block */
- return;
- }
- }
-
- /* Walk all the OpMemberDecorate for type's result id. */
- for (auto insn : *src) {
- if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
- unsigned member_index = insn.word(2);
- unsigned member_type_id = type.word(2 + member_index);
-
- if (insn.word(3) == spv::DecorationLocation) {
- unsigned location = insn.word(4);
- unsigned num_locations = get_locations_consumed_by_type(src, member_type_id, false);
- for (unsigned int offset = 0; offset < num_locations; offset++) {
- interface_var v;
- v.id = id;
- /* TODO: member index in interface_var too? */
- v.type_id = member_type_id;
- v.offset = offset;
- out[location + offset] = v;
- }
- }
- else if (insn.word(3) == spv::DecorationBuiltIn) {
- unsigned builtin = insn.word(4);
- interface_var v;
- v.id = id;
- v.type_id = member_type_id;
- v.offset = 0;
- builtins_out[builtin] = v;
- }
- }
- }
-}
-
-static void
-collect_interface_by_location(layer_data *my_data, VkDevice dev,
- shader_module const *src,
- spirv_inst_iter entrypoint,
- spv::StorageClass sinterface,
- std::map<uint32_t, interface_var> &out,
- std::map<uint32_t, interface_var> &builtins_out,
- bool is_array_of_verts)
-{
- std::unordered_map<unsigned, unsigned> var_locations;
- std::unordered_map<unsigned, unsigned> var_builtins;
- std::unordered_map<unsigned, unsigned> blocks;
-
- for (auto insn : *src) {
-
- /* We consider two interface models: SSO rendezvous-by-location, and
- * builtins. Complain about anything that fits neither model.
- */
- if (insn.opcode() == spv::OpDecorate) {
- if (insn.word(2) == spv::DecorationLocation) {
- var_locations[insn.word(1)] = insn.word(3);
- }
-
- if (insn.word(2) == spv::DecorationBuiltIn) {
- var_builtins[insn.word(1)] = insn.word(3);
- }
-
- if (insn.word(2) == spv::DecorationBlock) {
- blocks[insn.word(1)] = 1;
- }
- }
- }
-
- /* TODO: handle grouped decorations */
- /* TODO: handle index=1 dual source outputs from FS -- two vars will
- * have the same location, and we don't want to clobber. */
-
- /* find the end of the entrypoint's name string. additional zero bytes follow the actual null
- terminator, to fill out the rest of the word - so we only need to look at the last byte in
- the word to determine which word contains the terminator. */
- auto word = 3;
- while (entrypoint.word(word) & 0xff000000u) {
- ++word;
- }
- ++word;
-
- for (; word < entrypoint.len(); word++) {
- auto insn = src->get_def(entrypoint.word(word));
- assert(insn != src->end());
- assert(insn.opcode() == spv::OpVariable);
-
- if (insn.word(3) == sinterface) {
- unsigned id = insn.word(2);
- unsigned type = insn.word(1);
-
- int location = value_or_default(var_locations, id, -1);
- int builtin = value_or_default(var_builtins, id, -1);
-
- /* All variables and interface block members in the Input or Output storage classes
- * must be decorated with either a builtin or an explicit location.
- *
- * TODO: integrate the interface block support here. For now, don't complain --
- * a valid SPIRV module will only hit this path for the interface block case, as the
- * individual members of the type are decorated, rather than variable declarations.
- */
-
- if (location != -1) {
- /* A user-defined interface variable, with a location. Where a variable
- * occupied multiple locations, emit one result for each. */
- unsigned num_locations = get_locations_consumed_by_type(src, type,
- is_array_of_verts);
- for (unsigned int offset = 0; offset < num_locations; offset++) {
- interface_var v;
- v.id = id;
- v.type_id = type;
- v.offset = offset;
- out[location + offset] = v;
- }
- }
- else if (builtin != -1) {
- /* A builtin interface variable */
- /* Note that since builtin interface variables do not consume numbered
- * locations, there is no larger-than-vec4 consideration as above
- */
- interface_var v;
- v.id = id;
- v.type_id = type;
- v.offset = 0;
- builtins_out[builtin] = v;
- }
- else {
- /* An interface block instance */
- collect_interface_block_members(my_data, dev, src, out, builtins_out,
- blocks, is_array_of_verts, id, type);
- }
- }
- }
-}
-
-static void
-collect_interface_by_descriptor_slot(layer_data *my_data, VkDevice dev,
- shader_module const *src, spv::StorageClass sinterface,
- std::unordered_set<uint32_t> const &accessible_ids,
- std::map<std::pair<unsigned, unsigned>, interface_var> &out)
-{
-
- std::unordered_map<unsigned, unsigned> var_sets;
- std::unordered_map<unsigned, unsigned> var_bindings;
-
- for (auto insn : *src) {
- /* All variables in the Uniform or UniformConstant storage classes are required to be decorated with both
- * DecorationDescriptorSet and DecorationBinding.
- */
- if (insn.opcode() == spv::OpDecorate) {
- if (insn.word(2) == spv::DecorationDescriptorSet) {
- var_sets[insn.word(1)] = insn.word(3);
- }
-
- if (insn.word(2) == spv::DecorationBinding) {
- var_bindings[insn.word(1)] = insn.word(3);
- }
- }
- }
-
- for (auto id : accessible_ids) {
- auto insn = src->get_def(id);
- assert(insn != src->end());
-
- if (insn.opcode() == spv::OpVariable &&
- (insn.word(3) == spv::StorageClassUniform ||
- insn.word(3) == spv::StorageClassUniformConstant)) {
- unsigned set = value_or_default(var_sets, insn.word(2), 0);
- unsigned binding = value_or_default(var_bindings, insn.word(2), 0);
-
- auto existing_it = out.find(std::make_pair(set, binding));
- if (existing_it != out.end()) {
- /* conflict within spv image */
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__,
- SHADER_CHECKER_INCONSISTENT_SPIRV, "SC",
- "var %d (type %d) in %s interface in descriptor slot (%u,%u) conflicts with existing definition",
- insn.word(2), insn.word(1), storage_class_name(sinterface),
- existing_it->first.first, existing_it->first.second);
- }
-
- interface_var v;
- v.id = insn.word(2);
- v.type_id = insn.word(1);
- out[std::make_pair(set, binding)] = v;
- }
- }
-}
-
-static bool
-validate_interface_between_stages(layer_data *my_data, VkDevice dev,
- shader_module const *producer, spirv_inst_iter producer_entrypoint, char const *producer_name,
- shader_module const *consumer, spirv_inst_iter consumer_entrypoint, char const *consumer_name,
- bool consumer_arrayed_input)
-{
- std::map<uint32_t, interface_var> outputs;
- std::map<uint32_t, interface_var> inputs;
-
- std::map<uint32_t, interface_var> builtin_outputs;
- std::map<uint32_t, interface_var> builtin_inputs;
-
- bool pass = true;
-
- collect_interface_by_location(my_data, dev, producer, producer_entrypoint, spv::StorageClassOutput, outputs, builtin_outputs, false);
- collect_interface_by_location(my_data, dev, consumer, consumer_entrypoint, spv::StorageClassInput, inputs, builtin_inputs,
- consumer_arrayed_input);
-
- auto a_it = outputs.begin();
- auto b_it = inputs.begin();
-
- /* maps sorted by key (location); walk them together to find mismatches */
- while ((outputs.size() > 0 && a_it != outputs.end()) || (inputs.size() > 0 && b_it != inputs.end())) {
- bool a_at_end = outputs.size() == 0 || a_it == outputs.end();
- bool b_at_end = inputs.size() == 0 || b_it == inputs.end();
- auto a_first = a_at_end ? 0 : a_it->first;
- auto b_first = b_at_end ? 0 : b_it->first;
-
- if (b_at_end || ((!a_at_end) && (a_first < b_first))) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
- "%s writes to output location %d which is not consumed by %s", producer_name, a_first, consumer_name)) {
- pass = false;
- }
- a_it++;
- }
- else if (a_at_end || a_first > b_first) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC",
- "%s consumes input location %d which is not written by %s", consumer_name, b_first, producer_name)) {
- pass = false;
- }
- b_it++;
- }
- else {
- if (types_match(producer, consumer, a_it->second.type_id, b_it->second.type_id, consumer_arrayed_input)) {
- /* OK! */
- }
- else {
- char producer_type[1024];
- char consumer_type[1024];
- describe_type(producer_type, producer, a_it->second.type_id);
- describe_type(consumer_type, consumer, b_it->second.type_id);
-
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC",
- "Type mismatch on location %d: '%s' vs '%s'", a_it->first, producer_type, consumer_type)) {
- pass = false;
- }
- }
- a_it++;
- b_it++;
- }
- }
-
- return pass;
-}
-
-enum FORMAT_TYPE {
- FORMAT_TYPE_UNDEFINED,
- FORMAT_TYPE_FLOAT, /* UNORM, SNORM, FLOAT, USCALED, SSCALED, SRGB -- anything we consider float in the shader */
- FORMAT_TYPE_SINT,
- FORMAT_TYPE_UINT,
-};
-
-static unsigned
-get_format_type(VkFormat fmt) {
- switch (fmt) {
- case VK_FORMAT_UNDEFINED:
- return FORMAT_TYPE_UNDEFINED;
- case VK_FORMAT_R8_SINT:
- case VK_FORMAT_R8G8_SINT:
- case VK_FORMAT_R8G8B8_SINT:
- case VK_FORMAT_R8G8B8A8_SINT:
- case VK_FORMAT_R16_SINT:
- case VK_FORMAT_R16G16_SINT:
- case VK_FORMAT_R16G16B16_SINT:
- case VK_FORMAT_R16G16B16A16_SINT:
- case VK_FORMAT_R32_SINT:
- case VK_FORMAT_R32G32_SINT:
- case VK_FORMAT_R32G32B32_SINT:
- case VK_FORMAT_R32G32B32A32_SINT:
- case VK_FORMAT_B8G8R8_SINT:
- case VK_FORMAT_B8G8R8A8_SINT:
- case VK_FORMAT_A2B10G10R10_SINT_PACK32:
- case VK_FORMAT_A2R10G10B10_SINT_PACK32:
- return FORMAT_TYPE_SINT;
- case VK_FORMAT_R8_UINT:
- case VK_FORMAT_R8G8_UINT:
- case VK_FORMAT_R8G8B8_UINT:
- case VK_FORMAT_R8G8B8A8_UINT:
- case VK_FORMAT_R16_UINT:
- case VK_FORMAT_R16G16_UINT:
- case VK_FORMAT_R16G16B16_UINT:
- case VK_FORMAT_R16G16B16A16_UINT:
- case VK_FORMAT_R32_UINT:
- case VK_FORMAT_R32G32_UINT:
- case VK_FORMAT_R32G32B32_UINT:
- case VK_FORMAT_R32G32B32A32_UINT:
- case VK_FORMAT_B8G8R8_UINT:
- case VK_FORMAT_B8G8R8A8_UINT:
- case VK_FORMAT_A2B10G10R10_UINT_PACK32:
- case VK_FORMAT_A2R10G10B10_UINT_PACK32:
- return FORMAT_TYPE_UINT;
- default:
- return FORMAT_TYPE_FLOAT;
- }
-}
-
-/* characterizes a SPIR-V type appearing in an interface to a fixed-function (FF) stage,
- * for comparison to a VkFormat's characterization above. */
-static unsigned
-get_fundamental_type(shader_module const *src, unsigned type)
-{
- auto insn = src->get_def(type);
- assert(insn != src->end());
-
- switch (insn.opcode()) {
- case spv::OpTypeInt:
- return insn.word(3) ? FORMAT_TYPE_SINT : FORMAT_TYPE_UINT;
- case spv::OpTypeFloat:
- return FORMAT_TYPE_FLOAT;
- case spv::OpTypeVector:
- return get_fundamental_type(src, insn.word(2));
- case spv::OpTypeMatrix:
- return get_fundamental_type(src, insn.word(2));
- case spv::OpTypeArray:
- return get_fundamental_type(src, insn.word(2));
- case spv::OpTypePointer:
- return get_fundamental_type(src, insn.word(3));
- default:
- return FORMAT_TYPE_UNDEFINED;
- }
-}
-
-static bool
-validate_vi_consistency(layer_data *my_data, VkDevice dev, VkPipelineVertexInputStateCreateInfo const *vi)
-{
- /* walk the binding descriptions, which describe the step rate and stride of each vertex buffer.
- * each binding should be specified only once.
- */
- std::unordered_map<uint32_t, VkVertexInputBindingDescription const *> bindings;
- bool pass = true;
-
- for (unsigned i = 0; i < vi->vertexBindingDescriptionCount; i++) {
- auto desc = &vi->pVertexBindingDescriptions[i];
- auto & binding = bindings[desc->binding];
- if (binding) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INCONSISTENT_VI, "SC",
- "Duplicate vertex input binding descriptions for binding %d", desc->binding)) {
- pass = false;
- }
- }
- else {
- binding = desc;
- }
- }
-
- return pass;
-}
-
-static bool
-validate_vi_against_vs_inputs(layer_data *my_data, VkDevice dev, VkPipelineVertexInputStateCreateInfo const *vi, shader_module const *vs, spirv_inst_iter entrypoint)
-{
- std::map<uint32_t, interface_var> inputs;
- /* we collect builtin inputs, but they will never appear in the VI state --
- * the vs builtin inputs are generated in the pipeline, not sourced from buffers (VertexID, etc)
- */
- std::map<uint32_t, interface_var> builtin_inputs;
- bool pass = true;
-
- collect_interface_by_location(my_data, dev, vs, entrypoint, spv::StorageClassInput, inputs, builtin_inputs, false);
-
- /* Build index by location */
- std::map<uint32_t, VkVertexInputAttributeDescription const *> attribs;
- if (vi) {
- for (unsigned i = 0; i < vi->vertexAttributeDescriptionCount; i++)
- attribs[vi->pVertexAttributeDescriptions[i].location] = &vi->pVertexAttributeDescriptions[i];
- }
-
- auto it_a = attribs.begin();
- auto it_b = inputs.begin();
-
- while ((attribs.size() > 0 && it_a != attribs.end()) || (inputs.size() > 0 && it_b != inputs.end())) {
- bool a_at_end = attribs.size() == 0 || it_a == attribs.end();
- bool b_at_end = inputs.size() == 0 || it_b == inputs.end();
- auto a_first = a_at_end ? 0 : it_a->first;
- auto b_first = b_at_end ? 0 : it_b->first;
- if (!a_at_end && (b_at_end || a_first < b_first)) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
- "Vertex attribute at location %d not consumed by VS", a_first)) {
- pass = false;
- }
- it_a++;
- }
- else if (!b_at_end && (a_at_end || b_first < a_first)) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC",
- "VS consumes input at location %d but not provided", b_first)) {
- pass = false;
- }
- it_b++;
- }
- else {
- unsigned attrib_type = get_format_type(it_a->second->format);
- unsigned input_type = get_fundamental_type(vs, it_b->second.type_id);
-
- /* type checking */
- if (attrib_type != FORMAT_TYPE_UNDEFINED && input_type != FORMAT_TYPE_UNDEFINED && attrib_type != input_type) {
- char vs_type[1024];
- describe_type(vs_type, vs, it_b->second.type_id);
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC",
- "Attribute type of `%s` at location %d does not match VS input type of `%s`",
- string_VkFormat(it_a->second->format), a_first, vs_type)) {
- pass = false;
- }
- }
-
- /* OK! */
- it_a++;
- it_b++;
- }
- }
-
- return pass;
-}
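The loop removed above is a two-pointer merge walk over two location-sorted maps: keys present on only one side are reported as unconsumed/unprovided, and keys present on both sides are type-checked. A minimal standalone sketch of the same technique, with a hypothetical `merge_walk` and plain integers standing in for formats and SPIR-V types (not the layer's real structures):

```cpp
#include <map>
#include <string>
#include <vector>

// Walk two location-indexed, ordered maps in lockstep and classify each
// location as unmatched-left, unmatched-right, or present-in-both (where
// the stand-in "types" can be compared).
std::vector<std::string> merge_walk(const std::map<unsigned, int> &attribs,
                                    const std::map<unsigned, int> &inputs) {
    std::vector<std::string> report;
    auto a = attribs.begin();
    auto b = inputs.begin();
    while (a != attribs.end() || b != inputs.end()) {
        if (b == inputs.end() || (a != attribs.end() && a->first < b->first)) {
            report.push_back("attrib " + std::to_string(a->first) + " not consumed");
            ++a;
        } else if (a == attribs.end() || b->first < a->first) {
            report.push_back("input " + std::to_string(b->first) + " not provided");
            ++b;
        } else {
            if (a->second != b->second)
                report.push_back("type mismatch at " + std::to_string(a->first));
            ++a;
            ++b;
        }
    }
    return report;
}
```

The `std::map` key ordering is what makes the lockstep walk correct; with unordered containers the keys would first need to be sorted.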
-
-static bool
-validate_fs_outputs_against_render_pass(layer_data *my_data, VkDevice dev, shader_module const *fs, spirv_inst_iter entrypoint, RENDER_PASS_NODE const *rp, uint32_t subpass)
-{
- const std::vector<VkFormat> &color_formats = rp->subpassColorFormats[subpass];
- std::map<uint32_t, interface_var> outputs;
- std::map<uint32_t, interface_var> builtin_outputs;
- bool pass = true;
-
- /* TODO: dual source blend index (spv::DecIndex, zero if not provided) */
-
- collect_interface_by_location(my_data, dev, fs, entrypoint, spv::StorageClassOutput, outputs, builtin_outputs, false);
-
- auto it = outputs.begin();
- uint32_t attachment = 0;
-
- /* Walk attachment list and outputs together -- this is a little overpowered since attachments
- * are currently dense, but the parallel with matching between shader stages is nice.
- */
-
- while ((outputs.size() > 0 && it != outputs.end()) || attachment < color_formats.size()) {
- if (attachment == color_formats.size() || ( it != outputs.end() && it->first < attachment)) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC",
- "FS writes to output location %d with no matching attachment", it->first)) {
- pass = false;
- }
- it++;
- }
- else if (it == outputs.end() || it->first > attachment) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC",
- "Attachment %d not written by FS", attachment)) {
- pass = false;
- }
- attachment++;
- }
- else {
- unsigned output_type = get_fundamental_type(fs, it->second.type_id);
- unsigned att_type = get_format_type(color_formats[attachment]);
-
- /* type checking */
- if (att_type != FORMAT_TYPE_UNDEFINED && output_type != FORMAT_TYPE_UNDEFINED && att_type != output_type) {
- char fs_type[1024];
- describe_type(fs_type, fs, it->second.type_id);
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC",
- "Attachment %d of type `%s` does not match FS output type of `%s`",
- attachment, string_VkFormat(color_formats[attachment]), fs_type)) {
- pass = false;
- }
- }
-
- /* OK! */
- it++;
- attachment++;
- }
- }
-
- return pass;
-}
-
-
-/* For some analyses, we need to know about all ids referenced by the static call tree of a particular
- * entrypoint. This is important for identifying the set of shader resources actually used by an entrypoint,
- * for example.
- * Note: we only explore parts of the image which might actually contain ids we care about for the above analyses.
- * - NOT the shader input/output interfaces.
- *
- * TODO: The set of interesting opcodes here was determined by eyeballing the SPIRV spec. It might be worth
- * converting parts of this to be generated from the machine-readable spec instead.
- */
-static void
-mark_accessible_ids(shader_module const *src, spirv_inst_iter entrypoint, std::unordered_set<uint32_t> &ids)
-{
- std::unordered_set<uint32_t> worklist;
- worklist.insert(entrypoint.word(2));
-
- while (!worklist.empty()) {
- auto id_iter = worklist.begin();
- auto id = *id_iter;
- worklist.erase(id_iter);
-
- auto insn = src->get_def(id);
- if (insn == src->end()) {
- /* id is something we didn't collect in build_def_index. That's OK -- we'll stumble
- * across all kinds of things here that we may not care about. */
- continue;
- }
-
- /* try to add to the output set */
- if (!ids.insert(id).second) {
- continue; /* if we already saw this id, we don't want to walk it again. */
- }
-
- switch (insn.opcode()) {
- case spv::OpFunction:
- /* scan whole body of the function, enlisting anything interesting */
- while (++insn, insn.opcode() != spv::OpFunctionEnd) {
- switch (insn.opcode()) {
- case spv::OpLoad:
- case spv::OpAtomicLoad:
- case spv::OpAtomicExchange:
- case spv::OpAtomicCompareExchange:
- case spv::OpAtomicCompareExchangeWeak:
- case spv::OpAtomicIIncrement:
- case spv::OpAtomicIDecrement:
- case spv::OpAtomicIAdd:
- case spv::OpAtomicISub:
- case spv::OpAtomicSMin:
- case spv::OpAtomicUMin:
- case spv::OpAtomicSMax:
- case spv::OpAtomicUMax:
- case spv::OpAtomicAnd:
- case spv::OpAtomicOr:
- case spv::OpAtomicXor:
- worklist.insert(insn.word(3)); /* ptr */
- break;
- case spv::OpStore:
- case spv::OpAtomicStore:
- worklist.insert(insn.word(1)); /* ptr */
- break;
- case spv::OpAccessChain:
- case spv::OpInBoundsAccessChain:
- worklist.insert(insn.word(3)); /* base ptr */
- break;
- case spv::OpSampledImage:
- case spv::OpImageSampleImplicitLod:
- case spv::OpImageSampleExplicitLod:
- case spv::OpImageSampleDrefImplicitLod:
- case spv::OpImageSampleDrefExplicitLod:
- case spv::OpImageSampleProjImplicitLod:
- case spv::OpImageSampleProjExplicitLod:
- case spv::OpImageSampleProjDrefImplicitLod:
- case spv::OpImageSampleProjDrefExplicitLod:
- case spv::OpImageFetch:
- case spv::OpImageGather:
- case spv::OpImageDrefGather:
- case spv::OpImageRead:
- case spv::OpImage:
- case spv::OpImageQueryFormat:
- case spv::OpImageQueryOrder:
- case spv::OpImageQuerySizeLod:
- case spv::OpImageQuerySize:
- case spv::OpImageQueryLod:
- case spv::OpImageQueryLevels:
- case spv::OpImageQuerySamples:
- case spv::OpImageSparseSampleImplicitLod:
- case spv::OpImageSparseSampleExplicitLod:
- case spv::OpImageSparseSampleDrefImplicitLod:
- case spv::OpImageSparseSampleDrefExplicitLod:
- case spv::OpImageSparseSampleProjImplicitLod:
- case spv::OpImageSparseSampleProjExplicitLod:
- case spv::OpImageSparseSampleProjDrefImplicitLod:
- case spv::OpImageSparseSampleProjDrefExplicitLod:
- case spv::OpImageSparseFetch:
- case spv::OpImageSparseGather:
- case spv::OpImageSparseDrefGather:
- case spv::OpImageTexelPointer:
- worklist.insert(insn.word(3)); /* image or sampled image */
- break;
- case spv::OpImageWrite:
- worklist.insert(insn.word(1)); /* image -- different operand order to above */
- break;
- case spv::OpFunctionCall:
- for (auto i = 3; i < insn.len(); i++) {
- worklist.insert(insn.word(i)); /* fn itself, and all args */
- }
- break;
-
- case spv::OpExtInst:
- for (auto i = 5; i < insn.len(); i++) {
- worklist.insert(insn.word(i)); /* operands to ext inst */
- }
- break;
- }
- }
- break;
- }
- }
-}
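`mark_accessible_ids` above is a standard worklist reachability pass: pop an id, record it in the output set, and push every id it references; the already-visited check makes cycles terminate. A stripped-down sketch of the same pattern over a hypothetical id-to-references map rather than real SPIR-V instructions:

```cpp
#include <map>
#include <unordered_set>
#include <vector>

// Worklist-based reachability from a root id. Ids with no entry in `refs`
// (analogous to ids not collected in build_def_index) are kept but not
// expanded; revisits are skipped, so reference cycles terminate.
std::unordered_set<unsigned>
reachable(const std::map<unsigned, std::vector<unsigned>> &refs, unsigned root) {
    std::unordered_set<unsigned> ids;
    std::unordered_set<unsigned> worklist{root};
    while (!worklist.empty()) {
        auto it = worklist.begin();
        unsigned id = *it;
        worklist.erase(it);
        if (!ids.insert(id).second) continue;  // already visited
        auto def = refs.find(id);
        if (def == refs.end()) continue;       // no outgoing references
        for (unsigned ref : def->second) worklist.insert(ref);
    }
    return ids;
}
```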
-
-
-struct shader_stage_attributes {
- char const * const name;
- bool arrayed_input;
-};
-
-
-static shader_stage_attributes
-shader_stage_attribs[] = {
- { "vertex shader", false },
- { "tessellation control shader", true },
- { "tessellation evaluation shader", false },
- { "geometry shader", true },
- { "fragment shader", false },
-};
-
-// For given pipelineLayout verify that the setLayout at slot.first
-// has the requested binding at slot.second
-static bool
-has_descriptor_binding(layer_data* my_data,
- vector<VkDescriptorSetLayout>* pipelineLayout,
- std::pair<unsigned, unsigned> slot)
-{
- if (!pipelineLayout)
- return false;
-
- if (slot.first >= pipelineLayout->size())
- return false;
-
- const auto &bindingMap = my_data->descriptorSetLayoutMap[(*pipelineLayout)[slot.first]]
- ->bindingToIndexMap;
-
- return (bindingMap.find(slot.second) != bindingMap.end());
-}
-
-static uint32_t get_shader_stage_id(VkShaderStageFlagBits stage)
-{
- uint32_t bit_pos = u_ffs(stage);
- return bit_pos-1;
-}
-
-// Block of code at start here for managing/tracking Pipeline state that this layer cares about
-
-static uint64_t g_drawCount[NUM_DRAW_TYPES] = {0, 0, 0, 0};
-
- // TODO : Should be tracking lastBound per commandBuffer and when draws occur, report based on that cmd buffer lastBound
- // Then need to synchronize the accesses based on cmd buffer so that if one thread is reading state from a cmd buffer,
- // updates to that same cmd buffer by a separate thread cannot change the state out from underneath it
-// Track the last cmd buffer touched by this thread
-
-// prototype
-static GLOBAL_CB_NODE* getCBNode(layer_data*, const VkCommandBuffer);
-
-static VkBool32 hasDrawCmd(GLOBAL_CB_NODE* pCB)
-{
- for (uint32_t i=0; i<NUM_DRAW_TYPES; i++) {
- if (pCB->drawCount[i])
- return VK_TRUE;
- }
- return VK_FALSE;
-}
-
-// Check object status for selected flag state
-static VkBool32 validate_status(layer_data* my_data, GLOBAL_CB_NODE* pNode, CBStatusFlags enable_mask, CBStatusFlags status_mask, CBStatusFlags status_flag, VkFlags msg_flags, DRAW_STATE_ERROR error_code, const char* fail_msg)
-{
- // If a non-zero enable mask is present, check it against status; if enable_mask
- // is 0 then no enable is required, so always just check status
- if ((!enable_mask) || (enable_mask & pNode->status)) {
- if ((pNode->status & status_mask) != status_flag) {
- // TODO : How to pass dispatchable objects as srcObject? Here src obj should be cmd buffer
- return log_msg(my_data->report_data, msg_flags, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, error_code, "DS",
- "CB object %#" PRIxLEAST64 ": %s", (uint64_t)(pNode->commandBuffer), fail_msg);
- }
- }
- return VK_FALSE;
-}
-
-// Retrieve pipeline node ptr for given pipeline object
-static PIPELINE_NODE* getPipeline(layer_data* my_data, const VkPipeline pipeline)
-{
- if (my_data->pipelineMap.find(pipeline) == my_data->pipelineMap.end()) {
- return NULL;
- }
- return my_data->pipelineMap[pipeline];
-}
-
-// Return VK_TRUE if for a given PSO, the given state enum is dynamic, else return VK_FALSE
-static VkBool32 isDynamic(const PIPELINE_NODE* pPipeline, const VkDynamicState state)
-{
- if (pPipeline && pPipeline->graphicsPipelineCI.pDynamicState) {
- for (uint32_t i=0; i<pPipeline->graphicsPipelineCI.pDynamicState->dynamicStateCount; i++) {
- if (state == pPipeline->graphicsPipelineCI.pDynamicState->pDynamicStates[i])
- return VK_TRUE;
- }
- }
- return VK_FALSE;
-}
-
-// Validate state stored as flags at time of draw call
-static VkBool32 validate_draw_state_flags(layer_data* my_data, GLOBAL_CB_NODE* pCB, VkBool32 indexedDraw) {
- VkBool32 result;
- result = validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_VIEWPORT_SET, CBSTATUS_VIEWPORT_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_VIEWPORT_NOT_BOUND, "Dynamic viewport state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_SCISSOR_SET, CBSTATUS_SCISSOR_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_SCISSOR_NOT_BOUND, "Dynamic scissor state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_LINE_WIDTH_SET, CBSTATUS_LINE_WIDTH_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_LINE_WIDTH_NOT_BOUND, "Dynamic line width state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_DEPTH_BIAS_SET, CBSTATUS_DEPTH_BIAS_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BIAS_NOT_BOUND, "Dynamic depth bias state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_COLOR_BLEND_WRITE_ENABLE, CBSTATUS_BLEND_SET, CBSTATUS_BLEND_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_BLEND_NOT_BOUND, "Dynamic blend object state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_DEPTH_WRITE_ENABLE, CBSTATUS_DEPTH_BOUNDS_SET, CBSTATUS_DEPTH_BOUNDS_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND, "Dynamic depth bounds state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_READ_MASK_SET, CBSTATUS_STENCIL_READ_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil read mask state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_WRITE_MASK_SET, CBSTATUS_STENCIL_WRITE_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil write mask state not set for this command buffer");
- result |= validate_status(my_data, pCB, CBSTATUS_STENCIL_TEST_ENABLE, CBSTATUS_STENCIL_REFERENCE_SET, CBSTATUS_STENCIL_REFERENCE_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil reference state not set for this command buffer");
- if (indexedDraw)
- result |= validate_status(my_data, pCB, CBSTATUS_NONE, CBSTATUS_INDEX_BUFFER_BOUND, CBSTATUS_INDEX_BUFFER_BOUND, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_INDEX_BUFFER_NOT_BOUND, "Index buffer object not bound to this command buffer when Indexed Draw attempted");
- return result;
-}
-
- // Verify attachment reference compatibility according to spec
- // If one array is larger, treat missing elements of the shorter array as VK_ATTACHMENT_UNUSED; the other array must match this
- // If both AttachmentReference arrays have the requested index, check their corresponding AttachmentDescriptions
- // to make sure that the format and sample counts match.
- // If not, they are not compatible.
-static bool attachment_references_compatible(const uint32_t index, const VkAttachmentReference* pPrimary, const uint32_t primaryCount, const VkAttachmentDescription* pPrimaryAttachments,
- const VkAttachmentReference* pSecondary, const uint32_t secondaryCount, const VkAttachmentDescription* pSecondaryAttachments)
-{
- if (index >= primaryCount) { // Check secondary as if primary is VK_ATTACHMENT_UNUSED
- if (VK_ATTACHMENT_UNUSED != pSecondary[index].attachment)
- return false;
- } else if (index >= secondaryCount) { // Check primary as if secondary is VK_ATTACHMENT_UNUSED
- if (VK_ATTACHMENT_UNUSED != pPrimary[index].attachment)
- return false;
- } else { // format and sample count must match
- if ((pPrimaryAttachments[pPrimary[index].attachment].format == pSecondaryAttachments[pSecondary[index].attachment].format) &&
- (pPrimaryAttachments[pPrimary[index].attachment].samples == pSecondaryAttachments[pSecondary[index].attachment].samples))
- return true;
- }
- // Format and sample counts didn't match
- return false;
-}
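The compatibility rule implemented above — indices past the end of the shorter array behave as VK_ATTACHMENT_UNUSED, and indices present in both arrays must have matching attachment properties — can be sketched in isolation. Here a plain integer stands in for the format/sample-count pair, and `UNUSED` is a hypothetical stand-in for VK_ATTACHMENT_UNUSED:

```cpp
#include <algorithm>
#include <vector>

const unsigned UNUSED = ~0u;  // stand-in for VK_ATTACHMENT_UNUSED

// Compare two attachment-reference arrays element-wise, padding the shorter
// array with UNUSED. Equality of the stand-in values plays the role of the
// real format-and-sample-count comparison.
bool refs_compatible(const std::vector<unsigned> &a, const std::vector<unsigned> &b) {
    size_t n = std::max(a.size(), b.size());
    for (size_t i = 0; i < n; ++i) {
        unsigned av = i < a.size() ? a[i] : UNUSED;
        unsigned bv = i < b.size() ? b[i] : UNUSED;
        if (av != bv) return false;
    }
    return true;
}
```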
-
- // For the given primary and secondary RenderPass objects, verify that they're compatible
-static bool verify_renderpass_compatibility(layer_data* my_data, const VkRenderPass primaryRP, const VkRenderPass secondaryRP, string& errorMsg)
-{
- stringstream errorStr;
- if (my_data->renderPassMap.find(primaryRP) == my_data->renderPassMap.end()) {
- errorStr << "invalid VkRenderPass (" << primaryRP << ")";
- errorMsg = errorStr.str();
- return false;
- } else if (my_data->renderPassMap.find(secondaryRP) == my_data->renderPassMap.end()) {
- errorStr << "invalid VkRenderPass (" << secondaryRP << ")";
- errorMsg = errorStr.str();
- return false;
- }
- // Trivial pass case is exact same RP
- if (primaryRP == secondaryRP) {
- return true;
- }
- const VkRenderPassCreateInfo* primaryRPCI = my_data->renderPassMap[primaryRP]->pCreateInfo;
- const VkRenderPassCreateInfo* secondaryRPCI = my_data->renderPassMap[secondaryRP]->pCreateInfo;
- if (primaryRPCI->subpassCount != secondaryRPCI->subpassCount) {
- errorStr << "RenderPass for primary cmdBuffer has " << primaryRPCI->subpassCount << " subpasses but renderPass for secondary cmdBuffer has " << secondaryRPCI->subpassCount << " subpasses.";
- errorMsg = errorStr.str();
- return false;
- }
- uint32_t spIndex = 0;
- for (spIndex = 0; spIndex < primaryRPCI->subpassCount; ++spIndex) {
- // For each subpass, verify that corresponding color, input, resolve & depth/stencil attachment references are compatible
- uint32_t primaryColorCount = primaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
- uint32_t secondaryColorCount = secondaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
- uint32_t colorMax = std::max(primaryColorCount, secondaryColorCount);
- for (uint32_t cIdx = 0; cIdx < colorMax; ++cIdx) {
- if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pColorAttachments, primaryColorCount, primaryRPCI->pAttachments,
- secondaryRPCI->pSubpasses[spIndex].pColorAttachments, secondaryColorCount, secondaryRPCI->pAttachments)) {
- errorStr << "color attachments at index " << cIdx << " of subpass index " << spIndex << " are not compatible.";
- errorMsg = errorStr.str();
- return false;
- } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pResolveAttachments, primaryColorCount, primaryRPCI->pAttachments,
- secondaryRPCI->pSubpasses[spIndex].pResolveAttachments, secondaryColorCount, secondaryRPCI->pAttachments)) {
- errorStr << "resolve attachments at index " << cIdx << " of subpass index " << spIndex << " are not compatible.";
- errorMsg = errorStr.str();
- return false;
- } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment, primaryColorCount, primaryRPCI->pAttachments,
- secondaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment, secondaryColorCount, secondaryRPCI->pAttachments)) {
- errorStr << "depth/stencil attachments at index " << cIdx << " of subpass index " << spIndex << " are not compatible.";
- errorMsg = errorStr.str();
- return false;
- }
- }
- uint32_t primaryInputCount = primaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
- uint32_t secondaryInputCount = secondaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
- uint32_t inputMax = std::max(primaryInputCount, secondaryInputCount);
- for (uint32_t i = 0; i < inputMax; ++i) {
- if (!attachment_references_compatible(i, primaryRPCI->pSubpasses[spIndex].pInputAttachments, primaryColorCount, primaryRPCI->pAttachments,
- secondaryRPCI->pSubpasses[spIndex].pInputAttachments, secondaryColorCount, secondaryRPCI->pAttachments)) {
- errorStr << "input attachments at index " << i << " of subpass index " << spIndex << " are not compatible.";
- errorMsg = errorStr.str();
- return false;
- }
- }
- }
- return true;
-}
-
- // For the given SET_NODE, verify that its Set is compatible w/ the setLayout corresponding to pipelineLayout[layoutIndex]
-static bool verify_set_layout_compatibility(layer_data* my_data, const SET_NODE* pSet, const VkPipelineLayout layout, const uint32_t layoutIndex, string& errorMsg)
-{
- stringstream errorStr;
- if (my_data->pipelineLayoutMap.find(layout) == my_data->pipelineLayoutMap.end()) {
- errorStr << "invalid VkPipelineLayout (" << layout << ")";
- errorMsg = errorStr.str();
- return false;
- }
- PIPELINE_LAYOUT_NODE pl = my_data->pipelineLayoutMap[layout];
- if (layoutIndex >= pl.descriptorSetLayouts.size()) {
- errorStr << "VkPipelineLayout (" << layout << ") only contains " << pl.descriptorSetLayouts.size() << " setLayouts corresponding to sets 0-" << pl.descriptorSetLayouts.size()-1 << ", but you're attempting to bind set to index " << layoutIndex;
- errorMsg = errorStr.str();
- return false;
- }
- // Get the specific setLayout from PipelineLayout that overlaps this set
- LAYOUT_NODE* pLayoutNode = my_data->descriptorSetLayoutMap[pl.descriptorSetLayouts[layoutIndex]];
- if (pLayoutNode->layout == pSet->pLayout->layout) { // trivial pass case
- return true;
- }
- size_t descriptorCount = pLayoutNode->descriptorTypes.size();
- if (descriptorCount != pSet->pLayout->descriptorTypes.size()) {
- errorStr << "setLayout " << layoutIndex << " from pipelineLayout " << layout << " has " << descriptorCount << " descriptors, but corresponding set being bound has " << pSet->pLayout->descriptorTypes.size() << " descriptors.";
- errorMsg = errorStr.str();
- return false; // trivial fail case
- }
- // Now need to check set against corresponding pipelineLayout to verify compatibility
- for (size_t i=0; i<descriptorCount; ++i) {
- // Need to verify that layouts are identically defined
- // TODO : Is below sufficient? Making sure that types & stageFlags match per descriptor
- // do we also need to check immutable samplers?
- if (pLayoutNode->descriptorTypes[i] != pSet->pLayout->descriptorTypes[i]) {
- errorStr << "descriptor " << i << " for descriptorSet being bound is type '" << string_VkDescriptorType(pSet->pLayout->descriptorTypes[i]) << "' but corresponding descriptor from pipelineLayout is type '" << string_VkDescriptorType(pLayoutNode->descriptorTypes[i]) << "'";
- errorMsg = errorStr.str();
- return false;
- }
- if (pLayoutNode->stageFlags[i] != pSet->pLayout->stageFlags[i]) {
- errorStr << "stageFlags " << i << " for descriptorSet being bound is " << pSet->pLayout->stageFlags[i] << "' but corresponding descriptor from pipelineLayout has stageFlags " << pLayoutNode->stageFlags[i];
- errorMsg = errorStr.str();
- return false;
- }
- }
- return true;
-}
-
-
-// Validate that data for each specialization entry is fully contained within the buffer.
-static VkBool32
-validate_specialization_offsets(layer_data *my_data, VkPipelineShaderStageCreateInfo const *info)
-{
- VkBool32 pass = VK_TRUE;
-
- VkSpecializationInfo const *spec = info->pSpecializationInfo;
-
- if (spec) {
- for (auto i = 0u; i < spec->mapEntryCount; i++) {
- if (spec->pMapEntries[i].offset + spec->pMapEntries[i].size > spec->dataSize) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- /*dev*/0, __LINE__, SHADER_CHECKER_BAD_SPECIALIZATION, "SC",
- "Specialization entry %u (for constant id %u) references memory outside provided "
- "specialization data (bytes %u.."
- PRINTF_SIZE_T_SPECIFIER "; " PRINTF_SIZE_T_SPECIFIER " bytes provided)",
- i, spec->pMapEntries[i].constantID,
- spec->pMapEntries[i].offset,
- spec->pMapEntries[i].offset + spec->pMapEntries[i].size - 1,
- spec->dataSize)) {
-
- pass = VK_FALSE;
- }
- }
- }
- }
-
- return pass;
-}
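The bounds test above computes `offset + size > dataSize`, which can wrap for adversarial values on builds where both operands are 32-bit. A common overflow-safe formulation compares the remaining space instead; `entry_in_bounds` below is a hypothetical sketch of that idiom, not the layer's actual fix:

```cpp
#include <cstddef>
#include <cstdint>

// True iff [offset, offset + size) lies fully inside a blob of dataSize bytes.
// Subtracting offset from dataSize (after checking offset <= dataSize) cannot
// wrap, unlike the naive offset + size comparison.
bool entry_in_bounds(uint32_t offset, size_t size, size_t dataSize) {
    return offset <= dataSize && size <= dataSize - offset;
}
```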
-
-
- // Validate the shaders used by the given pipeline.
- // As a side effect this function also records the sets that are actually used by the pipeline
-static VkBool32
-validate_pipeline_shaders(layer_data *my_data, VkDevice dev, PIPELINE_NODE* pPipeline)
-{
- VkGraphicsPipelineCreateInfo const *pCreateInfo = &pPipeline->graphicsPipelineCI;
- /* We seem to allow pipeline stages to be specified out of order, so collect and identify them
- * before trying to do anything more: */
- int vertex_stage = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT);
- int fragment_stage = get_shader_stage_id(VK_SHADER_STAGE_FRAGMENT_BIT);
-
- shader_module *shaders[5];
- memset(shaders, 0, sizeof(shaders));
- spirv_inst_iter entrypoints[5];
- memset(entrypoints, 0, sizeof(entrypoints));
- RENDER_PASS_NODE const *rp = 0;
- VkPipelineVertexInputStateCreateInfo const *vi = 0;
- VkBool32 pass = VK_TRUE;
-
- for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) {
- VkPipelineShaderStageCreateInfo const *pStage = &pCreateInfo->pStages[i];
- if (pStage->sType == VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO) {
-
- if ((pStage->stage & (VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_FRAGMENT_BIT
- | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT)) == 0) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_UNKNOWN_STAGE, "SC",
- "Unknown shader stage %d", pStage->stage)) {
- pass = VK_FALSE;
- }
- }
- else {
- pass = validate_specialization_offsets(my_data, pStage) && pass;
-
- auto stage_id = get_shader_stage_id(pStage->stage);
- shader_module *module = my_data->shaderModuleMap[pStage->module];
- shaders[stage_id] = module;
-
- /* find the entrypoint */
- entrypoints[stage_id] = find_entrypoint(module, pStage->pName, pStage->stage);
- if (entrypoints[stage_id] == module->end()) {
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__, SHADER_CHECKER_MISSING_ENTRYPOINT, "SC",
- "No entrypoint found named `%s` for stage %u", pStage->pName, pStage->stage)) {
- pass = VK_FALSE;
- }
- }
-
- /* mark accessible ids */
- std::unordered_set<uint32_t> accessible_ids;
- mark_accessible_ids(module, entrypoints[stage_id], accessible_ids);
-
- /* validate descriptor set layout against what the entrypoint actually uses */
- std::map<std::pair<unsigned, unsigned>, interface_var> descriptor_uses;
- collect_interface_by_descriptor_slot(my_data, dev, module, spv::StorageClassUniform,
- accessible_ids,
- descriptor_uses);
-
- auto layouts = pCreateInfo->layout != VK_NULL_HANDLE ?
- &(my_data->pipelineLayoutMap[pCreateInfo->layout].descriptorSetLayouts) : nullptr;
-
- for (auto it = descriptor_uses.begin(); it != descriptor_uses.end(); it++) {
- // As a side-effect of this function, capture which sets are used by the pipeline
- pPipeline->active_sets.insert(it->first.first);
-
- /* find the matching binding */
- auto found = has_descriptor_binding(my_data, layouts, it->first);
-
- if (!found) {
- char type_name[1024];
- describe_type(type_name, module, it->second.type_id);
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/0, __LINE__,
- SHADER_CHECKER_MISSING_DESCRIPTOR, "SC",
- "Shader uses descriptor slot %u.%u (used as type `%s`) but not declared in pipeline layout",
- it->first.first, it->first.second, type_name)) {
- pass = VK_FALSE;
- }
- }
- }
- }
- }
- }
-
- if (pCreateInfo->renderPass != VK_NULL_HANDLE)
- rp = my_data->renderPassMap[pCreateInfo->renderPass];
-
- vi = pCreateInfo->pVertexInputState;
-
- if (vi) {
- pass = validate_vi_consistency(my_data, dev, vi) && pass;
- }
-
- if (shaders[vertex_stage]) {
- pass = validate_vi_against_vs_inputs(my_data, dev, vi, shaders[vertex_stage], entrypoints[vertex_stage]) && pass;
- }
-
- /* TODO: enforce rules about present combinations of shaders */
- int producer = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT);
- int consumer = get_shader_stage_id(VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT);
-
- while (!shaders[producer] && producer != fragment_stage) {
- producer++;
- consumer++;
- }
-
- for (; producer != fragment_stage && consumer <= fragment_stage; consumer++) {
- assert(shaders[producer]);
- if (shaders[consumer]) {
- pass = validate_interface_between_stages(my_data, dev,
- shaders[producer], entrypoints[producer],
- shader_stage_attribs[producer].name,
- shaders[consumer], entrypoints[consumer],
- shader_stage_attribs[consumer].name,
- shader_stage_attribs[consumer].arrayed_input) && pass;
-
- producer = consumer;
- }
- }
-
- if (shaders[fragment_stage] && rp) {
- pass = validate_fs_outputs_against_render_pass(my_data, dev, shaders[fragment_stage],
- entrypoints[fragment_stage], rp, pCreateInfo->subpass) && pass;
- }
-
- return pass;
-}
-
-// Return Set node ptr for specified set or else NULL
-static SET_NODE* getSetNode(layer_data* my_data, const VkDescriptorSet set)
-{
- if (my_data->setMap.find(set) == my_data->setMap.end()) {
- return NULL;
- }
- return my_data->setMap[set];
-}
- // For the given command buffer, verify that for each set in activeSetNodes
- // any dynamic descriptor in that set has a valid dynamic offset bound.
- // To be valid, the dynamic offset combined with the offset and range from its
- // descriptor update must not overflow the size of its buffer being updated
-static VkBool32 validate_dynamic_offsets(layer_data* my_data, const GLOBAL_CB_NODE* pCB, const vector<SET_NODE*> activeSetNodes)
-{
- VkBool32 result = VK_FALSE;
-
- VkWriteDescriptorSet* pWDS = NULL;
- uint32_t dynOffsetIndex = 0;
- VkDeviceSize bufferSize = 0;
- for (auto set_node : activeSetNodes) {
- for (uint32_t i = 0; i < set_node->descriptorCount; ++i) {
- if (set_node->ppDescriptors[i] != NULL) {
- switch (set_node->ppDescriptors[i]->sType) {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- pWDS = (VkWriteDescriptorSet *)set_node->ppDescriptors[i];
- if ((pWDS->descriptorType ==
- VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
- (pWDS->descriptorType ==
- VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
- for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
- bufferSize =
- my_data->bufferMap[pWDS->pBufferInfo[j].buffer]
- .create_info->size;
- if (pWDS->pBufferInfo[j].range == VK_WHOLE_SIZE) {
- if ((pCB->dynamicOffsets[dynOffsetIndex] +
- pWDS->pBufferInfo[j].offset) > bufferSize) {
- result |= log_msg(
- my_data->report_data,
- VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
- (uint64_t)set_node->set, __LINE__,
- DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, "DS",
- "VkDescriptorSet (%#" PRIxLEAST64
- ") bound as set #%u has range of "
- "VK_WHOLE_SIZE but dynamic offset %u "
- "combined with offset %#" PRIxLEAST64
- " oversteps its buffer (%#" PRIxLEAST64
- ") which has a size of %#" PRIxLEAST64 ".",
- (uint64_t)set_node->set, i,
- pCB->dynamicOffsets[dynOffsetIndex],
- pWDS->pBufferInfo[j].offset,
- (uint64_t)pWDS->pBufferInfo[j].buffer,
- bufferSize);
- }
- } else if ((pCB->dynamicOffsets[dynOffsetIndex] +
- pWDS->pBufferInfo[j].offset +
- pWDS->pBufferInfo[j].range) > bufferSize) {
- result |= log_msg(
- my_data->report_data,
- VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
- (uint64_t)set_node->set, __LINE__,
- DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, "DS",
- "VkDescriptorSet (%#" PRIxLEAST64
- ") bound as set #%u has dynamic offset %u. "
- "Combined with offset %#" PRIxLEAST64
- " and range %#" PRIxLEAST64
- " from its update, this oversteps its buffer "
- "(%#" PRIxLEAST64
- ") which has a size of %#" PRIxLEAST64 ".",
- (uint64_t)set_node->set, i,
- pCB->dynamicOffsets[dynOffsetIndex],
- pWDS->pBufferInfo[j].offset,
- pWDS->pBufferInfo[j].range,
- (uint64_t)pWDS->pBufferInfo[j].buffer,
- bufferSize);
- }
- dynOffsetIndex++;
- i += j; // Advance i to end of this set of descriptors (++i at end of for loop will move 1 index past last of these descriptors)
- }
- }
- break;
- default: // Currently only shadowing Write update nodes so shouldn't get here
- assert(0);
- continue;
- }
- }
- }
- }
- return result;
-}
-
-// Validate overall state at the time of a draw call
-static VkBool32 validate_draw_state(layer_data* my_data, GLOBAL_CB_NODE* pCB, VkBool32 indexedDraw) {
- // First check flag states
- VkBool32 result = validate_draw_state_flags(my_data, pCB, indexedDraw);
- PIPELINE_NODE* pPipe = getPipeline(my_data, pCB->lastBoundPipeline);
- // Now complete other state checks
- // TODO : Currently only performing next check if *something* was bound (non-zero last bound)
- // There is probably a better way to gate when this check happens, and to know if something *should* have been bound
- // We should have that check separately and then gate this check based on that check
- if (pPipe) {
- if (pCB->lastBoundPipelineLayout) {
- string errorString;
- // Need a vector (vs. std::set) of active Sets for dynamicOffset validation in case same set bound w/ different offsets
- vector<SET_NODE*> activeSetNodes;
- for (auto setIndex : pPipe->active_sets) {
- // If valid set is not bound throw an error
- if ((pCB->boundDescriptorSets.size() <= setIndex) || (!pCB->boundDescriptorSets[setIndex])) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND, "DS",
- "VkPipeline %#" PRIxLEAST64 " uses set #%u but that set is not bound.", (uint64_t)pPipe->pipeline, setIndex);
- } else if (!verify_set_layout_compatibility(my_data, my_data->setMap[pCB->boundDescriptorSets[setIndex]], pPipe->graphicsPipelineCI.layout, setIndex, errorString)) {
- // Set is bound but not compatible w/ overlapping pipelineLayout from PSO
- VkDescriptorSet setHandle = my_data->setMap[pCB->boundDescriptorSets[setIndex]]->set;
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)setHandle, __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS",
- "VkDescriptorSet (%#" PRIxLEAST64 ") bound as set #%u is not compatible with overlapping VkPipelineLayout %#" PRIxLEAST64 " due to: %s",
- (uint64_t)setHandle, setIndex, (uint64_t)pPipe->graphicsPipelineCI.layout, errorString.c_str());
- } else { // Valid set is bound and layout compatible, validate that it's updated and verify any dynamic offsets
- // Pull the set node
- SET_NODE* pSet = my_data->setMap[pCB->boundDescriptorSets[setIndex]];
- // Save vector of all active sets to verify dynamicOffsets below
- activeSetNodes.push_back(pSet);
- // Make sure set has been updated
- if (!pSet->pUpdateStructs) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS",
- "DS %#" PRIxLEAST64 " bound but it was never updated. It is now being used to draw so this will result in undefined behavior.", (uint64_t) pSet->set);
- }
- }
- }
- // For each dynamic descriptor, make sure dynamic offset doesn't overstep buffer
- if (!pCB->dynamicOffsets.empty())
- result |= validate_dynamic_offsets(my_data, pCB, activeSetNodes);
- }
- // Verify Vtx binding
- if (pPipe->vtxBindingCount > 0) {
- VkPipelineVertexInputStateCreateInfo *vtxInCI = &pPipe->vertexInputCI;
- for (uint32_t i = 0; i < vtxInCI->vertexBindingDescriptionCount; i++) {
- if ((pCB->currentDrawData.buffers.size() < (i+1)) || (pCB->currentDrawData.buffers[i] == VK_NULL_HANDLE)) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS",
- "The Pipeline State Object (%#" PRIxLEAST64 ") expects that this Command Buffer's vertex binding Index %d should be set via vkCmdBindVertexBuffers.",
- (uint64_t)pCB->lastBoundPipeline, i);
- }
- }
- } else {
- if (!pCB->currentDrawData.buffers.empty()) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS,
- "DS", "Vertex buffers are bound to command buffer (%#" PRIxLEAST64 ") but no vertex buffers are attached to this Pipeline State Object (%#" PRIxLEAST64 ").",
- (uint64_t)pCB->commandBuffer, (uint64_t)pCB->lastBoundPipeline);
- }
- }
- // If Viewport or scissors are dynamic, verify that dynamic count matches PSO count
- VkBool32 dynViewport = isDynamic(pPipe, VK_DYNAMIC_STATE_VIEWPORT);
- VkBool32 dynScissor = isDynamic(pPipe, VK_DYNAMIC_STATE_SCISSOR);
- if (dynViewport) {
- if (pCB->viewports.size() != pPipe->graphicsPipelineCI.pViewportState->viewportCount) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Dynamic viewportCount from vkCmdSetViewport() is " PRINTF_SIZE_T_SPECIFIER ", but PSO viewportCount is %u. These counts must match.", pCB->viewports.size(), pPipe->graphicsPipelineCI.pViewportState->viewportCount);
- }
- }
- if (dynScissor) {
- if (pCB->scissors.size() != pPipe->graphicsPipelineCI.pViewportState->scissorCount) {
- result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Dynamic scissorCount from vkCmdSetScissor() is " PRINTF_SIZE_T_SPECIFIER ", but PSO scissorCount is %u. These counts must match.", pCB->scissors.size(), pPipe->graphicsPipelineCI.pViewportState->scissorCount);
- }
- }
- }
- return result;
-}
-
-// Verify that create state for a pipeline is valid
-static VkBool32 verifyPipelineCreateState(layer_data* my_data, const VkDevice device, PIPELINE_NODE* pPipeline)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (!validate_pipeline_shaders(my_data, device, pPipeline)) {
- skipCall = VK_TRUE;
- }
- // VS is required
- if (!(pPipeline->active_shaders & VK_SHADER_STAGE_VERTEX_BIT)) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: Vtx Shader required");
- }
- // Either both or neither TC/TE shaders should be defined
- if (((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) == 0) !=
- ((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) == 0) ) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: TE and TC shaders must be included or excluded as a pair");
- }
- // Compute shaders should be specified independent of Gfx shaders
- if ((pPipeline->active_shaders & VK_SHADER_STAGE_COMPUTE_BIT) &&
- (pPipeline->active_shaders & (VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT |
- VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT | VK_SHADER_STAGE_GEOMETRY_BIT |
- VK_SHADER_STAGE_FRAGMENT_BIT))) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: Do not specify Compute Shader for Gfx Pipeline");
- }
- // VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only valid for tessellation pipelines.
- // Mismatching primitive topology and tessellation fails graphics pipeline creation.
- if (pPipeline->active_shaders & (VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) &&
- (pPipeline->iaStateCI.topology != VK_PRIMITIVE_TOPOLOGY_PATCH_LIST)) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: VK_PRIMITIVE_TOPOLOGY_PATCH_LIST must be set as IA topology for tessellation pipelines");
- }
- if (pPipeline->iaStateCI.topology == VK_PRIMITIVE_TOPOLOGY_PATCH_LIST) {
- if (~pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only valid for tessellation pipelines");
- }
- if (!pPipeline->tessStateCI.patchControlPoints || (pPipeline->tessStateCI.patchControlPoints > 32)) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS",
- "Invalid Pipeline CreateInfo State: VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology used with patchControlPoints value %u."
- " patchControlPoints should be >0 and <=32.", pPipeline->tessStateCI.patchControlPoints);
- }
- }
- // Viewport state must be included and viewport and scissor counts should always match
- // NOTE : Even if these are flagged as dynamic, counts need to be set correctly for shader compiler
- if (!pPipeline->graphicsPipelineCI.pViewportState) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Gfx Pipeline pViewportState is null. Even if viewport and scissors are dynamic PSO must include viewportCount and scissorCount in pViewportState.");
- } else if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount != pPipeline->graphicsPipelineCI.pViewportState->viewportCount) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Gfx Pipeline viewport count (%u) must match scissor count (%u).", pPipeline->vpStateCI.viewportCount, pPipeline->vpStateCI.scissorCount);
- } else {
- // If viewport or scissor are not dynamic, then verify that data is appropriate for count
- VkBool32 dynViewport = isDynamic(pPipeline, VK_DYNAMIC_STATE_VIEWPORT);
- VkBool32 dynScissor = isDynamic(pPipeline, VK_DYNAMIC_STATE_SCISSOR);
- if (!dynViewport) {
- if (pPipeline->graphicsPipelineCI.pViewportState->viewportCount && !pPipeline->graphicsPipelineCI.pViewportState->pViewports) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Gfx Pipeline viewportCount is %u, but pViewports is NULL. For non-zero viewportCount, you must either include pViewports data, or include viewport in pDynamicState and set it with vkCmdSetViewport().", pPipeline->graphicsPipelineCI.pViewportState->viewportCount);
- }
- }
- if (!dynScissor) {
- if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount && !pPipeline->graphicsPipelineCI.pViewportState->pScissors) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
- "Gfx Pipeline scissorCount is %u, but pScissors is NULL. For non-zero scissorCount, you must either include pScissors data, or include scissor in pDynamicState and set it with vkCmdSetScissor().", pPipeline->graphicsPipelineCI.pViewportState->scissorCount);
- }
- }
- }
- return skipCall;
-}
-
-// Init the pipeline mapping info based on pipeline create info LL tree
- // Threading note : Calls to this function should be wrapped in a mutex
-// TODO : this should really just be in the constructor for PIPELINE_NODE
-static PIPELINE_NODE* initGraphicsPipeline(layer_data* dev_data, const VkGraphicsPipelineCreateInfo* pCreateInfo, PIPELINE_NODE* pBasePipeline)
-{
- PIPELINE_NODE* pPipeline = new PIPELINE_NODE;
-
- if (pBasePipeline) {
- *pPipeline = *pBasePipeline;
- }
-
- // First init create info
- memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(VkGraphicsPipelineCreateInfo));
-
- size_t bufferSize = 0;
- const VkPipelineVertexInputStateCreateInfo* pVICI = NULL;
- const VkPipelineColorBlendStateCreateInfo* pCBCI = NULL;
-
- for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) {
- const VkPipelineShaderStageCreateInfo *pPSSCI = &pCreateInfo->pStages[i];
-
- switch (pPSSCI->stage) {
- case VK_SHADER_STAGE_VERTEX_BIT:
- memcpy(&pPipeline->vsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
- pPipeline->active_shaders |= VK_SHADER_STAGE_VERTEX_BIT;
- break;
- case VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT:
- memcpy(&pPipeline->tcsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
- pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT;
- break;
- case VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT:
- memcpy(&pPipeline->tesCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
- pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT;
- break;
- case VK_SHADER_STAGE_GEOMETRY_BIT:
- memcpy(&pPipeline->gsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
- pPipeline->active_shaders |= VK_SHADER_STAGE_GEOMETRY_BIT;
- break;
- case VK_SHADER_STAGE_FRAGMENT_BIT:
- memcpy(&pPipeline->fsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo));
- pPipeline->active_shaders |= VK_SHADER_STAGE_FRAGMENT_BIT;
- break;
- case VK_SHADER_STAGE_COMPUTE_BIT:
- // TODO : Flag error, CS is specified through VkComputePipelineCreateInfo
- pPipeline->active_shaders |= VK_SHADER_STAGE_COMPUTE_BIT;
- break;
- default:
- // TODO : Flag error
- break;
- }
- }
- // Copy over GraphicsPipelineCreateInfo structure embedded pointers
- if (pCreateInfo->stageCount != 0) {
- pPipeline->graphicsPipelineCI.pStages = new VkPipelineShaderStageCreateInfo[pCreateInfo->stageCount];
- bufferSize = pCreateInfo->stageCount * sizeof(VkPipelineShaderStageCreateInfo);
- memcpy((void*)pPipeline->graphicsPipelineCI.pStages, pCreateInfo->pStages, bufferSize);
- }
- if (pCreateInfo->pVertexInputState != NULL) {
- memcpy((void*)&pPipeline->vertexInputCI, pCreateInfo->pVertexInputState , sizeof(VkPipelineVertexInputStateCreateInfo));
- // Copy embedded ptrs
- pVICI = pCreateInfo->pVertexInputState;
- pPipeline->vtxBindingCount = pVICI->vertexBindingDescriptionCount;
- if (pPipeline->vtxBindingCount) {
- pPipeline->pVertexBindingDescriptions = new VkVertexInputBindingDescription[pPipeline->vtxBindingCount];
- bufferSize = pPipeline->vtxBindingCount * sizeof(VkVertexInputBindingDescription);
- memcpy((void*)pPipeline->pVertexBindingDescriptions, pVICI->pVertexBindingDescriptions, bufferSize);
- }
- pPipeline->vtxAttributeCount = pVICI->vertexAttributeDescriptionCount;
- if (pPipeline->vtxAttributeCount) {
- pPipeline->pVertexAttributeDescriptions = new VkVertexInputAttributeDescription[pPipeline->vtxAttributeCount];
- bufferSize = pPipeline->vtxAttributeCount * sizeof(VkVertexInputAttributeDescription);
- memcpy((void*)pPipeline->pVertexAttributeDescriptions, pVICI->pVertexAttributeDescriptions, bufferSize);
- }
- pPipeline->graphicsPipelineCI.pVertexInputState = &pPipeline->vertexInputCI;
- }
- if (pCreateInfo->pInputAssemblyState != NULL) {
- memcpy((void*)&pPipeline->iaStateCI, pCreateInfo->pInputAssemblyState, sizeof(VkPipelineInputAssemblyStateCreateInfo));
- pPipeline->graphicsPipelineCI.pInputAssemblyState = &pPipeline->iaStateCI;
- }
- if (pCreateInfo->pTessellationState != NULL) {
- memcpy((void*)&pPipeline->tessStateCI, pCreateInfo->pTessellationState, sizeof(VkPipelineTessellationStateCreateInfo));
- pPipeline->graphicsPipelineCI.pTessellationState = &pPipeline->tessStateCI;
- }
- if (pCreateInfo->pViewportState != NULL) {
- memcpy((void*)&pPipeline->vpStateCI, pCreateInfo->pViewportState, sizeof(VkPipelineViewportStateCreateInfo));
- pPipeline->graphicsPipelineCI.pViewportState = &pPipeline->vpStateCI;
- }
- if (pCreateInfo->pRasterizationState != NULL) {
- memcpy((void*)&pPipeline->rsStateCI, pCreateInfo->pRasterizationState, sizeof(VkPipelineRasterizationStateCreateInfo));
- pPipeline->graphicsPipelineCI.pRasterizationState = &pPipeline->rsStateCI;
- }
- if (pCreateInfo->pMultisampleState != NULL) {
- memcpy((void*)&pPipeline->msStateCI, pCreateInfo->pMultisampleState, sizeof(VkPipelineMultisampleStateCreateInfo));
- pPipeline->graphicsPipelineCI.pMultisampleState = &pPipeline->msStateCI;
- }
- if (pCreateInfo->pDepthStencilState != NULL) {
- memcpy((void*)&pPipeline->dsStateCI, pCreateInfo->pDepthStencilState, sizeof(VkPipelineDepthStencilStateCreateInfo));
- pPipeline->graphicsPipelineCI.pDepthStencilState = &pPipeline->dsStateCI;
- }
- if (pCreateInfo->pColorBlendState != NULL) {
- memcpy((void*)&pPipeline->cbStateCI, pCreateInfo->pColorBlendState, sizeof(VkPipelineColorBlendStateCreateInfo));
- // Copy embedded ptrs
- pCBCI = pCreateInfo->pColorBlendState;
- pPipeline->attachmentCount = pCBCI->attachmentCount;
- if (pPipeline->attachmentCount) {
- pPipeline->pAttachments = new VkPipelineColorBlendAttachmentState[pPipeline->attachmentCount];
- bufferSize = pPipeline->attachmentCount * sizeof(VkPipelineColorBlendAttachmentState);
- memcpy((void*)pPipeline->pAttachments, pCBCI->pAttachments, bufferSize);
- }
- pPipeline->graphicsPipelineCI.pColorBlendState = &pPipeline->cbStateCI;
- }
- if (pCreateInfo->pDynamicState != NULL) {
- memcpy((void*)&pPipeline->dynStateCI, pCreateInfo->pDynamicState, sizeof(VkPipelineDynamicStateCreateInfo));
- if (pPipeline->dynStateCI.dynamicStateCount) {
- pPipeline->dynStateCI.pDynamicStates = new VkDynamicState[pPipeline->dynStateCI.dynamicStateCount];
- bufferSize = pPipeline->dynStateCI.dynamicStateCount * sizeof(VkDynamicState);
- memcpy((void*)pPipeline->dynStateCI.pDynamicStates, pCreateInfo->pDynamicState->pDynamicStates, bufferSize);
- }
- pPipeline->graphicsPipelineCI.pDynamicState = &pPipeline->dynStateCI;
- }
- pPipeline->active_sets.clear();
- return pPipeline;
-}
-
-// Free the Pipeline nodes
-static void deletePipelines(layer_data* my_data)
-{
-    if (my_data->pipelineMap.empty())
- return;
- for (auto ii=my_data->pipelineMap.begin(); ii!=my_data->pipelineMap.end(); ++ii) {
- if ((*ii).second->graphicsPipelineCI.stageCount != 0) {
- delete[] (*ii).second->graphicsPipelineCI.pStages;
- }
- if ((*ii).second->pVertexBindingDescriptions) {
- delete[] (*ii).second->pVertexBindingDescriptions;
- }
- if ((*ii).second->pVertexAttributeDescriptions) {
- delete[] (*ii).second->pVertexAttributeDescriptions;
- }
- if ((*ii).second->pAttachments) {
- delete[] (*ii).second->pAttachments;
- }
- if ((*ii).second->dynStateCI.dynamicStateCount != 0) {
- delete[] (*ii).second->dynStateCI.pDynamicStates;
- }
- delete (*ii).second;
- }
- my_data->pipelineMap.clear();
-}
-
- // For given pipeline, return number of MSAA samples, or VK_SAMPLE_COUNT_1_BIT if MSAA is disabled
-static VkSampleCountFlagBits getNumSamples(layer_data* my_data, const VkPipeline pipeline)
-{
- PIPELINE_NODE* pPipe = my_data->pipelineMap[pipeline];
- if (VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
- return pPipe->msStateCI.rasterizationSamples;
- }
- return VK_SAMPLE_COUNT_1_BIT;
-}
-
-// Validate state related to the PSO
-static VkBool32 validatePipelineState(layer_data* my_data, const GLOBAL_CB_NODE* pCB, const VkPipelineBindPoint pipelineBindPoint, const VkPipeline pipeline)
-{
- if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
- // Verify that any MSAA request in PSO matches sample# in bound FB
- VkSampleCountFlagBits psoNumSamples = getNumSamples(my_data, pipeline);
- if (pCB->activeRenderPass) {
- const VkRenderPassCreateInfo* pRPCI = my_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo;
- const VkSubpassDescription* pSD = &pRPCI->pSubpasses[pCB->activeSubpass];
- VkSampleCountFlagBits subpassNumSamples = (VkSampleCountFlagBits) 0;
- uint32_t i;
-
- for (i = 0; i < pSD->colorAttachmentCount; i++) {
- VkSampleCountFlagBits samples;
-
- if (pSD->pColorAttachments[i].attachment == VK_ATTACHMENT_UNUSED)
- continue;
-
- samples = pRPCI->pAttachments[pSD->pColorAttachments[i].attachment].samples;
- if (subpassNumSamples == (VkSampleCountFlagBits) 0) {
- subpassNumSamples = samples;
- } else if (subpassNumSamples != samples) {
- subpassNumSamples = (VkSampleCountFlagBits) -1;
- break;
- }
- }
- if (pSD->pDepthStencilAttachment && pSD->pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
- const VkSampleCountFlagBits samples = pRPCI->pAttachments[pSD->pDepthStencilAttachment->attachment].samples;
- if (subpassNumSamples == (VkSampleCountFlagBits) 0)
- subpassNumSamples = samples;
- else if (subpassNumSamples != samples)
- subpassNumSamples = (VkSampleCountFlagBits) -1;
- }
-
- if (psoNumSamples != subpassNumSamples) {
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, (uint64_t) pipeline, __LINE__, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS",
- "Num samples mismatch! Binding PSO (%#" PRIxLEAST64 ") with %u samples while current RenderPass (%#" PRIxLEAST64 ") w/ %u samples!",
- (uint64_t) pipeline, psoNumSamples, (uint64_t) pCB->activeRenderPass, subpassNumSamples);
- }
- } else {
- // TODO : I believe it's an error if we reach this point and don't have an activeRenderPass
- // Verify and flag error as appropriate
- }
- // TODO : Add more checks here
- } else {
- // TODO : Validate non-gfx pipeline updates
- }
- return VK_FALSE;
-}
-
-// Block of code at start here specifically for managing/tracking DSs
-
-// Return Pool node ptr for specified pool or else NULL
-static DESCRIPTOR_POOL_NODE* getPoolNode(layer_data* my_data, const VkDescriptorPool pool)
-{
- if (my_data->descriptorPoolMap.find(pool) == my_data->descriptorPoolMap.end()) {
- return NULL;
- }
- return my_data->descriptorPoolMap[pool];
-}
-
-static LAYOUT_NODE* getLayoutNode(layer_data* my_data, const VkDescriptorSetLayout layout) {
- if (my_data->descriptorSetLayoutMap.find(layout) == my_data->descriptorSetLayoutMap.end()) {
- return NULL;
- }
- return my_data->descriptorSetLayoutMap[layout];
-}
-
-// Return VK_FALSE if update struct is of valid type, otherwise flag error and return code from callback
-static VkBool32 validUpdateStruct(layer_data* my_data, const VkDevice device, const GENERIC_HEADER* pUpdateStruct)
-{
- switch (pUpdateStruct->sType)
- {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
- return VK_FALSE;
- default:
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
- "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType);
- }
-}
-
- // Return the descriptor count for the given update struct, or 0 if the struct type is unrecognized
-static uint32_t getUpdateCount(layer_data* my_data, const VkDevice device, const GENERIC_HEADER* pUpdateStruct)
-{
- switch (pUpdateStruct->sType)
- {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- return ((VkWriteDescriptorSet*)pUpdateStruct)->descriptorCount;
- case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
- // TODO : Need to understand this case better and make sure code is correct
- return ((VkCopyDescriptorSet*)pUpdateStruct)->descriptorCount;
- default:
- return 0;
- }
-}
-
-// For given Layout Node and binding, return index where that binding begins
-static uint32_t getBindingStartIndex(const LAYOUT_NODE* pLayout, const uint32_t binding)
-{
- uint32_t offsetIndex = 0;
- for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
- if (pLayout->createInfo.pBindings[i].binding == binding)
- break;
- offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount;
- }
- return offsetIndex;
-}
-
-// For given layout node and binding, return last index that is updated
-static uint32_t getBindingEndIndex(const LAYOUT_NODE* pLayout, const uint32_t binding)
-{
- uint32_t offsetIndex = 0;
- for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
- offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount;
- if (pLayout->createInfo.pBindings[i].binding == binding)
- break;
- }
- return offsetIndex-1;
-}
-
-// For given layout and update, return the first overall index of the layout that is updated
-static uint32_t getUpdateStartIndex(layer_data* my_data, const VkDevice device, const LAYOUT_NODE* pLayout, const uint32_t binding, const uint32_t arrayIndex, const GENERIC_HEADER* pUpdateStruct)
-{
- return getBindingStartIndex(pLayout, binding)+arrayIndex;
-}
-
-// For given layout and update, return the last overall index of the layout that is updated
-static uint32_t getUpdateEndIndex(layer_data* my_data, const VkDevice device, const LAYOUT_NODE* pLayout, const uint32_t binding, const uint32_t arrayIndex, const GENERIC_HEADER* pUpdateStruct)
-{
- uint32_t count = getUpdateCount(my_data, device, pUpdateStruct);
- return getBindingStartIndex(pLayout, binding)+arrayIndex+count-1;
-}
-
-// Verify that the descriptor type in the update struct matches what's expected by the layout
-static VkBool32 validateUpdateConsistency(layer_data* my_data, const VkDevice device, const LAYOUT_NODE* pLayout, const GENERIC_HEADER* pUpdateStruct, uint32_t startIndex, uint32_t endIndex)
-{
- // First get actual type of update
- VkBool32 skipCall = VK_FALSE;
- VkDescriptorType actualType;
- uint32_t i = 0;
- switch (pUpdateStruct->sType)
- {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- actualType = ((VkWriteDescriptorSet*)pUpdateStruct)->descriptorType;
- break;
-        case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
-            /* no need to validate */
-            return VK_FALSE;
- default:
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
- "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType);
- }
- if (VK_FALSE == skipCall) {
- // Set first stageFlags as reference and verify that all other updates match it
- VkShaderStageFlags refStageFlags = pLayout->stageFlags[startIndex];
- for (i = startIndex; i <= endIndex; i++) {
- if (pLayout->descriptorTypes[i] != actualType) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS",
- "Write descriptor update has descriptor type %s that does not match overlapping binding descriptor type of %s!",
- string_VkDescriptorType(actualType), string_VkDescriptorType(pLayout->descriptorTypes[i]));
- }
- if (pLayout->stageFlags[i] != refStageFlags) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH, "DS",
- "Write descriptor update has stageFlags %x that do not match overlapping binding descriptor stageFlags of %x!",
- refStageFlags, pLayout->stageFlags[i]);
- }
- }
- }
- return skipCall;
-}
-
-// Determine the update type, allocate a new struct of that type, shadow the given pUpdate
-// struct into the pNewNode param. Return VK_TRUE if error condition encountered and callback signals early exit.
-// NOTE : Calls to this function should be wrapped in mutex
-static VkBool32 shadowUpdateNode(layer_data* my_data, const VkDevice device, GENERIC_HEADER* pUpdate, GENERIC_HEADER** pNewNode)
-{
- VkBool32 skipCall = VK_FALSE;
- VkWriteDescriptorSet* pWDS = NULL;
- VkCopyDescriptorSet* pCDS = NULL;
- switch (pUpdate->sType)
- {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- pWDS = new VkWriteDescriptorSet;
- *pNewNode = (GENERIC_HEADER*)pWDS;
- memcpy(pWDS, pUpdate, sizeof(VkWriteDescriptorSet));
-
- switch (pWDS->descriptorType) {
- case VK_DESCRIPTOR_TYPE_SAMPLER:
- case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
- case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
- case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
- {
- VkDescriptorImageInfo *info = new VkDescriptorImageInfo[pWDS->descriptorCount];
- memcpy(info, pWDS->pImageInfo, pWDS->descriptorCount * sizeof(VkDescriptorImageInfo));
- pWDS->pImageInfo = info;
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER:
- {
- VkBufferView *info = new VkBufferView[pWDS->descriptorCount];
- memcpy(info, pWDS->pTexelBufferView, pWDS->descriptorCount * sizeof(VkBufferView));
- pWDS->pTexelBufferView = info;
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC:
- {
- VkDescriptorBufferInfo *info = new VkDescriptorBufferInfo[pWDS->descriptorCount];
- memcpy(info, pWDS->pBufferInfo, pWDS->descriptorCount * sizeof(VkDescriptorBufferInfo));
- pWDS->pBufferInfo = info;
- }
- break;
-        default:
-            return VK_TRUE; // this function returns VkBool32; VK_ERROR_VALIDATION_FAILED_EXT is a VkResult
- }
- break;
- case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
- pCDS = new VkCopyDescriptorSet;
- *pNewNode = (GENERIC_HEADER*)pCDS;
- memcpy(pCDS, pUpdate, sizeof(VkCopyDescriptorSet));
- break;
- default:
- if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS",
- "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdate->sType), pUpdate->sType))
- return VK_TRUE;
- }
- // Make sure that pNext for the end of shadow copy is NULL
- (*pNewNode)->pNext = NULL;
- return skipCall;
-}
-
-// Verify that given sampler is valid
-static VkBool32 validateSampler(const layer_data* my_data, const VkSampler* pSampler, const VkBool32 immutable)
-{
- VkBool32 skipCall = VK_FALSE;
- auto sampIt = my_data->sampleMap.find(*pSampler);
- if (sampIt == my_data->sampleMap.end()) {
- if (!immutable) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t) *pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor with invalid sampler %#" PRIxLEAST64, (uint64_t) *pSampler);
- } else { // immutable
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t) *pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor whose binding has an invalid immutable sampler %#" PRIxLEAST64, (uint64_t) *pSampler);
- }
- } else {
- // TODO : Any further checks we want to do on the sampler?
- }
- return skipCall;
-}
-
-// Set the layout on the global level
-void SetLayout(layer_data *my_data, ImageSubresourcePair imgpair,
- const VkImageLayout &layout) {
- VkImage &image = imgpair.image;
- // TODO (mlentine): Maybe set format if new? Not used atm.
- my_data->imageLayoutMap[imgpair].layout = layout;
- // TODO (mlentine): Maybe make vector a set?
- auto subresource =
- std::find(my_data->imageSubresourceMap[image].begin(),
- my_data->imageSubresourceMap[image].end(), imgpair);
- if (subresource == my_data->imageSubresourceMap[image].end()) {
- my_data->imageSubresourceMap[image].push_back(imgpair);
- }
-}
-
-void SetLayout(layer_data *my_data, VkImage image,
- const VkImageLayout &layout) {
- ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
- SetLayout(my_data, imgpair, layout);
-}
-
-void SetLayout(layer_data *my_data, VkImage image, VkImageSubresource range,
- const VkImageLayout &layout) {
- ImageSubresourcePair imgpair = {image, true, range};
- SetLayout(my_data, imgpair, layout);
-}
-
-// Set the layout on the cmdbuf level
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, ImageSubresourcePair imgpair,
- const IMAGE_CMD_BUF_NODE &node) {
- pCB->imageLayoutMap[imgpair] = node;
- // TODO (mlentine): Maybe make vector a set?
- auto subresource =
- std::find(pCB->imageSubresourceMap[image].begin(),
- pCB->imageSubresourceMap[image].end(), imgpair);
- if (subresource == pCB->imageSubresourceMap[image].end()) {
- pCB->imageSubresourceMap[image].push_back(imgpair);
- }
-}
-
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, ImageSubresourcePair imgpair,
- const VkImageLayout &layout) {
- pCB->imageLayoutMap[imgpair].layout = layout;
- // TODO (mlentine): Maybe make vector a set?
- assert(std::find(pCB->imageSubresourceMap[image].begin(),
- pCB->imageSubresourceMap[image].end(),
- imgpair) != pCB->imageSubresourceMap[image].end());
-}
-
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image,
- const IMAGE_CMD_BUF_NODE &node) {
- ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
- SetLayout(pCB, image, imgpair, node);
-}
-
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range,
- const IMAGE_CMD_BUF_NODE &node) {
- ImageSubresourcePair imgpair = {image, true, range};
- SetLayout(pCB, image, imgpair, node);
-}
-
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image,
- const VkImageLayout &layout) {
- ImageSubresourcePair imgpair = {image, false, VkImageSubresource()};
- SetLayout(pCB, image, imgpair, layout);
-}
-
-void SetLayout(GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range,
- const VkImageLayout &layout) {
- ImageSubresourcePair imgpair = {image, true, range};
- SetLayout(pCB, image, imgpair, layout);
-}
-
-void SetLayout(const layer_data *dev_data, GLOBAL_CB_NODE *pCB,
- VkImageView imageView, const VkImageLayout &layout) {
- auto image_view_data = dev_data->imageViewMap.find(imageView);
- assert(image_view_data != dev_data->imageViewMap.end());
- const VkImage &image = image_view_data->second->image;
- const VkImageSubresourceRange &subRange =
- image_view_data->second->subresourceRange;
- // TODO: Do not iterate over every possibility - consolidate where possible
- for (uint32_t j = 0; j < subRange.levelCount; j++) {
- uint32_t level = subRange.baseMipLevel + j;
- for (uint32_t k = 0; k < subRange.layerCount; k++) {
- uint32_t layer = subRange.baseArrayLayer + k;
- VkImageSubresource sub = {subRange.aspectMask, level, layer};
- SetLayout(pCB, image, sub, layout);
- }
- }
-}
-
-// find layout(s) on the cmd buf level
-bool FindLayout(const GLOBAL_CB_NODE *pCB, VkImage image,
- VkImageSubresource range, IMAGE_CMD_BUF_NODE &node) {
- ImageSubresourcePair imgpair = {image, true, range};
- auto imgsubIt = pCB->imageLayoutMap.find(imgpair);
- if (imgsubIt == pCB->imageLayoutMap.end()) {
- imgpair = {image, false, VkImageSubresource()};
- imgsubIt = pCB->imageLayoutMap.find(imgpair);
- if (imgsubIt == pCB->imageLayoutMap.end())
- return false;
- }
- node = imgsubIt->second;
- return true;
-}
-
-// find layout(s) on the global level
-bool FindLayout(const layer_data *my_data, ImageSubresourcePair imgpair,
- VkImageLayout &layout) {
- auto imgsubIt = my_data->imageLayoutMap.find(imgpair);
- if (imgsubIt == my_data->imageLayoutMap.end()) {
- imgpair = {imgpair.image, false, VkImageSubresource()};
- imgsubIt = my_data->imageLayoutMap.find(imgpair);
-        if (imgsubIt == my_data->imageLayoutMap.end())
-            return false;
- }
- layout = imgsubIt->second.layout;
- return true;
-}
-
-bool FindLayout(const layer_data *my_data, VkImage image,
- VkImageSubresource range, VkImageLayout &layout) {
- ImageSubresourcePair imgpair = {image, true, range};
- return FindLayout(my_data, imgpair, layout);
-}
-
-bool FindLayouts(const layer_data *my_data, VkImage image,
- std::vector<VkImageLayout> &layouts) {
- auto sub_data = my_data->imageSubresourceMap.find(image);
- if (sub_data == my_data->imageSubresourceMap.end())
- return false;
- auto imgIt = my_data->imageMap.find(image);
- if (imgIt == my_data->imageMap.end())
- return false;
- bool ignoreGlobal = false;
-    // TODO: Make this robust for >1 aspect mask. For now it just ignores
-    // potential errors in this case.
- if (sub_data->second.size() >=
- (imgIt->second->arrayLayers * imgIt->second->mipLevels + 1)) {
- ignoreGlobal = true;
- }
- for (auto imgsubpair : sub_data->second) {
- if (ignoreGlobal && !imgsubpair.hasSubresource)
- continue;
- auto img_data = my_data->imageLayoutMap.find(imgsubpair);
- if (img_data != my_data->imageLayoutMap.end()) {
- layouts.push_back(img_data->second.layout);
- }
- }
- return true;
-}
-
-// Verify that given imageView is valid
-static VkBool32 validateImageView(const layer_data* my_data, const VkImageView* pImageView, const VkImageLayout imageLayout)
-{
- VkBool32 skipCall = VK_FALSE;
- auto ivIt = my_data->imageViewMap.find(*pImageView);
- if (ivIt == my_data->imageViewMap.end()) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor with invalid imageView %#" PRIxLEAST64, (uint64_t) *pImageView);
- } else {
- // Validate that imageLayout is compatible with aspectMask and image format
- VkImageAspectFlags aspectMask = ivIt->second->subresourceRange.aspectMask;
- VkImage image = ivIt->second->image;
- // TODO : Check here in case we have a bad image
- auto imgIt = my_data->imageMap.find(image);
- if (imgIt == my_data->imageMap.end()) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t) image, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor with invalid image %#" PRIxLEAST64 " in imageView %#" PRIxLEAST64, (uint64_t) image, (uint64_t) *pImageView);
- } else {
- VkFormat format = (*imgIt).second->format;
- VkBool32 ds = vk_format_is_depth_or_stencil(format);
- switch (imageLayout) {
- case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
-                // Color aspect bit must be set
- if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != VK_IMAGE_ASPECT_COLOR_BIT) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__,
- DRAWSTATE_INVALID_IMAGE_ASPECT, "DS", "vkUpdateDescriptorSets: Updating descriptor with layout VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and imageView %#" PRIxLEAST64 ""
- " that does not have VK_IMAGE_ASPECT_COLOR_BIT set.", (uint64_t) *pImageView);
- }
- // format must NOT be DS
- if (ds) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__,
- DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS", "vkUpdateDescriptorSets: Updating descriptor with layout VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and imageView %#" PRIxLEAST64 ""
- " but the image format is %s which is not a color format.", (uint64_t) *pImageView, string_VkFormat(format));
- }
- break;
- case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
- case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
- // Depth or stencil bit must be set, but both must NOT be set
- if (aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) {
- if (aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) {
- // both must NOT be set
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__,
- DRAWSTATE_INVALID_IMAGE_ASPECT, "DS", "vkUpdateDescriptorSets: Updating descriptor with imageView %#" PRIxLEAST64 ""
- " that has both STENCIL and DEPTH aspects set", (uint64_t) *pImageView);
- }
- } else if (!(aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT)) {
- // Neither were set
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__,
- DRAWSTATE_INVALID_IMAGE_ASPECT, "DS", "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64 ""
- " that does not have STENCIL or DEPTH aspect set.", string_VkImageLayout(imageLayout), (uint64_t) *pImageView);
- }
- // format must be DS
- if (!ds) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t) *pImageView, __LINE__,
- DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS", "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64 ""
- " but the image format is %s which is not a depth/stencil format.", string_VkImageLayout(imageLayout), (uint64_t) *pImageView, string_VkFormat(format));
- }
- break;
- default:
- // anything to check for other layouts?
- break;
- }
- }
- }
- return skipCall;
-}
-
-// Verify that given bufferView is valid
-static VkBool32 validateBufferView(const layer_data* my_data, const VkBufferView* pBufferView)
-{
- VkBool32 skipCall = VK_FALSE;
-    auto bvIt = my_data->bufferViewMap.find(*pBufferView);
-    if (bvIt == my_data->bufferViewMap.end()) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT, (uint64_t) *pBufferView, __LINE__, DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor with invalid bufferView %#" PRIxLEAST64, (uint64_t) *pBufferView);
- } else {
- // TODO : Any further checks we want to do on the bufferView?
- }
- return skipCall;
-}
-
-// Verify that given bufferInfo is valid
-static VkBool32 validateBufferInfo(const layer_data* my_data, const VkDescriptorBufferInfo* pBufferInfo)
-{
- VkBool32 skipCall = VK_FALSE;
-    auto bufferIt = my_data->bufferMap.find(pBufferInfo->buffer);
-    if (bufferIt == my_data->bufferMap.end()) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, (uint64_t) pBufferInfo->buffer, __LINE__, DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, "DS",
- "vkUpdateDescriptorSets: Attempt to update descriptor where bufferInfo has invalid buffer %#" PRIxLEAST64, (uint64_t) pBufferInfo->buffer);
- } else {
-        // TODO : Any further checks we want to do on the buffer?
- }
- return skipCall;
-}
-
-static VkBool32 validateUpdateContents(const layer_data* my_data, const VkWriteDescriptorSet *pWDS, const VkDescriptorSetLayoutBinding* pLayoutBinding)
-{
- VkBool32 skipCall = VK_FALSE;
- // First verify that for the given Descriptor type, the correct DescriptorInfo data is supplied
- const VkSampler* pSampler = NULL;
- VkBool32 immutable = VK_FALSE;
- uint32_t i = 0;
- // For given update type, verify that update contents are correct
- switch (pWDS->descriptorType) {
- case VK_DESCRIPTOR_TYPE_SAMPLER:
- for (i=0; i<pWDS->descriptorCount; ++i) {
- skipCall |= validateSampler(my_data, &(pWDS->pImageInfo[i].sampler), immutable);
- }
- break;
- case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
- for (i=0; i<pWDS->descriptorCount; ++i) {
- if (NULL == pLayoutBinding->pImmutableSamplers) {
- pSampler = &(pWDS->pImageInfo[i].sampler);
- if (immutable) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t) *pSampler, __LINE__, DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
- "vkUpdateDescriptorSets: Update #%u is not an immutable sampler %#" PRIxLEAST64 ", but previous update(s) from this "
- "VkWriteDescriptorSet struct used an immutable sampler. All updates from a single struct must either "
- "use immutable or non-immutable samplers.", i, (uint64_t) *pSampler);
- }
- } else {
- if (i>0 && !immutable) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t) *pSampler, __LINE__, DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
- "vkUpdateDescriptorSets: Update #%u is an immutable sampler, but previous update(s) from this "
- "VkWriteDescriptorSet struct used a non-immutable sampler. All updates from a single struct must either "
- "use immutable or non-immutable samplers.", i);
- }
- immutable = VK_TRUE;
- pSampler = &(pLayoutBinding->pImmutableSamplers[i]);
- }
- skipCall |= validateSampler(my_data, pSampler, immutable);
- }
- // Intentionally fall through here to also validate image stuff
- case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
- case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
- case VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT:
- for (i=0; i<pWDS->descriptorCount; ++i) {
- skipCall |= validateImageView(my_data, &(pWDS->pImageInfo[i].imageView), pWDS->pImageInfo[i].imageLayout);
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER:
- for (i=0; i<pWDS->descriptorCount; ++i) {
- skipCall |= validateBufferView(my_data, &(pWDS->pTexelBufferView[i]));
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC:
- for (i=0; i<pWDS->descriptorCount; ++i) {
- skipCall |= validateBufferInfo(my_data, &(pWDS->pBufferInfo[i]));
- }
- break;
- default:
- break;
- }
- return skipCall;
-}
-// Validate that given set is valid and that it's not being used by an in-flight CmdBuffer
-// func_str is the name of the calling function
-// Return VK_FALSE if no errors occur
-// Return VK_TRUE if validation error occurs and callback returns VK_TRUE (to skip upcoming API call down the chain)
-VkBool32 validateIdleDescriptorSet(const layer_data* my_data, VkDescriptorSet set, std::string func_str) {
- VkBool32 skip_call = VK_FALSE;
- auto set_node = my_data->setMap.find(set);
- if (set_node == my_data->setMap.end()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(set), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
- "Cannot call %s() on descriptor set %" PRIxLEAST64 " that has not been allocated.", func_str.c_str(), (uint64_t)(set));
- } else {
- if (set_node->second->in_use.load()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(set), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS",
- "Cannot call %s() on descriptor set %" PRIxLEAST64 " that is in use by a command buffer.", func_str.c_str(), (uint64_t)(set));
- }
- }
- return skip_call;
-}
-static void invalidateBoundCmdBuffers(layer_data* dev_data, const SET_NODE* pSet)
-{
- // Flag any CBs this set is bound to as INVALID
- for (auto cb : pSet->boundCmdBuffers) {
- auto cb_node = dev_data->commandBufferMap.find(cb);
- if (cb_node != dev_data->commandBufferMap.end()) {
- cb_node->second->state = CB_INVALID;
- }
- }
-}
-// update DS mappings based on write and copy update arrays
-static VkBool32 dsUpdate(layer_data* my_data, VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pWDS, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pCDS)
-{
- VkBool32 skipCall = VK_FALSE;
-
- LAYOUT_NODE* pLayout = NULL;
- VkDescriptorSetLayoutCreateInfo* pLayoutCI = NULL;
- // Validate Write updates
- uint32_t i = 0;
- for (i=0; i < descriptorWriteCount; i++) {
- VkDescriptorSet ds = pWDS[i].dstSet;
- SET_NODE* pSet = my_data->setMap[ds];
- // Set being updated cannot be in-flight
-        if ((skipCall = validateIdleDescriptorSet(my_data, ds, "vkUpdateDescriptorSets")) == VK_TRUE)
- return skipCall;
- // If set is bound to any cmdBuffers, mark them invalid
- invalidateBoundCmdBuffers(my_data, pSet);
- GENERIC_HEADER* pUpdate = (GENERIC_HEADER*) &pWDS[i];
- pLayout = pSet->pLayout;
- // First verify valid update struct
- if ((skipCall = validUpdateStruct(my_data, device, pUpdate)) == VK_TRUE) {
- break;
- }
- uint32_t binding = 0, endIndex = 0;
- binding = pWDS[i].dstBinding;
- auto bindingToIndex = pLayout->bindingToIndexMap.find(binding);
- // Make sure that layout being updated has the binding being updated
- if (bindingToIndex == pLayout->bindingToIndexMap.end()) {
- skipCall |= log_msg(
- my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds),
- __LINE__, DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
- "Descriptor Set %" PRIu64 " does not have binding to match "
- "update binding %u for update type "
- "%s!",
- (uint64_t)(ds), binding,
- string_VkStructureType(pUpdate->sType));
- } else {
- // Next verify that update falls within size of given binding
- endIndex = getUpdateEndIndex(my_data, device, pLayout, binding, pWDS[i].dstArrayElement, pUpdate);
- if (getBindingEndIndex(pLayout, binding) < endIndex) {
- pLayoutCI = &pLayout->createInfo;
- string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds), __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
- "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!", string_VkStructureType(pUpdate->sType), binding, DSstr.c_str());
- } else { // TODO : should we skip update on a type mismatch or force it?
- uint32_t startIndex;
- startIndex =
- getUpdateStartIndex(my_data, device, pLayout, binding,
- pWDS[i].dstArrayElement, pUpdate);
- // Layout bindings match w/ update, now verify that update type
- // & stageFlags are the same for entire update
- if ((skipCall = validateUpdateConsistency(
- my_data, device, pLayout, pUpdate, startIndex,
- endIndex)) == VK_FALSE) {
- // The update is within bounds and consistent, but need to
- // make sure contents make sense as well
- if ((skipCall = validateUpdateContents(
- my_data, &pWDS[i],
- &pLayout->createInfo.pBindings[bindingToIndex->second])) ==
- VK_FALSE) {
- // Update is good. Save the update info
- // Create new update struct for this set's shadow copy
- GENERIC_HEADER* pNewNode = NULL;
- skipCall |= shadowUpdateNode(my_data, device, pUpdate, &pNewNode);
- if (NULL == pNewNode) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds), __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
-                                        "Out of memory while attempting to allocate UPDATE struct in vkUpdateDescriptorSets()");
- } else {
- // Insert shadow node into LL of updates for this set
- pNewNode->pNext = pSet->pUpdateStructs;
- pSet->pUpdateStructs = pNewNode;
- // Now update appropriate descriptor(s) to point to new Update node
- for (uint32_t j = startIndex; j <= endIndex; j++) {
- assert(j<pSet->descriptorCount);
- pSet->ppDescriptors[j] = pNewNode;
- }
- }
- }
- }
- }
- }
- }
- // Now validate copy updates
- for (i=0; i < descriptorCopyCount; ++i) {
- SET_NODE *pSrcSet = NULL, *pDstSet = NULL;
- LAYOUT_NODE *pSrcLayout = NULL, *pDstLayout = NULL;
- uint32_t srcStartIndex = 0, srcEndIndex = 0, dstStartIndex = 0, dstEndIndex = 0;
- // For each copy make sure that update falls within given layout and that types match
- pSrcSet = my_data->setMap[pCDS[i].srcSet];
- pDstSet = my_data->setMap[pCDS[i].dstSet];
- // Set being updated cannot be in-flight
-        if ((skipCall = validateIdleDescriptorSet(my_data, pDstSet->set, "vkUpdateDescriptorSets")) == VK_TRUE)
- return skipCall;
- invalidateBoundCmdBuffers(my_data, pDstSet);
- pSrcLayout = pSrcSet->pLayout;
- pDstLayout = pDstSet->pLayout;
- // Validate that src binding is valid for src set layout
- if (pSrcLayout->bindingToIndexMap.find(pCDS[i].srcBinding) ==
- pSrcLayout->bindingToIndexMap.end()) {
- skipCall |=
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
- (uint64_t)pSrcSet->set, __LINE__,
- DRAWSTATE_INVALID_UPDATE_INDEX,
- "DS", "Copy descriptor update %u has srcBinding %u "
- "which is out of bounds for underlying SetLayout "
- "%#" PRIxLEAST64 " which only has bindings 0-%u.",
- i, pCDS[i].srcBinding, (uint64_t)pSrcLayout->layout,
- pSrcLayout->createInfo.bindingCount - 1);
- } else if (pDstLayout->bindingToIndexMap.find(pCDS[i].dstBinding) ==
- pDstLayout->bindingToIndexMap.end()) {
- skipCall |=
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
- (uint64_t)pDstSet->set, __LINE__,
- DRAWSTATE_INVALID_UPDATE_INDEX,
- "DS", "Copy descriptor update %u has dstBinding %u "
- "which is out of bounds for underlying SetLayout "
- "%#" PRIxLEAST64 " which only has bindings 0-%u.",
- i, pCDS[i].dstBinding, (uint64_t)pDstLayout->layout,
- pDstLayout->createInfo.bindingCount - 1);
- } else {
- // Proceed with validation. Bindings are ok, but make sure update is within bounds of given layout
- srcEndIndex = getUpdateEndIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement, (const GENERIC_HEADER*)&(pCDS[i]));
- dstEndIndex = getUpdateEndIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement, (const GENERIC_HEADER*)&(pCDS[i]));
- if (getBindingEndIndex(pSrcLayout, pCDS[i].srcBinding) < srcEndIndex) {
- pLayoutCI = &pSrcLayout->createInfo;
- string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pSrcSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
- "Copy descriptor src update is out of bounds for matching binding %u in Layout w/ CI:\n%s!", pCDS[i].srcBinding, DSstr.c_str());
- } else if (getBindingEndIndex(pDstLayout, pCDS[i].dstBinding) < dstEndIndex) {
- pLayoutCI = &pDstLayout->createInfo;
- string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDstSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
- "Copy descriptor dest update is out of bounds for matching binding %u in Layout w/ CI:\n%s!", pCDS[i].dstBinding, DSstr.c_str());
- } else {
- srcStartIndex = getUpdateStartIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement, (const GENERIC_HEADER*)&(pCDS[i]));
- dstStartIndex = getUpdateStartIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement, (const GENERIC_HEADER*)&(pCDS[i]));
- for (uint32_t j=0; j<pCDS[i].descriptorCount; ++j) {
- // For copy just make sure that the types match and then perform the update
- if (pSrcLayout->descriptorTypes[srcStartIndex+j] != pDstLayout->descriptorTypes[dstStartIndex+j]) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS",
- "Copy descriptor update index %u, update count #%u, has src update descriptor type %s that does not match overlapping dest descriptor type of %s!",
- i, j+1, string_VkDescriptorType(pSrcLayout->descriptorTypes[srcStartIndex+j]), string_VkDescriptorType(pDstLayout->descriptorTypes[dstStartIndex+j]));
- } else {
- // point dst descriptor at corresponding src descriptor
- // TODO : This may be a hole. I believe copy should be its own copy,
- // otherwise a subsequent write update to src will incorrectly affect the copy
- pDstSet->ppDescriptors[j+dstStartIndex] = pSrcSet->ppDescriptors[j+srcStartIndex];
- }
- }
- }
- }
- }
- return skipCall;
-}
-
-// Verify that given pool has enough descriptors available for the requested allocation,
-// and decrement the pool's available counts for each type consumed
-static VkBool32 validate_descriptor_availability_in_pool(layer_data* dev_data, DESCRIPTOR_POOL_NODE* pPoolNode, uint32_t count, const VkDescriptorSetLayout* pSetLayouts)
-{
- VkBool32 skipCall = VK_FALSE;
- uint32_t i = 0, j = 0;
- for (i=0; i<count; ++i) {
- LAYOUT_NODE* pLayout = getLayoutNode(dev_data, pSetLayouts[i]);
- if (NULL == pLayout) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t) pSetLayouts[i], __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS",
- "Unable to find set layout node for layout %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call", (uint64_t) pSetLayouts[i]);
- } else {
- uint32_t typeIndex = 0, poolSizeCount = 0;
- for (j=0; j<pLayout->createInfo.bindingCount; ++j) {
- typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType);
- poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount;
- if (poolSizeCount > pPoolNode->availableDescriptorTypeCount[typeIndex]) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t) pLayout->layout, __LINE__, DRAWSTATE_DESCRIPTOR_POOL_EMPTY, "DS",
- "Unable to allocate %u descriptors of type %s from pool %#" PRIxLEAST64 ". This pool only has %u descriptors of this type remaining.",
- poolSizeCount, string_VkDescriptorType(pLayout->createInfo.pBindings[j].descriptorType), (uint64_t) pPoolNode->pool, pPoolNode->availableDescriptorTypeCount[typeIndex]);
- } else { // Decrement available descriptors of this type
- pPoolNode->availableDescriptorTypeCount[typeIndex] -= poolSizeCount;
- }
- }
- }
- }
- return skipCall;
-}
-
-// Free the shadowed update node for this Set
-// NOTE : Calls to this function should be wrapped in mutex
-static void freeShadowUpdateTree(SET_NODE* pSet)
-{
- GENERIC_HEADER* pShadowUpdate = pSet->pUpdateStructs;
- pSet->pUpdateStructs = NULL;
- GENERIC_HEADER* pFreeUpdate = pShadowUpdate;
- // Clear the descriptor mappings as they will now be invalid
- memset(pSet->ppDescriptors, 0, pSet->descriptorCount*sizeof(GENERIC_HEADER*));
- while(pShadowUpdate) {
- pFreeUpdate = pShadowUpdate;
- pShadowUpdate = (GENERIC_HEADER*)pShadowUpdate->pNext;
- VkWriteDescriptorSet * pWDS = NULL;
- switch (pFreeUpdate->sType)
- {
- case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
- pWDS = (VkWriteDescriptorSet*)pFreeUpdate;
- switch (pWDS->descriptorType) {
- case VK_DESCRIPTOR_TYPE_SAMPLER:
- case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
- case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
- case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
- {
- delete[] pWDS->pImageInfo;
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER:
- {
- delete[] pWDS->pTexelBufferView;
- }
- break;
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
- case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
- case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC:
- {
- delete[] pWDS->pBufferInfo;
- }
- break;
- default:
- break;
- }
- break;
- case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
- break;
- default:
- assert(0);
- break;
- }
- delete pFreeUpdate;
- }
-}
-
-// Free all DS Pools including their Sets & related sub-structs
-// NOTE : Calls to this function should be wrapped in mutex
-static void deletePools(layer_data* my_data)
-{
-    if (my_data->descriptorPoolMap.empty())
- return;
- for (auto ii=my_data->descriptorPoolMap.begin(); ii!=my_data->descriptorPoolMap.end(); ++ii) {
- SET_NODE* pSet = (*ii).second->pSets;
- SET_NODE* pFreeSet = pSet;
- while (pSet) {
- pFreeSet = pSet;
- pSet = pSet->pNext;
- // Freeing layouts handled in deleteLayouts() function
- // Free Update shadow struct tree
- freeShadowUpdateTree(pFreeSet);
- if (pFreeSet->ppDescriptors) {
- delete[] pFreeSet->ppDescriptors;
- }
- delete pFreeSet;
- }
- delete (*ii).second;
- }
- my_data->descriptorPoolMap.clear();
-}
-
-// WARN : Once deleteLayouts() has been called, any layout ptrs in Pool/Set data structures will be invalid
-// NOTE : Calls to this function should be wrapped in mutex
-static void deleteLayouts(layer_data* my_data)
-{
-    if (my_data->descriptorSetLayoutMap.empty())
- return;
- for (auto ii=my_data->descriptorSetLayoutMap.begin(); ii!=my_data->descriptorSetLayoutMap.end(); ++ii) {
- LAYOUT_NODE* pLayout = (*ii).second;
- if (pLayout->createInfo.pBindings) {
- for (uint32_t i=0; i<pLayout->createInfo.bindingCount; i++) {
- if (pLayout->createInfo.pBindings[i].pImmutableSamplers)
- delete[] pLayout->createInfo.pBindings[i].pImmutableSamplers;
- }
- delete[] pLayout->createInfo.pBindings;
- }
- delete pLayout;
- }
- my_data->descriptorSetLayoutMap.clear();
-}
-
-// Currently clearing a set is removing all previous updates to that set
-// TODO : Validate if this is correct clearing behavior
-static void clearDescriptorSet(layer_data* my_data, VkDescriptorSet set)
-{
- SET_NODE* pSet = getSetNode(my_data, set);
- if (!pSet) {
- // TODO : Return error
- } else {
- freeShadowUpdateTree(pSet);
- }
-}
-
-static void clearDescriptorPool(layer_data* my_data, const VkDevice device, const VkDescriptorPool pool, VkDescriptorPoolResetFlags flags)
-{
- DESCRIPTOR_POOL_NODE* pPool = getPoolNode(my_data, pool);
- if (!pPool) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, (uint64_t) pool, __LINE__, DRAWSTATE_INVALID_POOL, "DS",
- "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkResetDescriptorPool() call", (uint64_t) pool);
- } else {
- // TODO: validate flags
- // For every set off of this pool, clear it
- SET_NODE* pSet = pPool->pSets;
- while (pSet) {
- clearDescriptorSet(my_data, pSet->set);
- pSet = pSet->pNext;
- }
- // Reset available count to max count for this pool
- for (uint32_t i=0; i<pPool->availableDescriptorTypeCount.size(); ++i) {
- pPool->availableDescriptorTypeCount[i] = pPool->maxDescriptorTypeCount[i];
- }
- }
-}
-
-// For given CB object, fetch associated CB Node from map
-static GLOBAL_CB_NODE* getCBNode(layer_data* my_data, const VkCommandBuffer cb)
-{
- if (my_data->commandBufferMap.count(cb) == 0) {
- // TODO : How to pass cb as srcObj here?
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "Attempt to use CommandBuffer %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(cb));
- return NULL;
- }
- return my_data->commandBufferMap[cb];
-}
-
-// Free all CB Nodes
-// NOTE : Calls to this function should be wrapped in mutex
-static void deleteCommandBuffers(layer_data* my_data)
-{
-    if (my_data->commandBufferMap.empty()) {
- return;
- }
- for (auto ii=my_data->commandBufferMap.begin(); ii!=my_data->commandBufferMap.end(); ++ii) {
- delete (*ii).second;
- }
- my_data->commandBufferMap.clear();
-}
-
-static VkBool32 report_error_no_cb_begin(const layer_data* dev_data, const VkCommandBuffer cb, const char* caller_name)
-{
- return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)cb, __LINE__, DRAWSTATE_NO_BEGIN_COMMAND_BUFFER, "DS",
- "You must call vkBeginCommandBuffer() before this call to %s", caller_name);
-}
-
-VkBool32 validateCmdsInCmdBuffer(const layer_data* dev_data, const GLOBAL_CB_NODE* pCB, const CMD_TYPE cmd_type) {
- if (!pCB->activeRenderPass) return VK_FALSE;
- VkBool32 skip_call = VK_FALSE;
- if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS && cmd_type != CMD_EXECUTECOMMANDS) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Commands cannot be called in a subpass using secondary command buffers.");
- } else if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_INLINE && cmd_type == CMD_EXECUTECOMMANDS) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() cannot be called in a subpass using inline commands.");
- }
- return skip_call;
-}
-
-static bool checkGraphicsBit(const layer_data* my_data, VkQueueFlags flags, const char* name) {
- if (!(flags & VK_QUEUE_GRAPHICS_BIT))
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Cannot call %s on a command buffer allocated from a pool without graphics capabilities.", name);
- return false;
-}
-
-static bool checkComputeBit(const layer_data* my_data, VkQueueFlags flags, const char* name) {
- if (!(flags & VK_QUEUE_COMPUTE_BIT))
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Cannot call %s on a command buffer allocated from a pool without compute capabilities.", name);
- return false;
-}
-
-static bool checkGraphicsOrComputeBit(const layer_data* my_data, VkQueueFlags flags, const char* name) {
- if (!((flags & VK_QUEUE_GRAPHICS_BIT) || (flags & VK_QUEUE_COMPUTE_BIT)))
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
-            DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Cannot call %s on a command buffer allocated from a pool without graphics or compute capabilities.", name);
- return false;
-}
-
-// Add specified CMD to the CmdBuffer in given pCB, flagging errors if CB is not
-// in the recording state or if there's an issue with the Cmd ordering
-static VkBool32 addCmd(const layer_data* my_data, GLOBAL_CB_NODE* pCB, const CMD_TYPE cmd, const char* caller_name)
-{
- VkBool32 skipCall = VK_FALSE;
- auto pool_data = my_data->commandPoolMap.find(pCB->createInfo.commandPool);
- if (pool_data != my_data->commandPoolMap.end()) {
- VkQueueFlags flags = my_data->physDevProperties.queue_family_properties[pool_data->second.queueFamilyIndex].queueFlags;
- switch (cmd)
- {
- case CMD_BINDPIPELINE:
- case CMD_BINDPIPELINEDELTA:
- case CMD_BINDDESCRIPTORSETS:
- case CMD_FILLBUFFER:
- case CMD_CLEARCOLORIMAGE:
- case CMD_SETEVENT:
- case CMD_RESETEVENT:
- case CMD_WAITEVENTS:
- case CMD_BEGINQUERY:
- case CMD_ENDQUERY:
- case CMD_RESETQUERYPOOL:
- case CMD_COPYQUERYPOOLRESULTS:
- case CMD_WRITETIMESTAMP:
- skipCall |= checkGraphicsOrComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
- break;
- case CMD_SETVIEWPORTSTATE:
- case CMD_SETSCISSORSTATE:
- case CMD_SETLINEWIDTHSTATE:
- case CMD_SETDEPTHBIASSTATE:
- case CMD_SETBLENDSTATE:
- case CMD_SETDEPTHBOUNDSSTATE:
- case CMD_SETSTENCILREADMASKSTATE:
- case CMD_SETSTENCILWRITEMASKSTATE:
- case CMD_SETSTENCILREFERENCESTATE:
- case CMD_BINDINDEXBUFFER:
- case CMD_BINDVERTEXBUFFER:
- case CMD_DRAW:
- case CMD_DRAWINDEXED:
- case CMD_DRAWINDIRECT:
- case CMD_DRAWINDEXEDINDIRECT:
- case CMD_BLITIMAGE:
- case CMD_CLEARATTACHMENTS:
- case CMD_CLEARDEPTHSTENCILIMAGE:
- case CMD_RESOLVEIMAGE:
- case CMD_BEGINRENDERPASS:
- case CMD_NEXTSUBPASS:
- case CMD_ENDRENDERPASS:
- skipCall |= checkGraphicsBit(my_data, flags, cmdTypeToString(cmd).c_str());
- break;
- case CMD_DISPATCH:
- case CMD_DISPATCHINDIRECT:
- skipCall |= checkComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
- break;
- case CMD_COPYBUFFER:
- case CMD_COPYIMAGE:
- case CMD_COPYBUFFERTOIMAGE:
- case CMD_COPYIMAGETOBUFFER:
- case CMD_CLONEIMAGEDATA:
- case CMD_UPDATEBUFFER:
- case CMD_PIPELINEBARRIER:
- case CMD_EXECUTECOMMANDS:
- break;
- default:
- break;
- }
- }
-    if (pCB->state != CB_RECORDING) {
-        skipCall |= report_error_no_cb_begin(my_data, pCB->commandBuffer, caller_name);
-    } else {
-        skipCall |= validateCmdsInCmdBuffer(my_data, pCB, cmd);
-        CMD_NODE cmdNode = {};
-        // init cmd node and append to end of cmd list
-        cmdNode.cmdNumber = ++pCB->numCmds;
-        cmdNode.type = cmd;
-        pCB->cmds.push_back(cmdNode);
-    }
- return skipCall;
-}
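The capability checks removed above map each command type to the queue flags its pool's queue family must advertise (graphics-only, compute-only, either, or any). A minimal standalone sketch of that dispatch, with hypothetical enum and function names standing in for the layer's internals:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical, simplified mirror of the queue-capability check: each
// command class requires certain bits in the pool's queue-family flags.
enum QueueFlagBits : uint32_t {
    QUEUE_GRAPHICS_BIT = 0x1,
    QUEUE_COMPUTE_BIT  = 0x2,
    QUEUE_TRANSFER_BIT = 0x4,
};

enum class CmdClass { Draw, Dispatch, Query, Copy };

// Returns true when the command may be recorded on a pool whose
// queue family advertises `flags`.
bool commandAllowed(CmdClass cmd, uint32_t flags) {
    switch (cmd) {
    case CmdClass::Draw:     return (flags & QUEUE_GRAPHICS_BIT) != 0;
    case CmdClass::Dispatch: return (flags & QUEUE_COMPUTE_BIT) != 0;
    case CmdClass::Query:    return (flags & (QUEUE_GRAPHICS_BIT | QUEUE_COMPUTE_BIT)) != 0;
    case CmdClass::Copy:     return true; // transfer commands are valid on any queue
    }
    return false;
}
```

This is the same shape as the `checkGraphicsBit`/`checkComputeBit`/`checkGraphicsOrComputeBit` trio, collapsed into one table-like switch.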
-// Reset the command buffer state
-// Maintain the createInfo and set state to CB_NEW, but clear all other state
-static void resetCB(layer_data* my_data, const VkCommandBuffer cb)
-{
- GLOBAL_CB_NODE* pCB = my_data->commandBufferMap[cb];
- if (pCB) {
- pCB->cmds.clear();
- // Reset CB state (note that createInfo is not cleared)
- pCB->commandBuffer = cb;
- memset(&pCB->beginInfo, 0, sizeof(VkCommandBufferBeginInfo));
- memset(&pCB->inheritanceInfo, 0, sizeof(VkCommandBufferInheritanceInfo));
- pCB->fence = 0;
- pCB->numCmds = 0;
- memset(pCB->drawCount, 0, NUM_DRAW_TYPES * sizeof(uint64_t));
- pCB->state = CB_NEW;
- pCB->submitCount = 0;
- pCB->status = 0;
- pCB->lastBoundPipeline = 0;
- pCB->lastVtxBinding = 0;
- pCB->boundVtxBuffers.clear();
- pCB->viewports.clear();
- pCB->scissors.clear();
- pCB->lineWidth = 0;
- pCB->depthBiasConstantFactor = 0;
- pCB->depthBiasClamp = 0;
- pCB->depthBiasSlopeFactor = 0;
- memset(pCB->blendConstants, 0, 4 * sizeof(float));
- pCB->minDepthBounds = 0;
- pCB->maxDepthBounds = 0;
- memset(&pCB->front, 0, sizeof(stencil_data));
- memset(&pCB->back, 0, sizeof(stencil_data));
- pCB->lastBoundDescriptorSet = 0;
- pCB->lastBoundPipelineLayout = 0;
- memset(&pCB->activeRenderPassBeginInfo, 0, sizeof(pCB->activeRenderPassBeginInfo));
- pCB->activeRenderPass = 0;
- pCB->activeSubpassContents = VK_SUBPASS_CONTENTS_INLINE;
- pCB->activeSubpass = 0;
- pCB->framebuffer = 0;
- // Before clearing uniqueBoundSets, remove this CB off of its boundCBs
- for (auto set : pCB->uniqueBoundSets) {
- auto set_node = my_data->setMap.find(set);
- if (set_node != my_data->setMap.end()) {
- set_node->second->boundCmdBuffers.erase(pCB->commandBuffer);
- }
- }
- pCB->uniqueBoundSets.clear();
- pCB->destroyedSets.clear();
- pCB->updatedSets.clear();
- pCB->boundDescriptorSets.clear();
- pCB->waitedEvents.clear();
- pCB->semaphores.clear();
- pCB->events.clear();
- pCB->waitedEventsBeforeQueryReset.clear();
- pCB->queryToStateMap.clear();
- pCB->activeQueries.clear();
- pCB->startedQueries.clear();
- pCB->imageLayoutMap.clear();
- pCB->eventToStageMap.clear();
- pCB->drawData.clear();
- pCB->currentDrawData.buffers.clear();
- pCB->primaryCommandBuffer = VK_NULL_HANDLE;
- pCB->secondaryCommandBuffers.clear();
- pCB->dynamicOffsets.clear();
- }
-}
-
-// Set PSO-related status bits for CB, including dynamic state set via PSO
-static void set_cb_pso_status(GLOBAL_CB_NODE* pCB, const PIPELINE_NODE* pPipe)
-{
- for (uint32_t i = 0; i < pPipe->cbStateCI.attachmentCount; i++) {
- if (0 != pPipe->pAttachments[i].colorWriteMask) {
- pCB->status |= CBSTATUS_COLOR_BLEND_WRITE_ENABLE;
- }
- }
- if (pPipe->dsStateCI.depthWriteEnable) {
- pCB->status |= CBSTATUS_DEPTH_WRITE_ENABLE;
- }
- if (pPipe->dsStateCI.stencilTestEnable) {
- pCB->status |= CBSTATUS_STENCIL_TEST_ENABLE;
- }
- // Account for any dynamic state not set via this PSO
- if (!pPipe->dynStateCI.dynamicStateCount) { // All state is static
- pCB->status = CBSTATUS_ALL;
- } else {
- // First consider all state on
- // Then unset any state that's noted as dynamic in PSO
- // Finally OR that into CB statemask
- CBStatusFlags psoDynStateMask = CBSTATUS_ALL;
- for (uint32_t i=0; i < pPipe->dynStateCI.dynamicStateCount; i++) {
- switch (pPipe->dynStateCI.pDynamicStates[i]) {
- case VK_DYNAMIC_STATE_VIEWPORT:
- psoDynStateMask &= ~CBSTATUS_VIEWPORT_SET;
- break;
- case VK_DYNAMIC_STATE_SCISSOR:
- psoDynStateMask &= ~CBSTATUS_SCISSOR_SET;
- break;
- case VK_DYNAMIC_STATE_LINE_WIDTH:
- psoDynStateMask &= ~CBSTATUS_LINE_WIDTH_SET;
- break;
- case VK_DYNAMIC_STATE_DEPTH_BIAS:
- psoDynStateMask &= ~CBSTATUS_DEPTH_BIAS_SET;
- break;
- case VK_DYNAMIC_STATE_BLEND_CONSTANTS:
- psoDynStateMask &= ~CBSTATUS_BLEND_SET;
- break;
- case VK_DYNAMIC_STATE_DEPTH_BOUNDS:
- psoDynStateMask &= ~CBSTATUS_DEPTH_BOUNDS_SET;
- break;
- case VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK:
- psoDynStateMask &= ~CBSTATUS_STENCIL_READ_MASK_SET;
- break;
- case VK_DYNAMIC_STATE_STENCIL_WRITE_MASK:
- psoDynStateMask &= ~CBSTATUS_STENCIL_WRITE_MASK_SET;
- break;
- case VK_DYNAMIC_STATE_STENCIL_REFERENCE:
- psoDynStateMask &= ~CBSTATUS_STENCIL_REFERENCE_SET;
- break;
- default:
- // TODO : Flag error here
- break;
- }
- }
- pCB->status |= psoDynStateMask;
- }
-}
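The dynamic-state bookkeeping in `set_cb_pso_status` uses a three-step mask trick: assume the PSO statically sets every state, clear the bits the PSO declares dynamic, then OR the remainder into the command buffer's status mask. A sketch of just that trick, with invented flag names and values:

```cpp
#include <cassert>
#include <cstdint>
#include <initializer_list>

// Illustrative status bits (values are made up for this sketch).
constexpr uint32_t STATUS_VIEWPORT_SET   = 0x1;
constexpr uint32_t STATUS_SCISSOR_SET    = 0x2;
constexpr uint32_t STATUS_LINE_WIDTH_SET = 0x4;
constexpr uint32_t STATUS_ALL            = 0x7;

// Start from "everything set", strip bits declared dynamic by the PSO
// (those must still be set by the app at record time), then merge into
// the command buffer's current status mask.
uint32_t applyPsoStatus(uint32_t cbStatus, std::initializer_list<uint32_t> dynamicStates) {
    uint32_t psoMask = STATUS_ALL;
    for (uint32_t s : dynamicStates)
        psoMask &= ~s;
    return cbStatus | psoMask;
}
```

With no dynamic states the result is `STATUS_ALL` immediately, which is why the real function short-circuits on `dynamicStateCount == 0`.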
-
-// Print the last bound Gfx Pipeline
-static VkBool32 printPipeline(layer_data* my_data, const VkCommandBuffer cb)
-{
- VkBool32 skipCall = VK_FALSE;
- GLOBAL_CB_NODE* pCB = getCBNode(my_data, cb);
- if (pCB) {
- PIPELINE_NODE *pPipeTrav = getPipeline(my_data, pCB->lastBoundPipeline);
-        if (pPipeTrav) {
-            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
-                    "%s", vk_print_vkgraphicspipelinecreateinfo(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str());
-        }
- }
- return skipCall;
-}
-
-// Print details of DS config to stdout
-static VkBool32 printDSConfig(layer_data* my_data, const VkCommandBuffer cb)
-{
- VkBool32 skipCall = VK_FALSE;
- GLOBAL_CB_NODE* pCB = getCBNode(my_data, cb);
- if (pCB && pCB->lastBoundDescriptorSet) {
- SET_NODE* pSet = getSetNode(my_data, pCB->lastBoundDescriptorSet);
- DESCRIPTOR_POOL_NODE* pPool = getPoolNode(my_data, pSet->pool);
- // Print out pool details
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "Details for pool %#" PRIxLEAST64 ".", (uint64_t) pPool->pool);
- string poolStr = vk_print_vkdescriptorpoolcreateinfo(&pPool->createInfo, " ");
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "%s", poolStr.c_str());
- // Print out set details
-        char prefix[32]; // large enough for the sprintf'd "[L%u]"/"[UC]" prefixes below
- uint32_t index = 0;
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "Details for descriptor set %#" PRIxLEAST64 ".", (uint64_t) pSet->set);
- LAYOUT_NODE* pLayout = pSet->pLayout;
- // Print layout details
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "Layout #%u, (object %#" PRIxLEAST64 ") for DS %#" PRIxLEAST64 ".", index+1, (uint64_t)(pLayout->layout), (uint64_t)(pSet->set));
- sprintf(prefix, " [L%u] ", index);
-        string DSLstr = vk_print_vkdescriptorsetlayoutcreateinfo(&pLayout->createInfo, prefix);
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "%s", DSLstr.c_str());
- index++;
- GENERIC_HEADER* pUpdate = pSet->pUpdateStructs;
- if (pUpdate) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "Update Chain [UC] for descriptor set %#" PRIxLEAST64 ":", (uint64_t) pSet->set);
- sprintf(prefix, " [UC] ");
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "%s", dynamic_display(pUpdate, prefix).c_str());
- // TODO : If there is a "view" associated with this update, print CI for that view
- } else {
- if (0 != pSet->descriptorCount) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "No Update Chain for descriptor set %#" PRIxLEAST64 " which has %u descriptors (vkUpdateDescriptors has not been called)", (uint64_t) pSet->set, pSet->descriptorCount);
- } else {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "FYI: No descriptors in descriptor set %#" PRIxLEAST64 ".", (uint64_t) pSet->set);
- }
- }
- }
- return skipCall;
-}
-
-static void printCB(layer_data* my_data, const VkCommandBuffer cb)
-{
- GLOBAL_CB_NODE* pCB = getCBNode(my_data, cb);
- if (pCB && pCB->cmds.size() > 0) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "Cmds in CB %p", (void*)cb);
- vector<CMD_NODE> cmds = pCB->cmds;
- for (auto ii=cmds.begin(); ii!=cmds.end(); ++ii) {
- // TODO : Need to pass cb as srcObj here
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
- " CMD#%" PRIu64 ": %s", (*ii).cmdNumber, cmdTypeToString((*ii).type).c_str());
- }
-    }
-}
-
-static VkBool32 synchAndPrintDSConfig(layer_data* my_data, const VkCommandBuffer cb)
-{
- VkBool32 skipCall = VK_FALSE;
- if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
- return skipCall;
- }
- skipCall |= printDSConfig(my_data, cb);
- skipCall |= printPipeline(my_data, cb);
- return skipCall;
-}
-
-// Flags validation error if the associated call is made inside a render pass. The apiName
-// routine should ONLY be called outside a render pass.
-static VkBool32 insideRenderPass(const layer_data* my_data, GLOBAL_CB_NODE *pCB, const char *apiName)
-{
- VkBool32 inside = VK_FALSE;
- if (pCB->activeRenderPass) {
- inside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
- "%s: It is invalid to issue this call inside an active render pass (%#" PRIxLEAST64 ")",
- apiName, (uint64_t) pCB->activeRenderPass);
- }
- return inside;
-}
-
-// Flags validation error if the associated call is made outside a render pass. The apiName
-// routine should ONLY be called inside a render pass.
-static VkBool32 outsideRenderPass(const layer_data* my_data, GLOBAL_CB_NODE *pCB, const char *apiName)
-{
- VkBool32 outside = VK_FALSE;
- if (((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_PRIMARY) &&
- (!pCB->activeRenderPass)) ||
- ((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_SECONDARY) &&
- (!pCB->activeRenderPass) &&
- !(pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT))) {
- outside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_NO_ACTIVE_RENDERPASS, "DS",
- "%s: This call must be issued inside an active render pass.", apiName);
- }
- return outside;
-}
-
-static void init_draw_state(layer_data *my_data, const VkAllocationCallbacks *pAllocator)
-{
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *option_str;
- VkDebugReportCallbackEXT callback;
- // initialize DrawState options
- report_flags = getLayerOptionFlags("DrawStateReportFlags", 0);
- getLayerOptionEnum("DrawStateDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- option_str = getLayerOption("DrawStateLogFilename");
- log_output = getLayerLogOutput(option_str, "DrawState");
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = log_callback;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
-
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = win32_debug_output_msg;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
-
- if (!globalLockInitialized)
- {
- loader_platform_thread_create_mutex(&globalLock);
- globalLockInitialized = 1;
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance)
-{
- VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
-
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
- if (fpCreateInstance == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
- if (result != VK_SUCCESS)
- return result;
-
- // TBD: Need any locking this early, in case this function is called at the
- // same time by more than one thread?
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
- my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
- layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
-
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
-
- init_draw_state(my_data, pAllocator);
-
- return result;
-}
-
-/* hook DestroyInstance to remove the layer_data_map entry */
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator)
-{
- // TODOSC : Shouldn't need any customization here
- dispatch_key key = get_dispatch_key(instance);
- // TBD: Need any locking this early, in case this function is called at the
- // same time by more than one thread?
- layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- pTable->DestroyInstance(instance, pAllocator);
-
- loader_platform_thread_lock_mutex(&globalLock);
-
- // Clean up logging callback, if any
- while (my_data->logging_callback.size() > 0) {
- VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
- layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
- my_data->logging_callback.pop_back();
- }
-
- layer_debug_report_destroy_instance(my_data->report_data);
- delete my_data->instance_dispatch_table;
- layer_data_map.erase(key);
- // TODO : Potential race here with separate threads creating/destroying instance
- if (layer_data_map.empty()) {
- // Release mutex when destroying last instance.
- loader_platform_thread_unlock_mutex(&globalLock);
- loader_platform_thread_delete_mutex(&globalLock);
- globalLockInitialized = 0;
- } else {
- loader_platform_thread_unlock_mutex(&globalLock);
- }
-}
-
-static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo, VkDevice device)
-{
- uint32_t i;
- // TBD: Need any locking, in case this function is called at the same time
- // by more than one thread?
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- dev_data->device_extensions.debug_marker_enabled = false;
- dev_data->device_extensions.wsi_enabled = false;
-
- VkLayerDispatchTable *pDisp = dev_data->device_dispatch_table;
- PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
-
- pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR) gpa(device, "vkCreateSwapchainKHR");
- pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR) gpa(device, "vkDestroySwapchainKHR");
- pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR) gpa(device, "vkGetSwapchainImagesKHR");
- pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR) gpa(device, "vkAcquireNextImageKHR");
- pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR) gpa(device, "vkQueuePresentKHR");
-
- for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
- if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0) {
- dev_data->device_extensions.wsi_enabled = true;
- }
- if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], DEBUG_MARKER_EXTENSION_NAME) == 0) {
- /* Found a matching extension name, mark it enabled and init dispatch table*/
- dev_data->device_extensions.debug_marker_enabled = true;
- initDebugMarkerTable(device);
-
- }
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice)
-{
- VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
-
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
- if (fpCreateDevice == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
- if (result != VK_SUCCESS) {
- return result;
- }
-
- loader_platform_thread_lock_mutex(&globalLock);
- layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
-
- // Setup device dispatch table
- my_device_data->device_dispatch_table = new VkLayerDispatchTable;
- layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
-
- my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
- createDeviceRegisterExtensions(pCreateInfo, *pDevice);
- // Get physical device limits for this device
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &(my_device_data->physDevProperties.properties));
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceFeatures(gpu, &(my_device_data->physDevProperties.features));
- uint32_t count;
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
- my_device_data->physDevProperties.queue_family_properties.resize(count);
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(gpu, &count, &my_device_data->physDevProperties.queue_family_properties[0]);
- // TODO: device limits should make sure these are compatible
- if (pCreateInfo->pEnabledFeatures) {
- my_device_data->physDevProperties.features =
- *pCreateInfo->pEnabledFeatures;
- } else {
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceFeatures(
- gpu, &my_device_data->physDevProperties.features);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-// Forward declaration
-static void deleteRenderPasses(layer_data*);
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator)
-{
- // TODOSC : Shouldn't need any customization here
- dispatch_key key = get_dispatch_key(device);
- layer_data* dev_data = get_my_data_ptr(key, layer_data_map);
- // Free all the memory
- loader_platform_thread_lock_mutex(&globalLock);
- deletePipelines(dev_data);
- deleteRenderPasses(dev_data);
- deleteCommandBuffers(dev_data);
- deletePools(dev_data);
- deleteLayouts(dev_data);
- dev_data->imageViewMap.clear();
- dev_data->imageMap.clear();
- dev_data->bufferViewMap.clear();
- dev_data->bufferMap.clear();
- loader_platform_thread_unlock_mutex(&globalLock);
-
- dev_data->device_dispatch_table->DestroyDevice(device, pAllocator);
- tableDebugMarkerMap.erase(key);
- delete dev_data->device_dispatch_table;
- layer_data_map.erase(key);
-}
-
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
-    return util_GetExtensionProperties(ARRAY_SIZE(instance_extensions), instance_extensions, pCount, pProperties);
-}
-
-static const VkLayerProperties ds_global_layers[] = {
- {
- "VK_LAYER_LUNARG_draw_state",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties* pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(ds_global_layers),
- ds_global_layers,
- pCount, pProperties);
-}
-
-static const VkExtensionProperties ds_device_extensions[] = {
- {
- DEBUG_MARKER_EXTENSION_NAME,
- 1,
- }
-};
-
-static const VkLayerProperties ds_device_layers[] = {
- {
- "VK_LAYER_LUNARG_draw_state",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
- VkPhysicalDevice physicalDevice,
- const char* pLayerName,
- uint32_t* pCount,
- VkExtensionProperties* pProperties)
-{
- if (pLayerName == NULL) {
- dispatch_key key = get_dispatch_key(physicalDevice);
- layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(
- physicalDevice,
- NULL,
- pCount,
- pProperties);
- } else {
- return util_GetExtensionProperties(ARRAY_SIZE(ds_device_extensions),
- ds_device_extensions,
- pCount, pProperties);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pCount,
- VkLayerProperties* pProperties)
-{
- /* DrawState physical device layers are the same as global */
- return util_GetLayerProperties(ARRAY_SIZE(ds_device_layers), ds_device_layers,
- pCount, pProperties);
-}
-
-// Validates that the initial layout specified in the command buffer for
-// the IMAGE is the same as the global IMAGE layout
-VkBool32 ValidateCmdBufImageLayouts(VkCommandBuffer cmdBuffer) {
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- for (auto cb_image_data : pCB->imageLayoutMap) {
- VkImageLayout imageLayout;
- if (!FindLayout(dev_data, cb_image_data.first, imageLayout)) {
- skip_call |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Cannot submit cmd buffer using deleted image %" PRIu64 ".",
- reinterpret_cast<const uint64_t &>(cb_image_data.first));
- } else {
- if (imageLayout != cb_image_data.second.initialLayout) {
- skip_call |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Cannot submit cmd buffer using image with layout %s when "
- "first use is %s.",
- string_VkImageLayout(imageLayout),
- string_VkImageLayout(cb_image_data.second.initialLayout));
- }
- SetLayout(dev_data, cb_image_data.first, cb_image_data.second.layout);
- }
- }
- return skip_call;
-}
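The submit-time layout check above walks the command buffer's per-image layout map, flags any image whose globally tracked layout no longer matches the layout the CB expected on entry, and then commits the CB's final layout back to the global map. A simplified sketch of that reconciliation, with invented types in place of `VkImage`/`VkImageLayout` and the layer's `FindLayout`/`SetLayout` helpers:

```cpp
#include <cassert>
#include <map>

using ImageId = int;
// What a command buffer records per image: the layout it expects at
// submit time (initial) and the layout it leaves the image in (final).
struct CbImageLayout { int initial; int final_; };

// Returns the number of mismatches found; updates globalLayouts in place.
int reconcileLayouts(std::map<ImageId, int> &globalLayouts,
                     const std::map<ImageId, CbImageLayout> &cbLayouts) {
    int mismatches = 0;
    for (const auto &entry : cbLayouts) {
        auto it = globalLayouts.find(entry.first);
        if (it == globalLayouts.end() || it->second != entry.second.initial)
            ++mismatches;                     // deleted image or unexpected layout
        if (it != globalLayouts.end())
            it->second = entry.second.final_; // commit the CB's final layout
    }
    return mismatches;
}
```

As in the real code, a layout mismatch is reported but the final layout is still committed, so subsequent submissions validate against up-to-date state.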
-// Track which resources are in-flight by atomically incrementing their "in_use" count
-VkBool32 validateAndIncrementResources(layer_data* my_data, GLOBAL_CB_NODE* pCB) {
- VkBool32 skip_call = VK_FALSE;
- for (auto drawDataElement : pCB->drawData) {
- for (auto buffer : drawDataElement.buffers) {
- auto buffer_data = my_data->bufferMap.find(buffer);
- if (buffer_data == my_data->bufferMap.end()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, (uint64_t)(buffer), __LINE__, DRAWSTATE_INVALID_BUFFER, "DS",
- "Cannot submit cmd buffer using deleted buffer %" PRIu64 ".", (uint64_t)(buffer));
- } else {
- buffer_data->second.in_use.fetch_add(1);
- }
- }
- }
- for (auto set : pCB->uniqueBoundSets) {
- auto setNode = my_data->setMap.find(set);
- if (setNode == my_data->setMap.end()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(set), __LINE__, DRAWSTATE_INVALID_DESCRIPTOR_SET, "DS",
- "Cannot submit cmd buffer using deleted descriptor set %" PRIu64 ".", (uint64_t)(set));
- } else {
- setNode->second->in_use.fetch_add(1);
- }
- }
- for (auto semaphore : pCB->semaphores) {
- auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
- if (semaphoreNode == my_data->semaphoreMap.end()) {
- skip_call |= log_msg(
- my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
-            VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
- reinterpret_cast<uint64_t &>(semaphore), __LINE__,
- DRAWSTATE_INVALID_SEMAPHORE, "DS",
- "Cannot submit cmd buffer using deleted semaphore %" PRIu64 ".",
- reinterpret_cast<uint64_t &>(semaphore));
- } else {
- semaphoreNode->second.in_use.fetch_add(1);
- }
- }
- for (auto event : pCB->events) {
- auto eventNode = my_data->eventMap.find(event);
- if (eventNode == my_data->eventMap.end()) {
- skip_call |= log_msg(
- my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
-            VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT,
- reinterpret_cast<uint64_t &>(event), __LINE__,
- DRAWSTATE_INVALID_EVENT, "DS",
- "Cannot submit cmd buffer using deleted event %" PRIu64 ".",
- reinterpret_cast<uint64_t &>(event));
- } else {
- eventNode->second.in_use.fetch_add(1);
- }
- }
- return skip_call;
-}
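The increment/decrement pair above implements in-flight tracking: every buffer, descriptor set, semaphore, and event referenced by a submitted command buffer carries an atomic `in_use` counter, bumped at submit and dropped at completion, so destruction can be flagged while the object is still pending. A minimal sketch of that pattern (names are hypothetical):

```cpp
#include <atomic>
#include <cassert>

// Each tracked object carries an atomic counter; submission increments it,
// completion decrements it, and teardown is only safe once it reads zero.
struct TrackedResource {
    std::atomic<int> in_use{0};
};

void onSubmit(TrackedResource &r)   { r.in_use.fetch_add(1); }
void onComplete(TrackedResource &r) { r.in_use.fetch_sub(1); }
bool safeToDestroy(const TrackedResource &r) { return r.in_use.load() == 0; }
```

Using atomics lets submit paths on different queues (threads) update the same counter without holding the global layer lock for the whole walk.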
-
-void decrementResources(layer_data* my_data, VkCommandBuffer cmdBuffer) {
- GLOBAL_CB_NODE* pCB = getCBNode(my_data, cmdBuffer);
- for (auto drawDataElement : pCB->drawData) {
- for (auto buffer : drawDataElement.buffers) {
- auto buffer_data = my_data->bufferMap.find(buffer);
- if (buffer_data != my_data->bufferMap.end()) {
- buffer_data->second.in_use.fetch_sub(1);
- }
- }
- }
- for (auto set : pCB->uniqueBoundSets) {
- auto setNode = my_data->setMap.find(set);
- if (setNode != my_data->setMap.end()) {
- setNode->second->in_use.fetch_sub(1);
- }
- }
- for (auto semaphore : pCB->semaphores) {
- auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
- if (semaphoreNode != my_data->semaphoreMap.end()) {
- semaphoreNode->second.in_use.fetch_sub(1);
- }
- }
- for (auto event : pCB->events) {
- auto eventNode = my_data->eventMap.find(event);
- if (eventNode != my_data->eventMap.end()) {
- eventNode->second.in_use.fetch_sub(1);
- }
- }
- for (auto queryStatePair : pCB->queryToStateMap) {
- my_data->queryToStateMap[queryStatePair.first] = queryStatePair.second;
- }
- for (auto eventStagePair : pCB->eventToStageMap) {
- my_data->eventMap[eventStagePair.first].stageMask =
- eventStagePair.second;
- }
-}
-
-void decrementResources(layer_data* my_data, uint32_t fenceCount, const VkFence* pFences) {
- for (uint32_t i = 0; i < fenceCount; ++i) {
- auto fence_data = my_data->fenceMap.find(pFences[i]);
-        if (fence_data == my_data->fenceMap.end() || !fence_data->second.needsSignaled) continue;
- fence_data->second.needsSignaled = false;
- fence_data->second.in_use.fetch_sub(1);
- if (fence_data->second.priorFence != VK_NULL_HANDLE) {
- decrementResources(my_data, 1, &fence_data->second.priorFence);
- }
- for (auto cmdBuffer : fence_data->second.cmdBuffers) {
- decrementResources(my_data, cmdBuffer);
- }
- }
-}
-
-void decrementResources(layer_data* my_data, VkQueue queue) {
- auto queue_data = my_data->queueMap.find(queue);
- if (queue_data != my_data->queueMap.end()) {
- for (auto cmdBuffer : queue_data->second.untrackedCmdBuffers) {
- decrementResources(my_data, cmdBuffer);
- }
- queue_data->second.untrackedCmdBuffers.clear();
- decrementResources(my_data, 1, &queue_data->second.priorFence);
- }
-}
-
-void trackCommandBuffers(layer_data* my_data, VkQueue queue, uint32_t cmdBufferCount, const VkCommandBuffer* pCmdBuffers, VkFence fence) {
- auto queue_data = my_data->queueMap.find(queue);
- if (fence != VK_NULL_HANDLE) {
- VkFence priorFence = VK_NULL_HANDLE;
- auto fence_data = my_data->fenceMap.find(fence);
- if (fence_data == my_data->fenceMap.end()) {
- return;
- }
- if (queue_data != my_data->queueMap.end()) {
- priorFence = queue_data->second.priorFence;
- queue_data->second.priorFence = fence;
- for (auto cmdBuffer : queue_data->second.untrackedCmdBuffers) {
- fence_data->second.cmdBuffers.push_back(cmdBuffer);
- }
- queue_data->second.untrackedCmdBuffers.clear();
- }
- fence_data->second.cmdBuffers.clear();
- fence_data->second.priorFence = priorFence;
- fence_data->second.needsSignaled = true;
- fence_data->second.queue = queue;
- fence_data->second.in_use.fetch_add(1);
- for (uint32_t i = 0; i < cmdBufferCount; ++i) {
- for (auto secondaryCmdBuffer :
- my_data->commandBufferMap[pCmdBuffers[i]]
- ->secondaryCommandBuffers) {
- fence_data->second.cmdBuffers.push_back(
- secondaryCmdBuffer);
- }
- fence_data->second.cmdBuffers.push_back(pCmdBuffers[i]);
- }
- } else {
- if (queue_data != my_data->queueMap.end()) {
- for (uint32_t i = 0; i < cmdBufferCount; ++i) {
- for (auto secondaryCmdBuffer : my_data->commandBufferMap[pCmdBuffers[i]]->secondaryCommandBuffers) {
- queue_data->second.untrackedCmdBuffers.push_back(secondaryCmdBuffer);
- }
- queue_data->second.untrackedCmdBuffers.push_back(pCmdBuffers[i]);
- }
- }
- }
- if (queue_data != my_data->queueMap.end()) {
- for (uint32_t i = 0; i < cmdBufferCount; ++i) {
- // Add cmdBuffers to both the global set and queue set
- for (auto secondaryCmdBuffer : my_data->commandBufferMap[pCmdBuffers[i]]->secondaryCommandBuffers) {
- my_data->globalInFlightCmdBuffers.insert(secondaryCmdBuffer);
- queue_data->second.inFlightCmdBuffers.insert(secondaryCmdBuffer);
- }
- my_data->globalInFlightCmdBuffers.insert(pCmdBuffers[i]);
- queue_data->second.inFlightCmdBuffers.insert(pCmdBuffers[i]);
- }
- }
-}
-
-bool validateCommandBufferSimultaneousUse(layer_data *dev_data,
- GLOBAL_CB_NODE *pCB) {
- bool skip_call = false;
- if (dev_data->globalInFlightCmdBuffers.count(pCB->commandBuffer) &&
- !(pCB->beginInfo.flags &
- VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) {
- skip_call |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__,
- DRAWSTATE_INVALID_FENCE, "DS",
- "Command Buffer %#" PRIx64 " is already in use and is not marked "
- "for simultaneous use.",
- reinterpret_cast<uint64_t>(pCB->commandBuffer));
- }
- return skip_call;
-}
-
-static bool validateCommandBufferState(layer_data *dev_data,
- GLOBAL_CB_NODE *pCB) {
- bool skipCall = false;
- // Validate that cmd buffers have been updated
- if (CB_RECORDED != pCB->state) {
- if (CB_INVALID == pCB->state) {
- // Inform app of reason CB invalid
- if (!pCB->destroyedSets.empty()) {
- std::stringstream set_string;
- for (auto set : pCB->destroyedSets) {
- set_string << " " << set;
- }
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "You are submitting command buffer %#" PRIxLEAST64 " that is invalid because it had the following bound descriptor set(s) destroyed: %s", (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
- }
- if (!pCB->updatedSets.empty()) {
- std::stringstream set_string;
- for (auto set : pCB->updatedSets) {
- set_string << " " << set;
- }
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "You are submitting command buffer %#" PRIxLEAST64 " that is invalid because it had the following bound descriptor set(s) updated: %s", (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
- }
- } else { // Flag error for using CB w/o vkEndCommandBuffer() called
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_NO_END_COMMAND_BUFFER, "DS",
- "You must call vkEndCommandBuffer() on CB %#" PRIxLEAST64 " before this call to vkQueueSubmit()!", (uint64_t)(pCB->commandBuffer));
- }
- }
- return skipCall;
-}
-
-static VkBool32 validatePrimaryCommandBufferState(layer_data *dev_data,
- GLOBAL_CB_NODE *pCB) {
- // Track in-use for resources off of primary and any secondary CBs
- VkBool32 skipCall = validateAndIncrementResources(dev_data, pCB);
- if (!pCB->secondaryCommandBuffers.empty()) {
- for (auto secondaryCmdBuffer : pCB->secondaryCommandBuffers) {
- skipCall |= validateAndIncrementResources(
- dev_data, dev_data->commandBufferMap[secondaryCmdBuffer]);
- GLOBAL_CB_NODE* pSubCB = getCBNode(dev_data, secondaryCmdBuffer);
- if (pSubCB->primaryCommandBuffer != pCB->commandBuffer) {
-                skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
- __LINE__,
- DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
- "CB %#" PRIxLEAST64
- " was submitted with secondary buffer %#" PRIxLEAST64
- " but that buffer has subsequently been bound to "
- "primary cmd buffer %#" PRIxLEAST64 ".",
- reinterpret_cast<uint64_t>(pCB->commandBuffer),
- reinterpret_cast<uint64_t>(secondaryCmdBuffer),
- reinterpret_cast<uint64_t>(
- pSubCB->primaryCommandBuffer));
- }
- }
- }
- // TODO : Verify if this also needs to be checked for secondary command
- // buffers. If so, this block of code can move to
- // validateCommandBufferState() function. vulkan GL106 filed to clarify
- if ((pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT) &&
- (pCB->submitCount > 1)) {
- skipCall |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__,
- DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
- "CB %#" PRIxLEAST64
- " was begun w/ VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT "
-                "set, but has been submitted %" PRIuLEAST64 " times.",
- (uint64_t)(pCB->commandBuffer), pCB->submitCount);
- }
- skipCall |= validateCommandBufferState(dev_data, pCB);
- // If USAGE_SIMULTANEOUS_USE_BIT not set then CB cannot already be executing
- // on device
- skipCall |= validateCommandBufferSimultaneousUse(dev_data, pCB);
- return skipCall;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence)
-{
- VkBool32 skipCall = VK_FALSE;
- GLOBAL_CB_NODE* pCB = NULL;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
- const VkSubmitInfo *submit = &pSubmits[submit_idx];
- vector<VkSemaphore> semaphoreList;
- for (uint32_t i=0; i < submit->waitSemaphoreCount; ++i) {
- semaphoreList.push_back(submit->pWaitSemaphores[i]);
- if (dev_data->semaphoreMap[submit->pWaitSemaphores[i]].signaled) {
- dev_data->semaphoreMap[submit->pWaitSemaphores[i]].signaled = 0;
- } else {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
- "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
- (uint64_t)(queue), (uint64_t)(submit->pWaitSemaphores[i]));
- }
- }
- for (uint32_t i=0; i < submit->signalSemaphoreCount; ++i) {
- semaphoreList.push_back(submit->pSignalSemaphores[i]);
- dev_data->semaphoreMap[submit->pSignalSemaphores[i]].signaled = 1;
- }
- for (uint32_t i=0; i < submit->commandBufferCount; i++) {
- skipCall |= ValidateCmdBufImageLayouts(submit->pCommandBuffers[i]);
-            pCB = getCBNode(dev_data, submit->pCommandBuffers[i]);
-            if (pCB) { // guard against unknown command buffer handles
-                pCB->semaphores = semaphoreList;
-                pCB->submitCount++; // increment submit count
-                skipCall |= validatePrimaryCommandBufferState(dev_data, pCB);
-            }
- }
- if ((fence != VK_NULL_HANDLE) && dev_data->fenceMap[fence].in_use.load()) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
- (uint64_t)(fence), __LINE__,
- DRAWSTATE_INVALID_FENCE, "DS",
- "Fence %#" PRIx64 " is already in use by another submission.",
- (uint64_t)(fence));
- }
- trackCommandBuffers(dev_data, queue, submit->commandBufferCount,
- submit->pCommandBuffers, fence);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- return dev_data->device_dispatch_table->QueueSubmit(queue, submitCount, pSubmits, fence);
- return VK_ERROR_VALIDATION_FAILED_EXT;
-}
-
-// Note: This function assumes that the global lock is held by the calling
-// thread.
-VkBool32 cleanInFlightCmdBuffer(layer_data* my_data, VkCommandBuffer cmdBuffer) {
- VkBool32 skip_call = VK_FALSE;
- GLOBAL_CB_NODE* pCB = getCBNode(my_data, cmdBuffer);
- if (pCB) {
- for (auto queryEventsPair : pCB->waitedEventsBeforeQueryReset) {
- for (auto event : queryEventsPair.second) {
- if (my_data->eventMap[event].needsSignaled) {
-                    skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Cannot get query results on queryPool %" PRIu64 " with index %d which was guarded by unsignaled event %" PRIu64 ".",
- (uint64_t)(queryEventsPair.first.pool), queryEventsPair.first.index, (uint64_t)(event));
- }
- }
- }
- }
- return skip_call;
-}
-// Remove given cmd_buffer from the global inFlight set.
-// Also, if the given queue is valid, remove the cmd_buffer from that queue's
-// inFlightCmdBuffers set. Finally, check all other queues and, if the cmd_buffer
-// is still in flight on another queue, add it back into the global set.
-// Note: This function assumes that the global lock is held by the calling
-// thread.
-static inline void removeInFlightCmdBuffer(layer_data* dev_data, VkCommandBuffer cmd_buffer, VkQueue queue)
-{
- // Pull it off of global list initially, but if we find it in any other queue list, add it back in
- dev_data->globalInFlightCmdBuffers.erase(cmd_buffer);
- if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) {
- dev_data->queueMap[queue].inFlightCmdBuffers.erase(cmd_buffer);
- for (auto q : dev_data->queues) {
- if ((q != queue) && (dev_data->queueMap[q].inFlightCmdBuffers.find(cmd_buffer) != dev_data->queueMap[q].inFlightCmdBuffers.end())) {
- dev_data->globalInFlightCmdBuffers.insert(cmd_buffer);
- break;
- }
- }
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->WaitForFences(device, fenceCount, pFences, waitAll, timeout);
- VkBool32 skip_call = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- if (result == VK_SUCCESS) {
- // When we know that all fences are complete we can clean/remove their CBs
- if (waitAll || fenceCount == 1) {
- for (uint32_t i = 0; i < fenceCount; ++i) {
- VkQueue fence_queue = dev_data->fenceMap[pFences[i]].queue;
- for (auto cmdBuffer : dev_data->fenceMap[pFences[i]].cmdBuffers) {
- skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
- removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue);
- }
- }
- decrementResources(dev_data, fenceCount, pFences);
- }
- // NOTE : Alternate case not handled here is when some fences have completed. In
- // this case for app to guarantee which fences completed it will have to call
- // vkGetFenceStatus() at which point we'll clean/remove their CBs if complete.
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(VkDevice device, VkFence fence)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->GetFenceStatus(device, fence);
- VkBool32 skip_call = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- if (result == VK_SUCCESS) {
- auto fence_queue = dev_data->fenceMap[fence].queue;
- for (auto cmdBuffer : dev_data->fenceMap[fence].cmdBuffers) {
- skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
- removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue);
- }
- decrementResources(dev_data, 1, &fence);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
- dev_data->queues.push_back(*pQueue);
- dev_data->queueMap[*pQueue].device = device;
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(VkQueue queue)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- decrementResources(dev_data, queue);
- VkBool32 skip_call = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
-    // Iterate over a local copy of the set, since removeInFlightCmdBuffer() erases members from the original as we go
- auto local_cb_set = dev_data->queueMap[queue].inFlightCmdBuffers;
- for (auto cmdBuffer : local_cb_set) {
- skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
- removeInFlightCmdBuffer(dev_data, cmdBuffer, queue);
- }
- dev_data->queueMap[queue].inFlightCmdBuffers.clear();
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- return dev_data->device_dispatch_table->QueueWaitIdle(queue);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(VkDevice device)
-{
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- for (auto queue : dev_data->queues) {
- decrementResources(dev_data, queue);
- if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) {
- // Clear all of the queue inFlightCmdBuffers (global set cleared below)
- dev_data->queueMap[queue].inFlightCmdBuffers.clear();
- }
- }
- for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) {
- skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer);
- }
- dev_data->globalInFlightCmdBuffers.clear();
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- return dev_data->device_dispatch_table->DeviceWaitIdle(device);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator)
-{
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- bool skipCall = false;
- loader_platform_thread_lock_mutex(&globalLock);
- if (dev_data->fenceMap[fence].in_use.load()) {
- skipCall |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT, (uint64_t)(fence),
- __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
- "Fence %#" PRIx64 " is in use by a command buffer.",
- (uint64_t)(fence));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (!skipCall)
- dev_data->device_dispatch_table->DestroyFence(device, fence,
- pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- dev_data->device_dispatch_table->DestroySemaphore(device, semaphore, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- if (dev_data->semaphoreMap[semaphore].in_use.load()) {
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
- reinterpret_cast<uint64_t &>(semaphore), __LINE__,
- DRAWSTATE_INVALID_SEMAPHORE, "DS",
- "Cannot delete semaphore %" PRIx64 " which is in use.",
- reinterpret_cast<uint64_t &>(semaphore));
- }
- dev_data->semaphoreMap.erase(semaphore);
- loader_platform_thread_unlock_mutex(&globalLock);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator)
-{
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- bool skip_call = false;
- loader_platform_thread_lock_mutex(&globalLock);
- auto event_data = dev_data->eventMap.find(event);
- if (event_data != dev_data->eventMap.end() &&
- event_data->second.in_use.load()) {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
-                VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT,
- reinterpret_cast<uint64_t &>(event), __LINE__,
- DRAWSTATE_INVALID_EVENT, "DS",
- "Cannot delete event %" PRIu64
- " which is in use by a command buffer.",
- reinterpret_cast<uint64_t &>(event));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (!skip_call)
- dev_data->device_dispatch_table->DestroyEvent(device, event,
- pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyQueryPool(device, queryPool, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount,
- size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags) {
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- unordered_map<QueryObject, vector<VkCommandBuffer>> queriesInFlight;
- GLOBAL_CB_NODE* pCB = nullptr;
- loader_platform_thread_lock_mutex(&globalLock);
- for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) {
- pCB = getCBNode(dev_data, cmdBuffer);
- for (auto queryStatePair : pCB->queryToStateMap) {
- queriesInFlight[queryStatePair.first].push_back(cmdBuffer);
- }
- }
- VkBool32 skip_call = VK_FALSE;
- for (uint32_t i = 0; i < queryCount; ++i) {
- QueryObject query = {queryPool, firstQuery + i};
- auto queryElement = queriesInFlight.find(query);
- auto queryToStateElement = dev_data->queryToStateMap.find(query);
- // Available and in flight
-        if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() && queryToStateElement->second) {
- for (auto cmdBuffer : queryElement->second) {
- pCB = getCBNode(dev_data, cmdBuffer);
- auto queryEventElement = pCB->waitedEventsBeforeQueryReset.find(query);
- if (queryEventElement == pCB->waitedEventsBeforeQueryReset.end()) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__,
- DRAWSTATE_INVALID_QUERY, "DS", "Cannot get query results on queryPool %" PRIu64 " with index %d which is in flight.",
- (uint64_t)(queryPool), firstQuery + i);
- } else {
- for (auto event : queryEventElement->second) {
- dev_data->eventMap[event].needsSignaled = true;
- }
- }
- }
- // Unavailable and in flight
- } else if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() && !queryToStateElement->second) {
- // TODO : Can there be the same query in use by multiple command buffers in flight?
- bool make_available = false;
- for (auto cmdBuffer : queryElement->second) {
- pCB = getCBNode(dev_data, cmdBuffer);
- make_available |= pCB->queryToStateMap[query];
- }
- if (!(((flags & VK_QUERY_RESULT_PARTIAL_BIT) || (flags & VK_QUERY_RESULT_WAIT_BIT)) && make_available)) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
- (uint64_t)(queryPool), firstQuery + i);
- }
- // Unavailable
- } else if (queryToStateElement != dev_data->queryToStateMap.end() && !queryToStateElement->second) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
- (uint64_t)(queryPool), firstQuery + i);
-            // Uninitialized
- } else if (queryToStateElement == dev_data->queryToStateMap.end()) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Cannot get query results on queryPool %" PRIu64 " with index %d which is uninitialized.",
- (uint64_t)(queryPool), firstQuery + i);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- return dev_data->device_dispatch_table->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags);
-}
-
-VkBool32 validateIdleBuffer(const layer_data* my_data, VkBuffer buffer) {
- VkBool32 skip_call = VK_FALSE;
- auto buffer_data = my_data->bufferMap.find(buffer);
- if (buffer_data == my_data->bufferMap.end()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, (uint64_t)(buffer), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
- "Cannot free buffer %" PRIxLEAST64 " that has not been allocated.", (uint64_t)(buffer));
- } else {
- if (buffer_data->second.in_use.load()) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, (uint64_t)(buffer), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS",
- "Cannot free buffer %" PRIxLEAST64 " that is in use by a command buffer.", (uint64_t)(buffer));
- }
- }
- return skip_call;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- if (!validateIdleBuffer(dev_data, buffer)) {
- loader_platform_thread_unlock_mutex(&globalLock);
- dev_data->device_dispatch_table->DestroyBuffer(device, buffer, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- }
- dev_data->bufferMap.erase(buffer);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- dev_data->device_dispatch_table->DestroyBufferView(device, bufferView, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->bufferViewMap.erase(bufferView);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- dev_data->device_dispatch_table->DestroyImage(device, image, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->imageMap.erase(image);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyImageView(device, imageView, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyShaderModule(device, shaderModule, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyPipeline(device, pipeline, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyPipelineLayout(device, pipelineLayout, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroySampler(device, sampler, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyDescriptorPool(device, descriptorPool, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t count, const VkCommandBuffer *pCommandBuffers)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- bool skip_call = false;
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < count; i++) {
- if (dev_data->globalInFlightCmdBuffers.count(pCommandBuffers[i])) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
- "Attempt to free command buffer (%#" PRIxLEAST64 ") which is in use.", reinterpret_cast<uint64_t>(pCommandBuffers[i]));
- }
- // Delete CB information structure, and remove from commandBufferMap
- auto cb = dev_data->commandBufferMap.find(pCommandBuffers[i]);
- if (cb != dev_data->commandBufferMap.end()) {
- // reset prior to delete for data clean-up
- resetCB(dev_data, (*cb).second->commandBuffer);
- delete (*cb).second;
- dev_data->commandBufferMap.erase(cb);
- }
-
- // Remove commandBuffer reference from commandPoolMap
- dev_data->commandPoolMap[commandPool].commandBuffers.remove(pCommandBuffers[i]);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (!skip_call)
- dev_data->device_dispatch_table->FreeCommandBuffers(device, commandPool, count, pCommandBuffers);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- VkResult result = dev_data->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
-
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->commandPoolMap[*pCommandPool].createFlags = pCreateInfo->flags;
- dev_data->commandPoolMap[*pCommandPool].queueFamilyIndex = pCreateInfo->queueFamilyIndex;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(
- VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator, VkQueryPool *pQueryPool) {
-
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateQueryPool(
- device, pCreateInfo, pAllocator, pQueryPool);
- if (result == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->queryPoolMap[*pQueryPool].createInfo = *pCreateInfo;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VkBool32 validateCommandBuffersNotInUse(const layer_data* dev_data, VkCommandPool commandPool) {
- VkBool32 skipCall = VK_FALSE;
- auto pool_data = dev_data->commandPoolMap.find(commandPool);
- if (pool_data != dev_data->commandPoolMap.end()) {
- for (auto cmdBuffer : pool_data->second.commandBuffers) {
- if (dev_data->globalInFlightCmdBuffers.count(cmdBuffer)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT, (uint64_t)(commandPool),
- __LINE__, DRAWSTATE_OBJECT_INUSE, "DS", "Cannot reset command pool %" PRIx64 " when allocated command buffer %" PRIx64 " is in use.",
- (uint64_t)(commandPool), (uint64_t)(cmdBuffer));
- }
- }
- }
- return skipCall;
-}
-
-// Destroy commandPool along with all of the commandBuffers allocated from that pool
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-    loader_platform_thread_lock_mutex(&globalLock);
-
-    // Verify no command buffers allocated from this pool are still in flight
-    // before tearing down the pool's tracking state
-    if (VK_TRUE == validateCommandBuffersNotInUse(dev_data, commandPool)) {
-        loader_platform_thread_unlock_mutex(&globalLock);
-        return;
-    }
-
-    // Must remove cmdpool from commandPoolMap, after removing all cmdbuffers in its list from the commandBufferMap
-    if (dev_data->commandPoolMap.find(commandPool) != dev_data->commandPoolMap.end()) {
-        for (auto poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.begin(); poolCb != dev_data->commandPoolMap[commandPool].commandBuffers.end();) {
-            auto del_cb = dev_data->commandBufferMap.find(*poolCb);
-            delete (*del_cb).second; // delete CB info structure
-            dev_data->commandBufferMap.erase(del_cb); // Remove this command buffer from commandBufferMap
-            poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.erase(poolCb); // Remove CB reference from commandPoolMap's list
-        }
-    }
-    dev_data->commandPoolMap.erase(commandPool);
-
-    loader_platform_thread_unlock_mutex(&globalLock);
-
-    dev_data->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- VkCommandPoolResetFlags flags)
-{
- layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
-    if (VK_TRUE == validateCommandBuffersNotInUse(dev_data, commandPool))
-        return VK_ERROR_VALIDATION_FAILED_EXT;
-
-    VkResult result = dev_data->device_dispatch_table->ResetCommandPool(device, commandPool, flags);
- // Reset all of the CBs allocated from this pool
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
-        for (auto cmdBuffer : dev_data->commandPoolMap[commandPool].commandBuffers) {
-            resetCB(dev_data, cmdBuffer);
-        }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences) {
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- bool skipCall = false;
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < fenceCount; ++i) {
- if (dev_data->fenceMap[pFences[i]].in_use.load()) {
- skipCall |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
- reinterpret_cast<const uint64_t &>(pFences[i]),
- __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
- "Fence %#" PRIx64 " is in use by a command buffer.",
- reinterpret_cast<const uint64_t &>(pFences[i]));
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- if (!skipCall)
- result = dev_data->device_dispatch_table->ResetFences(
- device, fenceCount, pFences);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyFramebuffer(device, framebuffer, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator)
-{
- get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyRenderPass(device, renderPass, pAllocator);
- // TODO : Clean up any internal data structures using this obj.
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- // TODO : This doesn't create a deep copy of pQueueFamilyIndices, so fix that if/when we want that data to be valid
- dev_data->bufferMap[*pBuffer].create_info = unique_ptr<VkBufferCreateInfo>(new VkBufferCreateInfo(*pCreateInfo));
- dev_data->bufferMap[*pBuffer].in_use.store(0);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateBufferView(device, pCreateInfo, pAllocator, pView);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->bufferViewMap[*pView] = unique_ptr<VkBufferViewCreateInfo>(new VkBufferViewCreateInfo(*pCreateInfo));
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
- if (VK_SUCCESS == result) {
- IMAGE_NODE image_node;
- image_node.layout = pCreateInfo->initialLayout;
- image_node.format = pCreateInfo->format;
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->imageMap[*pImage] = unique_ptr<VkImageCreateInfo>(new VkImageCreateInfo(*pCreateInfo));
- ImageSubresourcePair subpair = {*pImage, false, VkImageSubresource()};
- dev_data->imageSubresourceMap[*pImage].push_back(subpair);
- dev_data->imageLayoutMap[subpair] = image_node;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateImageView(device, pCreateInfo, pAllocator, pView);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->imageViewMap[*pView] = unique_ptr<VkImageViewCreateInfo>(new VkImageViewCreateInfo(*pCreateInfo));
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
- vkCreateFence(VkDevice device, const VkFenceCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator, VkFence* pFence) {
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateFence(
- device, pCreateInfo, pAllocator, pFence);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->fenceMap[*pFence].in_use.store(0);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-// TODO handle pipeline caches
-VKAPI_ATTR VkResult VKAPI_CALL
- vkCreatePipelineCache(VkDevice device,
- const VkPipelineCacheCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkPipelineCache *pPipelineCache) {
- layer_data *dev_data =
- get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreatePipelineCache(
- device, pCreateInfo, pAllocator, pPipelineCache);
- return result;
-}
-
-VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineCache(
- VkDevice device,
- VkPipelineCache pipelineCache,
- const VkAllocationCallbacks* pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- dev_data->device_dispatch_table->DestroyPipelineCache(device, pipelineCache, pAllocator);
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkGetPipelineCacheData(
- VkDevice device,
- VkPipelineCache pipelineCache,
- size_t* pDataSize,
- void* pData)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
- return result;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(
- VkDevice device,
- VkPipelineCache dstCache,
- uint32_t srcCacheCount,
- const VkPipelineCache* pSrcCaches)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateGraphicsPipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t count,
- const VkGraphicsPipelineCreateInfo *pCreateInfos,
- const VkAllocationCallbacks *pAllocator,
- VkPipeline *pPipelines)
-{
- VkResult result = VK_SUCCESS;
- //TODO What to do with pipelineCache?
- // The order of operations here is a little convoluted but gets the job done
- // 1. Pipeline create state is first shadowed into PIPELINE_NODE struct
- // 2. Create state is then validated (which uses flags setup during shadowing)
- // 3. If everything looks good, we'll then create the pipeline and add NODE to pipelineMap
- VkBool32 skipCall = VK_FALSE;
- // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
- vector<PIPELINE_NODE*> pPipeNode(count);
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- uint32_t i=0;
- loader_platform_thread_lock_mutex(&globalLock);
-
- for (i=0; i<count; i++) {
- pPipeNode[i] = initGraphicsPipeline(dev_data, &pCreateInfos[i], NULL);
- skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode[i]);
- }
-
- if (VK_FALSE == skipCall) {
- loader_platform_thread_unlock_mutex(&globalLock);
- result = dev_data->device_dispatch_table->CreateGraphicsPipelines(device,
- pipelineCache, count, pCreateInfos, pAllocator, pPipelines);
- loader_platform_thread_lock_mutex(&globalLock);
- for (i=0; i<count; i++) {
- pPipeNode[i]->pipeline = pPipelines[i];
- dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- } else {
- for (i=0; i<count; i++) {
- if (pPipeNode[i]) {
- // If we allocated a pipeNode, need to clean it up here
- delete[] pPipeNode[i]->pVertexBindingDescriptions;
- delete[] pPipeNode[i]->pVertexAttributeDescriptions;
- delete[] pPipeNode[i]->pAttachments;
- delete pPipeNode[i];
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateComputePipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t count,
- const VkComputePipelineCreateInfo *pCreateInfos,
- const VkAllocationCallbacks *pAllocator,
- VkPipeline *pPipelines)
-{
- VkResult result = VK_SUCCESS;
- VkBool32 skipCall = VK_FALSE;
-
- // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
- vector<PIPELINE_NODE*> pPipeNode(count);
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- uint32_t i=0;
- loader_platform_thread_lock_mutex(&globalLock);
- for (i=0; i<count; i++) {
- // TODO: Verify compute stage bits
-
- // Create and initialize internal tracking data structure
- pPipeNode[i] = new PIPELINE_NODE;
- memcpy(&pPipeNode[i]->computePipelineCI, (const void*)&pCreateInfos[i], sizeof(VkComputePipelineCreateInfo));
-
- // TODO: Add Compute Pipeline Verification
- // skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode[i]);
- }
-
- if (VK_FALSE == skipCall) {
- loader_platform_thread_unlock_mutex(&globalLock);
- result = dev_data->device_dispatch_table->CreateComputePipelines(device, pipelineCache, count, pCreateInfos, pAllocator, pPipelines);
- loader_platform_thread_lock_mutex(&globalLock);
- for (i=0; i<count; i++) {
- pPipeNode[i]->pipeline = pPipelines[i];
- dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- } else {
- for (i=0; i<count; i++) {
- if (pPipeNode[i]) {
- // Clean up any locally allocated data structures
- delete pPipeNode[i];
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateSampler(device, pCreateInfo, pAllocator, pSampler);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->sampleMap[*pSampler] = unique_ptr<SAMPLER_NODE>(new SAMPLER_NODE(pSampler, pCreateInfo));
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
- if (VK_SUCCESS == result) {
- // TODOSC : Capture layout bindings set
- LAYOUT_NODE* pNewNode = new LAYOUT_NODE;
- if (NULL == pNewNode) {
- if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t) *pSetLayout, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
- "Out of memory while attempting to allocate LAYOUT_NODE in vkCreateDescriptorSetLayout()"))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- memcpy((void*)&pNewNode->createInfo, pCreateInfo, sizeof(VkDescriptorSetLayoutCreateInfo));
- pNewNode->createInfo.pBindings = new VkDescriptorSetLayoutBinding[pCreateInfo->bindingCount];
- memcpy((void*)pNewNode->createInfo.pBindings, pCreateInfo->pBindings, sizeof(VkDescriptorSetLayoutBinding)*pCreateInfo->bindingCount);
- // g++ does not like reserve with size 0
- if (pCreateInfo->bindingCount)
- pNewNode->bindingToIndexMap.reserve(pCreateInfo->bindingCount);
- uint32_t totalCount = 0;
- for (uint32_t i = 0; i < pCreateInfo->bindingCount; i++) {
- if (!pNewNode->bindingToIndexMap.emplace(pCreateInfo->pBindings[i].binding, i).second) {
- if (log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT,
- (uint64_t)*pSetLayout, __LINE__,
- DRAWSTATE_INVALID_LAYOUT, "DS",
- "duplicated binding number in "
- "VkDescriptorSetLayoutBinding"))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- } else {
- pNewNode->bindingToIndexMap[pCreateInfo->pBindings[i].binding] = i;
- }
- totalCount += pCreateInfo->pBindings[i].descriptorCount;
- if (pCreateInfo->pBindings[i].pImmutableSamplers) {
- VkSampler** ppIS = (VkSampler**)&pNewNode->createInfo.pBindings[i].pImmutableSamplers;
- *ppIS = new VkSampler[pCreateInfo->pBindings[i].descriptorCount];
- memcpy(*ppIS, pCreateInfo->pBindings[i].pImmutableSamplers, pCreateInfo->pBindings[i].descriptorCount*sizeof(VkSampler));
- }
- }
- pNewNode->layout = *pSetLayout;
- pNewNode->startIndex = 0;
- if (totalCount > 0) {
- pNewNode->descriptorTypes.resize(totalCount);
- pNewNode->stageFlags.resize(totalCount);
- uint32_t offset = 0;
- uint32_t j = 0;
- VkDescriptorType dType;
- for (uint32_t i=0; i<pCreateInfo->bindingCount; i++) {
- dType = pCreateInfo->pBindings[i].descriptorType;
- for (j = 0; j < pCreateInfo->pBindings[i].descriptorCount; j++) {
- pNewNode->descriptorTypes[offset + j] = dType;
- pNewNode->stageFlags[offset + j] = pCreateInfo->pBindings[i].stageFlags;
- if ((dType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
- (dType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
- pNewNode->dynamicDescriptorCount++;
- }
- }
- offset += j;
- }
- pNewNode->endIndex = pNewNode->startIndex + totalCount - 1;
- } else { // no descriptors
- pNewNode->endIndex = 0;
- }
- // Add the new node to the global descriptor set layout map
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->descriptorSetLayoutMap[*pSetLayout] = pNewNode;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* pPipelineLayout)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- // TODOSC : Merge capture of the setLayouts per pipeline
- PIPELINE_LAYOUT_NODE& plNode = dev_data->pipelineLayoutMap[*pPipelineLayout];
- plNode.descriptorSetLayouts.resize(pCreateInfo->setLayoutCount);
- uint32_t i = 0;
- for (i=0; i<pCreateInfo->setLayoutCount; ++i) {
- plNode.descriptorSetLayouts[i] = pCreateInfo->pSetLayouts[i];
- }
- plNode.pushConstantRanges.resize(pCreateInfo->pushConstantRangeCount);
- for (i=0; i<pCreateInfo->pushConstantRangeCount; ++i) {
- plNode.pushConstantRanges[i] = pCreateInfo->pPushConstantRanges[i];
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
- if (VK_SUCCESS == result) {
- // Add this pool to the global descriptor pool map
- if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, (uint64_t) *pDescriptorPool, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
- "Created Descriptor Pool %#" PRIxLEAST64, (uint64_t) *pDescriptorPool))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- DESCRIPTOR_POOL_NODE* pNewNode = new DESCRIPTOR_POOL_NODE(*pDescriptorPool, pCreateInfo);
- if (NULL == pNewNode) {
- if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, (uint64_t) *pDescriptorPool, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
- "Out of memory while attempting to allocate DESCRIPTOR_POOL_NODE in vkCreateDescriptorPool()"))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- } else {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->descriptorPoolMap[*pDescriptorPool] = pNewNode;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- } else {
- // Need to do anything if pool create fails?
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->ResetDescriptorPool(device, descriptorPool, flags);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- clearDescriptorPool(dev_data, device, descriptorPool, flags);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- // Verify that requested descriptorSets are available in pool
- DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool);
- if (!pPoolNode) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, (uint64_t) pAllocateInfo->descriptorPool, __LINE__, DRAWSTATE_INVALID_POOL, "DS",
- "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call", (uint64_t) pAllocateInfo->descriptorPool);
- } else { // Make sure the pool has enough available descriptors before calling down the chain
- skipCall |= validate_descriptor_availability_in_pool(dev_data, pPoolNode, pAllocateInfo->descriptorSetCount, pAllocateInfo->pSetLayouts);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (skipCall)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = dev_data->device_dispatch_table->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool);
- if (pPoolNode) {
- if (pAllocateInfo->descriptorSetCount == 0) {
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, pAllocateInfo->descriptorSetCount, __LINE__, DRAWSTATE_NONE, "DS",
- "AllocateDescriptorSets called with 0 count");
- }
- for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS",
- "Created Descriptor Set %#" PRIxLEAST64, (uint64_t) pDescriptorSets[i]);
- // Create new set node and add to head of pool nodes
- SET_NODE* pNewNode = new SET_NODE;
- if (NULL == pNewNode) {
- if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
- "Out of memory while attempting to allocate SET_NODE in vkAllocateDescriptorSets()"))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- } else {
- // TODO : Pool should store a total count of each type of Descriptor available
- // When descriptors are allocated, decrement the count and validate here
- // that the count doesn't go below 0. On reset/free, need to bump the count back up.
- // Insert set at head of Set LL for this pool
- pNewNode->pNext = pPoolNode->pSets;
- pNewNode->in_use.store(0);
- pPoolNode->pSets = pNewNode;
- LAYOUT_NODE* pLayout = getLayoutNode(dev_data, pAllocateInfo->pSetLayouts[i]);
- if (NULL == pLayout) {
- if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t) pAllocateInfo->pSetLayouts[i], __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS",
- "Unable to find set layout node for layout %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call", (uint64_t) pAllocateInfo->pSetLayouts[i]))
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- pNewNode->pLayout = pLayout;
- pNewNode->pool = pAllocateInfo->descriptorPool;
- pNewNode->set = pDescriptorSets[i];
- pNewNode->descriptorCount = (pLayout->createInfo.bindingCount != 0) ? pLayout->endIndex + 1 : 0;
- if (pNewNode->descriptorCount) {
- size_t descriptorArraySize = sizeof(GENERIC_HEADER*)*pNewNode->descriptorCount;
- pNewNode->ppDescriptors = new GENERIC_HEADER*[descriptorArraySize];
- memset(pNewNode->ppDescriptors, 0, descriptorArraySize);
- }
- dev_data->setMap[pDescriptorSets[i]] = pNewNode;
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, const VkDescriptorSet* pDescriptorSets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- // Make sure that no sets being destroyed are in-flight
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i=0; i<count; ++i)
- skipCall |= validateIdleDescriptorSet(dev_data, pDescriptorSets[i], "vkFreeDescriptorSets");
- DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, descriptorPool);
- if (pPoolNode && !(VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT & pPoolNode->createInfo.flags)) {
- // Can't Free from a NON_FREE pool
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__, DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL, "DS",
- "It is invalid to call vkFreeDescriptorSets() with a pool created without setting VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT.");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skipCall)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = dev_data->device_dispatch_table->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets);
- if (VK_SUCCESS == result) {
- // For each freed descriptor add it back into the pool as available
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i=0; i<count; ++i) {
- SET_NODE* pSet = dev_data->setMap[pDescriptorSets[i]]; // getSetNode() without locking
- invalidateBoundCmdBuffers(dev_data, pSet);
- LAYOUT_NODE* pLayout = pSet->pLayout;
- uint32_t typeIndex = 0, poolSizeCount = 0;
- for (uint32_t j=0; j<pLayout->createInfo.bindingCount; ++j) {
- typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType);
- poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount;
- pPoolNode->availableDescriptorTypeCount[typeIndex] += poolSizeCount;
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- // TODO : Any other clean-up or book-keeping to do here?
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies)
-{
- // dsUpdate will return VK_TRUE only if a bailout error occurs, so we only call down the chain when dsUpdate returns VK_FALSE
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 rtn = dsUpdate(dev_data,
- device,
- descriptorWriteCount,
- pDescriptorWrites,
- descriptorCopyCount,
- pDescriptorCopies);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (!rtn) {
- dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo* pCreateInfo, VkCommandBuffer* pCommandBuffer)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < pCreateInfo->commandBufferCount; i++) {
- // Validate command pool
- if (dev_data->commandPoolMap.find(pCreateInfo->commandPool) != dev_data->commandPoolMap.end()) {
- // Add command buffer to its commandPool map
- dev_data->commandPoolMap[pCreateInfo->commandPool].commandBuffers.push_back(pCommandBuffer[i]);
- GLOBAL_CB_NODE* pCB = new GLOBAL_CB_NODE;
- // Add command buffer to map
- dev_data->commandBufferMap[pCommandBuffer[i]] = pCB;
- resetCB(dev_data, pCommandBuffer[i]);
- pCB->createInfo = *pCreateInfo;
- pCB->device = device;
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- // Validate command buffer level
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
- // Secondary Command Buffer
- const VkCommandBufferInheritanceInfo *pInfo = pBeginInfo->pInheritanceInfo;
- if (!pInfo) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__,
- DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must have inheritance info.",
- reinterpret_cast<void*>(commandBuffer));
- } else {
- if (pBeginInfo->flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) {
- if (!pInfo->renderPass) { // renderpass should NOT be null for a Secondary CB
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
- __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must specify a valid renderpass parameter.", reinterpret_cast<void*>(commandBuffer));
- }
- if (!pInfo->framebuffer) { // framebuffer may be null for a Secondary CB, but this affects perf
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
- __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkBeginCommandBuffer(): Secondary Command Buffer (%p) may perform better if a valid framebuffer parameter is specified.",
- reinterpret_cast<void*>(commandBuffer));
- } else {
- string errorString = "";
- VkRenderPass fbRP = dev_data->frameBufferMap[pInfo->framebuffer]->renderPass;
- if (!verify_renderpass_compatibility(dev_data, fbRP, pInfo->renderPass, errorString)) {
- // renderPass that framebuffer was created with must be compatible with local renderPass
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
- __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE, "DS",
- "vkBeginCommandBuffer(): Secondary Command Buffer (%p) renderPass (%#" PRIxLEAST64 ") is incompatible w/ framebuffer (%#" PRIxLEAST64
- ") w/ render pass (%#" PRIxLEAST64 ") due to: %s", reinterpret_cast<void*>(commandBuffer), (uint64_t)(pInfo->renderPass),
- (uint64_t)(pInfo->framebuffer), (uint64_t)(fbRP), errorString.c_str());
- }
- }
- }
- if ((pInfo->occlusionQueryEnable == VK_FALSE || dev_data->physDevProperties.features.occlusionQueryPrecise == VK_FALSE) && (pInfo->queryFlags & VK_QUERY_CONTROL_PRECISE_BIT)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
- __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must not have VK_QUERY_CONTROL_PRECISE_BIT if occlusionQuery is disabled or the device does not "
- "support precise occlusion queries.", reinterpret_cast<void*>(commandBuffer));
- }
- }
- if (pInfo && pInfo->renderPass != VK_NULL_HANDLE) {
- auto rp_data = dev_data->renderPassMap.find(pInfo->renderPass);
- if (rp_data != dev_data->renderPassMap.end() && rp_data->second && rp_data->second->pCreateInfo) {
- if (pInfo->subpass >= rp_data->second->pCreateInfo->subpassCount) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer,
- __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must have a subpass index (%d) that is less than the number of subpasses (%d).",
- (void*)commandBuffer, pInfo->subpass, rp_data->second->pCreateInfo->subpassCount);
- }
- }
- }
- }
- if (CB_RECORDING == pCB->state) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkBeginCommandBuffer(): Cannot call Begin on CB (%#" PRIxLEAST64 ") in the RECORDING state. Must first call vkEndCommandBuffer().", (uint64_t)commandBuffer);
- } else if (CB_RECORDED == pCB->state) {
- VkCommandPool cmdPool = pCB->createInfo.commandPool;
- if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer,
- __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
- "Call to vkBeginCommandBuffer() on command buffer (%#" PRIxLEAST64 ") attempts to implicitly reset cmdBuffer created from command pool (%#" PRIxLEAST64 ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.",
- (uint64_t) commandBuffer, (uint64_t) cmdPool);
- }
- resetCB(dev_data, commandBuffer);
- }
- // Set updated state here in case implicit reset occurs above
- pCB->state = CB_RECORDING;
- pCB->beginInfo = *pBeginInfo;
- if (pCB->beginInfo.pInheritanceInfo) {
- pCB->inheritanceInfo = *(pCB->beginInfo.pInheritanceInfo);
- pCB->beginInfo.pInheritanceInfo = &pCB->inheritanceInfo;
- }
- } else {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "In vkBeginCommandBuffer(): unable to find CommandBuffer node for CB %p!", (void*)commandBuffer);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE != skipCall) {
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- VkResult result = dev_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer)
-{
- VkBool32 skipCall = VK_FALSE;
- VkResult result = VK_SUCCESS;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- if (pCB->state != CB_RECORDING) {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkEndCommandBuffer()");
- }
- for (auto query : pCB->activeQueries) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Ending command buffer with in progress query: queryPool %" PRIu64 ", index %d", (uint64_t)(query.pool), query.index);
- }
- }
- if (VK_FALSE == skipCall) {
- loader_platform_thread_unlock_mutex(&globalLock);
- result = dev_data->device_dispatch_table->EndCommandBuffer(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- if ((VK_SUCCESS == result) && pCB) {
- pCB->state = CB_RECORDED;
- // Reset CB status flags
- pCB->status = 0;
- printCB(dev_data, commandBuffer);
- }
- } else {
- result = VK_ERROR_VALIDATION_FAILED_EXT;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- VkCommandPool cmdPool = pCB->createInfo.commandPool;
- if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t) commandBuffer,
- __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
- "Attempt to reset command buffer (%#" PRIxLEAST64 ") created from command pool (%#" PRIxLEAST64 ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.",
- (uint64_t) commandBuffer, (uint64_t) cmdPool);
- }
- }
- if (dev_data->globalInFlightCmdBuffers.count(commandBuffer)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t) commandBuffer,
- __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS",
- "Attempt to reset command buffer (%#" PRIxLEAST64 ") which is in use.", reinterpret_cast<uint64_t>(commandBuffer));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (skipCall != VK_FALSE)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = dev_data->device_dispatch_table->ResetCommandBuffer(commandBuffer, flags);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- resetCB(dev_data, commandBuffer);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
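The reset-permission check in vkResetCommandBuffer() (and the implicit-reset path in vkBeginCommandBuffer()) reduces to a single flag test against the create flags recorded for the parent pool. A minimal standalone sketch of that predicate, using a stand-in constant that mirrors VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT (the real layer reads the flags from commandPoolMap):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in mirroring VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT (0x2).
constexpr uint32_t RESET_COMMAND_BUFFER_BIT = 0x2;

// A command buffer may be reset (explicitly or implicitly) only when its
// parent pool was created with the reset bit set.
bool ResetAllowed(uint32_t poolCreateFlags) {
    return (poolCreateFlags & RESET_COMMAND_BUFFER_BIT) != 0;
}
```

When the predicate fails, the layer logs DRAWSTATE_INVALID_COMMAND_BUFFER_RESET rather than blocking the call outright.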
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_BINDPIPELINE, "vkCmdBindPipeline()");
- if ((VK_PIPELINE_BIND_POINT_COMPUTE == pipelineBindPoint) && (pCB->activeRenderPass)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, (uint64_t) pipeline,
- __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
- "Incorrectly binding compute pipeline (%#" PRIxLEAST64 ") during active RenderPass (%#" PRIxLEAST64 ")",
- (uint64_t) pipeline, (uint64_t) pCB->activeRenderPass);
- } else if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdBindPipeline");
- }
-
- PIPELINE_NODE* pPN = getPipeline(dev_data, pipeline);
- if (pPN) {
- pCB->lastBoundPipeline = pipeline;
- set_cb_pso_status(pCB, pPN);
- skipCall |= validatePipelineState(dev_data, pCB, pipelineBindPoint, pipeline);
- } else {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT,
- (uint64_t) pipeline, __LINE__, DRAWSTATE_INVALID_PIPELINE, "DS",
- "Attempt to bind Pipeline %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(pipeline));
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(
- VkCommandBuffer commandBuffer,
- uint32_t firstViewport,
- uint32_t viewportCount,
- const VkViewport* pViewports)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETVIEWPORTSTATE, "vkCmdSetViewport()");
- pCB->status |= CBSTATUS_VIEWPORT_SET;
- pCB->viewports.resize(viewportCount);
- memcpy(pCB->viewports.data(), pViewports, viewportCount * sizeof(VkViewport));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(
- VkCommandBuffer commandBuffer,
- uint32_t firstScissor,
- uint32_t scissorCount,
- const VkRect2D* pScissors)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETSCISSORSTATE, "vkCmdSetScissor()");
- pCB->status |= CBSTATUS_SCISSOR_SET;
- pCB->scissors.resize(scissorCount);
- memcpy(pCB->scissors.data(), pScissors, scissorCount * sizeof(VkRect2D));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETLINEWIDTHSTATE, "vkCmdSetLineWidth()");
- pCB->status |= CBSTATUS_LINE_WIDTH_SET;
- pCB->lineWidth = lineWidth;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetLineWidth(commandBuffer, lineWidth);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(
- VkCommandBuffer commandBuffer,
- float depthBiasConstantFactor,
- float depthBiasClamp,
- float depthBiasSlopeFactor)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETDEPTHBIASSTATE, "vkCmdSetDepthBias()");
- pCB->status |= CBSTATUS_DEPTH_BIAS_SET;
- pCB->depthBiasConstantFactor = depthBiasConstantFactor;
- pCB->depthBiasClamp = depthBiasClamp;
- pCB->depthBiasSlopeFactor = depthBiasSlopeFactor;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4])
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETBLENDSTATE, "vkCmdSetBlendConstants()");
- pCB->status |= CBSTATUS_BLEND_SET;
- memcpy(pCB->blendConstants, blendConstants, 4 * sizeof(float));
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetBlendConstants(commandBuffer, blendConstants);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(
- VkCommandBuffer commandBuffer,
- float minDepthBounds,
- float maxDepthBounds)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETDEPTHBOUNDSSTATE, "vkCmdSetDepthBounds()");
- pCB->status |= CBSTATUS_DEPTH_BOUNDS_SET;
- pCB->minDepthBounds = minDepthBounds;
- pCB->maxDepthBounds = maxDepthBounds;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(
- VkCommandBuffer commandBuffer,
- VkStencilFaceFlags faceMask,
- uint32_t compareMask)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREADMASKSTATE, "vkCmdSetStencilCompareMask()");
- if (faceMask & VK_STENCIL_FACE_FRONT_BIT) {
- pCB->front.compareMask = compareMask;
- }
- if (faceMask & VK_STENCIL_FACE_BACK_BIT) {
- pCB->back.compareMask = compareMask;
- }
- /* TODO: Do we need to track front and back separately? */
- /* TODO: We aren't capturing the faceMask, do we need to? */
- pCB->status |= CBSTATUS_STENCIL_READ_MASK_SET;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(
- VkCommandBuffer commandBuffer,
- VkStencilFaceFlags faceMask,
- uint32_t writeMask)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILWRITEMASKSTATE, "vkCmdSetStencilWriteMask()");
- if (faceMask & VK_STENCIL_FACE_FRONT_BIT) {
- pCB->front.writeMask = writeMask;
- }
- if (faceMask & VK_STENCIL_FACE_BACK_BIT) {
- pCB->back.writeMask = writeMask;
- }
- pCB->status |= CBSTATUS_STENCIL_WRITE_MASK_SET;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(
- VkCommandBuffer commandBuffer,
- VkStencilFaceFlags faceMask,
- uint32_t reference)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREFERENCESTATE, "vkCmdSetStencilReference()");
- if (faceMask & VK_STENCIL_FACE_FRONT_BIT) {
- pCB->front.reference = reference;
- }
- if (faceMask & VK_STENCIL_FACE_BACK_BIT) {
- pCB->back.reference = reference;
- }
- pCB->status |= CBSTATUS_STENCIL_REFERENCE_SET;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetStencilReference(commandBuffer, faceMask, reference);
-}
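The three stencil handlers above share one pattern: the faceMask selects which of the front/back state copies receive the new value, and both bits may be set at once. A small sketch of that bookkeeping, with stand-in constants for VK_STENCIL_FACE_FRONT_BIT and VK_STENCIL_FACE_BACK_BIT and a reduced state struct:

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for VK_STENCIL_FACE_FRONT_BIT / VK_STENCIL_FACE_BACK_BIT.
constexpr uint32_t FACE_FRONT = 0x1;
constexpr uint32_t FACE_BACK  = 0x2;

struct StencilState { uint32_t reference = 0; };

// Apply a reference value to whichever faces the mask selects, matching
// the front/back updates in vkCmdSetStencilReference() above.
void SetReference(uint32_t faceMask, uint32_t ref,
                  StencilState& front, StencilState& back) {
    if (faceMask & FACE_FRONT) front.reference = ref;
    if (faceMask & FACE_BACK)  back.reference = ref;
}
```

The same shape covers compareMask and writeMask; only the member written differs.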
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t setCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- if (pCB->state == CB_RECORDING) {
- if ((VK_PIPELINE_BIND_POINT_COMPUTE == pipelineBindPoint) && (pCB->activeRenderPass)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
- "Incorrectly binding compute DescriptorSets during active RenderPass (%#" PRIxLEAST64 ")", (uint64_t) pCB->activeRenderPass);
- } else if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdBindDescriptorSets");
- }
- if (VK_FALSE == skipCall) {
- // Track total count of dynamic descriptor types to make sure we have an offset for each one
- uint32_t totalDynamicDescriptors = 0;
- string errorString = "";
- uint32_t lastSetIndex = firstSet+setCount-1;
- if (lastSetIndex >= pCB->boundDescriptorSets.size())
- pCB->boundDescriptorSets.resize(lastSetIndex+1);
- VkDescriptorSet oldFinalBoundSet = pCB->boundDescriptorSets[lastSetIndex];
- for (uint32_t i=0; i<setCount; i++) {
- SET_NODE* pSet = getSetNode(dev_data, pDescriptorSets[i]);
- if (pSet) {
- pCB->uniqueBoundSets.insert(pDescriptorSets[i]);
- pSet->boundCmdBuffers.insert(commandBuffer);
- pCB->lastBoundDescriptorSet = pDescriptorSets[i];
- pCB->lastBoundPipelineLayout = layout;
- pCB->boundDescriptorSets[i+firstSet] = pDescriptorSets[i];
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS",
- "DS %#" PRIxLEAST64 " bound on pipeline %s", (uint64_t) pDescriptorSets[i], string_VkPipelineBindPoint(pipelineBindPoint));
- if (!pSet->pUpdateStructs && (pSet->descriptorCount != 0)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS",
- "DS %#" PRIxLEAST64 " bound but it was never updated. You may want to either update it or not bind it.", (uint64_t) pDescriptorSets[i]);
- }
- // Verify that set being bound is compatible with overlapping setLayout of pipelineLayout
- if (!verify_set_layout_compatibility(dev_data, pSet, layout, i+firstSet, errorString)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS",
- "descriptorSet #%u being bound is not compatible with overlapping layout in pipelineLayout due to: %s", i, errorString.c_str());
- }
- if (pSet->pLayout->dynamicDescriptorCount) {
- // First make sure we won't overstep bounds of pDynamicOffsets array
- if ((totalDynamicDescriptors + pSet->pLayout->dynamicDescriptorCount) > dynamicOffsetCount) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS",
- "descriptorSet #%u (%#" PRIxLEAST64 ") requires %u dynamicOffsets, but only %u dynamicOffsets are left in pDynamicOffsets array. There must be one dynamic offset for each dynamic descriptor being bound.",
- i, (uint64_t) pDescriptorSets[i], pSet->pLayout->dynamicDescriptorCount, (dynamicOffsetCount - totalDynamicDescriptors));
- } else { // Validate and store dynamic offsets with the set
- // Validate Dynamic Offset Minimums
- uint32_t cur_dyn_offset = totalDynamicDescriptors;
- for (uint32_t d = 0; d < pSet->descriptorCount; d++) {
- if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) {
- if (vk_safe_modulo(pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment) != 0) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
- __LINE__, DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, "DS",
- "vkCmdBindDescriptorSets(): pDynamicOffsets[%u] is %u but must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
- cur_dyn_offset, pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment);
- }
- cur_dyn_offset++;
- } else if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
- if (vk_safe_modulo(pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment) != 0) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0,
- __LINE__, DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, "DS",
- "vkCmdBindDescriptorSets(): pDynamicOffsets[%u] is %u but must be a multiple of device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64,
- cur_dyn_offset, pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment);
- }
- cur_dyn_offset++;
- }
- }
- // Keep running total of dynamic descriptor count to verify at the end
- totalDynamicDescriptors += pSet->pLayout->dynamicDescriptorCount;
- }
- }
- } else {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pDescriptorSets[i], __LINE__, DRAWSTATE_INVALID_SET, "DS",
- "Attempt to bind DS %#" PRIxLEAST64 " that doesn't exist!", (uint64_t) pDescriptorSets[i]);
- }
- }
- skipCall |= addCmd(dev_data, pCB, CMD_BINDDESCRIPTORSETS, "vkCmdBindDescriptorSets()");
- // For any previously bound sets, need to set them to "invalid" if they were disturbed by this update
- if (firstSet > 0) { // Check set #s below the first bound set
- for (uint32_t i=0; i<firstSet; ++i) {
- if (pCB->boundDescriptorSets[i] && !verify_set_layout_compatibility(dev_data, dev_data->setMap[pCB->boundDescriptorSets[i]], layout, i, errorString)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) pCB->boundDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS",
- "DescriptorSetDS %#" PRIxLEAST64 " previously bound as set #%u was disturbed by newly bound pipelineLayout (%#" PRIxLEAST64 ")", (uint64_t) pCB->boundDescriptorSets[i], i, (uint64_t) layout);
- pCB->boundDescriptorSets[i] = VK_NULL_HANDLE;
- }
- }
- }
- // Check if newly last bound set invalidates any remaining bound sets
- if ((pCB->boundDescriptorSets.size()-1) > (lastSetIndex)) {
- if (oldFinalBoundSet && !verify_set_layout_compatibility(dev_data, dev_data->setMap[oldFinalBoundSet], layout, lastSetIndex, errorString)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t) oldFinalBoundSet, __LINE__, DRAWSTATE_NONE, "DS",
- "DescriptorSetDS %#" PRIxLEAST64 " previously bound as set #%u is incompatible with set %#" PRIxLEAST64 " newly bound as set #%u so set #%u and any subsequent sets were disturbed by newly bound pipelineLayout (%#" PRIxLEAST64 ")", (uint64_t) oldFinalBoundSet, lastSetIndex, (uint64_t) pCB->boundDescriptorSets[lastSetIndex], lastSetIndex, lastSetIndex+1, (uint64_t) layout);
- pCB->boundDescriptorSets.resize(lastSetIndex+1);
- }
- }
- // dynamicOffsetCount must equal the total number of dynamic descriptors in the sets being bound
- if (totalDynamicDescriptors != dynamicOffsetCount) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t) commandBuffer, __LINE__, DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS",
- "Attempting to bind %u descriptorSets with %u dynamic descriptors, but dynamicOffsetCount is %u. It should exactly match the number of dynamic descriptors.", setCount, totalDynamicDescriptors, dynamicOffsetCount);
- }
- if (dynamicOffsetCount) {
- // Save dynamicOffsets bound to this CB
- pCB->dynamicOffsets.assign(pDynamicOffsets, pDynamicOffsets + dynamicOffsetCount);
- }
- }
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindDescriptorSets()");
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, setCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
-}
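The dynamic-offset validation inside vkCmdBindDescriptorSets() is a modulo test against the relevant min*BufferOffsetAlignment device limit, guarded so that a zero divisor cannot fault (the role vk_safe_modulo plays above). A minimal sketch of that check under those assumptions:

```cpp
#include <cassert>
#include <cstdint>

// Division-safe modulo in the spirit of vk_safe_modulo: a zero alignment
// (an unset or invalid device limit) is treated as "no restriction".
uint64_t SafeModulo(uint64_t value, uint64_t alignment) {
    return alignment ? value % alignment : 0;
}

// A dynamic offset is valid when it is a multiple of the device limit
// (minUniformBufferOffsetAlignment or minStorageBufferOffsetAlignment,
// depending on the descriptor type).
bool DynamicOffsetValid(uint32_t offset, uint64_t minAlignment) {
    return SafeModulo(offset, minAlignment) == 0;
}
```

The layer walks each set's descriptors in order, consuming one entry of pDynamicOffsets per dynamic descriptor, so the offsets must be supplied in binding order.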
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_BINDINDEXBUFFER, "vkCmdBindIndexBuffer()");
- VkDeviceSize offset_align = 0;
- switch (indexType) {
- case VK_INDEX_TYPE_UINT16:
- offset_align = 2;
- break;
- case VK_INDEX_TYPE_UINT32:
- offset_align = 4;
- break;
- default:
- // ParamChecker should catch bad enum, we'll also throw alignment error below if offset_align stays 0
- break;
- }
- if (!offset_align || (offset % offset_align)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR, "DS",
- "vkCmdBindIndexBuffer() offset (%#" PRIxLEAST64 ") does not fall on alignment (%s) boundary.", offset, string_VkIndexType(indexType));
- }
- pCB->status |= CBSTATUS_INDEX_BUFFER_BOUND;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
-}
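The index-buffer check above derives a required alignment from the index type (2 bytes for UINT16, 4 for UINT32) and flags any offset that is not a multiple of it; an unknown enum yields alignment 0, which the layer also reports as an alignment error. A standalone sketch with a stand-in enum for VkIndexType:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for VkIndexType.
enum IndexType { INDEX_UINT16, INDEX_UINT32 };

// Required offset alignment per index type, as in the switch above.
uint64_t IndexOffsetAlignment(IndexType t) {
    switch (t) {
    case INDEX_UINT16: return 2;
    case INDEX_UINT32: return 4;
    }
    return 0; // bad enum: caller reports an alignment error, as the layer does
}

// Valid only when the alignment is known and the offset is a multiple of it.
bool IndexOffsetValid(uint64_t offset, IndexType t) {
    uint64_t align = IndexOffsetAlignment(t);
    return align != 0 && (offset % align) == 0;
}
```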
-
-void updateResourceTracking(GLOBAL_CB_NODE* pCB, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers) {
- uint32_t end = firstBinding + bindingCount;
- if (pCB->currentDrawData.buffers.size() < end) {
- pCB->currentDrawData.buffers.resize(end);
- }
- for (uint32_t i = 0; i < bindingCount; ++i) {
- pCB->currentDrawData.buffers[i + firstBinding] = pBuffers[i];
- }
-}
-
-void updateResourceTrackingOnDraw(GLOBAL_CB_NODE* pCB) {
- pCB->drawData.push_back(pCB->currentDrawData);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(
- VkCommandBuffer commandBuffer,
- uint32_t firstBinding,
- uint32_t bindingCount,
- const VkBuffer *pBuffers,
- const VkDeviceSize *pOffsets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_BINDVERTEXBUFFER, "vkCmdBindVertexBuffers()");
- updateResourceTracking(pCB, firstBinding, bindingCount, pBuffers);
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindVertexBuffers()");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DRAW, "vkCmdDraw()");
- pCB->drawCount[DRAW]++;
- skipCall |= validate_draw_state(dev_data, pCB, VK_FALSE);
- // TODO : Need to pass commandBuffer as srcObj here
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "vkCmdDraw() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW]++);
- skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
- if (VK_FALSE == skipCall) {
- updateResourceTrackingOnDraw(pCB);
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDraw");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- VkBool32 skipCall = VK_FALSE;
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXED, "vkCmdDrawIndexed()");
- pCB->drawCount[DRAW_INDEXED]++;
- skipCall |= validate_draw_state(dev_data, pCB, VK_TRUE);
- // TODO : Need to pass commandBuffer as srcObj here
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "vkCmdDrawIndexed() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDEXED]++);
- skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
- if (VK_FALSE == skipCall) {
- updateResourceTrackingOnDraw(pCB);
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexed");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- VkBool32 skipCall = VK_FALSE;
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDIRECT, "vkCmdDrawIndirect()");
- pCB->drawCount[DRAW_INDIRECT]++;
- skipCall |= validate_draw_state(dev_data, pCB, VK_FALSE);
- // TODO : Need to pass commandBuffer as srcObj here
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "vkCmdDrawIndirect() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
- skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
- if (VK_FALSE == skipCall) {
- updateResourceTrackingOnDraw(pCB);
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndirect");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXEDINDIRECT, "vkCmdDrawIndexedIndirect()");
- pCB->drawCount[DRAW_INDEXED_INDIRECT]++;
- loader_platform_thread_unlock_mutex(&globalLock);
- skipCall |= validate_draw_state(dev_data, pCB, VK_TRUE);
- loader_platform_thread_lock_mutex(&globalLock);
- // TODO : Need to pass commandBuffer as srcObj here
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
- "vkCmdDrawIndexedIndirect() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++);
- skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
- if (VK_FALSE == skipCall) {
- updateResourceTrackingOnDraw(pCB);
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexedIndirect");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DISPATCH, "vkCmdDispatch()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatch");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDispatch(commandBuffer, x, y, z);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DISPATCHINDIRECT, "vkCmdDispatchIndirect()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatchIndirect");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdDispatchIndirect(commandBuffer, buffer, offset);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFER, "vkCmdCopyBuffer()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBuffer");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
-}
-
-VkBool32 VerifySourceImageLayout(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageSubresourceLayers subLayers, VkImageLayout srcImageLayout) {
- VkBool32 skip_call = VK_FALSE;
-
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
- uint32_t layer = i + subLayers.baseArrayLayer;
- VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
- IMAGE_CMD_BUF_NODE node;
- if (!FindLayout(pCB, srcImage, sub, node)) {
- SetLayout(pCB, srcImage, sub, {srcImageLayout, srcImageLayout});
- continue;
- }
- if (node.layout != srcImageLayout) {
- // TODO: Improve log message in the next pass
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
- __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Cannot copy from an image whose source layout is %s "
- "when it does not match the current layout %s.",
- string_VkImageLayout(srcImageLayout),
- string_VkImageLayout(node.layout));
- }
- }
- if (srcImageLayout != VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL) {
- if (srcImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
- // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for input image should be TRANSFER_SRC_OPTIMAL instead of GENERAL.");
- } else {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for input image is %s but can only be "
- "TRANSFER_SRC_OPTIMAL or GENERAL.",
- string_VkImageLayout(srcImageLayout));
- }
- }
- return skip_call;
-}
-
-VkBool32 VerifyDestImageLayout(VkCommandBuffer cmdBuffer, VkImage destImage, VkImageSubresourceLayers subLayers, VkImageLayout destImageLayout) {
- VkBool32 skip_call = VK_FALSE;
-
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
- uint32_t layer = i + subLayers.baseArrayLayer;
- VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
- IMAGE_CMD_BUF_NODE node;
- if (!FindLayout(pCB, destImage, sub, node)) {
- SetLayout(pCB, destImage, sub, {destImageLayout, destImageLayout});
- continue;
- }
- if (node.layout != destImageLayout) {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
- __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
-                    "Cannot copy to an image whose dest layout is %s "
-                    "when the current layout is %s.",
- string_VkImageLayout(destImageLayout),
- string_VkImageLayout(node.layout));
- }
- }
- if (destImageLayout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
- if (destImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
- // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for output image should be TRANSFER_DST_OPTIMAL instead of GENERAL.");
- } else {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for output image is %s but can only be "
- "TRANSFER_DST_OPTIMAL or GENERAL.",
- string_VkImageLayout(destImageLayout));
- }
- }
- return skip_call;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount, const VkImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGE, "vkCmdCopyImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImage");
- for (uint32_t i = 0; i < regionCount; ++i) {
- skipCall |= VerifySourceImageLayout(commandBuffer, srcImage, pRegions[i].srcSubresource, srcImageLayout);
- skipCall |= VerifyDestImageLayout(commandBuffer, dstImage, pRegions[i].dstSubresource, dstImageLayout);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(VkCommandBuffer commandBuffer,
- VkImage srcImage, VkImageLayout srcImageLayout,
- VkImage dstImage, VkImageLayout dstImageLayout,
- uint32_t regionCount, const VkImageBlit* pRegions,
- VkFilter filter)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_BLITIMAGE, "vkCmdBlitImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBlitImage");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkImage dstImage, VkImageLayout dstImageLayout,
- uint32_t regionCount, const VkBufferImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFERTOIMAGE, "vkCmdCopyBufferToImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBufferToImage");
- for (uint32_t i = 0; i < regionCount; ++i) {
-            skipCall |= VerifyDestImageLayout(commandBuffer, dstImage,
-                                              pRegions[i].imageSubresource,
-                                              dstImageLayout);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer,
- VkImage srcImage, VkImageLayout srcImageLayout,
- VkBuffer dstBuffer,
- uint32_t regionCount, const VkBufferImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGETOBUFFER, "vkCmdCopyImageToBuffer()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImageToBuffer");
- for (uint32_t i = 0; i < regionCount; ++i) {
- skipCall |= VerifySourceImageLayout(commandBuffer, srcImage,
- pRegions[i].imageSubresource,
- srcImageLayout);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_UPDATEBUFFER, "vkCmdUpdateBuffer()");
-        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdUpdateBuffer");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_FILLBUFFER, "vkCmdFillBuffer()");
-        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdFillBuffer");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(
- VkCommandBuffer commandBuffer,
- uint32_t attachmentCount,
- const VkClearAttachment* pAttachments,
- uint32_t rectCount,
- const VkClearRect* pRects)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_CLEARATTACHMENTS, "vkCmdClearAttachments()");
- // Warn if this is issued prior to Draw Cmd and clearing the entire attachment
-        if (rectCount && !hasDrawCmd(pCB) &&
- (pCB->activeRenderPassBeginInfo.renderArea.extent.width == pRects[0].rect.extent.width) &&
- (pCB->activeRenderPassBeginInfo.renderArea.extent.height == pRects[0].rect.extent.height)) {
- // TODO : commandBuffer should be srcObj
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_CLEAR_CMD_BEFORE_DRAW, "DS",
- "vkCmdClearAttachments() issued on CB object 0x%" PRIxLEAST64 " prior to any Draw Cmds."
- " It is recommended you use RenderPass LOAD_OP_CLEAR on Attachments prior to any Draw.", (uint64_t)(commandBuffer));
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdClearAttachments");
- }
-
- // Validate that attachment is in reference list of active subpass
-    if (pCB && pCB->activeRenderPass) {
- const VkRenderPassCreateInfo *pRPCI = dev_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo;
- const VkSubpassDescription *pSD = &pRPCI->pSubpasses[pCB->activeSubpass];
-
- for (uint32_t attachment_idx = 0; attachment_idx < attachmentCount; attachment_idx++) {
- const VkClearAttachment *attachment = &pAttachments[attachment_idx];
- if (attachment->aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) {
- VkBool32 found = VK_FALSE;
- for (uint32_t i = 0; i < pSD->colorAttachmentCount; i++) {
- if (attachment->colorAttachment == pSD->pColorAttachments[i].attachment) {
- found = VK_TRUE;
- break;
- }
- }
- if (VK_FALSE == found) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS",
- "vkCmdClearAttachments() attachment index %d not found in attachment reference array of active subpass %d",
- attachment->colorAttachment, pCB->activeSubpass);
- }
- } else if (attachment->aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) {
- if (!pSD->pDepthStencilAttachment || // Says no DS will be used in active subpass
- (pSD->pDepthStencilAttachment->attachment == VK_ATTACHMENT_UNUSED)) { // Says no DS will be used in active subpass
-
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS",
- "vkCmdClearAttachments() attachment index %d does not match depthStencilAttachment.attachment (%d) found in active subpass %d",
- attachment->colorAttachment,
- (pSD->pDepthStencilAttachment) ? pSD->pDepthStencilAttachment->attachment : VK_ATTACHMENT_UNUSED,
- pCB->activeSubpass);
- }
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- VkImage image, VkImageLayout imageLayout,
- const VkClearColorValue *pColor,
- uint32_t rangeCount, const VkImageSubresourceRange* pRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_CLEARCOLORIMAGE, "vkCmdClearColorImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearColorImage");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- VkImage image, VkImageLayout imageLayout,
- const VkClearDepthStencilValue *pDepthStencil,
- uint32_t rangeCount,
- const VkImageSubresourceRange* pRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_CLEARDEPTHSTENCILIMAGE, "vkCmdClearDepthStencilImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearDepthStencilImage");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(VkCommandBuffer commandBuffer,
- VkImage srcImage, VkImageLayout srcImageLayout,
- VkImage dstImage, VkImageLayout dstImageLayout,
- uint32_t regionCount, const VkImageResolve* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_RESOLVEIMAGE, "vkCmdResolveImage()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResolveImage");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_SETEVENT, "vkCmdSetEvent()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdSetEvent");
- pCB->events.push_back(event);
- pCB->eventToStageMap[event] = stageMask;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdSetEvent(commandBuffer, event, stageMask);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_RESETEVENT, "vkCmdResetEvent()");
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetEvent");
- pCB->events.push_back(event);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdResetEvent(commandBuffer, event, stageMask);
-}
-
-VkBool32 TransitionImageLayouts(VkCommandBuffer cmdBuffer, uint32_t memBarrierCount, const VkImageMemoryBarrier* pImgMemBarriers) {
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- VkBool32 skip = VK_FALSE;
-
- for (uint32_t i = 0; i < memBarrierCount; ++i) {
- auto mem_barrier = &pImgMemBarriers[i];
- if (!mem_barrier)
- continue;
- // TODO: Do not iterate over every possibility - consolidate where
- // possible
- for (uint32_t j = 0; j < mem_barrier->subresourceRange.levelCount;
- j++) {
- uint32_t level = mem_barrier->subresourceRange.baseMipLevel + j;
- for (uint32_t k = 0; k < mem_barrier->subresourceRange.layerCount;
- k++) {
- uint32_t layer =
- mem_barrier->subresourceRange.baseArrayLayer + k;
- VkImageSubresource sub = {
- mem_barrier->subresourceRange.aspectMask, level, layer};
- IMAGE_CMD_BUF_NODE node;
- if (!FindLayout(pCB, mem_barrier->image, sub, node)) {
- SetLayout(pCB, mem_barrier->image, sub,
- {mem_barrier->oldLayout, mem_barrier->newLayout});
- continue;
- }
- if (mem_barrier->oldLayout == VK_IMAGE_LAYOUT_UNDEFINED) {
- // TODO: Set memory invalid which is in mem_tracker currently
- }
- else if (node.layout != mem_barrier->oldLayout) {
- skip |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
-                        "Cannot transition the image layout from %s "
-                        "when the current layout is %s.",
- string_VkImageLayout(mem_barrier->oldLayout),
- string_VkImageLayout(node.layout));
- }
- SetLayout(pCB, mem_barrier->image, sub, mem_barrier->newLayout);
- }
- }
- }
- return skip;
-}
-
-// Print readable FlagBits in FlagMask
-std::string string_VkAccessFlags(VkAccessFlags accessMask)
-{
- std::string result;
- std::string separator;
-
- if (accessMask == 0) {
- result = "[None]";
- } else {
- result = "[";
- for (auto i = 0; i < 32; i++) {
- if (accessMask & (1 << i)) {
- result = result + separator + string_VkAccessFlagBits((VkAccessFlagBits)(1 << i));
- separator = " | ";
- }
- }
- result = result + "]";
- }
- return result;
-}
-
-// AccessFlags MUST have 'required_bit' set, and may have one or more of 'optional_bits' set.
-// If required_bit is zero, accessMask must have at least one of 'optional_bits' set
-// TODO: Add tracking to ensure that at least one barrier has been set for these layout transitions
-VkBool32 ValidateMaskBits(const layer_data* my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags& accessMask, const VkImageLayout& layout,
- VkAccessFlags required_bit, VkAccessFlags optional_bits, const char* type) {
- VkBool32 skip_call = VK_FALSE;
-
- if ((accessMask & required_bit) || (!required_bit && (accessMask & optional_bits))) {
-        if (accessMask & ~(required_bit | optional_bits)) {
- // TODO: Verify against Valid Use
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
- "Additional bits in %s accessMask %d %s are specified when layout is %s.",
- type, accessMask, string_VkAccessFlags(accessMask).c_str(), string_VkImageLayout(layout));
- }
- } else {
- if (!required_bit) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
- "%s AccessMask %d %s must contain at least one of access bits %d %s when layout is %s, unless the app has previously added a barrier for this transition.",
- type, accessMask, string_VkAccessFlags(accessMask).c_str(), optional_bits,
- string_VkAccessFlags(optional_bits).c_str(), string_VkImageLayout(layout));
- } else {
- std::string opt_bits;
- if (optional_bits != 0) {
- std::stringstream ss;
- ss << optional_bits;
- opt_bits = "and may have optional bits " + ss.str() + ' ' + string_VkAccessFlags(optional_bits);
- }
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
- "%s AccessMask %d %s must have required access bit %d %s %s when layout is %s, unless the app has previously added a barrier for this transition.",
- type, accessMask, string_VkAccessFlags(accessMask).c_str(),
- required_bit, string_VkAccessFlags(required_bit).c_str(),
- opt_bits.c_str(), string_VkImageLayout(layout));
- }
- }
- return skip_call;
-}
-
-VkBool32 ValidateMaskBitsFromLayouts(const layer_data* my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags& accessMask, const VkImageLayout& layout, const char* type) {
- VkBool32 skip_call = VK_FALSE;
- switch (layout) {
- case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, VK_ACCESS_COLOR_ATTACHMENT_READ_BIT, type);
- break;
- }
- case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT, type);
- break;
- }
- case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_WRITE_BIT, 0, type);
- break;
- }
- case VK_IMAGE_LAYOUT_PREINITIALIZED: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_HOST_WRITE_BIT, 0, type);
- break;
- }
- case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type);
- break;
- }
- case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0, VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type);
- break;
- }
- case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL: {
- skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_READ_BIT, 0, type);
- break;
- }
- case VK_IMAGE_LAYOUT_UNDEFINED: {
- if (accessMask != 0) {
- // TODO: Verify against Valid Use section spec
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
- "Additional bits in %s accessMask %d %s are specified when layout is %s.", type, accessMask, string_VkAccessFlags(accessMask).c_str(),
- string_VkImageLayout(layout));
- }
- break;
- }
- case VK_IMAGE_LAYOUT_GENERAL:
- default: {
- break;
- }
- }
- return skip_call;
-}
-
-VkBool32 ValidateBarriers(VkCommandBuffer cmdBuffer, uint32_t memBarrierCount,
- const VkMemoryBarrier *pMemBarriers,
- uint32_t bufferBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemBarriers,
- uint32_t imageMemBarrierCount,
- const VkImageMemoryBarrier *pImageMemBarriers) {
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- if (pCB->activeRenderPass && memBarrierCount) {
- if (!dev_data->renderPassMap[pCB->activeRenderPass]->hasSelfDependency[pCB->activeSubpass]) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS",
- "Barriers cannot be set during subpass %d with no self dependency specified.", pCB->activeSubpass);
- }
- }
- for (uint32_t i = 0; i < imageMemBarrierCount; ++i) {
- auto mem_barrier = &pImageMemBarriers[i];
- if (pCB->activeRenderPass) {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_BARRIER, "DS",
- "Image Barriers cannot be used during a render pass.");
- }
- if (mem_barrier && mem_barrier->sType == VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER) {
- skip_call |= ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->srcAccessMask, mem_barrier->oldLayout, "Source");
- skip_call |= ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->dstAccessMask, mem_barrier->newLayout, "Dest");
- }
- }
- for (uint32_t i = 0; i < bufferBarrierCount; ++i) {
- auto mem_barrier = &pBufferMemBarriers[i];
- if (pCB->activeRenderPass) {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_BARRIER, "DS",
- "Buffer Barriers cannot be used during a render pass.");
- }
- if (!mem_barrier)
- continue;
-        auto buffer_data = dev_data->bufferMap.find(mem_barrier->buffer);
-        // Skip unknown buffers rather than dereferencing an end() iterator
-        if (buffer_data == dev_data->bufferMap.end())
-            continue;
-        uint64_t buffer_size = buffer_data->second.create_info
-                                   ? reinterpret_cast<uint64_t &>(
-                                         buffer_data->second.create_info->size)
-                                   : 0;
-        if (mem_barrier->offset + mem_barrier->size > buffer_size) {
- skip_call |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_BARRIER, "DS",
- "Buffer Barrier 0x%" PRIx64 " has offset %" PRIu64
- " and size %" PRIu64
- " whose sum is greater than total size %" PRIu64 ".",
- reinterpret_cast<const uint64_t &>(mem_barrier->buffer),
- reinterpret_cast<const uint64_t &>(mem_barrier->offset),
- reinterpret_cast<const uint64_t &>(mem_barrier->size),
- buffer_size);
- }
- }
- return skip_call;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(
- VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents,
- VkPipelineStageFlags sourceStageMask, VkPipelineStageFlags dstStageMask,
- uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- VkPipelineStageFlags stageMask = 0;
- for (uint32_t i = 0; i < eventCount; ++i) {
- pCB->waitedEvents.push_back(pEvents[i]);
- pCB->events.push_back(pEvents[i]);
- auto event_data = pCB->eventToStageMap.find(pEvents[i]);
- if (event_data != pCB->eventToStageMap.end()) {
- stageMask |= event_data->second;
- } else {
- auto global_event_data = dev_data->eventMap.find(pEvents[i]);
- if (global_event_data == dev_data->eventMap.end()) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT,
- reinterpret_cast<const uint64_t &>(pEvents[i]),
- __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
-                        "Event 0x%" PRIx64
- " cannot be waited on if it has never been set.",
- reinterpret_cast<const uint64_t &>(pEvents[i]));
- } else {
- stageMask |= global_event_data->second.stageMask;
- }
- }
- }
- if (sourceStageMask != stageMask) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_FENCE, "DS",
- "srcStageMask in vkCmdWaitEvents must be the bitwise OR of the "
- "stageMask parameters used in calls to vkCmdSetEvent and "
- "VK_PIPELINE_STAGE_HOST_BIT if used with vkSetEvent.");
- }
- if (pCB->state == CB_RECORDING) {
- skipCall |= addCmd(dev_data, pCB, CMD_WAITEVENTS, "vkCmdWaitEvents()");
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWaitEvents()");
- }
- skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers);
- skipCall |=
- ValidateBarriers(commandBuffer, memoryBarrierCount, pMemoryBarriers,
- bufferMemoryBarrierCount, pBufferMemoryBarriers,
- imageMemoryBarrierCount, pImageMemoryBarriers);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdWaitEvents(commandBuffer, eventCount, pEvents, sourceStageMask, dstStageMask,
- memoryBarrierCount, pMemoryBarriers,
- bufferMemoryBarrierCount, pBufferMemoryBarriers,
- imageMemoryBarrierCount, pImageMemoryBarriers);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(
- VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags,
- uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_PIPELINEBARRIER, "vkCmdPipelineBarrier()");
- skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers);
- skipCall |=
- ValidateBarriers(commandBuffer, memoryBarrierCount, pMemoryBarriers,
- bufferMemoryBarrierCount, pBufferMemoryBarriers,
- imageMemoryBarrierCount, pImageMemoryBarriers);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags,
- memoryBarrierCount, pMemoryBarriers,
- bufferMemoryBarrierCount, pBufferMemoryBarriers,
- imageMemoryBarrierCount, pImageMemoryBarriers);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot, VkFlags flags)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- QueryObject query = {queryPool, slot};
- pCB->activeQueries.insert(query);
- if (!pCB->startedQueries.count(query)) {
- pCB->startedQueries.insert(query);
- }
- skipCall |= addCmd(dev_data, pCB, CMD_BEGINQUERY, "vkCmdBeginQuery()");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- QueryObject query = {queryPool, slot};
- if (!pCB->activeQueries.count(query)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Ending a query before it was started: queryPool %" PRIu64 ", index %d", (uint64_t)(queryPool), slot);
- } else {
- pCB->activeQueries.erase(query);
- }
- pCB->queryToStateMap[query] = 1;
- if (pCB->state == CB_RECORDING) {
-            skipCall |= addCmd(dev_data, pCB, CMD_ENDQUERY, "vkCmdEndQuery()");
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdEndQuery()");
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdEndQuery(commandBuffer, queryPool, slot);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- for (uint32_t i = 0; i < queryCount; i++) {
- QueryObject query = {queryPool, firstQuery + i};
- pCB->waitedEventsBeforeQueryReset[query] = pCB->waitedEvents;
- pCB->queryToStateMap[query] = 0;
- }
- if (pCB->state == CB_RECORDING) {
- skipCall |= addCmd(dev_data, pCB, CMD_RESETQUERYPOOL, "vkCmdResetQueryPool()");
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdResetQueryPool()");
- }
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetQueryPool");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery,
- uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset,
- VkDeviceSize stride, VkQueryResultFlags flags)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- for (uint32_t i = 0; i < queryCount; i++) {
- QueryObject query = {queryPool, firstQuery + i};
- if (!pCB->queryToStateMap[query]) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
- "Requesting a copy from query to buffer with invalid query: queryPool %" PRIu64 ", index %d", (uint64_t)(queryPool), firstQuery + i);
- }
- }
- if (pCB->state == CB_RECORDING) {
- skipCall |= addCmd(dev_data, pCB, CMD_COPYQUERYPOOLRESULTS, "vkCmdCopyQueryPoolResults()");
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdCopyQueryPoolResults()");
- }
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyQueryPoolResults");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdCopyQueryPoolResults(commandBuffer, queryPool,
- firstQuery, queryCount, dstBuffer, dstOffset, stride, flags);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- QueryObject query = {queryPool, slot};
- pCB->queryToStateMap[query] = 1;
- if (pCB->state == CB_RECORDING) {
- skipCall |= addCmd(dev_data, pCB, CMD_WRITETIMESTAMP, "vkCmdWriteTimestamp()");
- } else {
- skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWriteTimestamp()");
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
- if (VK_SUCCESS == result) {
- // Shadow create info and store in map
- VkFramebufferCreateInfo* localFBCI = new VkFramebufferCreateInfo(*pCreateInfo);
- if (pCreateInfo->pAttachments) {
- localFBCI->pAttachments = new VkImageView[localFBCI->attachmentCount];
- memcpy((void*)localFBCI->pAttachments, pCreateInfo->pAttachments, localFBCI->attachmentCount*sizeof(VkImageView));
- }
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->frameBufferMap[*pFramebuffer] = localFBCI;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-// Store the DAG.
-struct DAGNode {
- uint32_t pass;
- std::vector<uint32_t> prev;
- std::vector<uint32_t> next;
-};
-
-VkBool32 FindDependency(const int index, const int dependent, const std::vector<DAGNode>& subpass_to_node, std::unordered_set<uint32_t>& processed_nodes) {
- // If we have already checked this node we have not found a dependency path so return false.
- if (processed_nodes.count(index))
- return VK_FALSE;
- processed_nodes.insert(index);
- const DAGNode& node = subpass_to_node[index];
- // Look for a dependency path. If one exists return true else recurse on the previous nodes.
- if (std::find(node.prev.begin(), node.prev.end(), dependent) == node.prev.end()) {
- for (auto elem : node.prev) {
- if (FindDependency(elem, dependent, subpass_to_node, processed_nodes))
- return VK_TRUE;
- }
- } else {
- return VK_TRUE;
- }
- return VK_FALSE;
-}
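The backwards search in `FindDependency` can be sketched standalone: given each node's `prev` edges, walk the ancestors of `index` looking for `dependent`, memoizing visited nodes so dense graphs stay linear. This is a minimal illustrative sketch, not the layer's code; `Node` and `HasDependencyPath` are hypothetical names.

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_set>
#include <vector>

// Each node lists the subpasses it depends on (its `prev` edges).
struct Node { std::vector<int> prev; };

// Depth-first search backwards from `index` for a path to `dependent`.
// `visited` memoizes nodes already searched so each node is expanded once.
bool HasDependencyPath(int index, int dependent, const std::vector<Node>& graph,
                       std::unordered_set<int>& visited) {
    if (visited.count(index)) return false;  // already searched this branch
    visited.insert(index);
    const Node& node = graph[index];
    // Direct dependency found: done.
    if (std::find(node.prev.begin(), node.prev.end(), dependent) != node.prev.end())
        return true;
    // Otherwise recurse on the ancestors.
    for (int p : node.prev)
        if (HasDependencyPath(p, dependent, graph, visited)) return true;
    return false;
}
```

A chain 0 ← 1 ← 2 then yields a path from 2 back to 0 but none from 0 to 2, which is exactly the transitive check `CheckDependencyExists` relies on.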
-
-VkBool32 CheckDependencyExists(const layer_data* my_data, VkDevice device, const int subpass, const std::vector<uint32_t>& dependent_subpasses, const std::vector<DAGNode>& subpass_to_node, VkBool32& skip_call) {
- VkBool32 result = VK_TRUE;
- // Loop through all subpasses that share the same attachment and make sure a dependency exists
- for (uint32_t k = 0; k < dependent_subpasses.size(); ++k) {
- if (subpass == dependent_subpasses[k])
- continue;
- const DAGNode& node = subpass_to_node[subpass];
- // Check for a specified dependency between the two nodes. If one exists we are done.
- auto prev_elem = std::find(node.prev.begin(), node.prev.end(), dependent_subpasses[k]);
- auto next_elem = std::find(node.next.begin(), node.next.end(), dependent_subpasses[k]);
- if (prev_elem == node.prev.end() && next_elem == node.next.end()) {
- // No explicit dependency exists, but an implicit (transitive) one still might. If so, warn; if not, report an error.
- std::unordered_set<uint32_t> processed_nodes;
- if (FindDependency(subpass, dependent_subpasses[k], subpass_to_node, processed_nodes) ||
- FindDependency(dependent_subpasses[k], subpass, subpass_to_node, processed_nodes)) {
- // TODO: Verify against Valid Use section of spec
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "A dependency between subpasses %d and %d must exist but only an implicit one is specified.",
- subpass, dependent_subpasses[k]);
- } else {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "A dependency between subpasses %d and %d must exist but one is not specified.",
- subpass, dependent_subpasses[k]);
- result = VK_FALSE;
- }
- }
- }
- return result;
-}
-
-VkBool32 CheckPreserved(const layer_data* my_data, VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const int index, const uint32_t attachment, const std::vector<DAGNode>& subpass_to_node, int depth, VkBool32& skip_call) {
- const DAGNode& node = subpass_to_node[index];
- // If this node writes to the attachment return true as next nodes need to preserve the attachment.
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[index];
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- if (attachment == subpass.pColorAttachments[j].attachment)
- return VK_TRUE;
- }
- if (subpass.pDepthStencilAttachment &&
- subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
- if (attachment == subpass.pDepthStencilAttachment->attachment)
- return VK_TRUE;
- }
- VkBool32 result = VK_FALSE;
- // Loop through previous nodes and see if any of them write to the attachment.
- for (auto elem : node.prev) {
- result |= CheckPreserved(my_data, device, pCreateInfo, elem, attachment, subpass_to_node, depth + 1, skip_call);
- }
- // If the attachment was written to by a previous node then this node needs to preserve it.
- if (result && depth > 0) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[index];
- VkBool32 has_preserved = VK_FALSE;
- for (uint32_t j = 0; j < subpass.preserveAttachmentCount; ++j) {
- if (subpass.pPreserveAttachments[j] == attachment) {
- has_preserved = VK_TRUE;
- break;
- }
- }
- if (has_preserved == VK_FALSE) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "Attachment %d is used by a later subpass and must be preserved in subpass %d.", attachment, index);
- }
- }
- return result;
-}
-
-VkBool32 ValidateDependencies(const layer_data* my_data, VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const std::vector<DAGNode>& subpass_to_node) {
- VkBool32 skip_call = VK_FALSE;
- std::vector<std::vector<uint32_t>> output_attachment_to_subpass(pCreateInfo->attachmentCount);
- std::vector<std::vector<uint32_t>> input_attachment_to_subpass(pCreateInfo->attachmentCount);
- // For each attachment, find the subpasses that use it.
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[i];
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- input_attachment_to_subpass[subpass.pInputAttachments[j].attachment].push_back(i);
- }
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- output_attachment_to_subpass[subpass.pColorAttachments[j].attachment].push_back(i);
- }
- if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
- output_attachment_to_subpass[subpass.pDepthStencilAttachment->attachment].push_back(i);
- }
- }
- // If there is a dependency needed make sure one exists
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[i];
- // If the attachment is an input then all subpasses that output must have a dependency relationship
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- const uint32_t& attachment = subpass.pInputAttachments[j].attachment;
- CheckDependencyExists(my_data, device, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
- }
- // If the attachment is an output then all subpasses that use the attachment must have a dependency relationship
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- const uint32_t& attachment = subpass.pColorAttachments[j].attachment;
- CheckDependencyExists(my_data, device, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
- CheckDependencyExists(my_data, device, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
- }
- if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
- const uint32_t& attachment = subpass.pDepthStencilAttachment->attachment;
- CheckDependencyExists(my_data, device, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
- CheckDependencyExists(my_data, device, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
- }
- }
- // Loop through implicit dependencies, if this pass reads make sure the attachment is preserved for all passes after it was written.
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[i];
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- CheckPreserved(my_data, device, pCreateInfo, i, subpass.pInputAttachments[j].attachment, subpass_to_node, 0, skip_call);
- }
- }
- return skip_call;
-}
-
-VkBool32 ValidateLayouts(const layer_data* my_data, VkDevice device, const VkRenderPassCreateInfo* pCreateInfo) {
- VkBool32 skip = VK_FALSE;
-
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[i];
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- if (subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL &&
- subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
- if (subpass.pInputAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
- // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for input attachment is GENERAL but should be READ_ONLY_OPTIMAL.");
- } else {
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for input attachment is %d but can only be READ_ONLY_OPTIMAL or GENERAL.", subpass.pInputAttachments[j].layout);
- }
- }
- }
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- if (subpass.pColorAttachments[j].layout != VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL) {
- if (subpass.pColorAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
- // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for color attachment is GENERAL but should be COLOR_ATTACHMENT_OPTIMAL.");
- } else {
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for color attachment is %d but can only be COLOR_ATTACHMENT_OPTIMAL or GENERAL.", subpass.pColorAttachments[j].layout);
- }
- }
- }
- if ((subpass.pDepthStencilAttachment != NULL) &&
- (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
- if (subpass.pDepthStencilAttachment->layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL) {
- if (subpass.pDepthStencilAttachment->layout == VK_IMAGE_LAYOUT_GENERAL) {
- // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for depth attachment is GENERAL but should be DEPTH_STENCIL_ATTACHMENT_OPTIMAL.");
- } else {
- skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Layout for depth attachment is %d but can only be DEPTH_STENCIL_ATTACHMENT_OPTIMAL or GENERAL.", subpass.pDepthStencilAttachment->layout);
- }
- }
- }
- }
- return skip;
-}
-
-VkBool32 CreatePassDAG(const layer_data* my_data, VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, std::vector<DAGNode>& subpass_to_node, std::vector<bool>& has_self_dependency) {
- VkBool32 skip_call = VK_FALSE;
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- DAGNode& subpass_node = subpass_to_node[i];
- subpass_node.pass = i;
- }
- for (uint32_t i = 0; i < pCreateInfo->dependencyCount; ++i) {
- const VkSubpassDependency& dependency = pCreateInfo->pDependencies[i];
- if (dependency.srcSubpass > dependency.dstSubpass && dependency.srcSubpass != VK_SUBPASS_EXTERNAL && dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "Dependency graph must be specified such that an earlier pass cannot depend on a later pass.");
- } else if (dependency.srcSubpass == VK_SUBPASS_EXTERNAL && dependency.dstSubpass == VK_SUBPASS_EXTERNAL) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "The src and dest subpasses cannot both be external.");
- } else if (dependency.srcSubpass == dependency.dstSubpass) {
- has_self_dependency[dependency.srcSubpass] = true;
- }
- if (dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
- subpass_to_node[dependency.dstSubpass].prev.push_back(dependency.srcSubpass);
- }
- if (dependency.srcSubpass != VK_SUBPASS_EXTERNAL) {
- subpass_to_node[dependency.srcSubpass].next.push_back(dependency.dstSubpass);
- }
- }
- return skip_call;
-}
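The adjacency construction in `CreatePassDAG` can be sketched on its own: every `(src, dst)` dependency appends `dst` to `src`'s `next` list and `src` to `dst`'s `prev` list, skipping the external pseudo-subpass on the side it appears. A minimal sketch under assumed names (`PassNode`, `BuildDag`, and `EXTERNAL` standing in for `VK_SUBPASS_EXTERNAL`):

```cpp
#include <cstdint>
#include <cassert>
#include <utility>
#include <vector>

// Stand-in for VK_SUBPASS_EXTERNAL (~0U).
static const uint32_t EXTERNAL = ~0u;

struct PassNode { std::vector<uint32_t> prev, next; };

// Build per-subpass prev/next edge lists from (src, dst) dependency pairs.
// An EXTERNAL endpoint has no node of its own, so that side is skipped,
// but it may still be recorded in the other side's edge list.
std::vector<PassNode> BuildDag(uint32_t passCount,
                               const std::vector<std::pair<uint32_t, uint32_t>>& deps) {
    std::vector<PassNode> dag(passCount);
    for (const auto& d : deps) {
        if (d.second != EXTERNAL) dag[d.second].prev.push_back(d.first);
        if (d.first != EXTERNAL) dag[d.first].next.push_back(d.second);
    }
    return dag;
}
```

Note the sketch mirrors the layer's behavior of storing `EXTERNAL` itself inside a real subpass's `prev`/`next` list when only one endpoint is external.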
-// TODOSC : Add intercept of vkCreateShaderModule
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(
- VkDevice device,
- const VkShaderModuleCreateInfo *pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkShaderModule *pShaderModule)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skip_call = VK_FALSE;
- if (!shader_is_spirv(pCreateInfo)) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- /* dev */ 0, __LINE__, SHADER_CHECKER_NON_SPIRV_SHADER, "SC",
- "Shader is not SPIR-V");
- }
-
- if (VK_FALSE != skip_call)
- return VK_ERROR_VALIDATION_FAILED_EXT;
-
- VkResult res = my_data->device_dispatch_table->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);
-
- if (res == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- my_data->shaderModuleMap[*pShaderModule] = new shader_module(pCreateInfo);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return res;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass)
-{
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- // Create DAG
- std::vector<bool> has_self_dependency(pCreateInfo->subpassCount);
- std::vector<DAGNode> subpass_to_node(pCreateInfo->subpassCount);
- skip_call |= CreatePassDAG(dev_data, device, pCreateInfo, subpass_to_node, has_self_dependency);
- // Validate using DAG
- skip_call |= ValidateDependencies(dev_data, device, pCreateInfo, subpass_to_node);
- skip_call |= ValidateLayouts(dev_data, device, pCreateInfo);
- if (VK_FALSE != skip_call) {
- loader_platform_thread_unlock_mutex(&globalLock);
- return VK_ERROR_VALIDATION_FAILED_EXT;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- VkResult result = dev_data->device_dispatch_table->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- // TODOSC : Merge in tracking of renderpass from ShaderChecker
- // Shadow create info and store in map
- VkRenderPassCreateInfo* localRPCI = new VkRenderPassCreateInfo(*pCreateInfo);
- if (pCreateInfo->pAttachments) {
- localRPCI->pAttachments = new VkAttachmentDescription[localRPCI->attachmentCount];
- memcpy((void*)localRPCI->pAttachments, pCreateInfo->pAttachments, localRPCI->attachmentCount*sizeof(VkAttachmentDescription));
- }
- if (pCreateInfo->pSubpasses) {
- localRPCI->pSubpasses = new VkSubpassDescription[localRPCI->subpassCount];
- memcpy((void*)localRPCI->pSubpasses, pCreateInfo->pSubpasses, localRPCI->subpassCount*sizeof(VkSubpassDescription));
-
- for (uint32_t i = 0; i < localRPCI->subpassCount; i++) {
- VkSubpassDescription *subpass = (VkSubpassDescription *) &localRPCI->pSubpasses[i];
- const uint32_t attachmentCount = subpass->inputAttachmentCount +
- subpass->colorAttachmentCount * (1 + (subpass->pResolveAttachments?1:0)) +
- ((subpass->pDepthStencilAttachment) ? 1 : 0) + subpass->preserveAttachmentCount;
- VkAttachmentReference *attachments = new VkAttachmentReference[attachmentCount];
-
- memcpy(attachments, subpass->pInputAttachments,
- sizeof(attachments[0]) * subpass->inputAttachmentCount);
- subpass->pInputAttachments = attachments;
- attachments += subpass->inputAttachmentCount;
-
- memcpy(attachments, subpass->pColorAttachments,
- sizeof(attachments[0]) * subpass->colorAttachmentCount);
- subpass->pColorAttachments = attachments;
- attachments += subpass->colorAttachmentCount;
-
- if (subpass->pResolveAttachments) {
- memcpy(attachments, subpass->pResolveAttachments,
- sizeof(attachments[0]) * subpass->colorAttachmentCount);
- subpass->pResolveAttachments = attachments;
- attachments += subpass->colorAttachmentCount;
- }
-
- if (subpass->pDepthStencilAttachment) {
- memcpy(attachments, subpass->pDepthStencilAttachment,
- sizeof(attachments[0]) * 1);
- subpass->pDepthStencilAttachment = attachments;
- attachments += 1;
- }
-
- memcpy(attachments, subpass->pPreserveAttachments,
- sizeof(attachments[0]) * subpass->preserveAttachmentCount);
- subpass->pPreserveAttachments = &attachments->attachment;
- }
- }
- if (pCreateInfo->pDependencies) {
- localRPCI->pDependencies = new VkSubpassDependency[localRPCI->dependencyCount];
- memcpy((void*)localRPCI->pDependencies, pCreateInfo->pDependencies, localRPCI->dependencyCount*sizeof(VkSubpassDependency));
- }
- dev_data->renderPassMap[*pRenderPass] = new RENDER_PASS_NODE(localRPCI);
- dev_data->renderPassMap[*pRenderPass]->hasSelfDependency = has_self_dependency;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-// Free the renderpass shadow
-static void deleteRenderPasses(layer_data* my_data)
-{
- if (my_data->renderPassMap.empty())
- return;
- for (auto ii=my_data->renderPassMap.begin(); ii!=my_data->renderPassMap.end(); ++ii) {
- const VkRenderPassCreateInfo* pRenderPassInfo = (*ii).second->pCreateInfo;
- if (pRenderPassInfo->pAttachments) {
- delete[] pRenderPassInfo->pAttachments;
- }
- if (pRenderPassInfo->pSubpasses) {
- for (uint32_t i=0; i<pRenderPassInfo->subpassCount; ++i) {
- // Attachment references are all allocated in a single block, so we only
- // need to find the first non-null pointer to delete
- if (pRenderPassInfo->pSubpasses[i].pInputAttachments) {
- delete[] pRenderPassInfo->pSubpasses[i].pInputAttachments;
- } else if (pRenderPassInfo->pSubpasses[i].pColorAttachments) {
- delete[] pRenderPassInfo->pSubpasses[i].pColorAttachments;
- } else if (pRenderPassInfo->pSubpasses[i].pResolveAttachments) {
- delete[] pRenderPassInfo->pSubpasses[i].pResolveAttachments;
- } else if (pRenderPassInfo->pSubpasses[i].pPreserveAttachments) {
- delete[] pRenderPassInfo->pSubpasses[i].pPreserveAttachments;
- }
- }
- delete[] pRenderPassInfo->pSubpasses;
- }
- if (pRenderPassInfo->pDependencies) {
- delete[] pRenderPassInfo->pDependencies;
- }
- delete pRenderPassInfo;
- delete (*ii).second;
- }
- my_data->renderPassMap.clear();
-}
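The pairing between `vkCreateRenderPass`'s shadow copy and `deleteRenderPasses` rests on one invariant: all of a subpass's attachment references live in a single `new[]` block, so freeing needs only the first non-null pointer. A minimal sketch of that packing scheme, using illustrative names (`Ref`, `SubpassShadow`, `PackRefs`, `FreeRefs`) rather than the layer's own:

```cpp
#include <cstdint>
#include <cstring>
#include <cassert>
#include <vector>

// Simplified stand-in for VkAttachmentReference.
struct Ref { uint32_t attachment; };

// Both pointers alias sub-ranges of one heap block.
struct SubpassShadow {
    const Ref* inputs;
    const Ref* colors;
};

// Pack input and color references back-to-back into a single allocation,
// as the shadow copy in vkCreateRenderPass does.
SubpassShadow PackRefs(const std::vector<Ref>& in, const std::vector<Ref>& col) {
    Ref* block = new Ref[in.size() + col.size()];
    std::memcpy(block, in.data(), in.size() * sizeof(Ref));
    std::memcpy(block + in.size(), col.data(), col.size() * sizeof(Ref));
    return {block, block + in.size()};
}

// One delete[] via the first non-null pointer frees every reference list.
void FreeRefs(const SubpassShadow& s) {
    delete[] (s.inputs ? s.inputs : s.colors);
}
```

This is why `deleteRenderPasses` chains `else if` over the per-subpass pointers: deleting more than one of them would double-free the shared block.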
-
-VkBool32 VerifyFramebufferAndRenderPassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo* pRenderPassBegin) {
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- const VkRenderPassCreateInfo* pRenderPassInfo = dev_data->renderPassMap[pRenderPassBegin->renderPass]->pCreateInfo;
- const VkFramebufferCreateInfo* pFramebufferInfo = dev_data->frameBufferMap[pRenderPassBegin->framebuffer];
- if (pRenderPassInfo->attachmentCount != pFramebufferInfo->attachmentCount) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "You cannot start a render pass using a framebuffer with a different number of attachments.");
- }
- for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
- const VkImageView& image_view = pFramebufferInfo->pAttachments[i];
- auto image_data = dev_data->imageViewMap.find(image_view);
- assert(image_data != dev_data->imageViewMap.end());
- const VkImage& image = image_data->second->image;
- const VkImageSubresourceRange& subRange = image_data->second->subresourceRange;
- IMAGE_CMD_BUF_NODE newNode = {pRenderPassInfo->pAttachments[i].initialLayout, pRenderPassInfo->pAttachments[i].initialLayout};
- // TODO: Do not iterate over every possibility - consolidate where possible
- for (uint32_t j = 0; j < subRange.levelCount; j++) {
- uint32_t level = subRange.baseMipLevel + j;
- for (uint32_t k = 0; k < subRange.layerCount; k++) {
- uint32_t layer = subRange.baseArrayLayer + k;
- VkImageSubresource sub = {subRange.aspectMask, level, layer};
- IMAGE_CMD_BUF_NODE node;
- if (!FindLayout(pCB, image, sub, node)) {
- SetLayout(pCB, image, sub, newNode);
- continue;
- }
- if (newNode.layout != node.layout) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "You cannot start a render pass using attachment %i where the initial layout differs from the starting layout.", i);
- }
- }
- }
- }
- return skip_call;
-}
-
-void TransitionSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, const int subpass_index) {
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
- if (render_pass_data == dev_data->renderPassMap.end()) {
- return;
- }
- const VkRenderPassCreateInfo* pRenderPassInfo = render_pass_data->second->pCreateInfo;
- auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
- if (framebuffer_data == dev_data->frameBufferMap.end()) {
- return;
- }
- const VkFramebufferCreateInfo* pFramebufferInfo = framebuffer_data->second;
- const VkSubpassDescription& subpass = pRenderPassInfo->pSubpasses[subpass_index];
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- const VkImageView& image_view = pFramebufferInfo->pAttachments[subpass.pInputAttachments[j].attachment];
- SetLayout(dev_data, pCB, image_view,
- subpass.pInputAttachments[j].layout);
- }
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- const VkImageView& image_view = pFramebufferInfo->pAttachments[subpass.pColorAttachments[j].attachment];
- SetLayout(dev_data, pCB, image_view,
- subpass.pColorAttachments[j].layout);
- }
- if ((subpass.pDepthStencilAttachment != NULL) &&
- (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
- const VkImageView& image_view = pFramebufferInfo->pAttachments[subpass.pDepthStencilAttachment->attachment];
- SetLayout(dev_data, pCB, image_view,
- subpass.pDepthStencilAttachment->layout);
- }
-}
-
-VkBool32 validatePrimaryCommandBuffer(const layer_data* my_data, const GLOBAL_CB_NODE* pCB, const std::string& cmd_name) {
- VkBool32 skip_call = VK_FALSE;
- if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "Cannot execute command %s on a secondary command buffer.", cmd_name.c_str());
- }
- return skip_call;
-}
-
-void TransitionFinalSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo* pRenderPassBegin) {
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, cmdBuffer);
- auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
- if (render_pass_data == dev_data->renderPassMap.end()) {
- return;
- }
- const VkRenderPassCreateInfo* pRenderPassInfo = render_pass_data->second->pCreateInfo;
- auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
- if (framebuffer_data == dev_data->frameBufferMap.end()) {
- return;
- }
- const VkFramebufferCreateInfo* pFramebufferInfo = framebuffer_data->second;
- for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
- const VkImageView& image_view = pFramebufferInfo->pAttachments[i];
- SetLayout(dev_data, pCB, image_view,
- pRenderPassInfo->pAttachments[i].finalLayout);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, VkSubpassContents contents)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- if (pRenderPassBegin && pRenderPassBegin->renderPass) {
- skipCall |= VerifyFramebufferAndRenderPassLayouts(commandBuffer, pRenderPassBegin);
- skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBeginRenderPass");
- skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdBeginRenderPass");
- skipCall |= addCmd(dev_data, pCB, CMD_BEGINRENDERPASS, "vkCmdBeginRenderPass()");
- pCB->activeRenderPass = pRenderPassBegin->renderPass;
- // This is a shallow copy as that is all that is needed for now
- pCB->activeRenderPassBeginInfo = *pRenderPassBegin;
- pCB->activeSubpass = 0;
- pCB->activeSubpassContents = contents;
- pCB->framebuffer = pRenderPassBegin->framebuffer;
- } else {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
- "You cannot use a NULL RenderPass object in vkCmdBeginRenderPass()");
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- dev_data->device_dispatch_table->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents);
- loader_platform_thread_lock_mutex(&globalLock);
- // This is a shallow copy as that is all that is needed for now
- dev_data->renderPassBeginInfo = *pRenderPassBegin;
- dev_data->currentSubpass = 0;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- TransitionSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo, ++dev_data->currentSubpass);
- if (pCB) {
- skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdNextSubpass");
- skipCall |= addCmd(dev_data, pCB, CMD_NEXTSUBPASS, "vkCmdNextSubpass()");
- pCB->activeSubpass++;
- pCB->activeSubpassContents = contents;
- TransitionSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo, pCB->activeSubpass);
- if (pCB->lastBoundPipeline) {
- skipCall |= validatePipelineState(dev_data, pCB, VK_PIPELINE_BIND_POINT_GRAPHICS, pCB->lastBoundPipeline);
- }
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdNextSubpass");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdNextSubpass(commandBuffer, contents);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- TransitionFinalSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo);
- if (pCB) {
- skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdEndRenderPass");
- skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdEndRenderPass");
- skipCall |= addCmd(dev_data, pCB, CMD_ENDRENDERPASS, "vkCmdEndRenderPass()");
- TransitionFinalSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo);
- pCB->activeRenderPass = 0;
- pCB->activeSubpass = 0;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdEndRenderPass(commandBuffer);
-}
-
-bool logInvalidAttachmentMessage(layer_data* dev_data, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass, VkRenderPass primaryPass, uint32_t primaryAttach, uint32_t secondaryAttach, const char* msg) {
- return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a render pass %" PRIx64 " that is not compatible with the current render pass %" PRIx64 "."
- "Attachment %" PRIu32 " is not compatible with %" PRIu32 ". %s",
- (void*)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass), primaryAttach, secondaryAttach, msg);
-}
-
-bool validateAttachmentCompatibility(layer_data* dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass, uint32_t primaryAttach, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass, uint32_t secondaryAttach, bool is_multi) {
- bool skip_call = false;
- auto primary_data = dev_data->renderPassMap.find(primaryPass);
- auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
- if (primary_data->second->pCreateInfo->attachmentCount <= primaryAttach) {
- primaryAttach = VK_ATTACHMENT_UNUSED;
- }
- if (secondary_data->second->pCreateInfo->attachmentCount <= secondaryAttach) {
- secondaryAttach = VK_ATTACHMENT_UNUSED;
- }
- if (primaryAttach == VK_ATTACHMENT_UNUSED && secondaryAttach == VK_ATTACHMENT_UNUSED) {
- return skip_call;
- }
- if (primaryAttach == VK_ATTACHMENT_UNUSED) {
- skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "The first is unused while the second is not.");
- return skip_call;
- }
- if (secondaryAttach == VK_ATTACHMENT_UNUSED) {
- skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "The second is unused while the first is not.");
- return skip_call;
- }
- if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].format != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].format) {
- skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different formats.");
- }
- if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].samples != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].samples) {
- skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different samples.");
- }
- if (is_multi && primary_data->second->pCreateInfo->pAttachments[primaryAttach].flags != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].flags) {
- skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different flags.");
- }
- return skip_call;
-}
-
-bool validateSubpassCompatibility(layer_data* dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass, const int subpass, bool is_multi) {
- bool skip_call = false;
- auto primary_data = dev_data->renderPassMap.find(primaryPass);
- auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
- const VkSubpassDescription& primary_desc = primary_data->second->pCreateInfo->pSubpasses[subpass];
- const VkSubpassDescription& secondary_desc = secondary_data->second->pCreateInfo->pSubpasses[subpass];
- uint32_t maxInputAttachmentCount = std::max(primary_desc.inputAttachmentCount, secondary_desc.inputAttachmentCount);
- for (uint32_t i = 0; i < maxInputAttachmentCount; ++i) {
- uint32_t primary_input_attach = VK_ATTACHMENT_UNUSED, secondary_input_attach = VK_ATTACHMENT_UNUSED;
- if (i < primary_desc.inputAttachmentCount) {
- primary_input_attach = primary_desc.pInputAttachments[i].attachment;
- }
- if (i < secondary_desc.inputAttachmentCount) {
- secondary_input_attach = secondary_desc.pInputAttachments[i].attachment;
- }
- skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_input_attach, secondaryBuffer, secondaryPass, secondary_input_attach, is_multi);
- }
- uint32_t maxColorAttachmentCount = std::max(primary_desc.colorAttachmentCount, secondary_desc.colorAttachmentCount);
- for (uint32_t i = 0; i < maxColorAttachmentCount; ++i) {
- uint32_t primary_color_attach = VK_ATTACHMENT_UNUSED, secondary_color_attach = VK_ATTACHMENT_UNUSED;
- if (i < primary_desc.colorAttachmentCount) {
- primary_color_attach = primary_desc.pColorAttachments[i].attachment;
- }
- if (i < secondary_desc.colorAttachmentCount) {
- secondary_color_attach = secondary_desc.pColorAttachments[i].attachment;
- }
- skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_color_attach, secondaryBuffer, secondaryPass, secondary_color_attach, is_multi);
- uint32_t primary_resolve_attach = VK_ATTACHMENT_UNUSED, secondary_resolve_attach = VK_ATTACHMENT_UNUSED;
- if (i < primary_desc.colorAttachmentCount && primary_desc.pResolveAttachments) {
- primary_resolve_attach = primary_desc.pResolveAttachments[i].attachment;
- }
- if (i < secondary_desc.colorAttachmentCount && secondary_desc.pResolveAttachments) {
- secondary_resolve_attach = secondary_desc.pResolveAttachments[i].attachment;
- }
- skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_resolve_attach, secondaryBuffer, secondaryPass, secondary_resolve_attach, is_multi);
- }
- uint32_t primary_depthstencil_attach = VK_ATTACHMENT_UNUSED, secondary_depthstencil_attach = VK_ATTACHMENT_UNUSED;
- if (primary_desc.pDepthStencilAttachment) {
- primary_depthstencil_attach = primary_desc.pDepthStencilAttachment[0].attachment;
- }
- if (secondary_desc.pDepthStencilAttachment) {
- secondary_depthstencil_attach = secondary_desc.pDepthStencilAttachment[0].attachment;
- }
- skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_depthstencil_attach, secondaryBuffer, secondaryPass, secondary_depthstencil_attach, is_multi);
- return skip_call;
-}
-
-bool validateRenderPassCompatibility(layer_data* dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass) {
- bool skip_call = false;
- // Early exit if renderPass objects are identical (and therefore compatible)
- if (primaryPass == secondaryPass)
- return skip_call;
- auto primary_data = dev_data->renderPassMap.find(primaryPass);
- auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
- if (primary_data == dev_data->renderPassMap.end() || primary_data->second == nullptr) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid current Cmd Buffer %p which has invalid render pass %" PRIx64 ".",
- (void*)primaryBuffer, (uint64_t)(primaryPass));
- return skip_call;
- }
- if (secondary_data == dev_data->renderPassMap.end() || secondary_data->second == nullptr) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid secondary Cmd Buffer %p which has invalid render pass %" PRIx64 ".",
- (void*)secondaryBuffer, (uint64_t)(secondaryPass));
- return skip_call;
- }
- if (primary_data->second->pCreateInfo->subpassCount != secondary_data->second->pCreateInfo->subpassCount) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a render pass %" PRIx64 " that is not compatible with the current render pass %" PRIx64 "."
- "They have a different number of subpasses.",
- (void*)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass));
- return skip_call;
- }
- bool is_multi = primary_data->second->pCreateInfo->subpassCount > 1;
- for (uint32_t i = 0; i < primary_data->second->pCreateInfo->subpassCount; ++i) {
- skip_call |= validateSubpassCompatibility(dev_data, primaryBuffer, primaryPass, secondaryBuffer, secondaryPass, i, is_multi);
- }
- return skip_call;
-}
-
-bool validateFramebuffer(layer_data* dev_data, VkCommandBuffer primaryBuffer, const GLOBAL_CB_NODE* pCB, VkCommandBuffer secondaryBuffer, const GLOBAL_CB_NODE* pSubCB) {
- bool skip_call = false;
- if (!pSubCB->beginInfo.pInheritanceInfo) {
- return skip_call;
- }
- VkFramebuffer primary_fb = pCB->framebuffer;
- VkFramebuffer secondary_fb = pSubCB->beginInfo.pInheritanceInfo->framebuffer;
- if (secondary_fb != VK_NULL_HANDLE) {
- if (primary_fb != secondary_fb) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a framebuffer %" PRIx64 " that is not compatible with the current framebuffer %" PRIx64 ".",
- (void*)secondaryBuffer, (uint64_t)(secondary_fb), (uint64_t)(primary_fb));
- }
- auto fb_data = dev_data->frameBufferMap.find(secondary_fb);
- if (fb_data == dev_data->frameBufferMap.end() || !fb_data->second) {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has invalid framebuffer %" PRIx64 ".",
- (void*)secondaryBuffer, (uint64_t)(secondary_fb));
- return skip_call;
- }
- skip_call |= validateRenderPassCompatibility(dev_data, secondaryBuffer, fb_data->second->renderPass, secondaryBuffer, pSubCB->beginInfo.pInheritanceInfo->renderPass);
- }
- return skip_call;
-}
-
-bool validateSecondaryCommandBufferState(layer_data *dev_data,
- GLOBAL_CB_NODE *pCB,
- GLOBAL_CB_NODE *pSubCB) {
- bool skipCall = false;
- unordered_set<int> activeTypes;
- for (auto queryObject : pCB->activeQueries) {
- auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
- if (queryPoolData != dev_data->queryPoolMap.end()) {
- if (queryPoolData->second.createInfo.queryType ==
- VK_QUERY_TYPE_PIPELINE_STATISTICS &&
- pSubCB->beginInfo.pInheritanceInfo) {
- VkQueryPipelineStatisticFlags cmdBufStatistics =
- pSubCB->beginInfo.pInheritanceInfo->pipelineStatistics;
- if ((cmdBufStatistics &
- queryPoolData->second.createInfo.pipelineStatistics) !=
- cmdBufStatistics) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
- "which has invalid active query pool %" PRIx64
- ". Pipeline statistics are being queried, so the command "
- "buffer must have all bits set on the queryPool.",
- reinterpret_cast<void *>(pCB->commandBuffer),
- reinterpret_cast<const uint64_t&>(queryPoolData->first));
- }
- }
- activeTypes.insert(queryPoolData->second.createInfo.queryType);
- }
- }
- for (auto queryObject : pSubCB->startedQueries) {
- auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
- if (queryPoolData != dev_data->queryPoolMap.end() &&
- activeTypes.count(queryPoolData->second.createInfo.queryType)) {
- skipCall |=
- log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
- "which has invalid active query pool %" PRIx64
- " of type %d, but a query of that type has been started on "
- "secondary Cmd Buffer %p.",
- reinterpret_cast<void *>(pCB->commandBuffer),
- reinterpret_cast<const uint64_t&>(queryPoolData->first),
- queryPoolData->second.createInfo.queryType,
- reinterpret_cast<void *>(pSubCB->commandBuffer));
- }
- }
- return skipCall;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBuffersCount, const VkCommandBuffer* pCommandBuffers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (pCB) {
- GLOBAL_CB_NODE* pSubCB = NULL;
- for (uint32_t i=0; i<commandBuffersCount; i++) {
- pSubCB = getCBNode(dev_data, pCommandBuffers[i]);
- if (!pSubCB) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p in element %u of pCommandBuffers array.", (void*)pCommandBuffers[i], i);
- } else if (VK_COMMAND_BUFFER_LEVEL_PRIMARY == pSubCB->createInfo.level) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands() called w/ Primary Cmd Buffer %p in element %u of pCommandBuffers array. All cmd buffers in pCommandBuffers array must be secondary.", (void*)pCommandBuffers[i], i);
- } else if (pCB->activeRenderPass) { // Secondary CB w/i RenderPass must have *CONTINUE_BIT set
- if (!(pSubCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS",
- "vkCmdExecuteCommands(): Secondary Command Buffer (%p) executed within render pass (%#" PRIxLEAST64 ") must have had vkBeginCommandBuffer() called w/ VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT set.", (void*)pCommandBuffers[i], (uint64_t)pCB->activeRenderPass);
- } else {
- // Make sure render pass is compatible with parent command buffer pass if has continue
- skipCall |= validateRenderPassCompatibility(dev_data, commandBuffer, pCB->activeRenderPass, pCommandBuffers[i], pSubCB->beginInfo.pInheritanceInfo->renderPass);
- skipCall |= validateFramebuffer(dev_data, commandBuffer, pCB, pCommandBuffers[i], pSubCB);
- }
- string errorString = "";
- if (!verify_renderpass_compatibility(dev_data, pCB->activeRenderPass, pSubCB->beginInfo.pInheritanceInfo->renderPass, errorString)) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE, "DS",
- "vkCmdExecuteCommands(): Secondary Command Buffer (%p) w/ render pass (%#" PRIxLEAST64 ") is incompatible w/ primary command buffer (%p) w/ render pass (%#" PRIxLEAST64 ") due to: %s",
- (void*)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->renderPass, (void*)commandBuffer, (uint64_t)pCB->activeRenderPass, errorString.c_str());
- }
- // If framebuffer for secondary CB is not NULL, then it must match FB from vkCmdBeginRenderPass()
- // that this CB will be executed in AND framebuffer must have been created w/ RP compatible w/ renderpass
- if (pSubCB->beginInfo.pInheritanceInfo->framebuffer) {
- if (pSubCB->beginInfo.pInheritanceInfo->framebuffer != pCB->activeRenderPassBeginInfo.framebuffer) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE, "DS",
- "vkCmdExecuteCommands(): Secondary Command Buffer (%p) references framebuffer (%#" PRIxLEAST64 ") that does not match framebuffer (%#" PRIxLEAST64 ") in active renderpass (%#" PRIxLEAST64 ").",
- (void*)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->framebuffer, (uint64_t)pCB->activeRenderPassBeginInfo.framebuffer, (uint64_t)pCB->activeRenderPass);
- }
- }
- }
- // TODO(mlentine): Move more logic into this method
- skipCall |=
- validateSecondaryCommandBufferState(dev_data, pCB, pSubCB);
- skipCall |= validateCommandBufferState(dev_data, pSubCB);
- // Secondary cmdBuffers are considered pending execution starting w/
- // being recorded
- if (!(pSubCB->beginInfo.flags &
- VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) {
- if (dev_data->globalInFlightCmdBuffers.find(
- pSubCB->commandBuffer) !=
- dev_data->globalInFlightCmdBuffers.end()) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)(pCB->commandBuffer), __LINE__,
- DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
- "Attempt to simultaneously execute CB %#" PRIxLEAST64
- " w/o VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT "
- "set!",
- (uint64_t)(pCB->commandBuffer));
- }
- if (pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT) {
- // Warn that non-simultaneous secondary cmd buffer renders primary non-simultaneous
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
- "vkCmdExecuteCommands(): Secondary Command Buffer (%#" PRIxLEAST64 ") does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT set and will cause primary command buffer (%#" PRIxLEAST64 ") to be treated as if it does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT set, even though it does.",
- (uint64_t)(pCommandBuffers[i]), (uint64_t)(pCB->commandBuffer));
- pCB->beginInfo.flags &= ~VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT;
- }
- }
- if (!pCB->activeQueries.empty() &&
- !dev_data->physDevProperties.features.inheritedQueries) {
- skipCall |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__,
- DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
- "vkCmdExecuteCommands(): Secondary Command Buffer "
- "(%#" PRIxLEAST64 ") cannot be submitted with a query in "
- "flight and inherited queries not "
- "supported on this device.",
- reinterpret_cast<uint64_t>(pCommandBuffers[i]));
- }
- pSubCB->primaryCommandBuffer = pCB->commandBuffer;
- pCB->secondaryCommandBuffers.insert(pSubCB->commandBuffer);
- dev_data->globalInFlightCmdBuffers.insert(pSubCB->commandBuffer);
- }
- skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdExecuteCommands");
- skipCall |= addCmd(dev_data, pCB, CMD_EXECUTECOMMANDS, "vkCmdExecuteCommands()");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- dev_data->device_dispatch_table->CmdExecuteCommands(commandBuffer, commandBuffersCount, pCommandBuffers);
-}
-
-VkBool32 ValidateMapImageLayouts(VkDevice device, VkDeviceMemory mem) {
- VkBool32 skip_call = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- auto mem_data = dev_data->memImageMap.find(mem);
- if (mem_data != dev_data->memImageMap.end()) {
- std::vector<VkImageLayout> layouts;
- if (FindLayouts(dev_data, mem_data->second, layouts)) {
- for (auto layout : layouts) {
- if (layout != VK_IMAGE_LAYOUT_PREINITIALIZED &&
- layout != VK_IMAGE_LAYOUT_GENERAL) {
- skip_call |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Cannot map an image with layout %s. Only "
- "GENERAL or PREINITIALIZED are supported.",
- string_VkImageLayout(layout));
- }
- }
- }
- }
- return skip_call;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(
- VkDevice device,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size,
- VkFlags flags,
- void **ppData)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- VkBool32 skip_call = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- skip_call = ValidateMapImageLayouts(device, mem);
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (VK_FALSE == skip_call) {
- return dev_data->device_dispatch_table->MapMemory(device, mem, offset, size, flags, ppData);
- }
- return VK_ERROR_VALIDATION_FAILED_EXT;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(
- VkDevice device,
- VkImage image,
- VkDeviceMemory mem,
- VkDeviceSize memOffset)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->BindImageMemory(device, image, mem, memOffset);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->memImageMap[mem] = image;
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-
-VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(VkDevice device, VkEvent event) {
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->eventMap[event].needsSignaled = false;
- dev_data->eventMap[event].stageMask = VK_PIPELINE_STAGE_HOST_BIT;
- loader_platform_thread_unlock_mutex(&globalLock);
- VkResult result = dev_data->device_dispatch_table->SetEvent(device, event);
- return result;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo* pBindInfo,
- VkFence fence)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkBool32 skip_call = VK_FALSE;
-
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t bindIdx=0; bindIdx < bindInfoCount; ++bindIdx) {
- const VkBindSparseInfo& bindInfo = pBindInfo[bindIdx];
- for (uint32_t i=0; i < bindInfo.waitSemaphoreCount; ++i) {
- if (dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled) {
- dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled = 0;
- } else {
- skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
- "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
- (uint64_t)(queue), (uint64_t)(bindInfo.pWaitSemaphores[i]));
- }
- }
- for (uint32_t i=0; i < bindInfo.signalSemaphoreCount; ++i) {
- dev_data->semaphoreMap[bindInfo.pSignalSemaphores[i]].signaled = 1;
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (VK_FALSE == skip_call)
- return dev_data->device_dispatch_table->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
- else
- return VK_ERROR_VALIDATION_FAILED_EXT;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(
- VkDevice device,
- const VkSemaphoreCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSemaphore* pSemaphore)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
- if (result == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->semaphoreMap[*pSemaphore].signaled = 0;
- dev_data->semaphoreMap[*pSemaphore].in_use.store(0);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
- VkDevice device,
- const VkSwapchainCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSwapchainKHR *pSwapchain)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
-
- if (VK_SUCCESS == result) {
- SWAPCHAIN_NODE *swapchain_data = new SWAPCHAIN_NODE(pCreateInfo);
- loader_platform_thread_lock_mutex(&globalLock);
- dev_data->device_extensions.swapchainMap[*pSwapchain] = swapchain_data;
- loader_platform_thread_unlock_mutex(&globalLock);
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- auto swapchain_data = dev_data->device_extensions.swapchainMap.find(swapchain);
- if (swapchain_data != dev_data->device_extensions.swapchainMap.end()) {
- if (swapchain_data->second->images.size() > 0) {
- for (auto swapchain_image : swapchain_data->second->images) {
- auto image_sub =
- dev_data->imageSubresourceMap.find(swapchain_image);
- if (image_sub != dev_data->imageSubresourceMap.end()) {
- for (auto imgsubpair : image_sub->second) {
- auto image_item =
- dev_data->imageLayoutMap.find(imgsubpair);
- if (image_item != dev_data->imageLayoutMap.end()) {
- dev_data->imageLayoutMap.erase(image_item);
- }
- }
- dev_data->imageSubresourceMap.erase(image_sub);
- }
- }
- }
- delete swapchain_data->second;
- dev_data->device_extensions.swapchainMap.erase(swapchain);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- dev_data->device_dispatch_table->DestroySwapchainKHR(device, swapchain, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint32_t* pCount,
- VkImage* pSwapchainImages)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);
-
- if (result == VK_SUCCESS && pSwapchainImages != NULL) {
- // This should never happen and is checked by param checker.
- if (!pCount) return result;
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < *pCount; ++i) {
- IMAGE_NODE image_node;
- image_node.layout = VK_IMAGE_LAYOUT_UNDEFINED;
- auto swapchain_node = dev_data->device_extensions.swapchainMap[swapchain];
- image_node.format = swapchain_node->createInfo.imageFormat;
- swapchain_node->images.push_back(pSwapchainImages[i]);
- ImageSubresourcePair subpair = {pSwapchainImages[i], false,
- VkImageSubresource()};
- dev_data->imageSubresourceMap[pSwapchainImages[i]].push_back(
- subpair);
- dev_data->imageLayoutMap[subpair] = image_node;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR* pPresentInfo)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkBool32 skip_call = VK_FALSE;
-
- if (pPresentInfo) {
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i=0; i < pPresentInfo->waitSemaphoreCount; ++i) {
- if (dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]]
- .signaled) {
- dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]]
- .signaled = 0;
- } else {
- skip_call |= log_msg(
- dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__,
- DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
- "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64
- " that has no way to be signaled.",
- (uint64_t)(queue),
- (uint64_t)(pPresentInfo->pWaitSemaphores[i]));
- }
- }
- for (uint32_t i = 0; i < pPresentInfo->swapchainCount; ++i) {
- auto swapchain_data = dev_data->device_extensions.swapchainMap.find(pPresentInfo->pSwapchains[i]);
- if (swapchain_data != dev_data->device_extensions.swapchainMap.end() && pPresentInfo->pImageIndices[i] < swapchain_data->second->images.size()) {
- VkImage image = swapchain_data->second->images[pPresentInfo->pImageIndices[i]];
- vector<VkImageLayout> layouts;
- if (FindLayouts(dev_data, image, layouts)) {
- for (auto layout : layouts) {
- if (layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR) {
- skip_call |= log_msg(
- dev_data->report_data,
- VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT,
- reinterpret_cast<uint64_t &>(queue), __LINE__,
- DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
- "Images passed to present must be in layout "
- "PRESENT_SRC_KHR but this image is in %s.",
- string_VkImageLayout(layout));
- }
- }
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
-
- if (VK_FALSE == skip_call)
- return dev_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo);
- return VK_ERROR_VALIDATION_FAILED_EXT;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint64_t timeout,
- VkSemaphore semaphore,
- VkFence fence,
- uint32_t* pImageIndex)
-{
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = dev_data->device_dispatch_table->AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex);
- loader_platform_thread_lock_mutex(&globalLock);
- // FIXME/TODO: Need to add some code to handle the "fence" parameter
- dev_data->semaphoreMap[semaphore].signaled = 1;
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- VkResult res = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
- if (VK_SUCCESS == res) {
- loader_platform_thread_lock_mutex(&globalLock);
- res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return res;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT msgCallback,
- const VkAllocationCallbacks* pAllocator)
-{
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDbgMarkerBegin(VkCommandBuffer commandBuffer, const char* pMarker)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
- if (!dev_data->device_extensions.debug_marker_enabled) {
- skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_EXTENSION, "DS",
- "Attempt to use CmdDbgMarkerBegin but extension disabled!");
- loader_platform_thread_unlock_mutex(&globalLock);
- return;
- } else if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DBGMARKERBEGIN, "vkCmdDbgMarkerBegin()");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- debug_marker_dispatch_table(commandBuffer)->CmdDbgMarkerBegin(commandBuffer, pMarker);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDbgMarkerEnd(VkCommandBuffer commandBuffer)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- GLOBAL_CB_NODE* pCB = getCBNode(dev_data, commandBuffer);
-    if (!dev_data->device_extensions.debug_marker_enabled) {
-        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_EXTENSION, "DS",
-                "Attempt to use CmdDbgMarkerEnd but extension disabled!");
-        loader_platform_thread_unlock_mutex(&globalLock);
-        return;
- } else if (pCB) {
- skipCall |= addCmd(dev_data, pCB, CMD_DBGMARKEREND, "vkCmdDbgMarkerEnd()");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall)
- debug_marker_dispatch_table(commandBuffer)->CmdDbgMarkerEnd(commandBuffer);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char* funcName)
-{
- if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
- if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
- if (!strcmp(funcName, "vkQueueSubmit"))
- return (PFN_vkVoidFunction) vkQueueSubmit;
- if (!strcmp(funcName, "vkWaitForFences"))
- return (PFN_vkVoidFunction) vkWaitForFences;
- if (!strcmp(funcName, "vkGetFenceStatus"))
- return (PFN_vkVoidFunction) vkGetFenceStatus;
- if (!strcmp(funcName, "vkQueueWaitIdle"))
- return (PFN_vkVoidFunction) vkQueueWaitIdle;
- if (!strcmp(funcName, "vkDeviceWaitIdle"))
- return (PFN_vkVoidFunction) vkDeviceWaitIdle;
- if (!strcmp(funcName, "vkGetDeviceQueue"))
- return (PFN_vkVoidFunction) vkGetDeviceQueue;
- if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
- if (!strcmp(funcName, "vkDestroyFence"))
- return (PFN_vkVoidFunction) vkDestroyFence;
- if (!strcmp(funcName, "vkResetFences"))
-        return (PFN_vkVoidFunction) vkResetFences;
- if (!strcmp(funcName, "vkDestroySemaphore"))
- return (PFN_vkVoidFunction) vkDestroySemaphore;
- if (!strcmp(funcName, "vkDestroyEvent"))
- return (PFN_vkVoidFunction) vkDestroyEvent;
- if (!strcmp(funcName, "vkDestroyQueryPool"))
- return (PFN_vkVoidFunction) vkDestroyQueryPool;
- if (!strcmp(funcName, "vkDestroyBuffer"))
- return (PFN_vkVoidFunction) vkDestroyBuffer;
- if (!strcmp(funcName, "vkDestroyBufferView"))
- return (PFN_vkVoidFunction) vkDestroyBufferView;
- if (!strcmp(funcName, "vkDestroyImage"))
- return (PFN_vkVoidFunction) vkDestroyImage;
- if (!strcmp(funcName, "vkDestroyImageView"))
- return (PFN_vkVoidFunction) vkDestroyImageView;
- if (!strcmp(funcName, "vkDestroyShaderModule"))
- return (PFN_vkVoidFunction) vkDestroyShaderModule;
- if (!strcmp(funcName, "vkDestroyPipeline"))
- return (PFN_vkVoidFunction) vkDestroyPipeline;
- if (!strcmp(funcName, "vkDestroyPipelineLayout"))
- return (PFN_vkVoidFunction) vkDestroyPipelineLayout;
- if (!strcmp(funcName, "vkDestroySampler"))
- return (PFN_vkVoidFunction) vkDestroySampler;
- if (!strcmp(funcName, "vkDestroyDescriptorSetLayout"))
- return (PFN_vkVoidFunction) vkDestroyDescriptorSetLayout;
- if (!strcmp(funcName, "vkDestroyDescriptorPool"))
- return (PFN_vkVoidFunction) vkDestroyDescriptorPool;
- if (!strcmp(funcName, "vkDestroyFramebuffer"))
- return (PFN_vkVoidFunction) vkDestroyFramebuffer;
- if (!strcmp(funcName, "vkDestroyRenderPass"))
- return (PFN_vkVoidFunction) vkDestroyRenderPass;
- if (!strcmp(funcName, "vkCreateBuffer"))
- return (PFN_vkVoidFunction) vkCreateBuffer;
- if (!strcmp(funcName, "vkCreateBufferView"))
- return (PFN_vkVoidFunction) vkCreateBufferView;
- if (!strcmp(funcName, "vkCreateImage"))
- return (PFN_vkVoidFunction) vkCreateImage;
- if (!strcmp(funcName, "vkCreateImageView"))
- return (PFN_vkVoidFunction) vkCreateImageView;
- if (!strcmp(funcName, "vkCreateFence"))
- return (PFN_vkVoidFunction) vkCreateFence;
-    if (!strcmp(funcName, "vkCreatePipelineCache"))
-        return (PFN_vkVoidFunction) vkCreatePipelineCache;
-    if (!strcmp(funcName, "vkDestroyPipelineCache"))
-        return (PFN_vkVoidFunction) vkDestroyPipelineCache;
-    if (!strcmp(funcName, "vkGetPipelineCacheData"))
-        return (PFN_vkVoidFunction) vkGetPipelineCacheData;
-    if (!strcmp(funcName, "vkMergePipelineCaches"))
-        return (PFN_vkVoidFunction) vkMergePipelineCaches;
- if (!strcmp(funcName, "vkCreateGraphicsPipelines"))
- return (PFN_vkVoidFunction) vkCreateGraphicsPipelines;
- if (!strcmp(funcName, "vkCreateComputePipelines"))
- return (PFN_vkVoidFunction) vkCreateComputePipelines;
- if (!strcmp(funcName, "vkCreateSampler"))
- return (PFN_vkVoidFunction) vkCreateSampler;
- if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
- return (PFN_vkVoidFunction) vkCreateDescriptorSetLayout;
- if (!strcmp(funcName, "vkCreatePipelineLayout"))
- return (PFN_vkVoidFunction) vkCreatePipelineLayout;
- if (!strcmp(funcName, "vkCreateDescriptorPool"))
- return (PFN_vkVoidFunction) vkCreateDescriptorPool;
- if (!strcmp(funcName, "vkResetDescriptorPool"))
- return (PFN_vkVoidFunction) vkResetDescriptorPool;
- if (!strcmp(funcName, "vkAllocateDescriptorSets"))
- return (PFN_vkVoidFunction) vkAllocateDescriptorSets;
- if (!strcmp(funcName, "vkFreeDescriptorSets"))
- return (PFN_vkVoidFunction) vkFreeDescriptorSets;
- if (!strcmp(funcName, "vkUpdateDescriptorSets"))
- return (PFN_vkVoidFunction) vkUpdateDescriptorSets;
- if (!strcmp(funcName, "vkCreateCommandPool"))
- return (PFN_vkVoidFunction) vkCreateCommandPool;
- if (!strcmp(funcName, "vkDestroyCommandPool"))
- return (PFN_vkVoidFunction) vkDestroyCommandPool;
- if (!strcmp(funcName, "vkResetCommandPool"))
- return (PFN_vkVoidFunction) vkResetCommandPool;
- if (!strcmp(funcName, "vkCreateQueryPool"))
-        return (PFN_vkVoidFunction) vkCreateQueryPool;
- if (!strcmp(funcName, "vkAllocateCommandBuffers"))
- return (PFN_vkVoidFunction) vkAllocateCommandBuffers;
- if (!strcmp(funcName, "vkFreeCommandBuffers"))
- return (PFN_vkVoidFunction) vkFreeCommandBuffers;
- if (!strcmp(funcName, "vkBeginCommandBuffer"))
- return (PFN_vkVoidFunction) vkBeginCommandBuffer;
- if (!strcmp(funcName, "vkEndCommandBuffer"))
- return (PFN_vkVoidFunction) vkEndCommandBuffer;
- if (!strcmp(funcName, "vkResetCommandBuffer"))
- return (PFN_vkVoidFunction) vkResetCommandBuffer;
- if (!strcmp(funcName, "vkCmdBindPipeline"))
- return (PFN_vkVoidFunction) vkCmdBindPipeline;
- if (!strcmp(funcName, "vkCmdSetViewport"))
- return (PFN_vkVoidFunction) vkCmdSetViewport;
- if (!strcmp(funcName, "vkCmdSetScissor"))
- return (PFN_vkVoidFunction) vkCmdSetScissor;
- if (!strcmp(funcName, "vkCmdSetLineWidth"))
- return (PFN_vkVoidFunction) vkCmdSetLineWidth;
- if (!strcmp(funcName, "vkCmdSetDepthBias"))
- return (PFN_vkVoidFunction) vkCmdSetDepthBias;
- if (!strcmp(funcName, "vkCmdSetBlendConstants"))
- return (PFN_vkVoidFunction) vkCmdSetBlendConstants;
- if (!strcmp(funcName, "vkCmdSetDepthBounds"))
- return (PFN_vkVoidFunction) vkCmdSetDepthBounds;
- if (!strcmp(funcName, "vkCmdSetStencilCompareMask"))
- return (PFN_vkVoidFunction) vkCmdSetStencilCompareMask;
- if (!strcmp(funcName, "vkCmdSetStencilWriteMask"))
- return (PFN_vkVoidFunction) vkCmdSetStencilWriteMask;
- if (!strcmp(funcName, "vkCmdSetStencilReference"))
- return (PFN_vkVoidFunction) vkCmdSetStencilReference;
- if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
- return (PFN_vkVoidFunction) vkCmdBindDescriptorSets;
- if (!strcmp(funcName, "vkCmdBindVertexBuffers"))
- return (PFN_vkVoidFunction) vkCmdBindVertexBuffers;
- if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
- return (PFN_vkVoidFunction) vkCmdBindIndexBuffer;
- if (!strcmp(funcName, "vkCmdDraw"))
- return (PFN_vkVoidFunction) vkCmdDraw;
- if (!strcmp(funcName, "vkCmdDrawIndexed"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexed;
- if (!strcmp(funcName, "vkCmdDrawIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndirect;
- if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "vkCmdDispatch"))
- return (PFN_vkVoidFunction) vkCmdDispatch;
- if (!strcmp(funcName, "vkCmdDispatchIndirect"))
- return (PFN_vkVoidFunction) vkCmdDispatchIndirect;
- if (!strcmp(funcName, "vkCmdCopyBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyBuffer;
- if (!strcmp(funcName, "vkCmdCopyImage"))
- return (PFN_vkVoidFunction) vkCmdCopyImage;
- if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
- return (PFN_vkVoidFunction) vkCmdCopyBufferToImage;
- if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyImageToBuffer;
- if (!strcmp(funcName, "vkCmdUpdateBuffer"))
- return (PFN_vkVoidFunction) vkCmdUpdateBuffer;
- if (!strcmp(funcName, "vkCmdFillBuffer"))
- return (PFN_vkVoidFunction) vkCmdFillBuffer;
- if (!strcmp(funcName, "vkCmdClearColorImage"))
- return (PFN_vkVoidFunction) vkCmdClearColorImage;
- if (!strcmp(funcName, "vkCmdClearDepthStencilImage"))
- return (PFN_vkVoidFunction) vkCmdClearDepthStencilImage;
- if (!strcmp(funcName, "vkCmdClearAttachments"))
- return (PFN_vkVoidFunction) vkCmdClearAttachments;
- if (!strcmp(funcName, "vkCmdResolveImage"))
- return (PFN_vkVoidFunction) vkCmdResolveImage;
- if (!strcmp(funcName, "vkCmdSetEvent"))
- return (PFN_vkVoidFunction) vkCmdSetEvent;
- if (!strcmp(funcName, "vkCmdResetEvent"))
- return (PFN_vkVoidFunction) vkCmdResetEvent;
- if (!strcmp(funcName, "vkCmdWaitEvents"))
- return (PFN_vkVoidFunction) vkCmdWaitEvents;
- if (!strcmp(funcName, "vkCmdPipelineBarrier"))
- return (PFN_vkVoidFunction) vkCmdPipelineBarrier;
- if (!strcmp(funcName, "vkCmdBeginQuery"))
- return (PFN_vkVoidFunction) vkCmdBeginQuery;
- if (!strcmp(funcName, "vkCmdEndQuery"))
- return (PFN_vkVoidFunction) vkCmdEndQuery;
- if (!strcmp(funcName, "vkCmdResetQueryPool"))
- return (PFN_vkVoidFunction) vkCmdResetQueryPool;
- if (!strcmp(funcName, "vkCmdWriteTimestamp"))
- return (PFN_vkVoidFunction) vkCmdWriteTimestamp;
- if (!strcmp(funcName, "vkCreateFramebuffer"))
- return (PFN_vkVoidFunction) vkCreateFramebuffer;
- if (!strcmp(funcName, "vkCreateShaderModule"))
- return (PFN_vkVoidFunction) vkCreateShaderModule;
- if (!strcmp(funcName, "vkCreateRenderPass"))
- return (PFN_vkVoidFunction) vkCreateRenderPass;
- if (!strcmp(funcName, "vkCmdBeginRenderPass"))
- return (PFN_vkVoidFunction) vkCmdBeginRenderPass;
- if (!strcmp(funcName, "vkCmdNextSubpass"))
- return (PFN_vkVoidFunction) vkCmdNextSubpass;
- if (!strcmp(funcName, "vkCmdEndRenderPass"))
- return (PFN_vkVoidFunction) vkCmdEndRenderPass;
- if (!strcmp(funcName, "vkCmdExecuteCommands"))
- return (PFN_vkVoidFunction) vkCmdExecuteCommands;
- if (!strcmp(funcName, "vkSetEvent"))
- return (PFN_vkVoidFunction) vkSetEvent;
- if (!strcmp(funcName, "vkMapMemory"))
- return (PFN_vkVoidFunction) vkMapMemory;
- if (!strcmp(funcName, "vkGetQueryPoolResults"))
- return (PFN_vkVoidFunction) vkGetQueryPoolResults;
- if (!strcmp(funcName, "vkBindImageMemory"))
- return (PFN_vkVoidFunction) vkBindImageMemory;
- if (!strcmp(funcName, "vkQueueBindSparse"))
- return (PFN_vkVoidFunction) vkQueueBindSparse;
- if (!strcmp(funcName, "vkCreateSemaphore"))
- return (PFN_vkVoidFunction) vkCreateSemaphore;
-
- if (dev == NULL)
- return NULL;
-
- layer_data *dev_data;
- dev_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
-
- if (dev_data->device_extensions.wsi_enabled)
- {
- if (!strcmp(funcName, "vkCreateSwapchainKHR"))
- return (PFN_vkVoidFunction) vkCreateSwapchainKHR;
- if (!strcmp(funcName, "vkDestroySwapchainKHR"))
- return (PFN_vkVoidFunction) vkDestroySwapchainKHR;
- if (!strcmp(funcName, "vkGetSwapchainImagesKHR"))
- return (PFN_vkVoidFunction) vkGetSwapchainImagesKHR;
- if (!strcmp(funcName, "vkAcquireNextImageKHR"))
- return (PFN_vkVoidFunction) vkAcquireNextImageKHR;
- if (!strcmp(funcName, "vkQueuePresentKHR"))
- return (PFN_vkVoidFunction) vkQueuePresentKHR;
- }
-
- VkLayerDispatchTable* pTable = dev_data->device_dispatch_table;
- if (dev_data->device_extensions.debug_marker_enabled)
- {
- if (!strcmp(funcName, "vkCmdDbgMarkerBegin"))
- return (PFN_vkVoidFunction) vkCmdDbgMarkerBegin;
- if (!strcmp(funcName, "vkCmdDbgMarkerEnd"))
- return (PFN_vkVoidFunction) vkCmdDbgMarkerEnd;
- }
-    if (pTable->GetDeviceProcAddr == NULL)
-        return NULL;
-    return pTable->GetDeviceProcAddr(dev, funcName);
-}
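The long if/`strcmp` ladder above does O(n) string compares per lookup. A hash table gets the same dispatch in one lookup. This is a sketch only — `lookup_proc`, `VoidFn`, and the two `fake_*` entries are illustrative stand-ins, not the layer's real intercept list:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Generic function-pointer type standing in for PFN_vkVoidFunction.
using VoidFn = void (*)();

// Hypothetical intercepts standing in for the layer's real entry points.
static void fake_destroy_device() {}
static void fake_queue_submit() {}

// One hash lookup replaces the chain of strcmp branches.
VoidFn lookup_proc(const char* name) {
    static const std::unordered_map<std::string, VoidFn> table = {
        {"vkDestroyDevice", fake_destroy_device},
        {"vkQueueSubmit",   fake_queue_submit},
    };
    auto it = table.find(name);
    return it == table.end() ? nullptr : it->second;  // nullptr: fall through to next layer
}
```

Returning `nullptr` for an unknown name preserves the existing behavior of falling through to the next layer's `GetDeviceProcAddr`.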
-
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
- if (!strcmp(funcName, "vkGetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
- if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
- if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
- if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
- if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
- if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceExtensionProperties;
-
- if (instance == NULL)
- return NULL;
-
- PFN_vkVoidFunction fptr;
-
- layer_data* my_data;
- my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
- if (fptr)
- return fptr;
-
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
- if (pTable->GetInstanceProcAddr == NULL)
- return NULL;
- return pTable->GetInstanceProcAddr(instance, funcName);
-}
diff --git a/layers/draw_state.h b/layers/draw_state.h
deleted file mode 100755
index c7435c4cd..000000000
--- a/layers/draw_state.h
+++ /dev/null
@@ -1,707 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Courtney Goeltzenleuchter <courtneygo@google.com>
- * Author: Tobin Ehlis <tobine@google.com>
- * Author: Chris Forbes <chrisf@ijw.co.nz>
- * Author: Mark Lobodzinski <mark@lunarg.com>
- */
-
-#include "vulkan/vk_layer.h"
-#include <atomic>
-#include <memory>
-#include <set>
-#include <unordered_map>
-#include <unordered_set>
-#include <vector>
-
-using std::unique_ptr;
-using std::unordered_map;
-using std::unordered_set;
-using std::vector;
-
-// Draw State ERROR codes
-typedef enum _DRAW_STATE_ERROR {
- DRAWSTATE_NONE, // Used for INFO & other non-error messages
- DRAWSTATE_INTERNAL_ERROR, // Error with DrawState internal data structures
- DRAWSTATE_NO_PIPELINE_BOUND, // Unable to identify a bound pipeline
- DRAWSTATE_INVALID_POOL, // Invalid DS pool
- DRAWSTATE_INVALID_SET, // Invalid DS
- DRAWSTATE_INVALID_LAYOUT, // Invalid DS layout
- DRAWSTATE_INVALID_IMAGE_LAYOUT, // Invalid Image layout
- DRAWSTATE_INVALID_PIPELINE, // Invalid Pipeline handle referenced
- DRAWSTATE_INVALID_PIPELINE_LAYOUT, // Invalid PipelineLayout
- DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, // Attempt to create a pipeline
- // with invalid state
- DRAWSTATE_INVALID_COMMAND_BUFFER, // Invalid CommandBuffer referenced
- DRAWSTATE_INVALID_BARRIER, // Invalid Barrier
- DRAWSTATE_INVALID_BUFFER, // Invalid Buffer
- DRAWSTATE_INVALID_QUERY, // Invalid Query
- DRAWSTATE_INVALID_FENCE, // Invalid Fence
- DRAWSTATE_INVALID_SEMAPHORE, // Invalid Semaphore
- DRAWSTATE_INVALID_EVENT, // Invalid Event
- DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in vkCmdBindVertexData() too
- // large for PSO's
- // pVertexBindingDescriptions array
- DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR, // binding offset in
- // vkCmdBindIndexBuffer() out of
- // alignment based on indexType
- // DRAWSTATE_MISSING_DOT_PROGRAM, // No "dot" program in order
- // to generate png image
- DRAWSTATE_OUT_OF_MEMORY, // malloc failed
- DRAWSTATE_INVALID_DESCRIPTOR_SET, // Descriptor Set handle is unknown
- DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, // Type in layout vs. update are not the
- // same
- DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH, // StageFlags in layout are not
- // the same throughout a single
- // VkWriteDescriptorSet update
- DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, // Descriptors set for update out
- // of bounds for corresponding
- // layout section
- DRAWSTATE_DESCRIPTOR_POOL_EMPTY, // Attempt to allocate descriptor from a
- // pool with no more descriptors of that
- // type available
- DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL, // Invalid to call
- // vkFreeDescriptorSets on Sets
- // allocated from a NON_FREE Pool
- DRAWSTATE_INVALID_UPDATE_INDEX, // Index of requested update is invalid for
- // specified descriptors set
- DRAWSTATE_INVALID_UPDATE_STRUCT, // Struct in DS Update tree is of invalid
- // type
- DRAWSTATE_NUM_SAMPLES_MISMATCH, // Number of samples in bound PSO does not
- // match number in FB of current RenderPass
- DRAWSTATE_NO_END_COMMAND_BUFFER, // Must call vkEndCommandBuffer() before
- // QueueSubmit on that commandBuffer
- DRAWSTATE_NO_BEGIN_COMMAND_BUFFER, // Binding cmds or calling End on CB that
- // never had vkBeginCommandBuffer()
- // called on it
- DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, // Cmd Buffer created with
- // VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT
- // flag is submitted
- // multiple times
- DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, // vkCmdExecuteCommands() called
- // with a primary commandBuffer
- // in pCommandBuffers array
- DRAWSTATE_VIEWPORT_NOT_BOUND, // Draw submitted with no viewport state bound
- DRAWSTATE_SCISSOR_NOT_BOUND, // Draw submitted with no scissor state bound
- DRAWSTATE_LINE_WIDTH_NOT_BOUND, // Draw submitted with no line width state
- // bound
- DRAWSTATE_DEPTH_BIAS_NOT_BOUND, // Draw submitted with no depth bias state
- // bound
- DRAWSTATE_BLEND_NOT_BOUND, // Draw submitted with no blend state bound when
- // color write enabled
- DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND, // Draw submitted with no depth bounds
- // state bound when depth enabled
- DRAWSTATE_STENCIL_NOT_BOUND, // Draw submitted with no stencil state bound
- // when stencil enabled
-    DRAWSTATE_INDEX_BUFFER_NOT_BOUND, // Indexed draw submitted with no index
-                                      // buffer bound
- DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, // Draw submitted PSO Pipeline
- // layout that's not compatible
- // with layout from
- // BindDescriptorSets
- DRAWSTATE_RENDERPASS_INCOMPATIBLE, // Incompatible renderpasses between
- // secondary cmdBuffer and primary
- // cmdBuffer or framebuffer
- DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE, // Incompatible framebuffer between
- // secondary cmdBuffer and active
- // renderPass
- DRAWSTATE_INVALID_RENDERPASS, // Use of a NULL or otherwise invalid
- // RenderPass object
- DRAWSTATE_INVALID_RENDERPASS_CMD, // Invalid cmd submitted while a
- // RenderPass is active
- DRAWSTATE_NO_ACTIVE_RENDERPASS, // Rendering cmd submitted without an active
- // RenderPass
- DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, // DescriptorSet bound but it was
- // never updated. This is a warning
- // code.
- DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND, // DescriptorSet used by pipeline at
- // draw time is not bound, or has been
- // disturbed (which would have flagged
- // previous warning)
- DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, // DescriptorSets bound with
- // different number of dynamic
- // descriptors that were included in
- // dynamicOffsetCount
- DRAWSTATE_CLEAR_CMD_BEFORE_DRAW, // Clear cmd issued before any Draw in
- // CommandBuffer, should use RenderPass Ops
- // instead
- DRAWSTATE_BEGIN_CB_INVALID_STATE, // CB state at Begin call is bad. Can be
- // Primary/Secondary CB created with
- // mismatched FB/RP information or CB in
- // RECORDING state
- DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, // CmdBuffer is being used in
- // violation of
- // VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT
- // rules (i.e. simultaneous use w/o
- // that bit set)
- DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, // Attempting to call Reset (or
- // Begin on recorded cmdBuffer) that
- // was allocated from Pool w/o
- // VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT
- // bit set
- DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, // Count for viewports and scissors
- // mismatch and/or state doesn't match
- // count
- DRAWSTATE_INVALID_IMAGE_ASPECT, // Image aspect is invalid for the current
- // operation
- DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, // Attachment reference must be
- // present in active subpass
- DRAWSTATE_INVALID_EXTENSION,
- DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, // A Descriptor of *_SAMPLER type is
- // being updated with an invalid or bad
- // Sampler
- DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, // Descriptors of
- // *COMBINED_IMAGE_SAMPLER
- // type are being updated
- // where some, but not all,
- // of the updates use
- // immutable samplers
- DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, // A Descriptor of *_IMAGE or
- // *_ATTACHMENT type is being updated
- // with an invalid or bad ImageView
- DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, // A Descriptor of *_TEXEL_BUFFER
- // type is being updated with an
- // invalid or bad BufferView
- DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, // A Descriptor of
- // *_[UNIFORM|STORAGE]_BUFFER_[DYNAMIC]
- // type is being updated with an
- // invalid or bad BufferView
- DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, // At draw time the dynamic offset
- // combined with buffer offset and range
- // oversteps size of buffer
- DRAWSTATE_DOUBLE_DESTROY, // Destroying an object twice
- DRAWSTATE_OBJECT_INUSE, // Destroying or modifying an object in use by a
- // command buffer
- DRAWSTATE_QUEUE_FORWARD_PROGRESS, // Queue cannot guarantee forward progress
- DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, // Dynamic Uniform Buffer Offsets
- // violate device limit
- DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, // Dynamic Storage Buffer Offsets
- // violate device limit
-} DRAW_STATE_ERROR;
-
-typedef enum _SHADER_CHECKER_ERROR {
- SHADER_CHECKER_NONE,
- SHADER_CHECKER_FS_MIXED_BROADCAST, /* FS writes broadcast output AND custom outputs */
- SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, /* Type mismatch between shader stages or shader and pipeline */
- SHADER_CHECKER_OUTPUT_NOT_CONSUMED, /* Entry appears in output interface, but missing in input */
- SHADER_CHECKER_INPUT_NOT_PRODUCED, /* Entry appears in input interface, but missing in output */
- SHADER_CHECKER_NON_SPIRV_SHADER, /* Shader image is not SPIR-V */
- SHADER_CHECKER_INCONSISTENT_SPIRV, /* General inconsistency within a SPIR-V module */
- SHADER_CHECKER_UNKNOWN_STAGE, /* Stage is not supported by analysis */
- SHADER_CHECKER_INCONSISTENT_VI, /* VI state contains conflicting binding or attrib descriptions */
- SHADER_CHECKER_MISSING_DESCRIPTOR, /* Shader attempts to use a descriptor binding not declared in the layout */
- SHADER_CHECKER_BAD_SPECIALIZATION, /* Specialization map entry points outside specialization data block */
- SHADER_CHECKER_MISSING_ENTRYPOINT, /* Shader module does not contain the requested entrypoint */
-} SHADER_CHECKER_ERROR;
-
-typedef enum _DRAW_TYPE
-{
- DRAW = 0,
- DRAW_INDEXED = 1,
- DRAW_INDIRECT = 2,
- DRAW_INDEXED_INDIRECT = 3,
- DRAW_BEGIN_RANGE = DRAW,
- DRAW_END_RANGE = DRAW_INDEXED_INDIRECT,
- NUM_DRAW_TYPES = (DRAW_END_RANGE - DRAW_BEGIN_RANGE + 1),
-} DRAW_TYPE;
-
-typedef struct _SHADER_DS_MAPPING {
- uint32_t slotCount;
- VkDescriptorSetLayoutCreateInfo* pShaderMappingSlot;
-} SHADER_DS_MAPPING;
-
-typedef struct _GENERIC_HEADER {
- VkStructureType sType;
- const void* pNext;
-} GENERIC_HEADER;
-
-typedef struct _PIPELINE_NODE {
- VkPipeline pipeline;
- VkGraphicsPipelineCreateInfo graphicsPipelineCI;
- VkPipelineVertexInputStateCreateInfo vertexInputCI;
- VkPipelineInputAssemblyStateCreateInfo iaStateCI;
- VkPipelineTessellationStateCreateInfo tessStateCI;
- VkPipelineViewportStateCreateInfo vpStateCI;
- VkPipelineRasterizationStateCreateInfo rsStateCI;
- VkPipelineMultisampleStateCreateInfo msStateCI;
- VkPipelineColorBlendStateCreateInfo cbStateCI;
- VkPipelineDepthStencilStateCreateInfo dsStateCI;
- VkPipelineDynamicStateCreateInfo dynStateCI;
- VkPipelineShaderStageCreateInfo vsCI;
- VkPipelineShaderStageCreateInfo tcsCI;
- VkPipelineShaderStageCreateInfo tesCI;
- VkPipelineShaderStageCreateInfo gsCI;
- VkPipelineShaderStageCreateInfo fsCI;
-    // Compute shader is included in VkComputePipelineCreateInfo
- VkComputePipelineCreateInfo computePipelineCI;
- // Flag of which shader stages are active for this pipeline
- uint32_t active_shaders;
- // Capture which sets are actually used by the shaders of this pipeline
- std::set<unsigned> active_sets;
- // Vtx input info (if any)
- uint32_t vtxBindingCount; // number of bindings
- VkVertexInputBindingDescription* pVertexBindingDescriptions;
- uint32_t vtxAttributeCount; // number of attributes
- VkVertexInputAttributeDescription* pVertexAttributeDescriptions;
- uint32_t attachmentCount; // number of CB attachments
- VkPipelineColorBlendAttachmentState* pAttachments;
- // Default constructor
- _PIPELINE_NODE():pipeline{},
- graphicsPipelineCI{},
- vertexInputCI{},
- iaStateCI{},
- tessStateCI{},
- vpStateCI{},
- rsStateCI{},
- msStateCI{},
- cbStateCI{},
- dsStateCI{},
- dynStateCI{},
- vsCI{},
- tcsCI{},
- tesCI{},
- gsCI{},
- fsCI{},
- computePipelineCI{},
- active_shaders(0),
- vtxBindingCount(0),
- pVertexBindingDescriptions(0),
- vtxAttributeCount(0),
- pVertexAttributeDescriptions(0),
- attachmentCount(0),
- pAttachments(0)
- {};
-} PIPELINE_NODE;
-
-class BASE_NODE {
- public:
- std::atomic_int in_use;
-};
-
-typedef struct _SAMPLER_NODE {
- VkSampler sampler;
- VkSamplerCreateInfo createInfo;
-
- _SAMPLER_NODE(const VkSampler* ps, const VkSamplerCreateInfo* pci) : sampler(*ps), createInfo(*pci) {};
-} SAMPLER_NODE;
-
-typedef struct _IMAGE_NODE {
- VkImageLayout layout;
- VkFormat format;
-} IMAGE_NODE;
-
-typedef struct _IMAGE_CMD_BUF_NODE {
- VkImageLayout initialLayout;
- VkImageLayout layout;
-} IMAGE_CMD_BUF_NODE;
-
-class BUFFER_NODE : public BASE_NODE {
- public:
- using BASE_NODE::in_use;
- unique_ptr<VkBufferCreateInfo> create_info;
-};
-
-struct RENDER_PASS_NODE {
- VkRenderPassCreateInfo const* pCreateInfo;
- std::vector<bool> hasSelfDependency;
- vector<std::vector<VkFormat>> subpassColorFormats;
-
- RENDER_PASS_NODE(VkRenderPassCreateInfo const *pCreateInfo) : pCreateInfo(pCreateInfo)
- {
- uint32_t i;
-
- subpassColorFormats.reserve(pCreateInfo->subpassCount);
- for (i = 0; i < pCreateInfo->subpassCount; i++) {
- const VkSubpassDescription *subpass = &pCreateInfo->pSubpasses[i];
- vector<VkFormat> color_formats;
- uint32_t j;
-
- color_formats.reserve(subpass->colorAttachmentCount);
- for (j = 0; j < subpass->colorAttachmentCount; j++) {
- const uint32_t att = subpass->pColorAttachments[j].attachment;
- const VkFormat format = pCreateInfo->pAttachments[att].format;
-
- color_formats.push_back(format);
- }
-
- subpassColorFormats.push_back(color_formats);
- }
- }
-};
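The `RENDER_PASS_NODE` constructor above resolves each subpass's color-attachment indices to formats once at create time, so draw-time validation never re-walks the create-info. A minimal sketch of that caching, using simplified stand-in types (`AttachmentDesc`, `SubpassDesc`, and a plain `uint32_t` in place of `VkFormat` are assumptions, not the real Vulkan structs):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for VkAttachmentDescription / VkSubpassDescription.
struct AttachmentDesc { uint32_t format; };
struct SubpassDesc    { std::vector<uint32_t> colorAttachments; };

// Resolve attachment indices to formats once, per subpass.
std::vector<std::vector<uint32_t>>
cache_subpass_color_formats(const std::vector<AttachmentDesc>& attachments,
                            const std::vector<SubpassDesc>& subpasses) {
    std::vector<std::vector<uint32_t>> result;
    result.reserve(subpasses.size());
    for (const auto& sp : subpasses) {
        std::vector<uint32_t> formats;
        formats.reserve(sp.colorAttachments.size());
        for (uint32_t att : sp.colorAttachments)
            formats.push_back(attachments[att].format);  // index -> format
        result.push_back(formats);
    }
    return result;
}
```

Trading a little memory at render-pass creation for cheap per-draw checks is the same design choice the node makes with `hasSelfDependency`.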
-
-class PHYS_DEV_PROPERTIES_NODE {
- public:
- VkPhysicalDeviceProperties properties;
- VkPhysicalDeviceFeatures features;
- vector<VkQueueFamilyProperties> queue_family_properties;
-};
-
-class FENCE_NODE : public BASE_NODE {
- public:
- using BASE_NODE::in_use;
- VkQueue queue;
- vector<VkCommandBuffer> cmdBuffers;
- bool needsSignaled;
- VkFence priorFence;
-};
-
-class SEMAPHORE_NODE : public BASE_NODE {
- public:
- using BASE_NODE::in_use;
- uint32_t signaled;
-};
-
-class EVENT_NODE : public BASE_NODE {
- public:
- using BASE_NODE::in_use;
- bool needsSignaled;
- VkPipelineStageFlags stageMask;
-};
-
-class QUEUE_NODE {
- public:
- VkDevice device;
- VkFence priorFence;
- vector<VkCommandBuffer> untrackedCmdBuffers;
- unordered_set<VkCommandBuffer> inFlightCmdBuffers;
-};
-
-class QUERY_POOL_NODE : public BASE_NODE {
- public:
- VkQueryPoolCreateInfo createInfo;
-};
-
-// Descriptor Data structures
-// Layout Node has the core layout data
-typedef struct _LAYOUT_NODE {
- VkDescriptorSetLayout layout;
- VkDescriptorSetLayoutCreateInfo createInfo;
- uint32_t startIndex; // 1st index of this layout
- uint32_t endIndex; // last index of this layout
- uint32_t dynamicDescriptorCount; // Total count of dynamic descriptors used
- // by this layout
- vector<VkDescriptorType> descriptorTypes; // Type per descriptor in this
- // layout to verify correct
- // updates
- vector<VkShaderStageFlags> stageFlags; // stageFlags per descriptor in this
- // layout to verify correct updates
- unordered_map<uint32_t, uint32_t> bindingToIndexMap; // map set binding # to
- // pBindings index
- // Default constructor
- _LAYOUT_NODE():layout{},
- createInfo{},
- startIndex(0),
- endIndex(0),
- dynamicDescriptorCount(0)
- {};
-} LAYOUT_NODE;
-
-// Store layouts and pushconstants for PipelineLayout
-struct PIPELINE_LAYOUT_NODE {
- vector<VkDescriptorSetLayout> descriptorSetLayouts;
- vector<VkPushConstantRange> pushConstantRanges;
-};
-
-class SET_NODE : public BASE_NODE {
- public:
- using BASE_NODE::in_use;
- VkDescriptorSet set;
- VkDescriptorPool pool;
- // Head of LL of all Update structs for this set
- GENERIC_HEADER* pUpdateStructs;
- // Total num of descriptors in this set (count of its layout plus all prior layouts)
- uint32_t descriptorCount;
- GENERIC_HEADER** ppDescriptors; // Array where each index points to update node for its slot
- LAYOUT_NODE* pLayout; // Layout for this set
- SET_NODE* pNext;
- unordered_set<VkCommandBuffer> boundCmdBuffers; // Cmd buffers that this set has been bound to
-    SET_NODE() : set(VK_NULL_HANDLE), pool(VK_NULL_HANDLE), pUpdateStructs(NULL), descriptorCount(0), ppDescriptors(NULL), pLayout(NULL), pNext(NULL) {};
-};
-
-typedef struct _DESCRIPTOR_POOL_NODE {
- VkDescriptorPool pool;
- uint32_t maxSets;
- VkDescriptorPoolCreateInfo createInfo;
- SET_NODE* pSets; // Head of LL of sets for this Pool
- vector<uint32_t> maxDescriptorTypeCount; // max # of descriptors of each type in this pool
- vector<uint32_t> availableDescriptorTypeCount; // available # of descriptors of each type in this pool
-
- _DESCRIPTOR_POOL_NODE(const VkDescriptorPool pool,
- const VkDescriptorPoolCreateInfo *pCreateInfo)
- : pool(pool), maxSets(pCreateInfo->maxSets), createInfo(*pCreateInfo),
- pSets(NULL), maxDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE),
- availableDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE) {
-        if (createInfo.poolSizeCount) { // Shadow type struct from ptr into local struct
-            size_t poolSizeBytes = createInfo.poolSizeCount * sizeof(VkDescriptorPoolSize);
-            createInfo.pPoolSizes = new VkDescriptorPoolSize[createInfo.poolSizeCount];
-            memcpy((void*)createInfo.pPoolSizes, pCreateInfo->pPoolSizes, poolSizeBytes);
- // Now set max counts for each descriptor type based on count of that type times maxSets
- uint32_t i=0;
- for (i=0; i<createInfo.poolSizeCount; ++i) {
- uint32_t typeIndex = static_cast<uint32_t>(createInfo.pPoolSizes[i].type);
- uint32_t poolSizeCount = createInfo.pPoolSizes[i].descriptorCount;
- maxDescriptorTypeCount[typeIndex] += poolSizeCount;
- }
- for (i=0; i<maxDescriptorTypeCount.size(); ++i) {
- maxDescriptorTypeCount[i] *= createInfo.maxSets;
- // Initially the available counts are equal to the max counts
- availableDescriptorTypeCount[i] = maxDescriptorTypeCount[i];
- }
- } else {
- createInfo.pPoolSizes = NULL; // Make sure this is NULL so we don't try to clean it up
- }
- }
- ~_DESCRIPTOR_POOL_NODE() {
- if (createInfo.pPoolSizes) {
- delete[] createInfo.pPoolSizes;
- }
- // TODO : pSets are currently freed in deletePools function which uses freeShadowUpdateTree function
- // need to migrate that struct to smart ptrs for auto-cleanup
- }
-} DESCRIPTOR_POOL_NODE;
-
-// Cmd Buffer Tracking
-typedef enum _CMD_TYPE
-{
- CMD_BINDPIPELINE,
- CMD_BINDPIPELINEDELTA,
- CMD_SETVIEWPORTSTATE,
- CMD_SETSCISSORSTATE,
- CMD_SETLINEWIDTHSTATE,
- CMD_SETDEPTHBIASSTATE,
- CMD_SETBLENDSTATE,
- CMD_SETDEPTHBOUNDSSTATE,
- CMD_SETSTENCILREADMASKSTATE,
- CMD_SETSTENCILWRITEMASKSTATE,
- CMD_SETSTENCILREFERENCESTATE,
- CMD_BINDDESCRIPTORSETS,
- CMD_BINDINDEXBUFFER,
- CMD_BINDVERTEXBUFFER,
- CMD_DRAW,
- CMD_DRAWINDEXED,
- CMD_DRAWINDIRECT,
- CMD_DRAWINDEXEDINDIRECT,
- CMD_DISPATCH,
- CMD_DISPATCHINDIRECT,
- CMD_COPYBUFFER,
- CMD_COPYIMAGE,
- CMD_BLITIMAGE,
- CMD_COPYBUFFERTOIMAGE,
- CMD_COPYIMAGETOBUFFER,
- CMD_CLONEIMAGEDATA,
- CMD_UPDATEBUFFER,
- CMD_FILLBUFFER,
- CMD_CLEARCOLORIMAGE,
- CMD_CLEARATTACHMENTS,
- CMD_CLEARDEPTHSTENCILIMAGE,
- CMD_RESOLVEIMAGE,
- CMD_SETEVENT,
- CMD_RESETEVENT,
- CMD_WAITEVENTS,
- CMD_PIPELINEBARRIER,
- CMD_BEGINQUERY,
- CMD_ENDQUERY,
- CMD_RESETQUERYPOOL,
- CMD_COPYQUERYPOOLRESULTS,
- CMD_WRITETIMESTAMP,
- CMD_INITATOMICCOUNTERS,
- CMD_LOADATOMICCOUNTERS,
- CMD_SAVEATOMICCOUNTERS,
- CMD_BEGINRENDERPASS,
- CMD_NEXTSUBPASS,
- CMD_ENDRENDERPASS,
- CMD_EXECUTECOMMANDS,
- CMD_DBGMARKERBEGIN,
- CMD_DBGMARKEREND,
-} CMD_TYPE;
-// Data structure for holding sequence of cmds in cmd buffer
-typedef struct _CMD_NODE {
- CMD_TYPE type;
- uint64_t cmdNumber;
-} CMD_NODE;
-
-typedef enum _CB_STATE
-{
- CB_NEW, // Newly created CB w/o any cmds
- CB_RECORDING, // BeginCB has been called on this CB
- CB_RECORDED, // EndCB has been called on this CB
- CB_INVALID // CB had a bound descriptor set destroyed or updated
-} CB_STATE;
-// CB Status -- used to track status of various bindings on cmd buffer objects
-typedef VkFlags CBStatusFlags;
-typedef enum _CBStatusFlagBits
-{
- CBSTATUS_NONE = 0x00000000, // No status is set
- CBSTATUS_VIEWPORT_SET = 0x00000001, // Viewport has been set
- CBSTATUS_LINE_WIDTH_SET = 0x00000002, // Line width has been set
- CBSTATUS_DEPTH_BIAS_SET = 0x00000004, // Depth bias has been set
- CBSTATUS_COLOR_BLEND_WRITE_ENABLE = 0x00000008, // PSO w/ CB Enable set has been set
- CBSTATUS_BLEND_SET = 0x00000010, // Blend state object has been set
- CBSTATUS_DEPTH_WRITE_ENABLE = 0x00000020, // PSO w/ Depth Enable set has been set
- CBSTATUS_STENCIL_TEST_ENABLE = 0x00000040, // PSO w/ Stencil Enable set has been set
- CBSTATUS_DEPTH_BOUNDS_SET = 0x00000080, // Depth bounds state object has been set
- CBSTATUS_STENCIL_READ_MASK_SET = 0x00000100, // Stencil read mask has been set
- CBSTATUS_STENCIL_WRITE_MASK_SET = 0x00000200, // Stencil write mask has been set
- CBSTATUS_STENCIL_REFERENCE_SET = 0x00000400, // Stencil reference has been set
- CBSTATUS_INDEX_BUFFER_BOUND = 0x00000800, // Index buffer has been set
- CBSTATUS_SCISSOR_SET = 0x00001000, // Scissor has been set
- CBSTATUS_ALL = 0x00001FFF, // All dynamic state set
-} CBStatusFlagBits;
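The `CBStatusFlags` bits above let the layer record which pieces of dynamic state have been set on a command buffer and then test a draw against a required mask with a single AND. A stdlib-only sketch of that accumulation (`StatusFlagBits` here is a reduced illustrative enum, not the layer's):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative subset of the status bits; values chosen for the example.
enum StatusFlagBits : uint32_t {
    STATUS_NONE = 0x0,
    STATUS_VIEWPORT_SET = 0x1,
    STATUS_SCISSOR_SET = 0x2,
    STATUS_ALL = 0x3,  // OR of all bits above
};
using StatusFlags = uint32_t;

// Record a binding by OR-ing its bit into the accumulated status.
StatusFlags set_status(StatusFlags status, StatusFlagBits bit) {
    return status | bit;
}

// A draw is valid only when every required bit is present.
bool has_all(StatusFlags status, StatusFlags required) {
    return (status & required) == required;
}
```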
-
-typedef struct stencil_data {
- uint32_t compareMask;
- uint32_t writeMask;
- uint32_t reference;
-} CBStencilData;
-
-typedef struct _DRAW_DATA {
- vector<VkBuffer> buffers;
-} DRAW_DATA;
-
-struct ImageSubresourcePair {
- VkImage image;
- bool hasSubresource;
- VkImageSubresource subresource;
-};
-
-bool operator==(const ImageSubresourcePair &img1, const ImageSubresourcePair &img2) {
- if (img1.image != img2.image || img1.hasSubresource != img2.hasSubresource) return false;
- return !img1.hasSubresource || (img1.subresource.aspectMask == img2.subresource.aspectMask &&
- img1.subresource.mipLevel == img2.subresource.mipLevel &&
- img1.subresource.arrayLayer == img2.subresource.arrayLayer);
-}
-
-namespace std {
-template <> struct hash<ImageSubresourcePair> {
- size_t operator()(ImageSubresourcePair img) const throw() {
- size_t hashVal =
- hash<uint64_t>()(reinterpret_cast<uint64_t &>(img.image));
- hashVal ^= hash<bool>()(img.hasSubresource);
- if (img.hasSubresource) {
- hashVal ^= hash<uint32_t>()(reinterpret_cast<uint32_t &>(img.subresource.aspectMask));
- hashVal ^= hash<uint32_t>()(img.subresource.mipLevel);
- hashVal ^= hash<uint32_t>()(img.subresource.arrayLayer);
- }
- return hashVal;
- }
-};
-}
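Both key structs above (`ImageSubresourcePair`, and `QueryObject` below) follow the same idiom: define `operator==` and specialize `std::hash` so the type can key an `unordered_map` or `unordered_set`, XOR-combining the members' hashes. A minimal self-contained version of that idiom (`PairKey` is an illustrative name):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical two-field key, analogous to the pairs used by the layer.
struct PairKey {
    uint64_t handle;
    uint32_t index;
};

bool operator==(const PairKey &a, const PairKey &b) {
    return a.handle == b.handle && a.index == b.index;
}

// Specializing std::hash makes PairKey usable as an unordered container key.
// XOR-combining member hashes matches the style of the layer code above.
namespace std {
template <> struct hash<PairKey> {
    size_t operator()(const PairKey &k) const noexcept {
        return hash<uint64_t>()(k.handle) ^ hash<uint32_t>()(k.index);
    }
};
}
```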
-
-struct QueryObject {
- VkQueryPool pool;
- uint32_t index;
-};
-
-bool operator==(const QueryObject& query1, const QueryObject& query2) {
- return (query1.pool == query2.pool && query1.index == query2.index);
-}
-
-namespace std {
-template <>
-struct hash<QueryObject> {
- size_t operator()(QueryObject query) const throw() {
- return hash<uint64_t>()((uint64_t)(query.pool)) ^ hash<uint32_t>()(query.index);
- }
-};
-}
-
-// Cmd Buffer Wrapper Struct
-typedef struct _GLOBAL_CB_NODE {
- VkCommandBuffer commandBuffer;
- VkCommandBufferAllocateInfo createInfo;
- VkCommandBufferBeginInfo beginInfo;
- VkCommandBufferInheritanceInfo inheritanceInfo;
- VkFence fence; // fence tracking this cmd buffer
- VkDevice device; // device this CB belongs to
- uint64_t numCmds; // number of cmds in this CB
- uint64_t drawCount[NUM_DRAW_TYPES]; // Count of each type of draw in this CB
- CB_STATE state; // Track cmd buffer update state
- uint64_t submitCount; // Number of times CB has been submitted
- CBStatusFlags status; // Track status of various bindings on cmd buffer
- vector<CMD_NODE> cmds; // vector of commands bound to this command buffer
- // Currently storing "lastBound" objects on per-CB basis
- // long-term may want to create caches of "lastBound" states and could have
- // each individual CMD_NODE referencing its own "lastBound" state
- VkPipeline lastBoundPipeline;
- uint32_t lastVtxBinding;
- vector<VkBuffer> boundVtxBuffers;
- vector<VkViewport> viewports;
- vector<VkRect2D> scissors;
- float lineWidth;
- float depthBiasConstantFactor;
- float depthBiasClamp;
- float depthBiasSlopeFactor;
- float blendConstants[4];
- float minDepthBounds;
- float maxDepthBounds;
- CBStencilData front;
- CBStencilData back;
- VkDescriptorSet lastBoundDescriptorSet;
- VkPipelineLayout lastBoundPipelineLayout;
- VkRenderPassBeginInfo activeRenderPassBeginInfo;
- VkRenderPass activeRenderPass;
- VkSubpassContents activeSubpassContents;
- uint32_t activeSubpass;
- VkFramebuffer framebuffer;
- // Capture unique std::set of descriptorSets that are bound to this CB.
- std::set<VkDescriptorSet> uniqueBoundSets;
- // Keep running track of which sets are bound to which set# at any given time
- // Track descriptor sets that are destroyed or updated while bound to CB
- std::set<VkDescriptorSet> destroyedSets;
- std::set<VkDescriptorSet> updatedSets;
- vector<VkDescriptorSet> boundDescriptorSets; // Index is set# that given set is bound to
- vector<VkEvent> waitedEvents;
- vector<VkSemaphore> semaphores;
- vector<VkEvent> events;
- unordered_map<QueryObject, vector<VkEvent> > waitedEventsBeforeQueryReset;
- unordered_map<QueryObject, bool> queryToStateMap; // 0 is unavailable, 1 is available
- unordered_set<QueryObject> activeQueries;
- unordered_set<QueryObject> startedQueries;
- unordered_map<ImageSubresourcePair, IMAGE_CMD_BUF_NODE> imageLayoutMap;
- unordered_map<VkImage, vector<ImageSubresourcePair>> imageSubresourceMap;
- unordered_map<VkEvent, VkPipelineStageFlags> eventToStageMap;
- vector<DRAW_DATA> drawData;
- DRAW_DATA currentDrawData;
- VkCommandBuffer primaryCommandBuffer;
- // If cmd buffer is primary, track secondary command buffers pending
- // execution
- std::unordered_set<VkCommandBuffer> secondaryCommandBuffers;
- vector<uint32_t> dynamicOffsets; // one dynamic offset per dynamic descriptor bound to this CB
-} GLOBAL_CB_NODE;
-
-typedef struct _SWAPCHAIN_NODE {
- VkSwapchainCreateInfoKHR createInfo;
- uint32_t* pQueueFamilyIndices;
- std::vector<VkImage> images;
- _SWAPCHAIN_NODE(const VkSwapchainCreateInfoKHR *pCreateInfo) :
- createInfo(*pCreateInfo),
- pQueueFamilyIndices(NULL)
- {
- if (pCreateInfo->queueFamilyIndexCount) {
- pQueueFamilyIndices = new uint32_t[pCreateInfo->queueFamilyIndexCount];
- memcpy(pQueueFamilyIndices, pCreateInfo->pQueueFamilyIndices, pCreateInfo->queueFamilyIndexCount*sizeof(uint32_t));
- createInfo.pQueueFamilyIndices = pQueueFamilyIndices;
- }
- }
- ~_SWAPCHAIN_NODE()
- {
- if (pQueueFamilyIndices)
- delete pQueueFamilyIndices;
- }
-} SWAPCHAIN_NODE;
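The deleted `SWAPCHAIN_NODE` above copies the caller's `pQueueFamilyIndices` with `new uint32_t[]` but releases it with plain `delete`, which is undefined behavior for array allocations (`delete[]` is required). A stdlib-only sketch of the same shadow-copy pattern that sidesteps manual cleanup with `std::vector` (`CreateInfo` and `ShadowedInfo` are illustrative stand-ins, not layer types):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the queue-family fields of VkSwapchainCreateInfoKHR.
struct CreateInfo {
    uint32_t queueFamilyIndexCount;
    const uint32_t *pQueueFamilyIndices;  // caller-owned; only valid during the call
};

// Shadow the caller's array so the node can outlive the create call.
// The vector owns the copy, so no destructor (and no delete[]) is needed.
struct ShadowedInfo {
    CreateInfo createInfo;
    std::vector<uint32_t> queueFamilyIndices;

    explicit ShadowedInfo(const CreateInfo &info)
        : createInfo(info),
          queueFamilyIndices(info.pQueueFamilyIndices,
                             info.pQueueFamilyIndices + info.queueFamilyIndexCount) {
        // Repoint the shadowed struct at the owned copy.
        createInfo.pQueueFamilyIndices = queueFamilyIndices.data();
    }
};
```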
diff --git a/layers/glsl_compiler.c b/layers/glsl_compiler.c
deleted file mode 100644
index e69de29bb..000000000
--- a/layers/glsl_compiler.c
+++ /dev/null
diff --git a/layers/image.cpp b/layers/image.cpp
index 56197c9cc..a95584e1e 100644
--- a/layers/image.cpp
+++ b/layers/image.cpp
@@ -54,65 +54,37 @@ using namespace std;
using namespace std;
struct layer_data {
- debug_report_data *report_data;
- vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable* device_dispatch_table;
+ debug_report_data *report_data;
+ vector<VkDebugReportCallbackEXT> logging_callback;
+ VkLayerDispatchTable *device_dispatch_table;
VkLayerInstanceDispatchTable *instance_dispatch_table;
- VkPhysicalDevice physicalDevice;
- VkPhysicalDeviceProperties physicalDeviceProperties;
+ VkPhysicalDevice physicalDevice;
+ VkPhysicalDeviceProperties physicalDeviceProperties;
unordered_map<VkImage, IMAGE_STATE> imageMap;
- layer_data() :
- report_data(nullptr),
- device_dispatch_table(nullptr),
- instance_dispatch_table(nullptr),
- physicalDevice(0),
- physicalDeviceProperties()
- {};
+ layer_data()
+ : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr), physicalDevice(0),
+ physicalDeviceProperties(){};
};
-static unordered_map<void*, layer_data*> layer_data_map;
+static unordered_map<void *, layer_data *> layer_data_map;
+static int globalLockInitialized = 0;
+static loader_platform_thread_mutex globalLock;
-static void InitImage(layer_data *data, const VkAllocationCallbacks *pAllocator)
-{
- VkDebugReportCallbackEXT callback;
- uint32_t report_flags = getLayerOptionFlags("ImageReportFlags", 0);
+static void init_image(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
- uint32_t debug_action = 0;
- getLayerOptionEnum("ImageDebugAction", (uint32_t *) &debug_action);
- if(debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- FILE *log_output = NULL;
- const char* option_str = getLayerOption("ImageLogFilename");
- log_output = getLayerLogOutput(option_str, "Image");
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = log_callback;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(data->report_data, &dbgInfo, pAllocator, &callback);
- data->logging_callback.push_back(callback);
- }
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_image");
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = win32_debug_output_msg;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(data->report_data, &dbgInfo, pAllocator, &callback);
- data->logging_callback.push_back(callback);
+ if (!globalLockInitialized) {
+ loader_platform_thread_create_mutex(&globalLock);
+ globalLockInitialized = 1;
}
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
VkResult res = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
if (res == VK_SUCCESS) {
@@ -121,37 +93,29 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
return res;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT msgCallback,
- const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
+ VkDebugReportCallbackEXT msgCallback,
+ const VkAllocationCallbacks *pAllocator) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
+ size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
+ my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
+ pMsg);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -167,19 +131,15 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstance
my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
+ my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
- InitImage(my_data, pAllocator);
+ init_image(my_data, pAllocator);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
// Grab the key before the instance is destroyed.
dispatch_key key = get_dispatch_key(instance);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
@@ -196,17 +156,17 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
layer_debug_report_destroy_instance(my_data->report_data);
delete my_data->instance_dispatch_table;
layer_data_map.erase(key);
-
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -229,13 +189,13 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice p
my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
my_device_data->physicalDevice = physicalDevice;
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice, &(my_device_data->physicalDeviceProperties));
+ my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice,
+ &(my_device_data->physicalDeviceProperties));
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(device);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
my_data->device_dispatch_table->DestroyDevice(device, pAllocator);
@@ -243,256 +203,228 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, cons
layer_data_map.erase(key);
}
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
+static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}
-static const VkLayerProperties pc_global_layers[] = {
- {
- "VK_LAYER_LUNARG_image",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
+static const VkLayerProperties pc_global_layers[] = {{
+ "VK_LAYER_LUNARG_image", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
+}};
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties* pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers),
- pc_global_layers,
- pCount, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
- VkPhysicalDevice physicalDevice,
- const char* pLayerName,
- uint32_t* pCount,
- VkExtensionProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
// Image does not have any physical device extensions
if (pLayerName == NULL) {
dispatch_key key = get_dispatch_key(physicalDevice);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- return pTable->EnumerateDeviceExtensionProperties(
- physicalDevice,
- NULL,
- pCount,
- pProperties);
+ return pTable->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
} else {
return util_GetExtensionProperties(0, NULL, pCount, pProperties);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pCount,
- VkLayerProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
// ParamChecker's physical device layers are the same as global
- return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers,
- pCount, pProperties);
+ return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}
// Start of the Image layer proper
// Returns TRUE if a format is a depth-compatible format
-bool is_depth_format(VkFormat format)
-{
+bool is_depth_format(VkFormat format) {
bool result = VK_FALSE;
switch (format) {
- case VK_FORMAT_D16_UNORM:
- case VK_FORMAT_X8_D24_UNORM_PACK32:
- case VK_FORMAT_D32_SFLOAT:
- case VK_FORMAT_S8_UINT:
- case VK_FORMAT_D16_UNORM_S8_UINT:
- case VK_FORMAT_D24_UNORM_S8_UINT:
- case VK_FORMAT_D32_SFLOAT_S8_UINT:
- result = VK_TRUE;
- break;
- default:
- break;
+ case VK_FORMAT_D16_UNORM:
+ case VK_FORMAT_X8_D24_UNORM_PACK32:
+ case VK_FORMAT_D32_SFLOAT:
+ case VK_FORMAT_S8_UINT:
+ case VK_FORMAT_D16_UNORM_S8_UINT:
+ case VK_FORMAT_D24_UNORM_S8_UINT:
+ case VK_FORMAT_D32_SFLOAT_S8_UINT:
+ result = VK_TRUE;
+ break;
+ default:
+ break;
}
return result;
}
-static inline uint32_t validate_VkImageLayoutKHR(VkImageLayout input_value)
-{
- return ((validate_VkImageLayout(input_value) == 1) ||
- (input_value == VK_IMAGE_LAYOUT_PRESENT_SRC_KHR));
+static inline uint32_t validate_VkImageLayoutKHR(VkImageLayout input_value) {
+ return ((validate_VkImageLayout(input_value) == 1) || (input_value == VK_IMAGE_LAYOUT_PRESENT_SRC_KHR));
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage)
-{
- VkBool32 skipCall = VK_FALSE;
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkImage *pImage) {
+ VkBool32 skipCall = VK_FALSE;
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
VkImageFormatProperties ImageFormatProperties;
- layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkPhysicalDevice physicalDevice = device_data->physicalDevice;
- layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ VkPhysicalDevice physicalDevice = device_data->physicalDevice;
+ layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- if (pCreateInfo->format != VK_FORMAT_UNDEFINED)
- {
+ if (pCreateInfo->format != VK_FORMAT_UNDEFINED) {
VkFormatProperties properties;
- phy_dev_data->instance_dispatch_table->GetPhysicalDeviceFormatProperties(
- device_data->physicalDevice, pCreateInfo->format, &properties);
+ phy_dev_data->instance_dispatch_table->GetPhysicalDeviceFormatProperties(device_data->physicalDevice, pCreateInfo->format,
+ &properties);
- if ((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
- {
+ if ((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0)) {
char const str[] = "vkCreateImage parameter, VkFormat pCreateInfo->format, contains unsupported format";
        // TODO: Verify against Valid Use section of spec. Generally, if something yields an undefined result, it's invalid
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_FORMAT_UNSUPPORTED, "IMAGE", str);
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_FORMAT_UNSUPPORTED, "IMAGE", str);
}
}
// Internal call to get format info. Still goes through layers, could potentially go directly to ICD.
phy_dev_data->instance_dispatch_table->GetPhysicalDeviceImageFormatProperties(
- physicalDevice, pCreateInfo->format, pCreateInfo->imageType, pCreateInfo->tiling,
- pCreateInfo->usage, pCreateInfo->flags, &ImageFormatProperties);
+ physicalDevice, pCreateInfo->format, pCreateInfo->imageType, pCreateInfo->tiling, pCreateInfo->usage, pCreateInfo->flags,
+ &ImageFormatProperties);
VkDeviceSize imageGranularity = device_data->physicalDeviceProperties.limits.bufferImageGranularity;
imageGranularity = imageGranularity == 1 ? 0 : imageGranularity;
- if ((pCreateInfo->extent.depth > ImageFormatProperties.maxExtent.depth) ||
- (pCreateInfo->extent.width > ImageFormatProperties.maxExtent.width) ||
+ if ((pCreateInfo->extent.depth > ImageFormatProperties.maxExtent.depth) ||
+ (pCreateInfo->extent.width > ImageFormatProperties.maxExtent.width) ||
(pCreateInfo->extent.height > ImageFormatProperties.maxExtent.height)) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)pImage, __LINE__,
- IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
- "CreateImage extents exceed allowable limits for format: "
- "Width = %d Height = %d Depth = %d: Limits for Width = %d Height = %d Depth = %d for format %s.",
- pCreateInfo->extent.width, pCreateInfo->extent.height, pCreateInfo->extent.depth,
- ImageFormatProperties.maxExtent.width, ImageFormatProperties.maxExtent.height, ImageFormatProperties.maxExtent.depth,
- string_VkFormat(pCreateInfo->format));
-
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
+ "CreateImage extents exceed allowable limits for format: "
+ "Width = %d Height = %d Depth = %d: Limits for Width = %d Height = %d Depth = %d for format %s.",
+ pCreateInfo->extent.width, pCreateInfo->extent.height, pCreateInfo->extent.depth,
+ ImageFormatProperties.maxExtent.width, ImageFormatProperties.maxExtent.height,
+ ImageFormatProperties.maxExtent.depth, string_VkFormat(pCreateInfo->format));
}
- uint64_t totalSize = ((uint64_t)pCreateInfo->extent.width *
- (uint64_t)pCreateInfo->extent.height *
- (uint64_t)pCreateInfo->extent.depth *
- (uint64_t)pCreateInfo->arrayLayers *
- (uint64_t)pCreateInfo->samples *
- (uint64_t)vk_format_get_size(pCreateInfo->format) +
- (uint64_t)imageGranularity ) & ~(uint64_t)imageGranularity;
+ uint64_t totalSize = ((uint64_t)pCreateInfo->extent.width * (uint64_t)pCreateInfo->extent.height *
+ (uint64_t)pCreateInfo->extent.depth * (uint64_t)pCreateInfo->arrayLayers *
+ (uint64_t)pCreateInfo->samples * (uint64_t)vk_format_get_size(pCreateInfo->format) +
+ (uint64_t)imageGranularity) &
+ ~(uint64_t)imageGranularity;
if (totalSize > ImageFormatProperties.maxResourceSize) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)pImage, __LINE__,
- IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
- "CreateImage resource size exceeds allowable maximum "
- "Image resource size = %#" PRIxLEAST64 ", maximum resource size = %#" PRIxLEAST64 " ",
- totalSize, ImageFormatProperties.maxResourceSize);
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
+ "CreateImage resource size exceeds allowable maximum "
+ "Image resource size = %#" PRIxLEAST64 ", maximum resource size = %#" PRIxLEAST64 " ",
+ totalSize, ImageFormatProperties.maxResourceSize);
}
if (pCreateInfo->mipLevels > ImageFormatProperties.maxMipLevels) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)pImage, __LINE__,
- IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
- "CreateImage mipLevels=%d exceeds allowable maximum supported by format of %d",
- pCreateInfo->mipLevels, ImageFormatProperties.maxMipLevels);
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
+ "CreateImage mipLevels=%d exceeds allowable maximum supported by format of %d", pCreateInfo->mipLevels,
+ ImageFormatProperties.maxMipLevels);
}
if (pCreateInfo->arrayLayers > ImageFormatProperties.maxArrayLayers) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)pImage, __LINE__,
- IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
- "CreateImage arrayLayers=%d exceeds allowable maximum supported by format of %d",
- pCreateInfo->arrayLayers, ImageFormatProperties.maxArrayLayers);
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
+ "CreateImage arrayLayers=%d exceeds allowable maximum supported by format of %d",
+ pCreateInfo->arrayLayers, ImageFormatProperties.maxArrayLayers);
}
if ((pCreateInfo->samples & ImageFormatProperties.sampleCounts) == 0) {
- skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)pImage, __LINE__,
- IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
- "CreateImage samples %s is not supported by format 0x%.8X",
- string_VkSampleCountFlagBits(pCreateInfo->samples), ImageFormatProperties.sampleCounts);
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
+ "CreateImage samples %s is not supported by format 0x%.8X",
+ string_VkSampleCountFlagBits(pCreateInfo->samples), ImageFormatProperties.sampleCounts);
+ }
+
+ if (pCreateInfo->initialLayout != VK_IMAGE_LAYOUT_UNDEFINED && pCreateInfo->initialLayout != VK_IMAGE_LAYOUT_PREINITIALIZED) {
+ skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pImage, __LINE__, IMAGE_INVALID_LAYOUT, "Image",
+ "vkCreateImage parameter, pCreateInfo->initialLayout, must be VK_IMAGE_LAYOUT_UNDEFINED or "
+ "VK_IMAGE_LAYOUT_PREINITIALIZED");
}
if (VK_FALSE == skipCall) {
result = device_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
}
if (result == VK_SUCCESS) {
+ loader_platform_thread_lock_mutex(&globalLock);
device_data->imageMap[*pImage] = IMAGE_STATE(pCreateInfo);
+ loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks *pAllocator) {
layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ loader_platform_thread_lock_mutex(&globalLock);
device_data->imageMap.erase(image);
+ loader_platform_thread_unlock_mutex(&globalLock);
device_data->device_dispatch_table->DestroyImage(device, image, pAllocator);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkRenderPass *pRenderPass) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
VkBool32 skipCall = VK_FALSE;
- for(uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i)
- {
- if(pCreateInfo->pAttachments[i].format != VK_FORMAT_UNDEFINED)
- {
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ if (pCreateInfo->pAttachments[i].format != VK_FORMAT_UNDEFINED) {
VkFormatProperties properties;
- get_my_data_ptr(get_dispatch_key(my_data->physicalDevice), layer_data_map)->instance_dispatch_table->GetPhysicalDeviceFormatProperties(
- my_data->physicalDevice, pCreateInfo->pAttachments[i].format, &properties);
+ get_my_data_ptr(get_dispatch_key(my_data->physicalDevice), layer_data_map)
+ ->instance_dispatch_table->GetPhysicalDeviceFormatProperties(my_data->physicalDevice,
+ pCreateInfo->pAttachments[i].format, &properties);
- if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
- {
+ if ((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0)) {
std::stringstream ss;
- ss << "vkCreateRenderPass parameter, VkFormat in pCreateInfo->pAttachments[" << i << "], contains unsupported format";
+ ss << "vkCreateRenderPass parameter, VkFormat in pCreateInfo->pAttachments[" << i
+ << "], contains unsupported format";
                // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_FORMAT_UNSUPPORTED, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_FORMAT_UNSUPPORTED, "IMAGE", "%s", ss.str().c_str());
}
}
}
- for(uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i)
- {
- if(!validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].initialLayout) ||
- !validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].finalLayout))
- {
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ if (!validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].initialLayout) ||
+ !validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].finalLayout)) {
std::stringstream ss;
ss << "vkCreateRenderPass parameter, VkImageLayout in pCreateInfo->pAttachments[" << i << "], is unrecognized";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
}
}
- for(uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i)
- {
- if(!validate_VkAttachmentLoadOp(pCreateInfo->pAttachments[i].loadOp))
- {
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ if (!validate_VkAttachmentLoadOp(pCreateInfo->pAttachments[i].loadOp)) {
std::stringstream ss;
ss << "vkCreateRenderPass parameter, VkAttachmentLoadOp in pCreateInfo->pAttachments[" << i << "], is unrecognized";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
}
}
- for(uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i)
- {
- if(!validate_VkAttachmentStoreOp(pCreateInfo->pAttachments[i].storeOp))
- {
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
+ if (!validate_VkAttachmentStoreOp(pCreateInfo->pAttachments[i].storeOp)) {
std::stringstream ss;
ss << "vkCreateRenderPass parameter, VkAttachmentStoreOp in pCreateInfo->pAttachments[" << i << "], is unrecognized";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
}
}
// Any depth buffers specified as attachments?
bool depthFormatPresent = VK_FALSE;
- for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i)
- {
+ for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
depthFormatPresent |= is_depth_format(pCreateInfo->pAttachments[i].format);
}
@@ -502,8 +434,10 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice devic
if (pCreateInfo->pSubpasses[i].pDepthStencilAttachment &&
pCreateInfo->pSubpasses[i].pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
std::stringstream ss;
- ss << "vkCreateRenderPass has no depth/stencil attachment, yet subpass[" << i << "] has VkSubpassDescription::depthStencilAttachment value that is not VK_ATTACHMENT_UNUSED";
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateRenderPass has no depth/stencil attachment, yet subpass[" << i
+                       << "] has a VkSubpassDescription::depthStencilAttachment value that is not VK_ATTACHMENT_UNUSED";
+ skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
}
}
}
@@ -515,59 +449,65 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice devic
return result;
}
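vkCreateRenderPass above flags any subpass that references a depth/stencil attachment when no attachment in the render pass has a depth format. A standalone sketch of that rule, assuming a hypothetical `sketch_is_depth_format()` predicate over an illustrative integer format code rather than a real VkFormat:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for VK_ATTACHMENT_UNUSED (~0U in the real headers).
constexpr uint32_t SKETCH_ATTACHMENT_UNUSED = UINT32_MAX;

// Hypothetical depth-format predicate over an illustrative format code;
// the layer itself calls is_depth_format() on a VkFormat.
bool sketch_is_depth_format(int format) { return format >= 100; }

// Mirrors the rule above: if no attachment has a depth format, every
// subpass's depthStencilAttachment must be VK_ATTACHMENT_UNUSED.
bool subpass_ds_reference_is_valid(const int *attachment_formats, uint32_t attachment_count,
                                   uint32_t ds_attachment) {
    bool depth_present = false;
    for (uint32_t i = 0; i < attachment_count; ++i) {
        depth_present |= sketch_is_depth_format(attachment_formats[i]);
    }
    return depth_present || ds_attachment == SKETCH_ATTACHMENT_UNUSED;
}
```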
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkImageView *pView) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
auto imageEntry = device_data->imageMap.find(pCreateInfo->image);
if (imageEntry != device_data->imageMap.end()) {
if (pCreateInfo->subresourceRange.baseMipLevel >= imageEntry->second.mipLevels) {
std::stringstream ss;
- ss << "vkCreateImageView called with baseMipLevel " << pCreateInfo->subresourceRange.baseMipLevel
- << " for image " << pCreateInfo->image << " that only has " << imageEntry->second.mipLevels << " mip levels.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateImageView called with baseMipLevel " << pCreateInfo->subresourceRange.baseMipLevel << " for image "
+ << pCreateInfo->image << " that only has " << imageEntry->second.mipLevels << " mip levels.";
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
if (pCreateInfo->subresourceRange.baseArrayLayer >= imageEntry->second.arraySize) {
std::stringstream ss;
ss << "vkCreateImageView called with baseArrayLayer " << pCreateInfo->subresourceRange.baseArrayLayer << " for image "
- << pCreateInfo->image << " that only has " << imageEntry->second.arraySize << " mip levels.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ << pCreateInfo->image << " that only has " << imageEntry->second.arraySize << " array layers.";
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
if (!pCreateInfo->subresourceRange.levelCount) {
std::stringstream ss;
ss << "vkCreateImageView called with 0 in pCreateInfo->subresourceRange.levelCount.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
if (!pCreateInfo->subresourceRange.layerCount) {
std::stringstream ss;
ss << "vkCreateImageView called with 0 in pCreateInfo->subresourceRange.layerCount.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
+ IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
- VkImageCreateFlags imageFlags = imageEntry->second.flags;
- VkFormat imageFormat = imageEntry->second.format;
- VkFormat ivciFormat = pCreateInfo->format;
- VkImageAspectFlags aspectMask = pCreateInfo->subresourceRange.aspectMask;
+ VkImageCreateFlags imageFlags = imageEntry->second.flags;
+ VkFormat imageFormat = imageEntry->second.format;
+ VkFormat ivciFormat = pCreateInfo->format;
+ VkImageAspectFlags aspectMask = pCreateInfo->subresourceRange.aspectMask;
// Validate VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT state
if (imageFlags & VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT) {
// Format MUST be compatible (in the same format compatibility class) as the format the image was created with
if (vk_format_get_compatibility_class(imageFormat) != vk_format_get_compatibility_class(ivciFormat)) {
std::stringstream ss;
- ss << "vkCreateImageView(): ImageView format " << string_VkFormat(ivciFormat) << " is not in the same format compatibility class as image (" <<
- (uint64_t)pCreateInfo->image << ") format " << string_VkFormat(imageFormat) << ". Images created with the VK_IMAGE_CREATE_MUTABLE_FORMAT BIT " <<
- "can support ImageViews with differing formats but they must be in the same compatibility class.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateImageView(): ImageView format " << string_VkFormat(ivciFormat)
+ << " is not in the same format compatibility class as image (" << (uint64_t)pCreateInfo->image << ") format "
+                    << string_VkFormat(imageFormat) << ". Images created with the VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT "
+ << "can support ImageViews with differing formats but they must be in the same compatibility class.";
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
} else {
// Format MUST be IDENTICAL to the format the image was created with
if (imageFormat != ivciFormat) {
std::stringstream ss;
- ss << "vkCreateImageView() format " << string_VkFormat(ivciFormat) << " differs from image " << (uint64_t)pCreateInfo->image << " format " <<
- string_VkFormat(imageFormat) << ". Formats MUST be IDENTICAL unless VK_IMAGE_CREATE_MUTABLE_FORMAT BIT was set on image creation.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
- IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateImageView() format " << string_VkFormat(ivciFormat) << " differs from image "
+ << (uint64_t)pCreateInfo->image << " format " << string_VkFormat(imageFormat)
+                        << ". Formats MUST be IDENTICAL unless VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT was set on image creation.";
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
}
}
@@ -576,63 +516,74 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device
if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != VK_IMAGE_ASPECT_COLOR_BIT) {
std::stringstream ss;
ss << "vkCreateImageView: Color image formats must have the VK_IMAGE_ASPECT_COLOR_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != aspectMask) {
std::stringstream ss;
ss << "vkCreateImageView: Color image formats must have ONLY the VK_IMAGE_ASPECT_COLOR_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if (VK_FALSE == vk_format_is_color(ivciFormat)) {
std::stringstream ss;
ss << "vkCreateImageView: The image view's format can differ from the parent image's format, but both must be "
- << "color formats. ImageFormat is " << string_VkFormat(imageFormat) << " ImageViewFormat is " << string_VkFormat(ivciFormat);
+ << "color formats. ImageFormat is " << string_VkFormat(imageFormat) << " ImageViewFormat is "
+ << string_VkFormat(ivciFormat);
skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
}
// TODO: Uncompressed formats are compatible if they occupy they same number of bits per pixel.
// Compressed formats are compatible if the only difference between them is the numerical type of
// the uncompressed pixels (e.g. signed vs. unsigned, or sRGB vs. UNORM encoding).
- } else if (vk_format_is_depth_and_stencil(imageFormat)) {
+ } else if (vk_format_is_depth_and_stencil(imageFormat)) {
if ((aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) == 0) {
std::stringstream ss;
- ss << "vkCreateImageView: Depth/stencil image formats must have at least one of VK_IMAGE_ASPECT_DEPTH_BIT and VK_IMAGE_ASPECT_STENCIL_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateImageView: Depth/stencil image formats must have at least one of VK_IMAGE_ASPECT_DEPTH_BIT and "
+ "VK_IMAGE_ASPECT_STENCIL_BIT set";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if ((aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) != aspectMask) {
std::stringstream ss;
- ss << "vkCreateImageView: Combination depth/stencil image formats can have only the VK_IMAGE_ASPECT_DEPTH_BIT and VK_IMAGE_ASPECT_STENCIL_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ ss << "vkCreateImageView: Combination depth/stencil image formats can have only the VK_IMAGE_ASPECT_DEPTH_BIT and "
+ "VK_IMAGE_ASPECT_STENCIL_BIT set";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
- } else if (vk_format_is_depth_only(imageFormat)) {
+ } else if (vk_format_is_depth_only(imageFormat)) {
if ((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) {
std::stringstream ss;
ss << "vkCreateImageView: Depth-only image formats must have the VK_IMAGE_ASPECT_DEPTH_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if ((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != aspectMask) {
std::stringstream ss;
ss << "vkCreateImageView: Depth-only image formats can have only the VK_IMAGE_ASPECT_DEPTH_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
- } else if (vk_format_is_stencil_only(imageFormat)) {
+ } else if (vk_format_is_stencil_only(imageFormat)) {
if ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT) {
std::stringstream ss;
ss << "vkCreateImageView: Stencil-only image formats must have the VK_IMAGE_ASPECT_STENCIL_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != aspectMask) {
std::stringstream ss;
ss << "vkCreateImageView: Stencil-only image formats can have only the VK_IMAGE_ASPECT_STENCIL_BIT set";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
+ (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
}
}
@@ -645,68 +596,62 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device
return result;
}
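The aspect-mask branches in vkCreateImageView reduce to a few bit tests. A standalone sketch of the color and combined depth/stencil branches, using locally defined stand-ins for the VkImageAspectFlagBits values:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for VkImageAspectFlagBits; real code uses vulkan.h.
constexpr uint32_t SKETCH_ASPECT_COLOR   = 0x1;
constexpr uint32_t SKETCH_ASPECT_DEPTH   = 0x2;
constexpr uint32_t SKETCH_ASPECT_STENCIL = 0x4;

// Color branch: a view of a color image must set VK_IMAGE_ASPECT_COLOR_BIT
// and no other aspect bits.
bool color_aspect_mask_is_valid(uint32_t aspect_mask) {
    return aspect_mask == SKETCH_ASPECT_COLOR;
}

// Combined depth/stencil branch: at least one of DEPTH or STENCIL must be
// set, and nothing outside that pair is allowed.
bool depth_stencil_aspect_mask_is_valid(uint32_t aspect_mask) {
    const uint32_t ds = SKETCH_ASPECT_DEPTH | SKETCH_ASPECT_STENCIL;
    return (aspect_mask & ds) != 0 && (aspect_mask & ~ds) == 0;
}
```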
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearColorValue *pColor,
- uint32_t rangeCount,
- const VkImageSubresourceRange *pRanges)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image,
+ VkImageLayout imageLayout, const VkClearColorValue *pColor,
+ uint32_t rangeCount, const VkImageSubresourceRange *pRanges) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+
+ if (imageLayout != VK_IMAGE_LAYOUT_GENERAL && imageLayout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_LAYOUT, "IMAGE",
+ "vkCmdClearColorImage parameter, imageLayout, must be VK_IMAGE_LAYOUT_GENERAL or "
+ "VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL");
+ }
+
// For each range, image aspect must be color only
for (uint32_t i = 0; i < rangeCount; i++) {
if (pRanges[i].aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
- char const str[] = "vkCmdClearColorImage aspectMasks for all subresource ranges must be set to VK_IMAGE_ASPECT_COLOR_BIT";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ char const str[] =
+ "vkCmdClearColorImage aspectMasks for all subresource ranges must be set to VK_IMAGE_ASPECT_COLOR_BIT";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout,
- pColor, rangeCount, pRanges);
+ device_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
}
}
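The two checks wrapped around CmdClearColorImage above can be sketched as one predicate; the `Sketch*` constants below are illustrative stand-ins for the vulkan.h values:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins; the real values come from vulkan.h.
enum SketchLayout {
    SKETCH_LAYOUT_GENERAL = 1,
    SKETCH_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 2,
    SKETCH_LAYOUT_TRANSFER_DST_OPTIMAL = 7,
};
constexpr uint32_t SKETCH_ASPECT_COLOR = 0x1;
constexpr uint32_t SKETCH_ASPECT_DEPTH = 0x2;

// Mirrors the two checks in the wrapper above: imageLayout must be GENERAL
// or TRANSFER_DST_OPTIMAL, and every range's aspectMask must be exactly
// VK_IMAGE_ASPECT_COLOR_BIT.
bool clear_color_call_is_valid(SketchLayout layout, const uint32_t *aspect_masks,
                               uint32_t range_count) {
    if (layout != SKETCH_LAYOUT_GENERAL && layout != SKETCH_LAYOUT_TRANSFER_DST_OPTIMAL) {
        return false;
    }
    for (uint32_t i = 0; i < range_count; ++i) {
        if (aspect_masks[i] != SKETCH_ASPECT_COLOR) return false;
    }
    return true;
}
```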
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearDepthStencilValue *pDepthStencil,
- uint32_t rangeCount,
- const VkImageSubresourceRange *pRanges)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout,
+ const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount,
+ const VkImageSubresourceRange *pRanges) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
// For each range, Image aspect must be depth or stencil or both
for (uint32_t i = 0; i < rangeCount; i++) {
- if (((pRanges[i].aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
- ((pRanges[i].aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT))
- {
+ if (((pRanges[i].aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
+ ((pRanges[i].aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT)) {
char const str[] = "vkCmdClearDepthStencilImage aspectMasks for all subresource ranges must be "
"set to VK_IMAGE_ASPECT_DEPTH_BIT and/or VK_IMAGE_ASPECT_STENCIL_BIT";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer,
- image, imageLayout, pDepthStencil, rangeCount, pRanges);
+ device_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount,
+ pRanges);
}
}
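The per-range aspect test in vkCmdClearDepthStencilImage is a single bit test; a sketch with illustrative stand-in constants:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for VkImageAspectFlagBits values.
constexpr uint32_t SKETCH_ASPECT_COLOR   = 0x1;
constexpr uint32_t SKETCH_ASPECT_DEPTH   = 0x2;
constexpr uint32_t SKETCH_ASPECT_STENCIL = 0x4;

// Mirrors the per-range loop above: each aspectMask must include
// VK_IMAGE_ASPECT_DEPTH_BIT and/or VK_IMAGE_ASPECT_STENCIL_BIT.
bool ds_clear_aspect_is_valid(uint32_t aspect_mask) {
    return (aspect_mask & (SKETCH_ASPECT_DEPTH | SKETCH_ASPECT_STENCIL)) != 0;
}
```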
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageCopy *pRegions)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy *pRegions) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
auto srcImageEntry = device_data->imageMap.find(srcImage);
@@ -717,80 +662,80 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(
// For each region, src aspect mask must match dest aspect mask
// For each region, color aspects cannot be mixed with depth/stencil aspects
for (uint32_t i = 0; i < regionCount; i++) {
- if(pRegions[i].srcSubresource.layerCount == 0)
- {
+ if (pRegions[i].srcSubresource.layerCount == 0) {
char const str[] = "vkCmdCopyImage: number of layers in source subresource is zero";
// TODO: Verify against Valid Use section of spec
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if(pRegions[i].dstSubresource.layerCount == 0)
- {
+ if (pRegions[i].dstSubresource.layerCount == 0) {
char const str[] = "vkCmdCopyImage: number of layers in destination subresource is zero";
// TODO: Verify against Valid Use section of spec
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if(pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount)
- {
+ if (pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount) {
char const str[] = "vkCmdCopyImage: number of layers in source and destination subresources must match";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
if (pRegions[i].srcSubresource.aspectMask != pRegions[i].dstSubresource.aspectMask) {
char const str[] = "vkCmdCopyImage: Src and dest aspectMasks for each region must match";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
if ((pRegions[i].srcSubresource.aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) &&
(pRegions[i].srcSubresource.aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT))) {
char const str[] = "vkCmdCopyImage aspectMask cannot specify both COLOR and DEPTH/STENCIL aspects";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
- if ((srcImageEntry != device_data->imageMap.end())
- && (dstImageEntry != device_data->imageMap.end())) {
+ if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
if (srcImageEntry->second.imageType != dstImageEntry->second.imageType) {
char const str[] = "vkCmdCopyImage called with unmatched source and dest image types.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_TYPE, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_TYPE, "IMAGE", str);
}
// Check that format is same size or exact stencil/depth
if (is_depth_format(srcImageEntry->second.format)) {
if (srcImageEntry->second.format != dstImageEntry->second.format) {
char const str[] = "vkCmdCopyImage called with unmatched source and dest image depth/stencil formats.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
}
} else {
size_t srcSize = vk_format_get_size(srcImageEntry->second.format);
size_t destSize = vk_format_get_size(dstImageEntry->second.format);
if (srcSize != destSize) {
char const str[] = "vkCmdCopyImage called with unmatched source and dest image format sizes.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
}
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage,
- srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+ device_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
+ regionCount, pRegions);
}
}
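The format-compatibility rule enforced above (identical formats for depth/stencil copies, equal texel sizes otherwise) can be sketched against a hypothetical `SketchFormat` record standing in for the layer's `is_depth_format()` and `vk_format_get_size()` helpers:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical per-format metadata standing in for the layer's
// is_depth_format() and vk_format_get_size() helpers.
struct SketchFormat {
    int code;          // illustrative format identifier
    bool is_depth;     // true for depth/stencil formats
    size_t texel_size; // bytes per texel
};

// Mirrors the check above: depth/stencil copies require identical formats;
// other copies only require matching texel sizes.
bool copy_formats_compatible(const SketchFormat &src, const SketchFormat &dst) {
    if (src.is_depth) {
        return src.code == dst.code;
    }
    return src.texel_size == dst.texel_size;
}
```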
-VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(
- VkCommandBuffer commandBuffer,
- uint32_t attachmentCount,
- const VkClearAttachment* pAttachments,
- uint32_t rectCount,
- const VkClearRect* pRects)
-{
+VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount,
+ const VkClearAttachment *pAttachments, uint32_t rectCount,
+ const VkClearRect *pRects) {
VkBool32 skipCall = VK_FALSE;
VkImageAspectFlags aspectMask;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
@@ -799,120 +744,104 @@ VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(
if (aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) {
if (aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
// VK_IMAGE_ASPECT_COLOR_BIT is not the only bit set for this attachment
- char const str[] = "vkCmdClearAttachments aspectMask [%d] must set only VK_IMAGE_ASPECT_COLOR_BIT of a color attachment.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
+ char const str[] =
+ "vkCmdClearAttachments aspectMask [%d] must set only VK_IMAGE_ASPECT_COLOR_BIT of a color attachment.";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
}
} else {
// Image aspect must be depth or stencil or both
- if (((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
- ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT))
- {
- char const str[] = "vkCmdClearAttachments aspectMask [%d] must be set to VK_IMAGE_ASPECT_DEPTH_BIT and/or VK_IMAGE_ASPECT_STENCIL_BIT";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
+ if (((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
+ ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT)) {
+ char const str[] = "vkCmdClearAttachments aspectMask [%d] must be set to VK_IMAGE_ASPECT_DEPTH_BIT and/or "
+ "VK_IMAGE_ASPECT_STENCIL_BIT";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
}
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdClearAttachments(commandBuffer,
- attachmentCount, pAttachments, rectCount, pRects);
+ device_data->device_dispatch_table->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
}
}
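The vkCmdClearAttachments hunk above enforces a simple aspect-mask rule: a color attachment must set exactly VK_IMAGE_ASPECT_COLOR_BIT, and anything else must include depth and/or stencil. A minimal standalone sketch of that predicate (the constants mirror the numeric values of VkImageAspectFlagBits; the helper name is hypothetical, not part of the layer):

```cpp
#include <cassert>
#include <cstdint>

// Values taken from VkImageAspectFlagBits.
constexpr uint32_t ASPECT_COLOR   = 0x1; // VK_IMAGE_ASPECT_COLOR_BIT
constexpr uint32_t ASPECT_DEPTH   = 0x2; // VK_IMAGE_ASPECT_DEPTH_BIT
constexpr uint32_t ASPECT_STENCIL = 0x4; // VK_IMAGE_ASPECT_STENCIL_BIT

// Hypothetical restatement of the check performed in the hunk above.
bool clear_attachment_aspect_valid(uint32_t aspectMask) {
    if (aspectMask & ASPECT_COLOR) {
        // Color attachments must set VK_IMAGE_ASPECT_COLOR_BIT and nothing else.
        return aspectMask == ASPECT_COLOR;
    }
    // Otherwise the mask must include depth and/or stencil.
    return (aspectMask & (ASPECT_DEPTH | ASPECT_STENCIL)) != 0;
}
```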
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkBuffer dstBuffer,
- uint32_t regionCount,
- const VkBufferImageCopy *pRegions)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage,
+ VkImageLayout srcImageLayout, VkBuffer dstBuffer,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
// For each region, the number of layers in the image subresource should not be zero
// Image aspect must be ONE OF color, depth, stencil
for (uint32_t i = 0; i < regionCount; i++) {
- if(pRegions[i].imageSubresource.layerCount == 0)
- {
+ if (pRegions[i].imageSubresource.layerCount == 0) {
char const str[] = "vkCmdCopyImageToBuffer: number of layers in image subresource is zero";
            // TODO: Verify against Valid Use section of spec; if this case yields undefined results, then it's an error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
VkImageAspectFlags aspectMask = pRegions[i].imageSubresource.aspectMask;
- if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) &&
- (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
+ if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) && (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
(aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) {
char const str[] = "vkCmdCopyImageToBuffer: aspectMasks for each region must specify only COLOR or DEPTH or STENCIL";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer,
- srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
+ device_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount,
+ pRegions);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkBufferImageCopy *pRegions)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer,
+ VkImage dstImage, VkImageLayout dstImageLayout,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
// For each region, the number of layers in the image subresource should not be zero
// Image aspect must be ONE OF color, depth, stencil
for (uint32_t i = 0; i < regionCount; i++) {
- if(pRegions[i].imageSubresource.layerCount == 0)
- {
+ if (pRegions[i].imageSubresource.layerCount == 0) {
char const str[] = "vkCmdCopyBufferToImage: number of layers in image subresource is zero";
            // TODO: Verify against Valid Use section of spec; if this case yields undefined results, then it's an error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
VkImageAspectFlags aspectMask = pRegions[i].imageSubresource.aspectMask;
- if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) &&
- (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
+ if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) && (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
(aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) {
char const str[] = "vkCmdCopyBufferToImage: aspectMasks for each region must specify only COLOR or DEPTH or STENCIL";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer,
- srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
+ device_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount,
+ pRegions);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageBlit *pRegions,
- VkFilter filter)
-{
- VkBool32 skipCall = VK_FALSE;
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit *pRegions, VkFilter filter) {
+ VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- auto srcImageEntry = device_data->imageMap.find(srcImage);
+ auto srcImageEntry = device_data->imageMap.find(srcImage);
auto dstImageEntry = device_data->imageMap.find(dstImage);
- if ((srcImageEntry != device_data->imageMap.end()) &&
- (dstImageEntry != device_data->imageMap.end())) {
+ if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
VkFormat srcFormat = srcImageEntry->second.format;
VkFormat dstFormat = dstImageEntry->second.format;
@@ -924,44 +853,45 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
ss << "vkCmdBlitImage: If one of srcImage and dstImage images has signed/unsigned integer format, "
<< "the other one must also have signed/unsigned integer format. "
<< "Source format is " << string_VkFormat(srcFormat) << " Destination format is " << string_VkFormat(dstFormat);
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
}
// Validate aspect bits and formats for depth/stencil images
- if (vk_format_is_depth_or_stencil(srcFormat) ||
- vk_format_is_depth_or_stencil(dstFormat)) {
+ if (vk_format_is_depth_or_stencil(srcFormat) || vk_format_is_depth_or_stencil(dstFormat)) {
if (srcFormat != dstFormat) {
std::stringstream ss;
ss << "vkCmdBlitImage: If one of srcImage and dstImage images has a format of depth, stencil or depth "
<< "stencil, the other one must have exactly the same format. "
<< "Source format is " << string_VkFormat(srcFormat) << " Destination format is " << string_VkFormat(dstFormat);
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
}
for (uint32_t i = 0; i < regionCount; i++) {
- if(pRegions[i].srcSubresource.layerCount == 0)
- {
+ if (pRegions[i].srcSubresource.layerCount == 0) {
char const str[] = "vkCmdBlitImage: number of layers in source subresource is zero";
                // TODO: Verify against Valid Use section of spec; if this case yields undefined results, then it's an error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if(pRegions[i].dstSubresource.layerCount == 0)
- {
+ if (pRegions[i].dstSubresource.layerCount == 0) {
char const str[] = "vkCmdBlitImage: number of layers in destination subresource is zero";
                // TODO: Verify against Valid Use section of spec; if this case yields undefined results, then it's an error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if(pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount)
- {
+ if (pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount) {
char const str[] = "vkCmdBlitImage: number of layers in source and destination subresources must match";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
VkImageAspectFlags srcAspect = pRegions[i].srcSubresource.aspectMask;
@@ -971,104 +901,91 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
std::stringstream ss;
ss << "vkCmdBlitImage: Image aspects of depth/stencil images should match";
                    // TODO: Verify against Valid Use section of spec; if this case yields undefined results, then it's an error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
if (vk_format_is_depth_and_stencil(srcFormat)) {
if ((srcAspect != VK_IMAGE_ASPECT_DEPTH_BIT) && (srcAspect != VK_IMAGE_ASPECT_STENCIL_BIT)) {
std::stringstream ss;
- ss << "vkCmdBlitImage: Combination depth/stencil image formats must have only one of VK_IMAGE_ASPECT_DEPTH_BIT "
+ ss << "vkCmdBlitImage: Combination depth/stencil image formats must have only one of "
+ "VK_IMAGE_ASPECT_DEPTH_BIT "
<< "and VK_IMAGE_ASPECT_STENCIL_BIT set in srcImage and dstImage";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
} else if (vk_format_is_stencil_only(srcFormat)) {
if (srcAspect != VK_IMAGE_ASPECT_STENCIL_BIT) {
std::stringstream ss;
ss << "vkCmdBlitImage: Stencil-only image formats must have only the VK_IMAGE_ASPECT_STENCIL_BIT "
<< "set in both the srcImage and dstImage";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
} else if (vk_format_is_depth_only(srcFormat)) {
if (srcAspect != VK_IMAGE_ASPECT_DEPTH_BIT) {
std::stringstream ss;
ss << "vkCmdBlitImage: Depth-only image formats must have only the VK_IMAGE_ASPECT_DEPTH "
<< "set in both the srcImage and dstImage";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
+ IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
}
}
}
// Validate filter
- if (vk_format_is_depth_or_stencil(srcFormat) ||
- vk_format_is_int(srcFormat)) {
+ if (vk_format_is_depth_or_stencil(srcFormat) || vk_format_is_int(srcFormat)) {
if (filter != VK_FILTER_NEAREST) {
std::stringstream ss;
ss << "vkCmdBlitImage: If the format of srcImage is a depth, stencil, depth stencil or integer-based format "
<< "then filter must be VK_FILTER_NEAREST.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FILTER, "IMAGE", "%s", ss.str().c_str());
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_FILTER, "IMAGE", "%s", ss.str().c_str());
}
}
}
- device_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage,
- srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
+ device_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount,
+ pRegions, filter);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(
- VkCommandBuffer commandBuffer,
- VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask,
- VkDependencyFlags dependencyFlags,
- uint32_t memoryBarrierCount,
- const VkMemoryBarrier *pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount,
- const VkImageMemoryBarrier *pImageMemoryBarriers)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask,
+ VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- for (uint32_t i = 0; i < imageMemoryBarrierCount; ++i)
- {
- VkImageMemoryBarrier const*const barrier = (VkImageMemoryBarrier const*const) &pImageMemoryBarriers[i];
- if (barrier->sType == VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER)
- {
- if (barrier->subresourceRange.layerCount == 0)
- {
+ for (uint32_t i = 0; i < imageMemoryBarrierCount; ++i) {
+        VkImageMemoryBarrier const *const barrier = (VkImageMemoryBarrier const *const)&pImageMemoryBarriers[i];
+ if (barrier->sType == VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER) {
+ if (barrier->subresourceRange.layerCount == 0) {
std::stringstream ss;
ss << "vkCmdPipelineBarrier called with 0 in ppMemoryBarriers[" << i << "]->subresourceRange.layerCount.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0,
- 0, __LINE__, IMAGE_INVALID_IMAGE_RESOURCE, "IMAGE", "%s", ss.str().c_str());
+ skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
+ __LINE__, IMAGE_INVALID_IMAGE_RESOURCE, "IMAGE", "%s", ss.str().c_str());
}
}
}
- if (skipCall)
- {
+ if (skipCall) {
return;
}
device_data->device_dispatch_table->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags,
- memoryBarrierCount, pMemoryBarriers,
- bufferMemoryBarrierCount, pBufferMemoryBarriers,
- imageMemoryBarrierCount, pImageMemoryBarriers);
+ memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageResolve *pRegions)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) {
VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
auto srcImageEntry = device_data->imageMap.find(srcImage);
@@ -1077,70 +994,71 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(
// For each region, the number of layers in the image subresource should not be zero
// For each region, src and dest image aspect must be color only
for (uint32_t i = 0; i < regionCount; i++) {
- if(pRegions[i].srcSubresource.layerCount == 0)
- {
+ if (pRegions[i].srcSubresource.layerCount == 0) {
char const str[] = "vkCmdResolveImage: number of layers in source subresource is zero";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid/error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if(pRegions[i].dstSubresource.layerCount == 0)
- {
+ if (pRegions[i].dstSubresource.layerCount == 0) {
char const str[] = "vkCmdResolveImage: number of layers in destination subresource is zero";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result, it's invalid/error
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
}
- if ((pRegions[i].srcSubresource.aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) ||
+ if ((pRegions[i].srcSubresource.aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) ||
(pRegions[i].dstSubresource.aspectMask != VK_IMAGE_ASPECT_COLOR_BIT)) {
- char const str[] = "vkCmdResolveImage: src and dest aspectMasks for each region must specify only VK_IMAGE_ASPECT_COLOR_BIT";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
+ char const str[] =
+ "vkCmdResolveImage: src and dest aspectMasks for each region must specify only VK_IMAGE_ASPECT_COLOR_BIT";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
}
}
- if ((srcImageEntry != device_data->imageMap.end()) &&
- (dstImageEntry != device_data->imageMap.end())) {
+ if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
if (srcImageEntry->second.format != dstImageEntry->second.format) {
- char const str[] = "vkCmdResolveImage called with unmatched source and dest formats.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
+ char const str[] = "vkCmdResolveImage called with unmatched source and dest formats.";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
}
if (srcImageEntry->second.imageType != dstImageEntry->second.imageType) {
- char const str[] = "vkCmdResolveImage called with unmatched source and dest image types.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_TYPE, "IMAGE", str);
+ char const str[] = "vkCmdResolveImage called with unmatched source and dest image types.";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_TYPE, "IMAGE", str);
}
if (srcImageEntry->second.samples == VK_SAMPLE_COUNT_1_BIT) {
- char const str[] = "vkCmdResolveImage called with source sample count less than 2.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
+ char const str[] = "vkCmdResolveImage called with source sample count less than 2.";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
}
if (dstImageEntry->second.samples != VK_SAMPLE_COUNT_1_BIT) {
- char const str[] = "vkCmdResolveImage called with dest sample count greater than 1.";
- skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
- (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
+ char const str[] = "vkCmdResolveImage called with dest sample count greater than 1.";
+ skipCall |=
+ log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage,
- srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+ device_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
+ regionCount, pRegions);
}
}
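The vkCmdResolveImage checks above include a sample-count rule: the source image must be multisampled and the destination single-sampled. Stated as a standalone predicate (constant values mirror VkSampleCountFlagBits; the helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Values taken from VkSampleCountFlagBits.
constexpr uint32_t SAMPLE_COUNT_1 = 0x1; // VK_SAMPLE_COUNT_1_BIT
constexpr uint32_t SAMPLE_COUNT_4 = 0x4; // VK_SAMPLE_COUNT_4_BIT

// Hypothetical restatement of the sample-count checks in the hunk above.
bool resolve_samples_valid(uint32_t src_samples, uint32_t dst_samples) {
    // Source must have more than one sample; destination exactly one.
    return (src_samples != SAMPLE_COUNT_1) && (dst_samples == SAMPLE_COUNT_1);
}
```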
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(
- VkDevice device,
- VkImage image,
- const VkImageSubresource *pSubresource,
- VkSubresourceLayout *pLayout)
-{
- VkBool32 skipCall = VK_FALSE;
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource *pSubresource, VkSubresourceLayout *pLayout) {
+ VkBool32 skipCall = VK_FALSE;
layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkFormat format;
+ VkFormat format;
auto imageEntry = device_data->imageMap.find(image);
@@ -1150,67 +1068,67 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(
if (vk_format_is_color(format)) {
if (pSubresource->aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
std::stringstream ss;
- ss << "vkGetImageSubresourceLayout: For color formats, the aspectMask field of VkImageSubresource must be VK_IMAGE_ASPECT_COLOR.";
+ ss << "vkGetImageSubresourceLayout: For color formats, the aspectMask field of VkImageSubresource must be "
+ "VK_IMAGE_ASPECT_COLOR.";
skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
} else if (vk_format_is_depth_or_stencil(format)) {
if ((pSubresource->aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
(pSubresource->aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) {
std::stringstream ss;
- ss << "vkGetImageSubresourceLayout: For depth/stencil formats, the aspectMask selects either the depth or stencil image aspectMask.";
+ ss << "vkGetImageSubresourceLayout: For depth/stencil formats, the aspectMask selects either the depth or stencil "
+ "image aspectMask.";
skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
- (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
+ (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
}
}
}
if (VK_FALSE == skipCall) {
- device_data->device_dispatch_table->GetImageSubresourceLayout(device,
- image, pSubresource, pLayout);
+ device_data->device_dispatch_table->GetImageSubresourceLayout(device, image, pSubresource, pLayout);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) {
layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
+ return (PFN_vkVoidFunction)vkDestroyDevice;
if (!strcmp(funcName, "vkCreateImage"))
- return (PFN_vkVoidFunction) vkCreateImage;
+ return (PFN_vkVoidFunction)vkCreateImage;
if (!strcmp(funcName, "vkDestroyImage"))
- return (PFN_vkVoidFunction) vkDestroyImage;
+ return (PFN_vkVoidFunction)vkDestroyImage;
if (!strcmp(funcName, "vkCreateImageView"))
- return (PFN_vkVoidFunction) vkCreateImageView;
+ return (PFN_vkVoidFunction)vkCreateImageView;
if (!strcmp(funcName, "vkCreateRenderPass"))
- return (PFN_vkVoidFunction) vkCreateRenderPass;
+ return (PFN_vkVoidFunction)vkCreateRenderPass;
if (!strcmp(funcName, "vkCmdClearColorImage"))
- return (PFN_vkVoidFunction) vkCmdClearColorImage;
+ return (PFN_vkVoidFunction)vkCmdClearColorImage;
if (!strcmp(funcName, "vkCmdClearDepthStencilImage"))
- return (PFN_vkVoidFunction) vkCmdClearDepthStencilImage;
+ return (PFN_vkVoidFunction)vkCmdClearDepthStencilImage;
if (!strcmp(funcName, "vkCmdClearAttachments"))
- return (PFN_vkVoidFunction) vkCmdClearAttachments;
+ return (PFN_vkVoidFunction)vkCmdClearAttachments;
if (!strcmp(funcName, "vkCmdCopyImage"))
- return (PFN_vkVoidFunction) vkCmdCopyImage;
+ return (PFN_vkVoidFunction)vkCmdCopyImage;
if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyImageToBuffer;
+ return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer;
if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
- return (PFN_vkVoidFunction) vkCmdCopyBufferToImage;
+ return (PFN_vkVoidFunction)vkCmdCopyBufferToImage;
if (!strcmp(funcName, "vkCmdBlitImage"))
- return (PFN_vkVoidFunction) vkCmdBlitImage;
+ return (PFN_vkVoidFunction)vkCmdBlitImage;
if (!strcmp(funcName, "vkCmdPipelineBarrier"))
- return (PFN_vkVoidFunction) vkCmdPipelineBarrier;
+ return (PFN_vkVoidFunction)vkCmdPipelineBarrier;
if (!strcmp(funcName, "vkCmdResolveImage"))
- return (PFN_vkVoidFunction) vkCmdResolveImage;
+ return (PFN_vkVoidFunction)vkCmdResolveImage;
if (!strcmp(funcName, "vkGetImageSubresourceLayout"))
- return (PFN_vkVoidFunction) vkGetImageSubresourceLayout;
+ return (PFN_vkVoidFunction)vkGetImageSubresourceLayout;
if (device == NULL) {
return NULL;
@@ -1218,7 +1136,7 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkD
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkLayerDispatchTable* pTable = my_data->device_dispatch_table;
+ VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
{
if (pTable->GetDeviceProcAddr == NULL)
return NULL;
@@ -1226,26 +1144,25 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkD
}
}
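The vkGetDeviceProcAddr chain above resolves names with repeated strcmp calls before falling through to the next dispatch table. The same lookup can be written as a name-to-pointer table; this is a design sketch only, with stand-in entries and stub functions, not the layer's actual code:

```cpp
#include <cassert>
#include <cstring>

typedef void (*VoidFn)();

// Illustrative stubs standing in for the layer's intercepted entry points.
static void stub_create_image() {}
static void stub_destroy_image() {}

struct ProcEntry {
    const char *name;
    VoidFn fn;
};

static const ProcEntry kProcTable[] = {
    {"vkCreateImage", stub_create_image},
    {"vkDestroyImage", stub_destroy_image},
};

// Table-driven equivalent of the strcmp chain; a null result means the
// caller should fall through to the next layer's dispatch table.
VoidFn lookup_proc(const char *funcName) {
    for (const ProcEntry &entry : kProcTable)
        if (strcmp(funcName, entry.name) == 0)
            return entry.fn;
    return nullptr;
}
```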
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
if (!strcmp(funcName, "vkGetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
+ return (PFN_vkVoidFunction)vkCreateInstance;
if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
+ return (PFN_vkVoidFunction)vkDestroyInstance;
if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
+ return (PFN_vkVoidFunction)vkCreateDevice;
if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties;
if (instance == NULL) {
return NULL;
@@ -1254,10 +1171,10 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(V
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
PFN_vkVoidFunction fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
- if(fptr)
+ if (fptr)
return fptr;
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
if (pTable->GetInstanceProcAddr == NULL)
return NULL;
return pTable->GetInstanceProcAddr(instance, funcName);
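The `strcmp` chains above implement the standard layer lookup pattern: if the layer intercepts the named entry point it returns its own hook, otherwise it falls through to the next layer's dispatch table. A minimal sketch of that pattern, using stand-in function names rather than the real Vulkan entry points:

```cpp
#include <cassert>
#include <cstring>

// Stand-in for PFN_vkVoidFunction (illustrative only).
using VoidFn = void (*)();

// Hypothetical intercepted entry points and a downstream implementation.
static void interceptedCreateDevice() {}
static void interceptedDestroyDevice() {}
static void downstreamQueueSubmit() {}

// Stand-in for the next layer's GetDeviceProcAddr in the dispatch chain.
static VoidFn nextGetProcAddr(const char *name) {
    if (!std::strcmp(name, "vkQueueSubmit"))
        return downstreamQueueSubmit;
    return nullptr;
}

// The layer's lookup: return its own hook when it intercepts the
// entry point, otherwise fall through to the next layer.
static VoidFn layerGetProcAddr(const char *name) {
    if (!std::strcmp(name, "vkCreateDevice"))
        return interceptedCreateDevice;
    if (!std::strcmp(name, "vkDestroyDevice"))
        return interceptedDestroyDevice;
    return nextGetProcAddr(name);
}
```

This is why unhandled names still work: the chain terminates in the driver's implementation, and a layer only pays the cost of the functions it actually wraps.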
diff --git a/layers/image.h b/layers/image.h
index 350a3beb0..bec5bb9e2 100644
--- a/layers/image.h
+++ b/layers/image.h
@@ -33,12 +33,12 @@
#include "vk_layer_logging.h"
// Image ERROR codes
-typedef enum _IMAGE_ERROR
-{
+typedef enum _IMAGE_ERROR {
IMAGE_NONE, // Used for INFO & other non-error messages
IMAGE_FORMAT_UNSUPPORTED, // Request to create Image or RenderPass with a format that is not supported
IMAGE_RENDERPASS_INVALID_ATTACHMENT, // Invalid image layouts and/or load/storeOps for an attachment when creating RenderPass
- IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, // If no depth attachment for a RenderPass, verify that subpass DS attachment is set to UNUSED
+ IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, // If no depth attachment for a RenderPass, verify that subpass DS attachment is set to
+ // UNUSED
IMAGE_INVALID_IMAGE_ASPECT, // Image aspect mask bits are invalid for this API call
IMAGE_MISMATCHED_IMAGE_ASPECT, // Image aspect masks for source and dest images do not match
IMAGE_VIEW_CREATE_ERROR, // Error occurred trying to create Image View
@@ -49,27 +49,24 @@ typedef enum _IMAGE_ERROR
IMAGE_INVALID_FILTER, // Operation specifies an invalid filter setting
IMAGE_INVALID_IMAGE_RESOURCE, // Image resource/subresource called with invalid setting
IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, // Device limits for this format have been exceeded
+ IMAGE_INVALID_LAYOUT, // Operation specifies an invalid layout.
} IMAGE_ERROR;
-typedef struct _IMAGE_STATE
-{
- uint32_t mipLevels;
- uint32_t arraySize;
- VkFormat format;
+typedef struct _IMAGE_STATE {
+ uint32_t mipLevels;
+ uint32_t arraySize;
+ VkFormat format;
VkSampleCountFlagBits samples;
- VkImageType imageType;
- VkExtent3D extent;
- VkImageCreateFlags flags;
- _IMAGE_STATE():mipLevels(0), arraySize(0), format(VK_FORMAT_UNDEFINED), samples(VK_SAMPLE_COUNT_1_BIT), imageType(VK_IMAGE_TYPE_RANGE_SIZE), extent{}, flags(0) {};
- _IMAGE_STATE(const VkImageCreateInfo* pCreateInfo):
- mipLevels(pCreateInfo->mipLevels),
- arraySize(pCreateInfo->arrayLayers),
- format(pCreateInfo->format),
- samples(pCreateInfo->samples),
- imageType(pCreateInfo->imageType),
- extent(pCreateInfo->extent),
- flags(pCreateInfo->flags)
- {};
+ VkImageType imageType;
+ VkExtent3D extent;
+ VkImageCreateFlags flags;
+ _IMAGE_STATE()
+ : mipLevels(0), arraySize(0), format(VK_FORMAT_UNDEFINED), samples(VK_SAMPLE_COUNT_1_BIT),
+ imageType(VK_IMAGE_TYPE_RANGE_SIZE), extent{}, flags(0){};
+ _IMAGE_STATE(const VkImageCreateInfo *pCreateInfo)
+ : mipLevels(pCreateInfo->mipLevels), arraySize(pCreateInfo->arrayLayers), format(pCreateInfo->format),
+ samples(pCreateInfo->samples), imageType(pCreateInfo->imageType), extent(pCreateInfo->extent),
+ flags(pCreateInfo->flags){};
} IMAGE_STATE;
#endif // IMAGE_H
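The reformatted `_IMAGE_STATE` constructors above follow a common validation-layer idiom: snapshot the create-info fields the layer will validate later, instead of holding the application's pointer (which may be freed after the call returns). A trimmed-down sketch with stand-in types, not the real Vulkan structs:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for VkImageCreateInfo.
struct ImageCreateInfo {
    uint32_t mipLevels;
    uint32_t arrayLayers;
    int format;
};

// Analogue of IMAGE_STATE: copies the fields it needs at creation time,
// with a default constructor providing sentinel values.
struct ImageState {
    uint32_t mipLevels = 0;
    uint32_t arraySize = 0;
    int format = 0;
    ImageState() = default;
    explicit ImageState(const ImageCreateInfo *ci)
        : mipLevels(ci->mipLevels), arraySize(ci->arrayLayers), format(ci->format) {}
};
```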
diff --git a/layers/linux/VkLayer_mem_tracker.json b/layers/linux/VkLayer_core_validation.json
index a4abe0442..e819cc1d8 100644
--- a/layers/linux/VkLayer_mem_tracker.json
+++ b/layers/linux/VkLayer_core_validation.json
@@ -1,17 +1,22 @@
{
"file_format_version" : "1.0.0",
"layer" : {
- "name": "VK_LAYER_LUNARG_mem_tracker",
+ "name": "VK_LAYER_LUNARG_core_validation",
"type": "GLOBAL",
- "library_path": "./libVkLayer_mem_tracker.so",
- "api_version": "1.0.3",
+ "library_path": "./libVkLayer_core_validation.so",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
}
+
+
+
+
+
diff --git a/layers/linux/VkLayer_device_limits.json b/layers/linux/VkLayer_device_limits.json
index cc58efcf8..1974af644 100644
--- a/layers/linux/VkLayer_device_limits.json
+++ b/layers/linux/VkLayer_device_limits.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_device_limits",
"type": "GLOBAL",
"library_path": "./libVkLayer_device_limits.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_draw_state.json b/layers/linux/VkLayer_draw_state.json
deleted file mode 100644
index a8e0acbfb..000000000
--- a/layers/linux/VkLayer_draw_state.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
- "file_format_version" : "1.0.0",
- "layer" : {
- "name": "VK_LAYER_LUNARG_draw_state",
- "type": "GLOBAL",
- "library_path": "./libVkLayer_draw_state.so",
- "api_version": "1.0.3",
- "implementation_version": "1",
- "description": "LunarG Validation Layer",
- "instance_extensions": [
- {
- "name": "VK_EXT_debug_report",
- "spec_version": "1"
- }
- ],
- "device_extensions": [
- {
- "name": "VK_LUNARG_DEBUG_MARKER",
- "spec_version": "0",
- "entrypoints": ["vkCmdDbgMarkerBegin","vkCmdDbgMarkerEnd"]
- }
- ]
- }
-}
diff --git a/layers/linux/VkLayer_image.json b/layers/linux/VkLayer_image.json
index 19796fdda..6caf23a5f 100644
--- a/layers/linux/VkLayer_image.json
+++ b/layers/linux/VkLayer_image.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_image",
"type": "GLOBAL",
"library_path": "./libVkLayer_image.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_object_tracker.json b/layers/linux/VkLayer_object_tracker.json
index 606f861d2..42b97589e 100644
--- a/layers/linux/VkLayer_object_tracker.json
+++ b/layers/linux/VkLayer_object_tracker.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_object_tracker",
"type": "GLOBAL",
"library_path": "./libVkLayer_object_tracker.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_param_checker.json b/layers/linux/VkLayer_parameter_validation.json
index 049319f88..a9d1fa1d7 100644
--- a/layers/windows/VkLayer_param_checker.json
+++ b/layers/linux/VkLayer_parameter_validation.json
@@ -1,16 +1,16 @@
{
"file_format_version" : "1.0.0",
"layer" : {
- "name": "VK_LAYER_LUNARG_param_checker",
+ "name": "VK_LAYER_LUNARG_parameter_validation",
"type": "GLOBAL",
- "library_path": ".\\VkLayer_param_checker.dll",
- "api_version": "1.0.3",
+ "library_path": "./libVkLayer_parameter_validation.so",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_swapchain.json b/layers/linux/VkLayer_swapchain.json
index 71e1b44bc..c9e28453a 100644
--- a/layers/linux/VkLayer_swapchain.json
+++ b/layers/linux/VkLayer_swapchain.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_swapchain",
"type": "GLOBAL",
"library_path": "./libVkLayer_swapchain.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_threading.json b/layers/linux/VkLayer_threading.json
index 1558e5f7a..5c1f3eba4 100644
--- a/layers/linux/VkLayer_threading.json
+++ b/layers/linux/VkLayer_threading.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_GOOGLE_threading",
"type": "GLOBAL",
"library_path": "./libVkLayer_threading.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "Google Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_unique_objects.json b/layers/linux/VkLayer_unique_objects.json
index e5ca5a719..346674090 100644
--- a/layers/linux/VkLayer_unique_objects.json
+++ b/layers/linux/VkLayer_unique_objects.json
@@ -4,7 +4,7 @@
"name": "VK_LAYER_GOOGLE_unique_objects",
"type": "GLOBAL",
"library_path": "./libVkLayer_unique_objects.so",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "Google Validation Layer"
}
diff --git a/layers/mem_tracker.cpp b/layers/mem_tracker.cpp
deleted file mode 100644
index 3600be187..000000000
--- a/layers/mem_tracker.cpp
+++ /dev/null
@@ -1,3598 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Cody Northrop <cody@lunarg.com>
- * Author: Jon Ashburn <jon@lunarg.com>
- * Author: Mark Lobodzinski <mark@lunarg.com>
- * Author: Tobin Ehlis <tobin@lunarg.com>
- */
-
-#include <inttypes.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <assert.h>
-#include <functional>
-#include <list>
-#include <map>
-#include <vector>
-using namespace std;
-
-#include "vk_loader_platform.h"
-#include "vk_dispatch_table_helper.h"
-#include "vk_struct_string_helper_cpp.h"
-#include "mem_tracker.h"
-#include "vk_layer_config.h"
-#include "vk_layer_extension_utils.h"
-#include "vk_layer_table.h"
-#include "vk_layer_data.h"
-#include "vk_layer_logging.h"
-
-// WSI Image Objects bypass usual Image Object creation methods. A special Memory
-// Object value will be used to identify them internally.
-static const VkDeviceMemory MEMTRACKER_SWAP_CHAIN_IMAGE_KEY = (VkDeviceMemory)(-1);
-
-struct layer_data {
- debug_report_data *report_data;
- std::vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable *device_dispatch_table;
- VkLayerInstanceDispatchTable *instance_dispatch_table;
- VkBool32 wsi_enabled;
- uint64_t currentFenceId;
- VkPhysicalDeviceProperties properties;
- unordered_map<VkDeviceMemory, vector<MEMORY_RANGE>> bufferRanges, imageRanges;
- // Maps for tracking key structs related to MemTracker state
- unordered_map<VkCommandBuffer, MT_CB_INFO> cbMap;
- unordered_map<VkCommandPool, MT_CMD_POOL_INFO> commandPoolMap;
- unordered_map<VkDeviceMemory, MT_MEM_OBJ_INFO> memObjMap;
- unordered_map<VkFence, MT_FENCE_INFO> fenceMap;
- unordered_map<VkQueue, MT_QUEUE_INFO> queueMap;
- unordered_map<VkSwapchainKHR, MT_SWAP_CHAIN_INFO*> swapchainMap;
- unordered_map<VkSemaphore, MtSemaphoreState> semaphoreMap;
- unordered_map<VkFramebuffer, MT_FB_INFO> fbMap;
- unordered_map<VkRenderPass, MT_PASS_INFO> passMap;
- unordered_map<VkImageView, MT_IMAGE_VIEW_INFO> imageViewMap;
- unordered_map<VkDescriptorSet, MT_DESCRIPTOR_SET_INFO> descriptorSetMap;
- // Images and Buffers are 2 objects that can have memory bound to them so they get special treatment
- unordered_map<uint64_t, MT_OBJ_BINDING_INFO> imageMap;
- unordered_map<uint64_t, MT_OBJ_BINDING_INFO> bufferMap;
-
- layer_data() :
- report_data(nullptr),
- device_dispatch_table(nullptr),
- instance_dispatch_table(nullptr),
- wsi_enabled(VK_FALSE),
- currentFenceId(1)
- {};
-};
-
-static unordered_map<void *, layer_data *> layer_data_map;
-
-static VkPhysicalDeviceMemoryProperties memProps;
-
-static VkBool32 clear_cmd_buf_and_mem_references(layer_data* my_data, const VkCommandBuffer cb);
-
-// TODO : This can be much smarter, using separate locks for separate global data
-static int globalLockInitialized = 0;
-static loader_platform_thread_mutex globalLock;
-
-#define MAX_BINDING 0xFFFFFFFF
-
-static MT_OBJ_BINDING_INFO*
- get_object_binding_info(
- layer_data *my_data,
- uint64_t handle,
- VkDebugReportObjectTypeEXT type)
-{
- MT_OBJ_BINDING_INFO* retValue = NULL;
- switch (type)
- {
- case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT:
- {
- auto it = my_data->imageMap.find(handle);
- if (it != my_data->imageMap.end())
- return &(*it).second;
- break;
- }
- case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT:
- {
- auto it = my_data->bufferMap.find(handle);
- if (it != my_data->bufferMap.end())
- return &(*it).second;
- break;
- }
- default:
- break;
- }
- return retValue;
-}
-
-template layer_data *get_my_data_ptr<layer_data>(
- void *data_key,
- std::unordered_map<void *, layer_data *> &data_map);
-
-// Add new queue for this device to map container
-static void
-add_queue_info(
- layer_data *my_data,
- const VkQueue queue)
-{
- MT_QUEUE_INFO* pInfo = &my_data->queueMap[queue];
- pInfo->lastRetiredId = 0;
- pInfo->lastSubmittedId = 0;
-}
-
-static void
-delete_queue_info_list(
- layer_data* my_data)
-{
- // Process queue list, cleaning up each entry before deleting
- my_data->queueMap.clear();
-}
-
-static void
-add_swap_chain_info(
- layer_data *my_data,
- const VkSwapchainKHR swapchain,
- const VkSwapchainCreateInfoKHR *pCI)
-{
- MT_SWAP_CHAIN_INFO* pInfo = new MT_SWAP_CHAIN_INFO;
- memcpy(&pInfo->createInfo, pCI, sizeof(VkSwapchainCreateInfoKHR));
- my_data->swapchainMap[swapchain] = pInfo;
-}
-
-// Add new CBInfo for this cb to map container
-static void
-add_cmd_buf_info(
- layer_data *my_data,
- VkCommandPool commandPool,
- const VkCommandBuffer cb)
-{
- my_data->cbMap[cb].commandBuffer = cb;
- my_data->commandPoolMap[commandPool].pCommandBuffers.push_front(cb);
-}
-
-// Delete CBInfo from container and clear mem references to CB
-static VkBool32
-delete_cmd_buf_info(
- layer_data *my_data,
- VkCommandPool commandPool,
- const VkCommandBuffer cb)
-{
- VkBool32 result = VK_TRUE;
- result = clear_cmd_buf_and_mem_references(my_data, cb);
- // Delete the CBInfo info
- if (result != VK_TRUE) {
- my_data->commandPoolMap[commandPool].pCommandBuffers.remove(cb);
- my_data->cbMap.erase(cb);
- }
- return result;
-}
-
-// Return ptr to Info in CB map, or NULL if not found
-static MT_CB_INFO*
-get_cmd_buf_info(
- layer_data *my_data,
- const VkCommandBuffer cb)
-{
- auto item = my_data->cbMap.find(cb);
- if (item != my_data->cbMap.end()) {
- return &(*item).second;
- } else {
- return NULL;
- }
-}
-
-static void
-add_object_binding_info(
- layer_data *my_data,
- const uint64_t handle,
- const VkDebugReportObjectTypeEXT type,
- const VkDeviceMemory mem)
-{
- switch (type)
- {
- // Buffers and images are unique as their CreateInfo is in container struct
- case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT:
- {
- auto pCI = &my_data->bufferMap[handle];
- pCI->mem = mem;
- break;
- }
- case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT:
- {
- auto pCI = &my_data->imageMap[handle];
- pCI->mem = mem;
- break;
- }
- default:
- break;
- }
-}
-
-static void
-add_object_create_info(
- layer_data *my_data,
- const uint64_t handle,
- const VkDebugReportObjectTypeEXT type,
- const void *pCreateInfo)
-{
- // TODO : For any CreateInfo struct that has ptrs, need to deep copy them and appropriately clean up on Destroy
- switch (type)
- {
- // Buffers and images are unique as their CreateInfo is in container struct
- case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT:
- {
- auto pCI = &my_data->bufferMap[handle];
- memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
- memcpy(&pCI->create_info.buffer, pCreateInfo, sizeof(VkBufferCreateInfo));
- break;
- }
- case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT:
- {
- auto pCI = &my_data->imageMap[handle];
- memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
- memcpy(&pCI->create_info.image, pCreateInfo, sizeof(VkImageCreateInfo));
- break;
- }
-    // Swap chain images are unique: use my_data->imageMap, but copy in the
-    // VkSwapchainCreateInfoKHR's usage flags and set the mem value to a unique key. These are used by
-    // vkCreateImageView and internal MemTracker routines to distinguish swap chain images
- case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT:
- {
- auto pCI = &my_data->imageMap[handle];
- memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
- pCI->mem = MEMTRACKER_SWAP_CHAIN_IMAGE_KEY;
- pCI->valid = false;
- pCI->create_info.image.usage =
- const_cast<VkSwapchainCreateInfoKHR*>(static_cast<const VkSwapchainCreateInfoKHR *>(pCreateInfo))->imageUsage;
- break;
- }
- default:
- break;
- }
-}
-
-// Add a fence, creating one if necessary to our list of fences/fenceIds
-static VkBool32
-add_fence_info(
- layer_data *my_data,
- VkFence fence,
- VkQueue queue,
- uint64_t *fenceId)
-{
- VkBool32 skipCall = VK_FALSE;
- *fenceId = my_data->currentFenceId++;
-
-    // If a fence was provided, record its id and owning queue (no internal tracking fence is created yet; see TODO below)
- if (fence != VK_NULL_HANDLE) {
- my_data->fenceMap[fence].fenceId = *fenceId;
- my_data->fenceMap[fence].queue = queue;
- // Validate that fence is in UNSIGNALED state
- VkFenceCreateInfo* pFenceCI = &(my_data->fenceMap[fence].createInfo);
- if (pFenceCI->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT, (uint64_t) fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
- "Fence %#" PRIxLEAST64 " submitted in SIGNALED state. Fences must be reset before being submitted", (uint64_t) fence);
- }
- } else {
- // TODO : Do we need to create an internal fence here for tracking purposes?
- }
- // Update most recently submitted fence and fenceId for Queue
- my_data->queueMap[queue].lastSubmittedId = *fenceId;
- return skipCall;
-}
-
-// Remove a fenceInfo from our list of fences/fenceIds
-static void
-delete_fence_info(
- layer_data *my_data,
- VkFence fence)
-{
- my_data->fenceMap.erase(fence);
-}
-
-// Record information when a fence is known to be signalled
-static void
-update_fence_tracking(
- layer_data *my_data,
- VkFence fence)
-{
- auto fence_item = my_data->fenceMap.find(fence);
- if (fence_item != my_data->fenceMap.end()) {
- MT_FENCE_INFO *pCurFenceInfo = &(*fence_item).second;
- VkQueue queue = pCurFenceInfo->queue;
- auto queue_item = my_data->queueMap.find(queue);
- if (queue_item != my_data->queueMap.end()) {
- MT_QUEUE_INFO *pQueueInfo = &(*queue_item).second;
- if (pQueueInfo->lastRetiredId < pCurFenceInfo->fenceId) {
- pQueueInfo->lastRetiredId = pCurFenceInfo->fenceId;
- }
- }
- }
-
- // Update fence state in fenceCreateInfo structure
- auto pFCI = &(my_data->fenceMap[fence].createInfo);
- pFCI->flags = static_cast<VkFenceCreateFlags>(pFCI->flags | VK_FENCE_CREATE_SIGNALED_BIT);
-}
-
-// Helper routine that updates the fence list for a specific queue to all-retired
-static void
-retire_queue_fences(
- layer_data *my_data,
- VkQueue queue)
-{
- MT_QUEUE_INFO *pQueueInfo = &my_data->queueMap[queue];
- // Set queue's lastRetired to lastSubmitted indicating all fences completed
- pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
-}
-
-// Helper routine that updates all queues to all-retired
-static void
-retire_device_fences(
- layer_data *my_data,
- VkDevice device)
-{
- // Process each queue for device
- // TODO: Add multiple device support
- for (auto ii=my_data->queueMap.begin(); ii!=my_data->queueMap.end(); ++ii) {
- // Set queue's lastRetired to lastSubmitted indicating all fences completed
- MT_QUEUE_INFO *pQueueInfo = &(*ii).second;
- pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
- }
-}
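The fence bookkeeping being deleted above is built on monotonically increasing ids: each submission takes the next `currentFenceId`, a queue remembers its `lastSubmittedId` and `lastRetiredId`, and retiring a queue simply advances `lastRetiredId` to `lastSubmittedId`, marking every outstanding submission complete at once. A minimal sketch of that scheme, with an `int` standing in for the queue handle:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Per-queue counters: lastRetiredId <= lastSubmittedId always holds.
struct QueueInfo {
    uint64_t lastRetiredId = 0;
    uint64_t lastSubmittedId = 0;
};

// Illustrative tracker mirroring the add_fence_info / retire_queue_fences
// pair deleted above (names and types are stand-ins).
struct FenceTracker {
    uint64_t currentFenceId = 1;
    std::unordered_map<int, QueueInfo> queues;  // keyed by a queue-handle stand-in

    // Assign the next id to a submission on the given queue.
    uint64_t submit(int queue) {
        uint64_t id = currentFenceId++;
        queues[queue].lastSubmittedId = id;
        return id;
    }
    // Mark all of this queue's submissions as retired.
    void retireQueue(int queue) {
        QueueInfo &q = queues[queue];
        q.lastRetiredId = q.lastSubmittedId;
    }
    // A submission is complete once the queue has retired past its id.
    bool isComplete(int queue, uint64_t fenceId) const {
        return queues.at(queue).lastRetiredId >= fenceId;
    }
};
```

Comparing ids rather than storing per-fence state makes "is this command buffer done?" a single integer comparison, at the cost of only tracking completion in submission order.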
-
-// Helper function to validate correct usage bits set for buffers or images
-// Verify that (actual & desired) flags != 0 or,
-// if strict is true, verify that (actual & desired) flags == desired
-// In case of error, report it via dbg callbacks
-static VkBool32
-validate_usage_flags(
- layer_data *my_data,
- void *disp_obj,
- VkFlags actual,
- VkFlags desired,
- VkBool32 strict,
- uint64_t obj_handle,
- VkDebugReportObjectTypeEXT obj_type,
- char const *ty_str,
- char const *func_name,
- char const *usage_str)
-{
- VkBool32 correct_usage = VK_FALSE;
- VkBool32 skipCall = VK_FALSE;
- if (strict)
- correct_usage = ((actual & desired) == desired);
- else
- correct_usage = ((actual & desired) != 0);
- if (!correct_usage) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, obj_type, obj_handle, __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
- "Invalid usage flag for %s %#" PRIxLEAST64 " used by %s. In this case, %s should have %s set during creation.",
- ty_str, obj_handle, func_name, ty_str, usage_str);
- }
- return skipCall;
-}
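The strict/non-strict distinction in `validate_usage_flags` above reduces to one bitmask expression: strict mode requires every desired bit to be present, non-strict mode accepts any overlap. A self-contained restatement of just that check:

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the core test in validate_usage_flags: under strict checking
// all desired bits must be set; otherwise any shared bit suffices.
static bool usageCorrect(uint32_t actual, uint32_t desired, bool strict) {
    return strict ? (actual & desired) == desired
                  : (actual & desired) != 0;
}
```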
-
-// Helper function to validate usage flags for images
-// Pulls image info and then sends actual vs. desired usage off to helper above where
-// an error will be flagged if usage is not correct
-static VkBool32
-validate_image_usage_flags(
- layer_data *my_data,
- void *disp_obj,
- VkImage image,
- VkFlags desired,
- VkBool32 strict,
- char const *func_name,
- char const *usage_string)
-{
- VkBool32 skipCall = VK_FALSE;
- MT_OBJ_BINDING_INFO* pBindInfo = get_object_binding_info(my_data, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
- if (pBindInfo) {
- skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.image.usage, desired, strict,
- (uint64_t) image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "image", func_name, usage_string);
- }
- return skipCall;
-}
-
-// Helper function to validate usage flags for buffers
-// Pulls buffer info and then sends actual vs. desired usage off to helper above where
-// an error will be flagged if usage is not correct
-static VkBool32
-validate_buffer_usage_flags(
- layer_data *my_data,
- void *disp_obj,
- VkBuffer buffer,
- VkFlags desired,
- VkBool32 strict,
- char const *func_name,
- char const *usage_string)
-{
- VkBool32 skipCall = VK_FALSE;
- MT_OBJ_BINDING_INFO* pBindInfo = get_object_binding_info(my_data, (uint64_t) buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
- if (pBindInfo) {
- skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.buffer.usage, desired, strict,
- (uint64_t) buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "buffer", func_name, usage_string);
- }
- return skipCall;
-}
-
-// Return ptr to info in map container containing mem, or NULL if not found
-// Calls to this function should be wrapped in mutex
-static MT_MEM_OBJ_INFO*
-get_mem_obj_info(
- layer_data *my_data,
- const VkDeviceMemory mem)
-{
- auto item = my_data->memObjMap.find(mem);
- if (item != my_data->memObjMap.end()) {
- return &(*item).second;
- } else {
- return NULL;
- }
-}
-
-static void
-add_mem_obj_info(
- layer_data *my_data,
- void *object,
- const VkDeviceMemory mem,
- const VkMemoryAllocateInfo *pAllocateInfo)
-{
- assert(object != NULL);
-
- memcpy(&my_data->memObjMap[mem].allocInfo, pAllocateInfo, sizeof(VkMemoryAllocateInfo));
- // TODO: Update for real hardware, actually process allocation info structures
- my_data->memObjMap[mem].allocInfo.pNext = NULL;
- my_data->memObjMap[mem].object = object;
- my_data->memObjMap[mem].refCount = 0;
- my_data->memObjMap[mem].mem = mem;
- my_data->memObjMap[mem].memRange.offset = 0;
- my_data->memObjMap[mem].memRange.size = 0;
- my_data->memObjMap[mem].pData = 0;
- my_data->memObjMap[mem].pDriverData = 0;
- my_data->memObjMap[mem].valid = false;
-}
-
-static VkBool32 validate_memory_is_valid(layer_data *my_data, VkDeviceMemory mem, const char* functionName, VkImage image = VK_NULL_HANDLE) {
- if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
- MT_OBJ_BINDING_INFO* pBindInfo = get_object_binding_info(my_data, (uint64_t)(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
- if (pBindInfo && !pBindInfo->valid) {
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
- (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
- "%s: Cannot read invalid swapchain image %" PRIx64 ", please fill the memory before using.", functionName, (uint64_t)(image));
- }
- }
- else {
- MT_MEM_OBJ_INFO *pMemObj = get_mem_obj_info(my_data, mem);
- if (pMemObj && !pMemObj->valid) {
- return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
- (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
- "%s: Cannot read invalid memory %" PRIx64 ", please fill the memory before using.", functionName, (uint64_t)(mem));
- }
- }
- return false;
-}
-
-static void set_memory_valid(layer_data *my_data, VkDeviceMemory mem, bool valid, VkImage image = VK_NULL_HANDLE) {
- if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
- MT_OBJ_BINDING_INFO* pBindInfo = get_object_binding_info(my_data, (uint64_t)(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
- if (pBindInfo) {
- pBindInfo->valid = valid;
- }
- } else {
- MT_MEM_OBJ_INFO *pMemObj = get_mem_obj_info(my_data, mem);
- if (pMemObj) {
- pMemObj->valid = valid;
- }
- }
-}
-
-// Find CB Info and add mem reference to list container
-// Find Mem Obj Info and add CB reference to list container
-static VkBool32
-update_cmd_buf_and_mem_references(
- layer_data *my_data,
- const VkCommandBuffer cb,
- const VkDeviceMemory mem,
- const char *apiName)
-{
- VkBool32 skipCall = VK_FALSE;
-
- // Skip validation if this image was created through WSI
- if (mem != MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
-
- // First update CB binding in MemObj mini CB list
- MT_MEM_OBJ_INFO* pMemInfo = get_mem_obj_info(my_data, mem);
- if (pMemInfo) {
- // Search for cmd buffer object in memory object's binding list
- VkBool32 found = VK_FALSE;
- if (pMemInfo->pCommandBufferBindings.size() > 0) {
- for (list<VkCommandBuffer>::iterator it = pMemInfo->pCommandBufferBindings.begin(); it != pMemInfo->pCommandBufferBindings.end(); ++it) {
- if ((*it) == cb) {
- found = VK_TRUE;
- break;
- }
- }
- }
- // If not present, add to list
- if (found == VK_FALSE) {
- pMemInfo->pCommandBufferBindings.push_front(cb);
- pMemInfo->refCount++;
- }
- // Now update CBInfo's Mem reference list
- MT_CB_INFO* pCBInfo = get_cmd_buf_info(my_data, cb);
- // TODO: keep track of all destroyed CBs so we know if this is a stale or simply invalid object
- if (pCBInfo) {
- // Search for memory object in cmd buffer's reference list
- VkBool32 found = VK_FALSE;
- if (pCBInfo->pMemObjList.size() > 0) {
- for (auto it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
- if ((*it) == mem) {
- found = VK_TRUE;
- break;
- }
- }
- }
- // If not present, add to list
- if (found == VK_FALSE) {
- pCBInfo->pMemObjList.push_front(mem);
- }
- }
- }
- }
- return skipCall;
-}
-
-// Free bindings related to CB
-static VkBool32
-clear_cmd_buf_and_mem_references(
- layer_data *my_data,
- const VkCommandBuffer cb)
-{
- VkBool32 skipCall = VK_FALSE;
- MT_CB_INFO* pCBInfo = get_cmd_buf_info(my_data, cb);
-
- if (pCBInfo && (pCBInfo->pMemObjList.size() > 0)) {
- list<VkDeviceMemory> mem_obj_list = pCBInfo->pMemObjList;
- for (list<VkDeviceMemory>::iterator it=mem_obj_list.begin(); it!=mem_obj_list.end(); ++it) {
- MT_MEM_OBJ_INFO* pInfo = get_mem_obj_info(my_data, *it);
- if (pInfo) {
- pInfo->pCommandBufferBindings.remove(cb);
- pInfo->refCount--;
- }
- }
- pCBInfo->pMemObjList.clear();
- pCBInfo->activeDescriptorSets.clear();
- pCBInfo->validate_functions.clear();
- }
- return skipCall;
-}
-
-// Delete the entire CB list
-static VkBool32
-delete_cmd_buf_info_list(
- layer_data* my_data)
-{
- VkBool32 skipCall = VK_FALSE;
- for (unordered_map<VkCommandBuffer, MT_CB_INFO>::iterator ii=my_data->cbMap.begin(); ii!=my_data->cbMap.end(); ++ii) {
- skipCall |= clear_cmd_buf_and_mem_references(my_data, (*ii).first);
- }
- my_data->cbMap.clear();
- return skipCall;
-}
-
-// For given MemObjInfo, report Obj & CB bindings
-static VkBool32
-reportMemReferencesAndCleanUp(
- layer_data *my_data,
- MT_MEM_OBJ_INFO *pMemObjInfo)
-{
- VkBool32 skipCall = VK_FALSE;
- size_t cmdBufRefCount = pMemObjInfo->pCommandBufferBindings.size();
- size_t objRefCount = pMemObjInfo->pObjBindings.size();
-
- if ((pMemObjInfo->pCommandBufferBindings.size()) != 0) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t) pMemObjInfo->mem, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM",
- "Attempting to free memory object %#" PRIxLEAST64 " which still contains " PRINTF_SIZE_T_SPECIFIER " references",
- (uint64_t) pMemObjInfo->mem, (cmdBufRefCount + objRefCount));
- }
-
- if (cmdBufRefCount > 0 && pMemObjInfo->pCommandBufferBindings.size() > 0) {
- for (list<VkCommandBuffer>::const_iterator it = pMemObjInfo->pCommandBufferBindings.begin(); it != pMemObjInfo->pCommandBufferBindings.end(); ++it) {
- // TODO : CommandBuffer should be source Obj here
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(*it), __LINE__, MEMTRACK_FREED_MEM_REF, "MEM",
- "Command Buffer %p still has a reference to mem obj %#" PRIxLEAST64, (*it), (uint64_t) pMemObjInfo->mem);
- }
- // Clear the list of hanging references
- pMemObjInfo->pCommandBufferBindings.clear();
- }
-
- if (objRefCount > 0 && pMemObjInfo->pObjBindings.size() > 0) {
- for (auto it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, it->type, it->handle, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM",
- "VK Object %#" PRIxLEAST64 " still has a reference to mem obj %#" PRIxLEAST64, it->handle, (uint64_t) pMemObjInfo->mem);
- }
- // Clear the list of hanging references
- pMemObjInfo->pObjBindings.clear();
- }
- return skipCall;
-}
-
-static VkBool32
-deleteMemObjInfo(
- layer_data *my_data,
- void *object,
- VkDeviceMemory mem)
-{
- VkBool32 skipCall = VK_FALSE;
- auto item = my_data->memObjMap.find(mem);
- if (item != my_data->memObjMap.end()) {
- my_data->memObjMap.erase(item);
- } else {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t) mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM",
- "Request to delete memory object %#" PRIxLEAST64 " not present in memory Object Map", (uint64_t) mem);
- }
- return skipCall;
-}
-
-// Check if fence for given CB is completed
-static VkBool32
-checkCBCompleted(
- layer_data *my_data,
- const VkCommandBuffer cb,
- VkBool32 *complete)
-{
- MT_CB_INFO *pCBInfo = get_cmd_buf_info(my_data, cb);
- VkBool32 skipCall = VK_FALSE;
- *complete = VK_TRUE;
-
- if (pCBInfo) {
- if (pCBInfo->lastSubmittedQueue != NULL) {
- VkQueue queue = pCBInfo->lastSubmittedQueue;
- MT_QUEUE_INFO *pQueueInfo = &my_data->queueMap[queue];
- if (pCBInfo->fenceId > pQueueInfo->lastRetiredId) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)cb, __LINE__,
- MEMTRACK_NONE, "MEM", "fence %#" PRIxLEAST64 " for CB %p has not been checked for completion",
- (uint64_t) pCBInfo->lastSubmittedFence, cb);
- *complete = VK_FALSE;
- }
- }
- }
- return skipCall;
-}
-
-static VkBool32
-freeMemObjInfo(
- layer_data *my_data,
- void* object,
- VkDeviceMemory mem,
- VkBool32 internal)
-{
- VkBool32 skipCall = VK_FALSE;
- // Parse global list to find info w/ mem
- MT_MEM_OBJ_INFO* pInfo = get_mem_obj_info(my_data, mem);
- if (pInfo) {
- if (pInfo->allocInfo.allocationSize == 0 && !internal) {
- // TODO: Verify against Valid Use section
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t) mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM",
- "Attempting to free memory associated with a Persistent Image, %#" PRIxLEAST64 ", "
- "this should not be explicitly freed\n", (uint64_t) mem);
- } else {
- // Clear any CB bindings for completed CBs
- // TODO : Is there a better place to do this?
-
- VkBool32 commandBufferComplete = VK_FALSE;
- assert(pInfo->object != VK_NULL_HANDLE);
- list<VkCommandBuffer>::iterator it = pInfo->pCommandBufferBindings.begin();
- list<VkCommandBuffer>::iterator temp;
- while (it != pInfo->pCommandBufferBindings.end()) {
- skipCall |= checkCBCompleted(my_data, *it, &commandBufferComplete);
- if (VK_TRUE == commandBufferComplete) {
- temp = it;
- ++temp;
- skipCall |= clear_cmd_buf_and_mem_references(my_data, *it);
- it = temp;
- } else {
- ++it;
- }
- }
-
- // Now verify that no references to this mem obj remain and remove bindings
- if (0 != pInfo->refCount) {
- skipCall |= reportMemReferencesAndCleanUp(my_data, pInfo);
- }
- // Delete mem obj info
- skipCall |= deleteMemObjInfo(my_data, object, mem);
- }
- }
- return skipCall;
-}
-
-static const char*
-object_type_to_string(
- VkDebugReportObjectTypeEXT type)
-{
- switch (type)
- {
- case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT:
- return "image";
- case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT:
- return "buffer";
- case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT:
- return "swapchain";
- default:
- return "unknown";
- }
-}
-
-// Remove object binding performs 3 tasks:
-// 1. Remove ObjectInfo from MemObjInfo list container of obj bindings & free it
-// 2. Decrement refCount for MemObjInfo
-// 3. Clear mem binding for image/buffer by setting its handle to 0
-// TODO : This only applied to Buffer, Image, and Swapchain objects now, how should it be updated/customized?
-static VkBool32
-clear_object_binding(
- layer_data *my_data,
- void *dispObj,
- uint64_t handle,
- VkDebugReportObjectTypeEXT type)
-{
- // TODO : Need to customize images/buffers/swapchains to track mem binding and clear it here appropriately
- VkBool32 skipCall = VK_FALSE;
- MT_OBJ_BINDING_INFO* pObjBindInfo = get_object_binding_info(my_data, handle, type);
- if (pObjBindInfo) {
- MT_MEM_OBJ_INFO* pMemObjInfo = get_mem_obj_info(my_data, pObjBindInfo->mem);
- // TODO : Make sure this is a reasonable way to reset mem binding
- pObjBindInfo->mem = VK_NULL_HANDLE;
- if (pMemObjInfo) {
- // This obj is bound to a memory object. Remove the reference to this object in that memory object's list, decrement the memObj's refcount
- // and set the objects memory binding pointer to NULL.
- VkBool32 clearSucceeded = VK_FALSE;
- for (auto it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
- if ((it->handle == handle) && (it->type == type)) {
- pMemObjInfo->refCount--;
- pMemObjInfo->pObjBindings.erase(it);
- clearSucceeded = VK_TRUE;
- break;
- }
- }
- if (VK_FALSE == clearSucceeded) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT, "MEM",
- "While trying to clear mem binding for %s obj %#" PRIxLEAST64 ", unable to find that object referenced by mem obj %#" PRIxLEAST64,
- object_type_to_string(type), handle, (uint64_t) pMemObjInfo->mem);
- }
- }
- }
- return skipCall;
-}
-
-// For NULL mem case, output warning
-// Make sure given object is in global object map
-// IF a previous binding existed, output validation error
-// Otherwise, add reference from objectInfo to memoryInfo
-// Add reference off of objInfo
-// device is required for error logging, need a dispatchable
-// object for that.
-static VkBool32
-set_mem_binding(
- layer_data *my_data,
- void *dispatch_object,
- VkDeviceMemory mem,
- uint64_t handle,
- VkDebugReportObjectTypeEXT type,
- const char *apiName)
-{
- VkBool32 skipCall = VK_FALSE;
- // Handle NULL case separately: report a warning, since binding an object to a NULL memory handle is likely an app error
- if (mem == VK_NULL_HANDLE) {
- // TODO: Verify against Valid Use section of spec.
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM",
- "In %s, attempting to Bind Obj(%#" PRIxLEAST64 ") to NULL", apiName, handle);
- } else {
- MT_OBJ_BINDING_INFO* pObjBindInfo = get_object_binding_info(my_data, handle, type);
- if (!pObjBindInfo) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM",
- "In %s, attempting to update Binding of %s Obj(%#" PRIxLEAST64 ") that's not in global list()",
- apiName, object_type_to_string(type), handle);
- } else {
- // non-null case so should have real mem obj
- MT_MEM_OBJ_INFO* pMemInfo = get_mem_obj_info(my_data, mem);
- if (pMemInfo) {
- // TODO : Need to track mem binding for obj and report conflict here
- MT_MEM_OBJ_INFO* pPrevBinding = get_mem_obj_info(my_data, pObjBindInfo->mem);
- if (pPrevBinding != NULL) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t) mem, __LINE__, MEMTRACK_REBIND_OBJECT, "MEM",
- "In %s, attempting to bind memory (%#" PRIxLEAST64 ") to object (%#" PRIxLEAST64 ") which has already been bound to mem object %#" PRIxLEAST64,
- apiName, (uint64_t) mem, handle, (uint64_t) pPrevBinding->mem);
- }
- else {
- MT_OBJ_HANDLE_TYPE oht;
- oht.handle = handle;
- oht.type = type;
- pMemInfo->pObjBindings.push_front(oht);
- pMemInfo->refCount++;
- // For image objects, make sure default memory state is correctly set
- // TODO : What's the best/correct way to handle this?
- if (VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT == type) {
- VkImageCreateInfo ici = pObjBindInfo->create_info.image;
- if (ici.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
- VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT)) {
- // TODO:: More memory state transition stuff.
- }
- }
- pObjBindInfo->mem = mem;
- }
- }
- }
- }
- return skipCall;
-}
-
-// For NULL mem case, clear any previous binding Else...
-// Make sure given object is in its object map
-// IF a previous binding existed, update binding
-// Add reference from objectInfo to memoryInfo
-// Add reference off of object's binding info
-// Return VK_TRUE if addition is successful, VK_FALSE otherwise
-static VkBool32
-set_sparse_mem_binding(
- layer_data *my_data,
- void *dispObject,
- VkDeviceMemory mem,
- uint64_t handle,
- VkDebugReportObjectTypeEXT type,
- const char *apiName)
-{
- VkBool32 skipCall = VK_FALSE;
- // Handle NULL case separately, just clear previous binding & decrement reference
- if (mem == VK_NULL_HANDLE) {
- skipCall = clear_object_binding(my_data, dispObject, handle, type);
- } else {
- MT_OBJ_BINDING_INFO* pObjBindInfo = get_object_binding_info(my_data, handle, type);
- if (!pObjBindInfo) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM",
- "In %s, attempting to update Binding of Obj(%#" PRIxLEAST64 ") that's not in global list()", apiName, handle);
- // Bail out: pObjBindInfo is dereferenced below when recording the new binding
- return skipCall;
- }
- // non-null case so should have real mem obj
- MT_MEM_OBJ_INFO* pInfo = get_mem_obj_info(my_data, mem);
- if (pInfo) {
- // Search for object in memory object's binding list
- VkBool32 found = VK_FALSE;
- if (pInfo->pObjBindings.size() > 0) {
- for (auto it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
- if (((*it).handle == handle) && ((*it).type == type)) {
- found = VK_TRUE;
- break;
- }
- }
- }
- // If not present, add to list
- if (found == VK_FALSE) {
- MT_OBJ_HANDLE_TYPE oht;
- oht.handle = handle;
- oht.type = type;
- pInfo->pObjBindings.push_front(oht);
- pInfo->refCount++;
- }
- // Need to set mem binding for this object
- pObjBindInfo->mem = mem;
- }
- }
- return skipCall;
-}
-
-template <typename T> void
-print_object_map_members(
- layer_data *my_data,
- void *dispObj,
- T const& objectName,
- VkDebugReportObjectTypeEXT objectType,
- const char *objectStr)
-{
- for (auto const& element : objectName) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objectType, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " %s Object list contains %s Object %#" PRIxLEAST64 " ", objectStr, objectStr, element.first);
- }
-}
-
-// For given Object, get 'mem' obj that it's bound to or NULL if no binding
-static VkBool32
-get_mem_binding_from_object(
- layer_data *my_data,
- void *dispObj,
- const uint64_t handle,
- const VkDebugReportObjectTypeEXT type,
- VkDeviceMemory *mem)
-{
- VkBool32 skipCall = VK_FALSE;
- *mem = VK_NULL_HANDLE;
- MT_OBJ_BINDING_INFO* pObjBindInfo = get_object_binding_info(my_data, handle, type);
- if (pObjBindInfo) {
- if (pObjBindInfo->mem) {
- *mem = pObjBindInfo->mem;
- } else {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM",
- "Trying to get mem binding for object %#" PRIxLEAST64 " but object has no mem binding", handle);
- }
- } else {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT, "MEM",
- "Trying to get mem binding for object %#" PRIxLEAST64 " but no such object in %s list",
- handle, object_type_to_string(type));
- }
- return skipCall;
-}
-
-// Print details of MemObjInfo list
-static void
-print_mem_list(
- layer_data *my_data,
- void *dispObj)
-{
- MT_MEM_OBJ_INFO* pInfo = NULL;
-
- // Early out if info is not requested
- if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
- return;
- }
-
- // Just printing each msg individually for now, may want to package these into single large print
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- "Details of Memory Object list (of size " PRINTF_SIZE_T_SPECIFIER " elements)", my_data->memObjMap.size());
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- "=============================");
-
- if (my_data->memObjMap.empty())
- return;
-
- for (auto ii=my_data->memObjMap.begin(); ii!=my_data->memObjMap.end(); ++ii) {
- pInfo = &(*ii).second;
-
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " ===MemObjInfo at %p===", (void*)pInfo);
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " Mem object: %#" PRIxLEAST64, (uint64_t)(pInfo->mem));
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " Ref Count: %u", pInfo->refCount);
- if (0 != pInfo->allocInfo.allocationSize) {
- string pAllocInfoMsg = vk_print_vkmemoryallocateinfo(&pInfo->allocInfo, "MEM(INFO): ");
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " Mem Alloc info:\n%s", pAllocInfoMsg.c_str());
- } else {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " Mem Alloc info is NULL (alloc done by vkCreateSwapchainKHR())");
- }
-
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " VK OBJECT Binding list of size " PRINTF_SIZE_T_SPECIFIER " elements:", pInfo->pObjBindings.size());
- if (pInfo->pObjBindings.size() > 0) {
- for (list<MT_OBJ_HANDLE_TYPE>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " VK OBJECT %" PRIu64, it->handle);
- }
- }
-
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " VK Command Buffer (CB) binding list of size " PRINTF_SIZE_T_SPECIFIER " elements", pInfo->pCommandBufferBindings.size());
- if (pInfo->pCommandBufferBindings.size() > 0)
- {
- for (list<VkCommandBuffer>::iterator it = pInfo->pCommandBufferBindings.begin(); it != pInfo->pCommandBufferBindings.end(); ++it) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " VK CB %p", (*it));
- }
- }
- }
-}
-
-static void
-printCBList(
- layer_data *my_data,
- void *dispObj)
-{
- MT_CB_INFO* pCBInfo = NULL;
-
- // Early out if info is not requested
- if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
- return;
- }
-
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- "Details of CB list (of size " PRINTF_SIZE_T_SPECIFIER " elements)", my_data->cbMap.size());
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- "==================");
-
- if (my_data->cbMap.empty())
- return;
-
- for (auto ii=my_data->cbMap.begin(); ii!=my_data->cbMap.end(); ++ii) {
- pCBInfo = &(*ii).second;
-
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " CB Info (%p) has CB %p, fenceId %" PRIx64", and fence %#" PRIxLEAST64,
- (void*)pCBInfo, (void*)pCBInfo->commandBuffer, pCBInfo->fenceId,
- (uint64_t) pCBInfo->lastSubmittedFence);
-
- if (pCBInfo->pMemObjList.empty())
- continue;
- for (list<VkDeviceMemory>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM",
- " Mem obj %" PRIu64, (uint64_t)(*it));
- }
- }
-}
-
-static void
-init_mem_tracker(
- layer_data *my_data,
- const VkAllocationCallbacks *pAllocator)
-{
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *option_str;
- VkDebugReportCallbackEXT callback;
- // initialize MemTracker options
- report_flags = getLayerOptionFlags("MemTrackerReportFlags", 0);
- getLayerOptionEnum("MemTrackerDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- option_str = getLayerOption("MemTrackerLogFilename");
- log_output = getLayerLogOutput(option_str, "MemTracker");
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = log_callback;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
-
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = win32_debug_output_msg;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
-
- if (!globalLockInitialized)
- {
- loader_platform_thread_create_mutex(&globalLock);
- globalLockInitialized = 1;
- }
-
- // Zero out memory property data
- memset(&memProps, 0, sizeof(VkPhysicalDeviceMemoryProperties));
-}
-
-// hook DestroyInstance to remove tableInstanceMap entry
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(
- VkInstance instance,
- const VkAllocationCallbacks *pAllocator)
-{
- // Grab the key before the instance is destroyed.
- dispatch_key key = get_dispatch_key(instance);
- layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- pTable->DestroyInstance(instance, pAllocator);
-
- loader_platform_thread_lock_mutex(&globalLock);
- // Clean up logging callback, if any
- while (my_data->logging_callback.size() > 0) {
- VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
- layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
- my_data->logging_callback.pop_back();
- }
-
- layer_debug_report_destroy_instance(my_data->report_data);
- delete my_data->instance_dispatch_table;
- layer_data_map.erase(key);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (layer_data_map.empty()) {
- // Release mutex when destroying last instance
- loader_platform_thread_delete_mutex(&globalLock);
- globalLockInitialized = 0;
- }
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(
- const VkInstanceCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkInstance* pInstance)
-{
- VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
-
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
- if (fpCreateInstance == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
- if (result != VK_SUCCESS) {
- return result;
- }
-
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
- my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
- layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
-
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
-
- init_mem_tracker(my_data, pAllocator);
-
- return result;
-}
-
-static void
-createDeviceRegisterExtensions(
- const VkDeviceCreateInfo *pCreateInfo,
- VkDevice device)
-{
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkLayerDispatchTable *pDisp = my_device_data->device_dispatch_table;
- PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
- pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR) gpa(device, "vkCreateSwapchainKHR");
- pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR) gpa(device, "vkDestroySwapchainKHR");
- pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR) gpa(device, "vkGetSwapchainImagesKHR");
- pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR) gpa(device, "vkAcquireNextImageKHR");
- pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR) gpa(device, "vkQueuePresentKHR");
- my_device_data->wsi_enabled = VK_FALSE;
- for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
- if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
- my_device_data->wsi_enabled = true;
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(
- VkPhysicalDevice gpu,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice)
-{
- VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
-
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
- if (fpCreateDevice == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
- if (result != VK_SUCCESS) {
- return result;
- }
-
- layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
-
- // Setup device dispatch table
- my_device_data->device_dispatch_table = new VkLayerDispatchTable;
- layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
-
- my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
- createDeviceRegisterExtensions(pCreateInfo, *pDevice);
- my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &my_device_data->properties);
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(
- VkDevice device,
- const VkAllocationCallbacks *pAllocator)
-{
- dispatch_key key = get_dispatch_key(device);
- layer_data *my_device_data = get_my_data_ptr(key, layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- log_msg(my_device_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__, MEMTRACK_NONE, "MEM",
- "Printing List details prior to vkDestroyDevice()");
- log_msg(my_device_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__, MEMTRACK_NONE, "MEM",
- "================================================");
- print_mem_list(my_device_data, device);
- printCBList(my_device_data, device);
- skipCall = delete_cmd_buf_info_list(my_device_data);
- // Report any memory leaks
- MT_MEM_OBJ_INFO* pInfo = NULL;
- if (my_device_data->memObjMap.size() > 0) {
- for (auto ii=my_device_data->memObjMap.begin(); ii!=my_device_data->memObjMap.end(); ++ii) {
- pInfo = &(*ii).second;
- if (pInfo->allocInfo.allocationSize != 0) {
- // Valid Usage: All child objects created on device must have been destroyed prior to destroying device
- skipCall |= log_msg(my_device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t) pInfo->mem, __LINE__, MEMTRACK_MEMORY_LEAK, "MEM",
- "Mem Object %" PRIu64 " has not been freed. You should clean up this memory by calling "
- "vkFreeMemory(%" PRIu64 ") prior to vkDestroyDevice().", (uint64_t)(pInfo->mem), (uint64_t)(pInfo->mem));
- }
- }
- }
- // Queues persist until device is destroyed
- delete_queue_info_list(my_device_data);
- layer_debug_report_destroy_device(device);
- loader_platform_thread_unlock_mutex(&globalLock);
-
-#if DISPATCH_MAP_DEBUG
- fprintf(stderr, "Device: %p, key: %p\n", device, key);
-#endif
- VkLayerDispatchTable *pDisp = my_device_data->device_dispatch_table;
- if (VK_FALSE == skipCall) {
- pDisp->DestroyDevice(device, pAllocator);
- }
- delete my_device_data->device_dispatch_table;
- layer_data_map.erase(key);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceMemoryProperties *pMemoryProperties)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- VkLayerInstanceDispatchTable *pInstanceTable = my_data->instance_dispatch_table;
- pInstanceTable->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
- memcpy(&memProps, pMemoryProperties, sizeof(VkPhysicalDeviceMemoryProperties));
-}
-
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties *pProperties)
-{
- return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
-}
-
-static const VkLayerProperties mtGlobalLayers[] = {
- {
- "VK_LAYER_LUNARG_mem_tracker",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties *pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(mtGlobalLayers),
- mtGlobalLayers,
- pCount, pProperties);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
- VkPhysicalDevice physicalDevice,
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties *pProperties)
-{
- /* Mem tracker does not have any physical device extensions */
- if (pLayerName == NULL) {
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- VkLayerInstanceDispatchTable *pInstanceTable = my_data->instance_dispatch_table;
- return pInstanceTable->EnumerateDeviceExtensionProperties(
- physicalDevice, NULL, pCount, pProperties);
- } else {
- return util_GetExtensionProperties(0, NULL, pCount, pProperties);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t *pCount,
- VkLayerProperties *pProperties)
-{
- /* Mem tracker's physical device layers are the same as global */
- return util_GetLayerProperties(ARRAY_SIZE(mtGlobalLayers), mtGlobalLayers,
- pCount, pProperties);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(
- VkDevice device,
- uint32_t queueNodeIndex,
- uint32_t queueIndex,
- VkQueue *pQueue)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- my_data->device_dispatch_table->GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
- loader_platform_thread_lock_mutex(&globalLock);
- add_queue_info(my_data, *pQueue);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit(
- VkQueue queue,
- uint32_t submitCount,
- const VkSubmitInfo *pSubmits,
- VkFence fence)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
-
- loader_platform_thread_lock_mutex(&globalLock);
- // TODO : Need to track fence and clear mem references when fence clears
- MT_CB_INFO* pCBInfo = NULL;
- uint64_t fenceId = 0;
- VkBool32 skipCall = add_fence_info(my_data, fence, queue, &fenceId);
-
- print_mem_list(my_data, queue);
- printCBList(my_data, queue);
- for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
- const VkSubmitInfo *submit = &pSubmits[submit_idx];
- for (uint32_t i = 0; i < submit->commandBufferCount; i++) {
- pCBInfo = get_cmd_buf_info(my_data, submit->pCommandBuffers[i]);
- if (pCBInfo) {
- pCBInfo->fenceId = fenceId;
- pCBInfo->lastSubmittedFence = fence;
- pCBInfo->lastSubmittedQueue = queue;
- for (auto& function : pCBInfo->validate_functions) {
- skipCall |= function();
- }
- }
- }
-
- for (uint32_t i = 0; i < submit->waitSemaphoreCount; i++) {
- VkSemaphore sem = submit->pWaitSemaphores[i];
-
- if (my_data->semaphoreMap.find(sem) != my_data->semaphoreMap.end()) {
- if (my_data->semaphoreMap[sem] != MEMTRACK_SEMAPHORE_STATE_SIGNALLED) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t) sem,
- __LINE__, MEMTRACK_NONE, "SEMAPHORE",
- "vkQueueSubmit: Semaphore must be in signaled state before passing to pWaitSemaphores");
- }
- my_data->semaphoreMap[sem] = MEMTRACK_SEMAPHORE_STATE_WAIT;
- }
- }
- for (uint32_t i = 0; i < submit->signalSemaphoreCount; i++) {
- VkSemaphore sem = submit->pSignalSemaphores[i];
-
- if (my_data->semaphoreMap.find(sem) != my_data->semaphoreMap.end()) {
- if (my_data->semaphoreMap[sem] != MEMTRACK_SEMAPHORE_STATE_UNSET) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t) sem,
- __LINE__, MEMTRACK_NONE, "SEMAPHORE",
- "vkQueueSubmit: Semaphore must not be currently signaled or in a wait state");
- }
- my_data->semaphoreMap[sem] = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
- }
- }
- }
-
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->QueueSubmit(
- queue, submitCount, pSubmits, fence);
- }
-
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
- const VkSubmitInfo *submit = &pSubmits[submit_idx];
- for (uint32_t i = 0; i < submit->waitSemaphoreCount; i++) {
- VkSemaphore sem = submit->pWaitSemaphores[i];
-
- if (my_data->semaphoreMap.find(sem) != my_data->semaphoreMap.end()) {
- my_data->semaphoreMap[sem] = MEMTRACK_SEMAPHORE_STATE_UNSET;
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(
- VkDevice device,
- const VkMemoryAllocateInfo *pAllocateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDeviceMemory *pMemory)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory);
- // TODO : Track allocations and overall size here
- loader_platform_thread_lock_mutex(&globalLock);
- add_mem_obj_info(my_data, device, *pMemory, pAllocateInfo);
- print_mem_list(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeMemory(
- VkDevice device,
- VkDeviceMemory mem,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- my_data->bufferRanges.erase(mem);
- my_data->imageRanges.erase(mem);
-
- // From spec : A memory object is freed by calling vkFreeMemory() when it is no longer needed.
- // Before freeing a memory object, an application must ensure the memory object is no longer
- // in use by the device—for example by command buffers queued for execution. The memory need
- // not yet be unbound from all images and buffers, but any further use of those images or
- // buffers (on host or device) for anything other than destroying those objects will result in
- // undefined behavior.
-
- loader_platform_thread_lock_mutex(&globalLock);
- freeMemObjInfo(my_data, device, mem, VK_FALSE);
- print_mem_list(my_data, device);
- printCBList(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
- my_data->device_dispatch_table->FreeMemory(device, mem, pAllocator);
-}
-
-VkBool32
-validateMemRange(
- layer_data *my_data,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (size == 0) {
- // TODO: a size of 0 is not listed as an invalid use in the spec, should it be?
-        skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__,
-                           MEMTRACK_INVALID_MAP, "MEM", "vkMapMemory: Attempting to map memory range of size zero");
- }
-
- auto mem_element = my_data->memObjMap.find(mem);
- if (mem_element != my_data->memObjMap.end()) {
-        // It is an application error to call vkMapMemory on an object that is already mapped
-        if (mem_element->second.memRange.size != 0) {
-            skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__,
-                               MEMTRACK_INVALID_MAP, "MEM", "vkMapMemory: Attempting to map memory on an already-mapped object %#" PRIxLEAST64, (uint64_t)mem);
- }
-
- // Validate that offset + size is within object's allocationSize
- if (size == VK_WHOLE_SIZE) {
- if (offset >= mem_element->second.allocInfo.allocationSize) {
-                skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__,
-                                   MEMTRACK_INVALID_MAP, "MEM", "Mapping Memory with VK_WHOLE_SIZE at offset %" PRIu64 ", which is not less than total allocation size %" PRIu64,
-                                   offset, mem_element->second.allocInfo.allocationSize);
- }
- } else {
- if ((offset + size) > mem_element->second.allocInfo.allocationSize) {
-                skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__,
-                                   MEMTRACK_INVALID_MAP, "MEM", "Mapping Memory from %" PRIu64 " to %" PRIu64 " which exceeds total allocation size %" PRIu64,
-                                   offset, size + offset, mem_element->second.allocInfo.allocationSize);
- }
- }
- }
- return skipCall;
-}
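
`validateMemRange` enforces two range rules: an explicit mapping must satisfy offset + size <= allocationSize, while a VK_WHOLE_SIZE mapping only requires the offset to lie inside the allocation. A standalone sketch of just that arithmetic (`mapRangeInvalid` and the type aliases are illustrative stand-ins, not the layer's API):

```cpp
#include <cstdint>
#include <limits>

// Stand-ins for the Vulkan types used by validateMemRange() (assumed names).
using DeviceSize = uint64_t;
constexpr DeviceSize WHOLE_SIZE = std::numeric_limits<DeviceSize>::max(); // mirrors VK_WHOLE_SIZE

// True when a vkMapMemory-style request is invalid for an allocation of
// allocationSize bytes: an explicit range must fit entirely inside the
// allocation, while a WHOLE_SIZE map only needs a valid starting offset.
// (A production check would also guard offset + size against overflow.)
bool mapRangeInvalid(DeviceSize allocationSize, DeviceSize offset, DeviceSize size) {
    if (size == WHOLE_SIZE)
        return offset >= allocationSize;
    return offset + size > allocationSize;
}
```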
-
-void
-storeMemRanges(
- layer_data *my_data,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size)
-{
-    auto mem_element = my_data->memObjMap.find(mem);
-    if (mem_element != my_data->memObjMap.end()) {
-        MemRange new_range;
-        new_range.offset = offset;
-        new_range.size = size;
-        mem_element->second.memRange = new_range;
-    }
-}
-
-VkBool32 deleteMemRanges(
- layer_data *my_data,
- VkDeviceMemory mem)
-{
- VkBool32 skipCall = VK_FALSE;
- auto mem_element = my_data->memObjMap.find(mem);
- if (mem_element != my_data->memObjMap.end()) {
- if (!mem_element->second.memRange.size) {
- // Valid Usage: memory must currently be mapped
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM",
- "Unmapping Memory without memory being mapped: mem obj %#" PRIxLEAST64, (uint64_t)mem);
- }
- mem_element->second.memRange.size = 0;
- if (mem_element->second.pData) {
- free(mem_element->second.pData);
- mem_element->second.pData = 0;
- }
- }
- return skipCall;
-}
-
-static char NoncoherentMemoryFillValue = 0xb;
-
-void
-initializeAndTrackMemory(
- layer_data *my_data,
- VkDeviceMemory mem,
- VkDeviceSize size,
- void **ppData)
-{
- auto mem_element = my_data->memObjMap.find(mem);
- if (mem_element != my_data->memObjMap.end()) {
- mem_element->second.pDriverData = *ppData;
- uint32_t index = mem_element->second.allocInfo.memoryTypeIndex;
- if (memProps.memoryTypes[index].propertyFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) {
- mem_element->second.pData = 0;
- } else {
- if (size == VK_WHOLE_SIZE) {
- size = mem_element->second.allocInfo.allocationSize;
- }
- size_t convSize = (size_t)(size);
- mem_element->second.pData = malloc(2 * convSize);
- memset(mem_element->second.pData, NoncoherentMemoryFillValue, 2 * convSize);
- *ppData = static_cast<char*>(mem_element->second.pData) + (convSize / 2);
- }
- }
-}
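
For non-host-coherent memory, `initializeAndTrackMemory` shadows the mapping: it allocates twice the requested size, fills the whole buffer with a sentinel byte, and returns a pointer size/2 bytes in, so sentinel-filled guard regions bracket the bytes the application may legitimately write. A minimal sketch of that layout (the `ShadowAlloc` type and helper name are illustrative):

```cpp
#include <cstdlib>
#include <cstring>

// Sketch of the shadow-buffer layout built above (names are illustrative).
// For a mapped range of `size` bytes the layer allocates 2*size bytes,
// fills them with the sentinel, and hands the application a pointer
// size/2 bytes in, so guard regions sit before and after the user data.
struct ShadowAlloc {
    char  *base; // start of the 2*size shadow buffer
    char  *user; // pointer returned to the application
    size_t size; // requested mapping size
};

const char kFillValue = 0xb; // matches NoncoherentMemoryFillValue above

ShadowAlloc makeShadow(size_t size) {
    ShadowAlloc s;
    s.size = size;
    s.base = static_cast<char *>(malloc(2 * size));
    memset(s.base, kFillValue, 2 * size);
    s.user = s.base + size / 2; // leading guard occupies [0, size/2)
    return s;
}
```

The flush path later checks that both guard regions still hold the sentinel before copying the user bytes down to the driver.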
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(
- VkDevice device,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size,
- VkFlags flags,
- void **ppData)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- loader_platform_thread_lock_mutex(&globalLock);
- MT_MEM_OBJ_INFO *pMemObj = get_mem_obj_info(my_data, mem);
- if (pMemObj) {
- pMemObj->valid = true;
- if ((memProps.memoryTypes[pMemObj->allocInfo.memoryTypeIndex].propertyFlags &
- VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
- (uint64_t) mem, __LINE__, MEMTRACK_INVALID_STATE, "MEM",
- "Mapping Memory without VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set: mem obj %#" PRIxLEAST64, (uint64_t) mem);
- }
- }
- skipCall |= validateMemRange(my_data, mem, offset, size);
- storeMemRanges(my_data, mem, offset, size);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->MapMemory(device, mem, offset, size, flags, ppData);
- initializeAndTrackMemory(my_data, mem, size, ppData);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUnmapMemory(
- VkDevice device,
- VkDeviceMemory mem)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
-
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall |= deleteMemRanges(my_data, mem);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->UnmapMemory(device, mem);
- }
-}
-
-VkBool32
-validateMemoryIsMapped(
- layer_data *my_data,
- uint32_t memRangeCount,
- const VkMappedMemoryRange *pMemRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- for (uint32_t i = 0; i < memRangeCount; ++i) {
- auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory);
- if (mem_element != my_data->memObjMap.end()) {
- if (mem_element->second.memRange.offset > pMemRanges[i].offset ||
- (mem_element->second.memRange.offset + mem_element->second.memRange.size) < (pMemRanges[i].offset + pMemRanges[i].size)) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory,
- __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Memory must be mapped before it can be flushed or invalidated.");
- }
- }
- }
- return skipCall;
-}
-
-VkBool32
-validateAndCopyNoncoherentMemoryToDriver(
- layer_data *my_data,
- uint32_t memRangeCount,
- const VkMappedMemoryRange *pMemRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- for (uint32_t i = 0; i < memRangeCount; ++i) {
- auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory);
- if (mem_element != my_data->memObjMap.end()) {
- if (mem_element->second.pData) {
- VkDeviceSize size = mem_element->second.memRange.size;
- VkDeviceSize half_size = (size / 2);
- char* data = static_cast<char*>(mem_element->second.pData);
-                for (VkDeviceSize j = 0; j < half_size; ++j) {
- if (data[j] != NoncoherentMemoryFillValue) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory,
- __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64, (uint64_t)pMemRanges[i].memory);
- }
- }
-                for (VkDeviceSize j = size + half_size; j < 2 * size; ++j) {
- if (data[j] != NoncoherentMemoryFillValue) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory,
- __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64, (uint64_t)pMemRanges[i].memory);
- }
- }
- memcpy(mem_element->second.pDriverData, static_cast<void*>(data + (size_t)(half_size)), (size_t)(size));
- }
- }
- }
- return skipCall;
-}
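
The flush path scans both guard regions of the shadow buffer, the leading [0, size/2) bytes and the trailing [size + size/2, 2*size) bytes, for any byte that no longer holds the sentinel, which indicates the application wrote outside its mapped range. The core check restated standalone (the function name and signature are illustrative):

```cpp
#include <cstddef>

// Standalone restatement of the overflow scan above: `data` is the base
// of a 2*size shadow allocation (illustrative signature). The leading
// guard is [0, size/2) and the trailing guard is [size + size/2, 2*size);
// any non-sentinel byte there means the application wrote out of bounds.
bool guardsIntact(const char *data, size_t size, char fill) {
    size_t half = size / 2;
    for (size_t j = 0; j < half; ++j)
        if (data[j] != fill) return false;
    for (size_t j = size + half; j < 2 * size; ++j)
        if (data[j] != fill) return false;
    return true;
}
```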
-
-VK_LAYER_EXPORT VkResult VKAPI_CALL vkFlushMappedMemoryRanges(
- VkDevice device,
- uint32_t memRangeCount,
- const VkMappedMemoryRange *pMemRanges)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall |= validateAndCopyNoncoherentMemoryToDriver(my_data, memRangeCount, pMemRanges);
- skipCall |= validateMemoryIsMapped(my_data, memRangeCount, pMemRanges);
- loader_platform_thread_unlock_mutex(&globalLock);
-    if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->FlushMappedMemoryRanges(device, memRangeCount, pMemRanges);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges(
- VkDevice device,
- uint32_t memRangeCount,
- const VkMappedMemoryRange *pMemRanges)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall |= validateMemoryIsMapped(my_data, memRangeCount, pMemRanges);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->InvalidateMappedMemoryRanges(device, memRangeCount, pMemRanges);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFence(
- VkDevice device,
- VkFence fence,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- delete_fence_info(my_data, fence);
- auto item = my_data->fenceMap.find(fence);
- if (item != my_data->fenceMap.end()) {
- my_data->fenceMap.erase(item);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- my_data->device_dispatch_table->DestroyFence(device, fence, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer(
- VkDevice device,
- VkBuffer buffer,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- auto item = my_data->bufferMap.find((uint64_t)buffer);
- if (item != my_data->bufferMap.end()) {
- skipCall = clear_object_binding(my_data, device, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
- my_data->bufferMap.erase(item);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->DestroyBuffer(device, buffer, pAllocator);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(
- VkDevice device,
- VkImage image,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- auto item = my_data->imageMap.find((uint64_t)image);
- if (item != my_data->imageMap.end()) {
- skipCall = clear_object_binding(my_data, device, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
- my_data->imageMap.erase(item);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->DestroyImage(device, image, pAllocator);
- }
-}
-
-VkBool32 print_memory_range_error(layer_data *my_data, const uint64_t object_handle, const uint64_t other_handle, VkDebugReportObjectTypeEXT object_type) {
- if (object_type == VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT) {
-        return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, __LINE__, MEMTRACK_INVALID_ALIASING, "MEM",
-                       "Buffer %" PRIx64 " is aliased with image %" PRIx64, object_handle, other_handle);
- } else {
-        return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, __LINE__, MEMTRACK_INVALID_ALIASING, "MEM",
-                       "Image %" PRIx64 " is aliased with buffer %" PRIx64, object_handle, other_handle);
- }
-}
-
-VkBool32 validate_memory_range(layer_data *my_data, const unordered_map<VkDeviceMemory, vector<MEMORY_RANGE>>& memory, const MEMORY_RANGE& new_range, VkDebugReportObjectTypeEXT object_type) {
- VkBool32 skip_call = false;
- if (!memory.count(new_range.memory)) return false;
- const vector<MEMORY_RANGE>& ranges = memory.at(new_range.memory);
-    for (const auto &range : ranges) {
- if ((range.end & ~(my_data->properties.limits.bufferImageGranularity - 1)) < new_range.start) continue;
- if (range.start > (new_range.end & ~(my_data->properties.limits.bufferImageGranularity - 1))) continue;
- skip_call |= print_memory_range_error(my_data, new_range.handle, range.handle, object_type);
- }
- return skip_call;
-}
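
`validate_memory_range` treats two bound ranges as aliased unless they remain disjoint after their end points are rounded down to `bufferImageGranularity` (a power of two, as the Vulkan spec requires). A standalone version of that predicate (the `Range` struct and function name are illustrative):

```cpp
#include <cstdint>

// Illustrative version of the aliasing predicate above. Two bound ranges
// conflict unless they stay disjoint after rounding their end points down
// to bufferImageGranularity (a power of two, as Vulkan requires).
struct Range { uint64_t start, end; }; // inclusive end, like MEMORY_RANGE

bool rangesAlias(const Range &a, const Range &b, uint64_t granularity) {
    uint64_t mask = ~(granularity - 1);
    if ((a.end & mask) < b.start) return false; // a ends before b begins
    if (a.start > (b.end & mask)) return false; // a begins after b ends
    return true;
}
```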
-
-VkBool32 validate_buffer_image_aliasing(
- layer_data *my_data,
- uint64_t handle,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset,
- VkMemoryRequirements memRequirements,
- unordered_map<VkDeviceMemory, vector<MEMORY_RANGE>>& ranges,
- const unordered_map<VkDeviceMemory, vector<MEMORY_RANGE>>& other_ranges,
- VkDebugReportObjectTypeEXT object_type)
-{
- MEMORY_RANGE range;
- range.handle = handle;
- range.memory = mem;
- range.start = memoryOffset;
- range.end = memoryOffset + memRequirements.size - 1;
- ranges[mem].push_back(range);
- return validate_memory_range(my_data, other_ranges, range, object_type);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(
- VkDevice device,
- VkBuffer buffer,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- loader_platform_thread_lock_mutex(&globalLock);
- // Track objects tied to memory
- uint64_t buffer_handle = (uint64_t)(buffer);
- VkBool32 skipCall = set_mem_binding(my_data, device, mem, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "vkBindBufferMemory");
- add_object_binding_info(my_data, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, mem);
- {
- VkMemoryRequirements memRequirements;
-        my_data->device_dispatch_table->GetBufferMemoryRequirements(device, buffer, &memRequirements);
- skipCall |= validate_buffer_image_aliasing(my_data, buffer_handle, mem, memoryOffset, memRequirements, my_data->bufferRanges, my_data->imageRanges, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT);
- }
- print_mem_list(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->BindBufferMemory(device, buffer, mem, memoryOffset);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(
- VkDevice device,
- VkImage image,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- loader_platform_thread_lock_mutex(&globalLock);
- // Track objects tied to memory
- uint64_t image_handle = (uint64_t)(image);
- VkBool32 skipCall = set_mem_binding(my_data, device, mem, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkBindImageMemory");
- add_object_binding_info(my_data, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, mem);
- {
- VkMemoryRequirements memRequirements;
-        my_data->device_dispatch_table->GetImageMemoryRequirements(device, image, &memRequirements);
- skipCall |= validate_buffer_image_aliasing(my_data, image_handle, mem, memoryOffset, memRequirements, my_data->imageRanges, my_data->bufferRanges, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
- }
- print_mem_list(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->BindImageMemory(device, image, mem, memoryOffset);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements(
- VkDevice device,
- VkBuffer buffer,
- VkMemoryRequirements *pMemoryRequirements)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- // TODO : What to track here?
- // Could potentially save returned mem requirements and validate values passed into BindBufferMemory
- my_data->device_dispatch_table->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements(
- VkDevice device,
- VkImage image,
- VkMemoryRequirements *pMemoryRequirements)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- // TODO : What to track here?
- // Could potentially save returned mem requirements and validate values passed into BindImageMemory
- my_data->device_dispatch_table->GetImageMemoryRequirements(device, image, pMemoryRequirements);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo *pBindInfo,
- VkFence fence)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
-
- loader_platform_thread_lock_mutex(&globalLock);
-
- for (uint32_t i = 0; i < bindInfoCount; i++) {
- // Track objects tied to memory
- for (uint32_t j = 0; j < pBindInfo[i].bufferBindCount; j++) {
- for (uint32_t k = 0; k < pBindInfo[i].pBufferBinds[j].bindCount; k++) {
- if (set_sparse_mem_binding(my_data, queue,
- pBindInfo[i].pBufferBinds[j].pBinds[k].memory,
- (uint64_t) pBindInfo[i].pBufferBinds[j].buffer,
- VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "vkQueueBindSparse"))
- skipCall = VK_TRUE;
- }
- }
- for (uint32_t j = 0; j < pBindInfo[i].imageOpaqueBindCount; j++) {
- for (uint32_t k = 0; k < pBindInfo[i].pImageOpaqueBinds[j].bindCount; k++) {
- if (set_sparse_mem_binding(my_data, queue,
- pBindInfo[i].pImageOpaqueBinds[j].pBinds[k].memory,
- (uint64_t) pBindInfo[i].pImageOpaqueBinds[j].image,
- VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkQueueBindSparse"))
- skipCall = VK_TRUE;
- }
- }
- for (uint32_t j = 0; j < pBindInfo[i].imageBindCount; j++) {
- for (uint32_t k = 0; k < pBindInfo[i].pImageBinds[j].bindCount; k++) {
- if (set_sparse_mem_binding(my_data, queue,
- pBindInfo[i].pImageBinds[j].pBinds[k].memory,
- (uint64_t) pBindInfo[i].pImageBinds[j].image,
- VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkQueueBindSparse"))
- skipCall = VK_TRUE;
- }
- }
- }
-
- print_mem_list(my_data, queue);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFence(
- VkDevice device,
- const VkFenceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkFence *pFence)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateFence(device, pCreateInfo, pAllocator, pFence);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- MT_FENCE_INFO* pFI = &my_data->fenceMap[*pFence];
- memset(pFI, 0, sizeof(MT_FENCE_INFO));
- memcpy(&(pFI->createInfo), pCreateInfo, sizeof(VkFenceCreateInfo));
- if (pCreateInfo->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
- pFI->firstTimeFlag = VK_TRUE;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(
- VkDevice device,
- uint32_t fenceCount,
- const VkFence *pFences)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
-
- loader_platform_thread_lock_mutex(&globalLock);
- // Reset fence state in fenceCreateInfo structure
- for (uint32_t i = 0; i < fenceCount; i++) {
- auto fence_item = my_data->fenceMap.find(pFences[i]);
- if (fence_item != my_data->fenceMap.end()) {
- // Validate fences in SIGNALED state
- if (!(fence_item->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT)) {
- // TODO: I don't see a Valid Usage section for ResetFences. This behavior should be documented there.
-                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT, (uint64_t) pFences[i], __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
-                                    "Fence %#" PRIxLEAST64 " submitted to vkResetFences in UNSIGNALED STATE", (uint64_t) pFences[i]);
- }
- else {
- fence_item->second.createInfo.flags =
- static_cast<VkFenceCreateFlags>(fence_item->second.createInfo.flags & ~VK_FENCE_CREATE_SIGNALED_BIT);
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->ResetFences(device, fenceCount, pFences);
- }
- return result;
-}
-
-static inline VkBool32
-verifyFenceStatus(
- VkDevice device,
- VkFence fence,
- const char *apiCall)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- auto pFenceInfo = my_data->fenceMap.find(fence);
- if (pFenceInfo != my_data->fenceMap.end()) {
- if (pFenceInfo->second.firstTimeFlag != VK_TRUE) {
-            if (pFenceInfo->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT, (uint64_t) fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
- "%s specified fence %#" PRIxLEAST64 " already in SIGNALED state.", apiCall, (uint64_t) fence);
- }
-            if (!pFenceInfo->second.queue && !pFenceInfo->second.swapchain) {
-                // Checking status of an unsubmitted fence
-                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
-                                    reinterpret_cast<uint64_t &>(fence), __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
-                                    "%s called for fence %#" PRIxLEAST64 " which has not been submitted on a Queue or during acquire next image.",
-                                    apiCall, reinterpret_cast<uint64_t &>(fence));
-            }
- } else {
- pFenceInfo->second.firstTimeFlag = VK_FALSE;
- }
- }
- return skipCall;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(
- VkDevice device,
- VkFence fence)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skipCall = verifyFenceStatus(device, fence, "vkGetFenceStatus");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (skipCall)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = my_data->device_dispatch_table->GetFenceStatus(device, fence);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- update_fence_tracking(my_data, fence);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(
- VkDevice device,
- uint32_t fenceCount,
- const VkFence *pFences,
- VkBool32 waitAll,
- uint64_t timeout)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- // Verify fence status of submitted fences
- loader_platform_thread_lock_mutex(&globalLock);
-    for (uint32_t i = 0; i < fenceCount; i++) {
- skipCall |= verifyFenceStatus(device, pFences[i], "vkWaitForFences");
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (skipCall)
- return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = my_data->device_dispatch_table->WaitForFences(device, fenceCount, pFences, waitAll, timeout);
-
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- if (waitAll || fenceCount == 1) { // Clear all the fences
-            for (uint32_t i = 0; i < fenceCount; i++) {
- update_fence_tracking(my_data, pFences[i]);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(
- VkQueue queue)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkResult result = my_data->device_dispatch_table->QueueWaitIdle(queue);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- retire_queue_fences(my_data, queue);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(
- VkDevice device)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->DeviceWaitIdle(device);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- retire_device_fences(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(
- VkDevice device,
- const VkBufferCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkBuffer *pBuffer)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- add_object_create_info(my_data, (uint64_t)*pBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, pCreateInfo);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(
- VkDevice device,
- const VkImageCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkImage *pImage)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- add_object_create_info(my_data, (uint64_t)*pImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, pCreateInfo);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(
- VkDevice device,
- const VkImageViewCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkImageView *pView)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateImageView(device, pCreateInfo, pAllocator, pView);
- if (result == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- my_data->imageViewMap[*pView].image = pCreateInfo->image;
-        // Validate that image has correct usage flags set
-        validate_image_usage_flags(my_data, device, pCreateInfo->image,
-                    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
-                    VK_FALSE, "vkCreateImageView()", "VK_IMAGE_USAGE_[SAMPLED|STORAGE|COLOR_ATTACHMENT|DEPTH_STENCIL_ATTACHMENT]_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(
- VkDevice device,
- const VkBufferViewCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkBufferView *pView)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateBufferView(device, pCreateInfo, pAllocator, pView);
- if (result == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- // In order to create a valid buffer view, the buffer must have been created with at least one of the
- // following flags: UNIFORM_TEXEL_BUFFER_BIT or STORAGE_TEXEL_BUFFER_BIT
- validate_buffer_usage_flags(my_data, device, pCreateInfo->buffer,
- VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT,
- VK_FALSE, "vkCreateBufferView()", "VK_BUFFER_USAGE_[STORAGE|UNIFORM]_TEXEL_BUFFER_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(
- VkDevice device,
- const VkCommandBufferAllocateInfo *pCreateInfo,
- VkCommandBuffer *pCommandBuffer)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
-
- loader_platform_thread_lock_mutex(&globalLock);
- if (VK_SUCCESS == result) {
- for (uint32_t i = 0; i < pCreateInfo->commandBufferCount; i++) {
- add_cmd_buf_info(my_data, pCreateInfo->commandPool, pCommandBuffer[i]);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- printCBList(my_data, device);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(
- VkDevice device,
- VkCommandPool commandPool,
- uint32_t commandBufferCount,
- const VkCommandBuffer *pCommandBuffers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < commandBufferCount; i++) {
- skipCall |= delete_cmd_buf_info(my_data, commandPool, pCommandBuffers[i]);
- }
- printCBList(my_data, device);
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(
- VkDevice device,
- const VkCommandPoolCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkCommandPool *pCommandPool)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
-
- loader_platform_thread_lock_mutex(&globalLock);
-
- // Add cmd pool to map
- my_data->commandPoolMap[*pCommandPool].createFlags = pCreateInfo->flags;
- loader_platform_thread_unlock_mutex(&globalLock);
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- const VkAllocationCallbacks *pAllocator)
-{
- VkBool32 commandBufferComplete = VK_FALSE;
- VkBool32 skipCall = VK_FALSE;
- // Verify that command buffers in pool are complete (not in-flight)
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- for (auto it = my_data->commandPoolMap[commandPool].pCommandBuffers.begin();
- it != my_data->commandPoolMap[commandPool].pCommandBuffers.end(); it++) {
- commandBufferComplete = VK_FALSE;
-        skipCall |= checkCBCompleted(my_data, *it, &commandBufferComplete);
- if (VK_FALSE == commandBufferComplete) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(*it), __LINE__,
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Destroying Command Pool 0x%" PRIxLEAST64 " before "
- "its command buffer (0x%" PRIxLEAST64 ") has completed.", (uint64_t)(commandPool),
- reinterpret_cast<uint64_t>(*it));
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
- }
-
- loader_platform_thread_lock_mutex(&globalLock);
- auto item = my_data->commandPoolMap[commandPool].pCommandBuffers.begin();
- // Remove command buffers from command buffer map
- while (item != my_data->commandPoolMap[commandPool].pCommandBuffers.end()) {
- auto del_item = item++;
- delete_cmd_buf_info(my_data, commandPool, *del_item);
- }
- my_data->commandPoolMap.erase(commandPool);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- VkCommandPoolResetFlags flags)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 commandBufferComplete = VK_FALSE;
- VkBool32 skipCall = VK_FALSE;
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
-
- loader_platform_thread_lock_mutex(&globalLock);
- auto it = my_data->commandPoolMap[commandPool].pCommandBuffers.begin();
-    // Verify that CBs in pool are complete (not in-flight)
-    while (it != my_data->commandPoolMap[commandPool].pCommandBuffers.end()) {
-        skipCall |= checkCBCompleted(my_data, (*it), &commandBufferComplete);
- if (VK_FALSE == commandBufferComplete) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(*it), __LINE__,
-                                MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Resetting CB %p before it has completed. You must check CB "
-                                "flag before calling vkResetCommandPool().", (*it));
- } else {
- // Clear memory references at this point.
- skipCall |= clear_cmd_buf_and_mem_references(my_data, (*it));
- }
- ++it;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->ResetCommandPool(device, commandPool, flags);
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(
- VkCommandBuffer commandBuffer,
- const VkCommandBufferBeginInfo *pBeginInfo)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- VkBool32 commandBufferComplete = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
-
- // This implicitly resets the Cmd Buffer so make sure any fence is done and then clear memory references
- skipCall = checkCBCompleted(my_data, commandBuffer, &commandBufferComplete);
-
- if (VK_FALSE == commandBufferComplete) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Calling vkBeginCommandBuffer() on active CB %p before it has completed. "
- "You must check CB flag before this call.", commandBuffer);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo);
- }
- loader_platform_thread_lock_mutex(&globalLock);
- clear_cmd_buf_and_mem_references(my_data, commandBuffer);
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(
- VkCommandBuffer commandBuffer)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- // TODO : Anything to do here?
- VkResult result = my_data->device_dispatch_table->EndCommandBuffer(commandBuffer);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(
- VkCommandBuffer commandBuffer,
- VkCommandBufferResetFlags flags)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- VkBool32 commandBufferComplete = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
-
- // Verify that CB is complete (not in-flight)
- skipCall = checkCBCompleted(my_data, commandBuffer, &commandBufferComplete);
- if (VK_FALSE == commandBufferComplete) {
- skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Resetting CB %p before it has completed. You must check CB "
- "flag before calling vkResetCommandBuffer().", commandBuffer);
- }
-    // Clear memory references at this point.
- skipCall |= clear_cmd_buf_and_mem_references(my_data, commandBuffer);
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->ResetCommandBuffer(commandBuffer, flags);
- }
- return result;
-}
-
-// TODO : For any vkCmdBind* calls that include an object which has mem bound to it,
-// need to account for that mem now having binding to given commandBuffer
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipeline pipeline)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
-#if 0 // FIXME: NEED TO FIX THE FOLLOWING CODE AND REMOVE THIS #if 0
- // TODO : If memory bound to pipeline, then need to tie that mem to commandBuffer
- if (getPipeline(pipeline)) {
- MT_CB_INFO *pCBInfo = get_cmd_buf_info(my_data, commandBuffer);
- if (pCBInfo) {
- pCBInfo->pipelines[pipelineBindPoint] = pipeline;
- }
- }
- else {
-        char str[1024];
-        sprintf(str, "Attempt to bind Pipeline %p that doesn't exist!", (void*)pipeline);
- layerCbMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, pipeline, __LINE__, MEMTRACK_INVALID_OBJECT, (char *) "DS", (char *) str);
- }
-#endif
- my_data->device_dispatch_table->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipelineLayout layout,
- uint32_t firstSet,
- uint32_t setCount,
- const VkDescriptorSet *pDescriptorSets,
- uint32_t dynamicOffsetCount,
- const uint32_t *pDynamicOffsets)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- auto cb_data = my_data->cbMap.find(commandBuffer);
- if (cb_data != my_data->cbMap.end()) {
- std::vector<VkDescriptorSet>& activeDescriptorSets = cb_data->second.activeDescriptorSets;
- if (activeDescriptorSets.size() < (setCount + firstSet)) {
- activeDescriptorSets.resize(setCount + firstSet);
- }
- for (uint32_t i = 0; i < setCount; ++i) {
- activeDescriptorSets[i + firstSet] = pDescriptorSets[i];
- }
- }
- // TODO : Somewhere need to verify that all textures referenced by shaders in DS are in some type of *SHADER_READ* state
- my_data->device_dispatch_table->CmdBindDescriptorSets(
- commandBuffer, pipelineBindPoint, layout, firstSet, setCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(
- VkCommandBuffer commandBuffer,
- uint32_t firstBinding,
- uint32_t bindingCount,
- const VkBuffer *pBuffers,
- const VkDeviceSize *pOffsets)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkBool32 skip_call = false;
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < bindingCount; ++i) {
- VkDeviceMemory mem;
- skip_call |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)(pBuffers[i]),
- VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- auto cb_data = my_data->cbMap.find(commandBuffer);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdBindVertexBuffers()"); };
- cb_data->second.validate_functions.push_back(function);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- // TODO : Somewhere need to verify that VBs have correct usage state flagged
- if (!skip_call)
- my_data->device_dispatch_table->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- VkIndexType indexType)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skip_call = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)(buffer), VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- auto cb_data = my_data->cbMap.find(commandBuffer);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdBindIndexBuffer()"); };
- cb_data->second.validate_functions.push_back(function);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- // TODO : Somewhere need to verify that IBs have correct usage state flagged
- if (!skip_call)
- my_data->device_dispatch_table->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(
- VkDevice device,
- uint32_t descriptorWriteCount,
- const VkWriteDescriptorSet* pDescriptorWrites,
- uint32_t descriptorCopyCount,
- const VkCopyDescriptorSet* pDescriptorCopies)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- for (uint32_t i = 0; i < descriptorWriteCount; ++i) {
- if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE) {
- my_data->descriptorSetMap[pDescriptorWrites[i].dstSet].images.push_back(pDescriptorWrites[i].pImageInfo->imageView);
- } else if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER ) {
- // TODO: Handle texel buffer writes
- } else if (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
- pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
- my_data->descriptorSetMap[pDescriptorWrites[i].dstSet].buffers.push_back(pDescriptorWrites[i].pBufferInfo->buffer);
- }
- }
- my_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
-}
-
-bool markStoreImagesAndBuffersAsWritten(
- VkCommandBuffer commandBuffer)
-{
- bool skip_call = false;
- loader_platform_thread_lock_mutex(&globalLock);
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- auto cb_data = my_data->cbMap.find(commandBuffer);
-    if (cb_data == my_data->cbMap.end()) {
-        // Release globalLock before the early return to avoid leaving it held
-        loader_platform_thread_unlock_mutex(&globalLock);
-        return skip_call;
-    }
- std::vector<VkDescriptorSet>& activeDescriptorSets = cb_data->second.activeDescriptorSets;
- for (auto descriptorSet : activeDescriptorSets) {
- auto ds_data = my_data->descriptorSetMap.find(descriptorSet);
- if (ds_data == my_data->descriptorSetMap.end()) continue;
-        const std::vector<VkImageView> &images = ds_data->second.images;
-        const std::vector<VkBuffer> &buffers = ds_data->second.buffers;
- for (auto imageView : images) {
- auto iv_data = my_data->imageViewMap.find(imageView);
- if (iv_data == my_data->imageViewMap.end()) continue;
- VkImage image = iv_data->second.image;
- VkDeviceMemory mem;
-            skip_call |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- for (auto buffer : buffers) {
- VkDeviceMemory mem;
- skip_call |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return skip_call;
-}
-
-VKAPI_ATTR void VKAPI_CALL vkCmdDraw(
- VkCommandBuffer commandBuffer,
- uint32_t vertexCount,
- uint32_t instanceCount,
- uint32_t firstVertex,
- uint32_t firstInstance)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- bool skip_call = markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (!skip_call)
- my_data->device_dispatch_table->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
-}
-
-VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(
- VkCommandBuffer commandBuffer,
- uint32_t indexCount,
- uint32_t instanceCount,
- uint32_t firstIndex,
- int32_t vertexOffset,
- uint32_t firstInstance)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- bool skip_call = markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (!skip_call)
- my_data->device_dispatch_table->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdDrawIndirect");
-    loader_platform_thread_unlock_mutex(&globalLock);
-    // markStoreImagesAndBuffersAsWritten acquires globalLock itself, so call it only after unlocking
-    skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdDrawIndexedIndirect");
-    loader_platform_thread_unlock_mutex(&globalLock);
-    // markStoreImagesAndBuffersAsWritten acquires globalLock itself, so call it only after unlocking
-    skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
- }
-}
-
-VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(
- VkCommandBuffer commandBuffer,
- uint32_t x,
- uint32_t y,
- uint32_t z)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- bool skip_call = markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (!skip_call)
- my_data->device_dispatch_table->CmdDispatch(commandBuffer, x, y, z);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdDispatchIndirect");
-    loader_platform_thread_unlock_mutex(&globalLock);
-    // markStoreImagesAndBuffersAsWritten acquires globalLock itself, so call it only after unlocking
-    skipCall |= markStoreImagesAndBuffersAsWritten(commandBuffer);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdDispatchIndirect(commandBuffer, buffer, offset);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkBuffer dstBuffer,
- uint32_t regionCount,
- const VkBufferCopy *pRegions)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdCopyBuffer()"); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyBuffer");
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyBuffer");
- // Validate that SRC & DST buffers have correct usage flags set
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true, "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true, "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize destStride,
- VkQueryResultFlags flags)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyQueryPoolResults");
- // Validate that DST buffer has correct usage flags set
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true, "vkCmdCopyQueryPoolResults()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, destStride, flags);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageCopy *pRegions)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- // Validate that src & dst images have correct usage flags set
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdCopyImage()", srcImage); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyImage");
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, dstImage); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyImage");
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true, "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true, "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdCopyImage(
- commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageBlit *pRegions,
- VkFilter filter)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- // Validate that src & dst images have correct usage flags set
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdBlitImage()", srcImage); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdBlitImage");
-    skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, dstImage); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdBlitImage");
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true, "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true, "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdBlitImage(
- commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkBufferImageCopy *pRegions)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, dstImage); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdCopyBufferToImage()"); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
- // Validate that src buff & dst image have correct usage flags set
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true, "vkCmdCopyBufferToImage()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true, "vkCmdCopyBufferToImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdCopyBufferToImage(
- commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkBuffer dstBuffer,
- uint32_t regionCount,
- const VkBufferImageCopy *pRegions)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdCopyImageToBuffer()", srcImage); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
- // Validate that dst buff & src image have correct usage flags set
- skipCall |= validate_image_usage_flags(my_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true, "vkCmdCopyImageToBuffer()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true, "vkCmdCopyImageToBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdCopyImageToBuffer(
- commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize dataSize,
- const uint32_t *pData)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdUpdateBuffer");
- // Validate that dst buff has correct usage flags set
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true, "vkCmdUpdateBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize size,
- uint32_t data)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdFillBuffer");
- // Validate that dst buff has correct usage flags set
- skipCall |= validate_buffer_usage_flags(my_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true, "vkCmdFillBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearColorValue *pColor,
- uint32_t rangeCount,
- const VkImageSubresourceRange *pRanges)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdClearColorImage");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearDepthStencilValue *pDepthStencil,
- uint32_t rangeCount,
- const VkImageSubresourceRange *pRanges)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
- VkDeviceMemory mem;
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdClearDepthStencilImage");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdClearDepthStencilImage(
- commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageResolve *pRegions)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- auto cb_data = my_data->cbMap.find(commandBuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- VkDeviceMemory mem;
- skipCall = get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, mem, "vkCmdResolveImage()", srcImage); };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdResolveImage");
- skipCall |= get_mem_binding_from_object(my_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, mem, true, dstImage); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- skipCall |= update_cmd_buf_and_mem_references(my_data, commandBuffer, mem, "vkCmdResolveImage");
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->CmdResolveImage(
- commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
- }
-}
-
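The fill/clear/resolve entry points removed above all follow the same deferred-validation pattern: instead of checking memory validity when the command is recorded, the layer captures a `std::function<VkBool32()>` closure per command in `cb_data->second.validate_functions` and replays the whole list later (at queue-submit time). A minimal self-contained sketch of that pattern, with illustrative stand-in types (`CommandBufferState`, `MemTracker`, integer handles) rather than the layer's real Vulkan-typed structures:

```cpp
#include <cassert>
#include <functional>
#include <unordered_map>
#include <vector>

// Illustrative stand-in for the layer's per-command-buffer bookkeeping.
struct CommandBufferState {
    std::vector<std::function<bool()>> validate_functions;  // replayed at submit
};

struct MemTracker {
    std::unordered_map<int, bool> memory_valid;  // mem handle -> has defined contents

    // Recording a fill/clear marks the destination valid once the GPU runs it,
    // so the side effect is deferred into the closure.
    void record_clear(CommandBufferState &cb, int mem) {
        cb.validate_functions.push_back([this, mem]() {
            memory_valid[mem] = true;  // defined by the clear
            return false;              // never an error by itself
        });
    }

    // Recording a read (e.g. a resolve source) defers the validity check too.
    void record_read(CommandBufferState &cb, int mem) {
        cb.validate_functions.push_back([this, mem]() {
            return !memory_valid[mem];  // error if contents were never written
        });
    }

    // The submit path runs every captured closure in record order.
    bool submit(CommandBufferState &cb) {
        bool skip = false;
        for (auto &fn : cb.validate_functions) skip |= fn();
        return skip;
    }
};
```

The deferral matters because validity depends on ordering within the buffer: a read of uninitialized memory is only an error if no earlier command in the same buffer wrote it first, which is unknowable at record time.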
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot,
- VkFlags flags)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- my_data->device_dispatch_table->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- my_data->device_dispatch_table->CmdEndQuery(commandBuffer, queryPool, slot);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- my_data->device_dispatch_table->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- VkResult res = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
- if (res == VK_SUCCESS) {
- loader_platform_thread_lock_mutex(&globalLock);
- res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- return res;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT msgCallback,
- const VkAllocationCallbacks* pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
- pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
- loader_platform_thread_lock_mutex(&globalLock);
- layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
- VkDevice device,
- const VkSwapchainCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSwapchainKHR *pSwapchain)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
-
- if (VK_SUCCESS == result) {
- loader_platform_thread_lock_mutex(&globalLock);
- add_swap_chain_info(my_data, *pSwapchain, pCreateInfo);
- loader_platform_thread_unlock_mutex(&globalLock);
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkBool32 skipCall = VK_FALSE;
- loader_platform_thread_lock_mutex(&globalLock);
- if (my_data->swapchainMap.find(swapchain) != my_data->swapchainMap.end()) {
- MT_SWAP_CHAIN_INFO* pInfo = my_data->swapchainMap[swapchain];
-
- if (pInfo->images.size() > 0) {
- for (auto it = pInfo->images.begin(); it != pInfo->images.end(); it++) {
- skipCall = clear_object_binding(my_data, device, (uint64_t)*it, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT);
- auto image_item = my_data->imageMap.find((uint64_t)*it);
- if (image_item != my_data->imageMap.end())
- my_data->imageMap.erase(image_item);
- }
- }
- delete pInfo;
- my_data->swapchainMap.erase(swapchain);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- my_data->device_dispatch_table->DestroySwapchainKHR(device, swapchain, pAllocator);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint32_t *pCount,
- VkImage *pSwapchainImages)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);
-
- loader_platform_thread_lock_mutex(&globalLock);
- if (result == VK_SUCCESS && pSwapchainImages != NULL) {
- const size_t count = *pCount;
- MT_SWAP_CHAIN_INFO *pInfo = my_data->swapchainMap[swapchain];
-
- if (pInfo->images.empty()) {
- pInfo->images.resize(count);
- memcpy(&pInfo->images[0], pSwapchainImages, sizeof(pInfo->images[0]) * count);
-
- if (pInfo->images.size() > 0) {
- for (std::vector<VkImage>::const_iterator it = pInfo->images.begin();
- it != pInfo->images.end(); it++) {
- // Add image object binding, then insert the new Mem Object and then bind it to created image
- add_object_create_info(my_data, (uint64_t)*it, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, &pInfo->createInfo);
- }
- }
- } else {
- const size_t count = *pCount;
- MT_SWAP_CHAIN_INFO *pInfo = my_data->swapchainMap[swapchain];
- const VkBool32 mismatch = (pInfo->images.size() != count ||
- memcmp(&pInfo->images[0], pSwapchainImages, sizeof(pInfo->images[0]) * count));
-
- if (mismatch) {
- // TODO: Verify against Valid Usage section of extension
- log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, (uint64_t) swapchain, __LINE__, MEMTRACK_NONE, "SWAP_CHAIN",
- "vkGetSwapchainInfoKHR(%" PRIu64 ", VK_SWAP_CHAIN_INFO_TYPE_PERSISTENT_IMAGES_KHR) returned mismatching data", (uint64_t)(swapchain));
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint64_t timeout,
- VkSemaphore semaphore,
- VkFence fence,
- uint32_t *pImageIndex)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
-
- loader_platform_thread_lock_mutex(&globalLock);
- if (my_data->semaphoreMap.find(semaphore) != my_data->semaphoreMap.end()) {
- if (my_data->semaphoreMap[semaphore] != MEMTRACK_SEMAPHORE_STATE_UNSET) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)semaphore,
- __LINE__, MEMTRACK_NONE, "SEMAPHORE",
- "vkAcquireNextImageKHR: Semaphore must not be currently signaled or in a wait state");
- }
- my_data->semaphoreMap[semaphore] = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
- }
- auto fence_data = my_data->fenceMap.find(fence);
- if (fence_data != my_data->fenceMap.end()) {
- fence_data->second.swapchain = swapchain;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (VK_FALSE == skipCall) {
- result = my_data->device_dispatch_table->AcquireNextImageKHR(device,
- swapchain, timeout, semaphore, fence, pImageIndex);
- }
- return result;
-}
-
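The `vkAcquireNextImageKHR` hook above, together with the wait-semaphore loop in `vkQueuePresentKHR`, implements a small two-state machine per semaphore (`MEMTRACK_SEMAPHORE_STATE_UNSET` / `_SIGNALLED`): acquiring with a semaphore that is not in the unset state is an error, and presenting resets the waited semaphores. A self-contained model of that bookkeeping, using integer handles and simplified names as stand-ins for the layer's `semaphoreMap`:

```cpp
#include <cassert>
#include <unordered_map>

// Illustrative model of the UNSET/SIGNALLED semaphore tracking used by the
// acquire/present hooks; not the layer's actual types.
enum class SemState { Unset, Signalled };

struct SemTracker {
    std::unordered_map<int, SemState> sems;  // semaphore handle -> state

    // Acquire flags an error if the semaphore is already signalled or pending,
    // then marks it signalled-on-completion.
    bool acquire(int sem) {
        bool error = (sems.count(sem) && sems[sem] != SemState::Unset);
        sems[sem] = SemState::Signalled;
        return error;
    }

    // A present that waits on the semaphore returns it to the reusable state.
    void present_wait(int sem) {
        if (sems.count(sem)) sems[sem] = SemState::Unset;
    }
};
```

This is why reusing an acquire semaphore without an intervening wait is caught here: the second acquire sees the semaphore still in the signalled state.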
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
- VkQueue queue,
- const VkPresentInfoKHR* pPresentInfo)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- VkBool32 skip_call = false;
- VkDeviceMemory mem;
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < pPresentInfo->swapchainCount; ++i) {
- MT_SWAP_CHAIN_INFO *pInfo = my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
- VkImage image = pInfo->images[pPresentInfo->pImageIndices[i]];
- skip_call |= get_mem_binding_from_object(my_data, queue, (uint64_t)(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
- skip_call |= validate_memory_is_valid(my_data, mem, "vkQueuePresentKHR()", image);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- if (!skip_call) {
- result = my_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo);
- }
-
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < pPresentInfo->waitSemaphoreCount; i++) {
- VkSemaphore sem = pPresentInfo->pWaitSemaphores[i];
- if (my_data->semaphoreMap.find(sem) != my_data->semaphoreMap.end()) {
- my_data->semaphoreMap[sem] = MEMTRACK_SEMAPHORE_STATE_UNSET;
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(
- VkDevice device,
- const VkSemaphoreCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSemaphore *pSemaphore)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
- loader_platform_thread_lock_mutex(&globalLock);
- if (*pSemaphore != VK_NULL_HANDLE) {
- my_data->semaphoreMap[*pSemaphore] = MEMTRACK_SEMAPHORE_STATE_UNSET;
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySemaphore(
- VkDevice device,
- VkSemaphore semaphore,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- auto item = my_data->semaphoreMap.find(semaphore);
- if (item != my_data->semaphoreMap.end()) {
- my_data->semaphoreMap.erase(item);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- my_data->device_dispatch_table->DestroySemaphore(device, semaphore, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(
- VkDevice device,
- const VkFramebufferCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkFramebuffer* pFramebuffer)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
- VkImageView view = pCreateInfo->pAttachments[i];
- auto view_data = my_data->imageViewMap.find(view);
- if (view_data == my_data->imageViewMap.end()) {
- continue;
- }
- MT_FB_ATTACHMENT_INFO fb_info;
- get_mem_binding_from_object(my_data, device, (uint64_t)(view_data->second.image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &fb_info.mem);
- fb_info.image = view_data->second.image;
- my_data->fbMap[*pFramebuffer].attachments.push_back(fb_info);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- return result;
-}
-
-VKAPI_ATTR void VKAPI_CALL vkDestroyFramebuffer(
- VkDevice device,
- VkFramebuffer framebuffer,
- const VkAllocationCallbacks* pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- loader_platform_thread_lock_mutex(&globalLock);
- auto item = my_data->fbMap.find(framebuffer);
- if (item != my_data->fbMap.end()) {
- my_data->fbMap.erase(framebuffer);
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- my_data->device_dispatch_table->DestroyFramebuffer(device, framebuffer, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(
- VkDevice device,
- const VkRenderPassCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkRenderPass* pRenderPass)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkResult result = my_data->device_dispatch_table->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
- loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
- VkAttachmentDescription desc = pCreateInfo->pAttachments[i];
- MT_PASS_ATTACHMENT_INFO pass_info;
- pass_info.load_op = desc.loadOp;
- pass_info.store_op = desc.storeOp;
- pass_info.attachment = i;
- my_data->passMap[*pRenderPass].attachments.push_back(pass_info);
- }
- //TODO: Maybe fill list and then copy instead of locking
- std::unordered_map<uint32_t, bool>& attachment_first_read = my_data->passMap[*pRenderPass].attachment_first_read;
- std::unordered_map<uint32_t, VkImageLayout>& attachment_first_layout = my_data->passMap[*pRenderPass].attachment_first_layout;
- for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
- const VkSubpassDescription& subpass = pCreateInfo->pSubpasses[i];
- for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
- uint32_t attachment = subpass.pInputAttachments[j].attachment;
- if (attachment_first_read.count(attachment)) continue;
- attachment_first_read.insert(std::make_pair(attachment, true));
- attachment_first_layout.insert(std::make_pair(attachment, subpass.pInputAttachments[j].layout));
- }
- for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
- uint32_t attachment = subpass.pColorAttachments[j].attachment;
- if (attachment_first_read.count(attachment)) continue;
- attachment_first_read.insert(std::make_pair(attachment, false));
- attachment_first_layout.insert(std::make_pair(attachment, subpass.pColorAttachments[j].layout));
- }
- if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
- uint32_t attachment = subpass.pDepthStencilAttachment->attachment;
- if (attachment_first_read.count(attachment)) continue;
- attachment_first_read.insert(std::make_pair(attachment, false));
- attachment_first_layout.insert(std::make_pair(attachment, subpass.pDepthStencilAttachment->layout));
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyRenderPass(
- VkDevice device,
- VkRenderPass renderPass,
- const VkAllocationCallbacks *pAllocator)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- my_data->device_dispatch_table->DestroyRenderPass(device, renderPass, pAllocator);
-
- loader_platform_thread_lock_mutex(&globalLock);
- my_data->passMap.erase(renderPass);
- loader_platform_thread_unlock_mutex(&globalLock);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(
- VkCommandBuffer cmdBuffer,
- const VkRenderPassBeginInfo *pRenderPassBegin,
- VkSubpassContents contents)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- VkBool32 skip_call = false;
- if (pRenderPassBegin) {
- loader_platform_thread_lock_mutex(&globalLock);
- auto pass_data = my_data->passMap.find(pRenderPassBegin->renderPass);
- if (pass_data != my_data->passMap.end()) {
- MT_PASS_INFO& pass_info = pass_data->second;
- pass_info.fb = pRenderPassBegin->framebuffer;
- auto cb_data = my_data->cbMap.find(cmdBuffer);
- for (size_t i = 0; i < pass_info.attachments.size(); ++i) {
- MT_FB_ATTACHMENT_INFO& fb_info = my_data->fbMap[pass_info.fb].attachments[i];
- if (pass_info.attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_CLEAR) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, fb_info.mem, true, fb_info.image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- VkImageLayout& attachment_layout = pass_info.attachment_first_layout[pass_info.attachments[i].attachment];
- if (attachment_layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL ||
- attachment_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
- skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT,
- (uint64_t)(pRenderPassBegin->renderPass), __LINE__, MEMTRACK_INVALID_LAYOUT, "MEM",
- "Cannot clear attachment %d with invalid first layout %d.", pass_info.attachments[i].attachment, attachment_layout);
- }
- } else if (pass_info.attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_DONT_CARE) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, fb_info.mem, false, fb_info.image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- } else if (pass_info.attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_LOAD) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image); };
- cb_data->second.validate_functions.push_back(function);
- }
- }
- if (pass_info.attachment_first_read[pass_info.attachments[i].attachment]) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(my_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image); };
- cb_data->second.validate_functions.push_back(function);
- }
- }
- }
- if (cb_data != my_data->cbMap.end()) {
- cb_data->second.pass = pRenderPassBegin->renderPass;
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- }
- if (!skip_call)
- return my_data->device_dispatch_table->CmdBeginRenderPass(cmdBuffer, pRenderPassBegin, contents);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(
- VkCommandBuffer cmdBuffer)
-{
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
- loader_platform_thread_lock_mutex(&globalLock);
- auto cb_data = my_data->cbMap.find(cmdBuffer);
- if (cb_data != my_data->cbMap.end()) {
- auto pass_data = my_data->passMap.find(cb_data->second.pass);
- if (pass_data != my_data->passMap.end()) {
- MT_PASS_INFO& pass_info = pass_data->second;
- for (size_t i = 0; i < pass_info.attachments.size(); ++i) {
- MT_FB_ATTACHMENT_INFO& fb_info = my_data->fbMap[pass_info.fb].attachments[i];
- if (pass_info.attachments[i].store_op == VK_ATTACHMENT_STORE_OP_STORE) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, fb_info.mem, true, fb_info.image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- } else if (pass_info.attachments[i].store_op == VK_ATTACHMENT_STORE_OP_DONT_CARE) {
- if (cb_data != my_data->cbMap.end()) {
- std::function<VkBool32()> function = [=]() { set_memory_valid(my_data, fb_info.mem, false, fb_info.image); return VK_FALSE; };
- cb_data->second.validate_functions.push_back(function);
- }
- }
- }
- }
- }
- loader_platform_thread_unlock_mutex(&globalLock);
- my_data->device_dispatch_table->CmdEndRenderPass(cmdBuffer);
-}
-
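The begin/end render-pass hooks above map each attachment's load and store op onto the same validity state machine: `LOAD_OP_CLEAR` makes the memory valid, `LOAD_OP_DONT_CARE` invalidates it, `LOAD_OP_LOAD` requires it to already be valid, and at end-of-pass only `STORE_OP_STORE` preserves defined contents. A compact sketch of those transitions (the enum and struct names are illustrative, not Vulkan's):

```cpp
#include <cassert>

// Illustrative mirror of how the render-pass hooks translate attachment
// load/store ops into memory-validity transitions.
enum class LoadOp { Clear, DontCare, Load };
enum class StoreOp { Store, DontCare };

// Validity after vkCmdBeginRenderPass, plus whether the op reads (and so
// requires) the attachment's previous contents.
struct BeginResult { bool valid_after; bool requires_valid_before; };

BeginResult on_begin(LoadOp op) {
    switch (op) {
        case LoadOp::Clear:    return {true,  false};  // clear defines contents
        case LoadOp::DontCare: return {false, false};  // contents become undefined
        case LoadOp::Load:     return {true,  true};   // keeps, and needs, old data
    }
    return {false, false};  // unreachable; silences compiler warnings
}

// Validity after vkCmdEndRenderPass.
bool on_end(StoreOp op) {
    return op == StoreOp::Store;  // DONT_CARE leaves contents undefined
}
```

This also explains the layout check in the begin hook: an attachment cleared while its first-use layout is read-only cannot actually be written, so the layer reports it as an error rather than letting the clear silently no-op.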
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(
- VkDevice dev,
- const char *funcName)
-{
- if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
- if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
- if (!strcmp(funcName, "vkQueueSubmit"))
- return (PFN_vkVoidFunction) vkQueueSubmit;
- if (!strcmp(funcName, "vkAllocateMemory"))
- return (PFN_vkVoidFunction) vkAllocateMemory;
- if (!strcmp(funcName, "vkFreeMemory"))
- return (PFN_vkVoidFunction) vkFreeMemory;
- if (!strcmp(funcName, "vkMapMemory"))
- return (PFN_vkVoidFunction) vkMapMemory;
- if (!strcmp(funcName, "vkUnmapMemory"))
- return (PFN_vkVoidFunction) vkUnmapMemory;
- if (!strcmp(funcName, "vkFlushMappedMemoryRanges"))
- return (PFN_vkVoidFunction) vkFlushMappedMemoryRanges;
- if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges"))
- return (PFN_vkVoidFunction) vkInvalidateMappedMemoryRanges;
- if (!strcmp(funcName, "vkDestroyFence"))
- return (PFN_vkVoidFunction) vkDestroyFence;
- if (!strcmp(funcName, "vkDestroyBuffer"))
- return (PFN_vkVoidFunction) vkDestroyBuffer;
- if (!strcmp(funcName, "vkDestroyImage"))
- return (PFN_vkVoidFunction) vkDestroyImage;
- if (!strcmp(funcName, "vkBindBufferMemory"))
- return (PFN_vkVoidFunction) vkBindBufferMemory;
- if (!strcmp(funcName, "vkBindImageMemory"))
- return (PFN_vkVoidFunction) vkBindImageMemory;
- if (!strcmp(funcName, "vkGetBufferMemoryRequirements"))
- return (PFN_vkVoidFunction) vkGetBufferMemoryRequirements;
- if (!strcmp(funcName, "vkGetImageMemoryRequirements"))
- return (PFN_vkVoidFunction) vkGetImageMemoryRequirements;
- if (!strcmp(funcName, "vkQueueBindSparse"))
- return (PFN_vkVoidFunction) vkQueueBindSparse;
- if (!strcmp(funcName, "vkCreateFence"))
- return (PFN_vkVoidFunction) vkCreateFence;
- if (!strcmp(funcName, "vkGetFenceStatus"))
- return (PFN_vkVoidFunction) vkGetFenceStatus;
- if (!strcmp(funcName, "vkResetFences"))
- return (PFN_vkVoidFunction) vkResetFences;
- if (!strcmp(funcName, "vkWaitForFences"))
- return (PFN_vkVoidFunction) vkWaitForFences;
- if (!strcmp(funcName, "vkCreateSemaphore"))
- return (PFN_vkVoidFunction) vkCreateSemaphore;
- if (!strcmp(funcName, "vkDestroySemaphore"))
- return (PFN_vkVoidFunction) vkDestroySemaphore;
- if (!strcmp(funcName, "vkQueueWaitIdle"))
- return (PFN_vkVoidFunction) vkQueueWaitIdle;
- if (!strcmp(funcName, "vkDeviceWaitIdle"))
- return (PFN_vkVoidFunction) vkDeviceWaitIdle;
- if (!strcmp(funcName, "vkCreateBuffer"))
- return (PFN_vkVoidFunction) vkCreateBuffer;
- if (!strcmp(funcName, "vkCreateImage"))
- return (PFN_vkVoidFunction) vkCreateImage;
- if (!strcmp(funcName, "vkCreateImageView"))
- return (PFN_vkVoidFunction) vkCreateImageView;
- if (!strcmp(funcName, "vkCreateBufferView"))
- return (PFN_vkVoidFunction) vkCreateBufferView;
- if (!strcmp(funcName, "vkUpdateDescriptorSets"))
- return (PFN_vkVoidFunction) vkUpdateDescriptorSets;
- if (!strcmp(funcName, "vkAllocateCommandBuffers"))
- return (PFN_vkVoidFunction) vkAllocateCommandBuffers;
- if (!strcmp(funcName, "vkFreeCommandBuffers"))
- return (PFN_vkVoidFunction) vkFreeCommandBuffers;
- if (!strcmp(funcName, "vkCreateCommandPool"))
- return (PFN_vkVoidFunction) vkCreateCommandPool;
- if (!strcmp(funcName, "vkDestroyCommandPool"))
- return (PFN_vkVoidFunction) vkDestroyCommandPool;
- if (!strcmp(funcName, "vkResetCommandPool"))
- return (PFN_vkVoidFunction) vkResetCommandPool;
- if (!strcmp(funcName, "vkBeginCommandBuffer"))
- return (PFN_vkVoidFunction) vkBeginCommandBuffer;
- if (!strcmp(funcName, "vkEndCommandBuffer"))
- return (PFN_vkVoidFunction) vkEndCommandBuffer;
- if (!strcmp(funcName, "vkResetCommandBuffer"))
- return (PFN_vkVoidFunction) vkResetCommandBuffer;
- if (!strcmp(funcName, "vkCmdBindPipeline"))
- return (PFN_vkVoidFunction) vkCmdBindPipeline;
- if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
- return (PFN_vkVoidFunction) vkCmdBindDescriptorSets;
- if (!strcmp(funcName, "vkCmdBindVertexBuffers"))
- return (PFN_vkVoidFunction) vkCmdBindVertexBuffers;
- if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
- return (PFN_vkVoidFunction) vkCmdBindIndexBuffer;
- if (!strcmp(funcName, "vkCmdDraw"))
- return (PFN_vkVoidFunction) vkCmdDraw;
- if (!strcmp(funcName, "vkCmdDrawIndexed"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexed;
- if (!strcmp(funcName, "vkCmdDrawIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndirect;
- if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "vkCmdDispatch"))
- return (PFN_vkVoidFunction)vkCmdDispatch;
- if (!strcmp(funcName, "vkCmdDispatchIndirect"))
- return (PFN_vkVoidFunction)vkCmdDispatchIndirect;
- if (!strcmp(funcName, "vkCmdCopyBuffer"))
- return (PFN_vkVoidFunction)vkCmdCopyBuffer;
- if (!strcmp(funcName, "vkCmdCopyQueryPoolResults"))
- return (PFN_vkVoidFunction)vkCmdCopyQueryPoolResults;
- if (!strcmp(funcName, "vkCmdCopyImage"))
- return (PFN_vkVoidFunction) vkCmdCopyImage;
- if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
- return (PFN_vkVoidFunction) vkCmdCopyBufferToImage;
- if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyImageToBuffer;
- if (!strcmp(funcName, "vkCmdUpdateBuffer"))
- return (PFN_vkVoidFunction) vkCmdUpdateBuffer;
- if (!strcmp(funcName, "vkCmdFillBuffer"))
- return (PFN_vkVoidFunction) vkCmdFillBuffer;
- if (!strcmp(funcName, "vkCmdClearColorImage"))
- return (PFN_vkVoidFunction) vkCmdClearColorImage;
- if (!strcmp(funcName, "vkCmdClearDepthStencilImage"))
- return (PFN_vkVoidFunction) vkCmdClearDepthStencilImage;
- if (!strcmp(funcName, "vkCmdResolveImage"))
- return (PFN_vkVoidFunction) vkCmdResolveImage;
- if (!strcmp(funcName, "vkCmdBeginQuery"))
- return (PFN_vkVoidFunction) vkCmdBeginQuery;
- if (!strcmp(funcName, "vkCmdEndQuery"))
- return (PFN_vkVoidFunction) vkCmdEndQuery;
- if (!strcmp(funcName, "vkCmdResetQueryPool"))
- return (PFN_vkVoidFunction) vkCmdResetQueryPool;
- if (!strcmp(funcName, "vkCreateRenderPass"))
- return (PFN_vkVoidFunction) vkCreateRenderPass;
- if (!strcmp(funcName, "vkDestroyRenderPass"))
- return (PFN_vkVoidFunction) vkDestroyRenderPass;
- if (!strcmp(funcName, "vkCmdBeginRenderPass"))
- return (PFN_vkVoidFunction) vkCmdBeginRenderPass;
- if (!strcmp(funcName, "vkCmdEndRenderPass"))
- return (PFN_vkVoidFunction) vkCmdEndRenderPass;
- if (!strcmp(funcName, "vkGetDeviceQueue"))
- return (PFN_vkVoidFunction) vkGetDeviceQueue;
- if (!strcmp(funcName, "vkCreateFramebuffer"))
- return (PFN_vkVoidFunction) vkCreateFramebuffer;
- if (!strcmp(funcName, "vkDestroyFramebuffer"))
- return (PFN_vkVoidFunction) vkDestroyFramebuffer;
-
-
- if (dev == NULL)
- return NULL;
-
- layer_data *my_data;
- my_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
- if (my_data->wsi_enabled)
- {
- if (!strcmp(funcName, "vkCreateSwapchainKHR"))
- return (PFN_vkVoidFunction) vkCreateSwapchainKHR;
- if (!strcmp(funcName, "vkDestroySwapchainKHR"))
- return (PFN_vkVoidFunction) vkDestroySwapchainKHR;
- if (!strcmp(funcName, "vkGetSwapchainImagesKHR"))
- return (PFN_vkVoidFunction) vkGetSwapchainImagesKHR;
- if (!strcmp(funcName, "vkAcquireNextImageKHR"))
- return (PFN_vkVoidFunction)vkAcquireNextImageKHR;
- if (!strcmp(funcName, "vkQueuePresentKHR"))
- return (PFN_vkVoidFunction)vkQueuePresentKHR;
- }
-
- VkLayerDispatchTable *pDisp = my_data->device_dispatch_table;
- if (pDisp->GetDeviceProcAddr == NULL)
- return NULL;
- return pDisp->GetDeviceProcAddr(dev, funcName);
-}
-
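The `vkGetDeviceProcAddr` implementation above resolves every hooked entry point with a long `strcmp` ladder, which is O(n) per lookup and easy to desynchronize from the actual hooks. The same dispatch can be expressed as a name-to-pointer table built once; a self-contained sketch under that assumption, with fake stand-in functions since the real layer entry points are not available here:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Generic void-function pointer, standing in for PFN_vkVoidFunction.
using VoidFn = void (*)();

// Hypothetical stand-ins for two intercepted entry points.
void fake_vkCmdDraw() {}
void fake_vkCmdDispatch() {}

// One static table replaces the strcmp ladder; unknown names return nullptr
// here, where the real layer would forward to the next layer's
// GetDeviceProcAddr in the dispatch chain.
VoidFn intercept_proc_addr(const char *funcName) {
    static const std::unordered_map<std::string, VoidFn> table = {
        {"vkCmdDraw",     fake_vkCmdDraw},
        {"vkCmdDispatch", fake_vkCmdDispatch},
    };
    auto it = table.find(funcName);
    return it == table.end() ? nullptr : it->second;
}
```

The fall-through at the end of the real function is the essential layering contract: anything the layer does not intercept must be passed down the chain unchanged, or downstream layers and the driver lose entry points.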
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(
- VkInstance instance,
- const char *funcName)
-{
- PFN_vkVoidFunction fptr;
-
- if (!strcmp(funcName, "vkGetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
- if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
- if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
- if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
- if (!strcmp(funcName, "vkGetPhysicalDeviceMemoryProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceMemoryProperties;
- if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
- if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceExtensionProperties;
-
- if (instance == NULL) return NULL;
-
- layer_data *my_data;
- my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
-
- fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
- if (fptr) return fptr;
-
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
- if (pTable->GetInstanceProcAddr == NULL)
- return NULL;
- return pTable->GetInstanceProcAddr(instance, funcName);
-}
diff --git a/layers/mem_tracker.h b/layers/mem_tracker.h
deleted file mode 100644
index dd835e34e..000000000
--- a/layers/mem_tracker.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Tobin Ehlis <tobin@lunarg.com>
- * Author: Mark Lobodzinski <mark@lunarg.com>
- */
-
-#pragma once
-#include <vector>
-#include <unordered_map>
-#include "vulkan/vk_layer.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-// Mem Tracker ERROR codes
-typedef enum _MEM_TRACK_ERROR
-{
- MEMTRACK_NONE, // Used for INFO & other non-error messages
- MEMTRACK_INVALID_CB, // Cmd Buffer invalid
- MEMTRACK_INVALID_MEM_OBJ, // Invalid Memory Object
- MEMTRACK_INVALID_ALIASING, // Invalid Memory Aliasing
- MEMTRACK_INVALID_LAYOUT, // Invalid Layout
- MEMTRACK_INTERNAL_ERROR, // Bug in Mem Track Layer internal data structures
- MEMTRACK_FREED_MEM_REF, // MEM Obj freed while it still has obj and/or CB refs
- MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, // Clearing bindings on mem obj that doesn't have any bindings
- MEMTRACK_MISSING_MEM_BINDINGS, // Trying to retrieve mem bindings, but none found (may be internal error)
- MEMTRACK_INVALID_OBJECT, // Attempting to reference generic VK Object that is invalid
- MEMTRACK_MEMORY_BINDING_ERROR, // Error during one of many calls that bind memory to object or CB
- MEMTRACK_MEMORY_LEAK, // Failure to call vkFreeMemory on Mem Obj prior to DestroyDevice
- MEMTRACK_INVALID_STATE, // Memory not in the correct state
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, // vkResetCommandBuffer() called on a CB that hasn't completed
- MEMTRACK_INVALID_FENCE_STATE, // Invalid Fence State signaled or used
- MEMTRACK_REBIND_OBJECT, // Non-sparse object bindings are immutable
- MEMTRACK_INVALID_USAGE_FLAG, // Usage flags specified at image/buffer create conflict w/ use of object
- MEMTRACK_INVALID_MAP, // Size flag specified at alloc is too small for mapping range
-} MEM_TRACK_ERROR;
-
-// MemTracker Semaphore states
-typedef enum _MtSemaphoreState
-{
- MEMTRACK_SEMAPHORE_STATE_UNSET, // Semaphore is in an undefined state
-    MEMTRACK_SEMAPHORE_STATE_SIGNALLED,     // Semaphore is in signalled state
- MEMTRACK_SEMAPHORE_STATE_WAIT, // Semaphore is in wait state
-} MtSemaphoreState;
-
-struct MemRange {
- VkDeviceSize offset;
- VkDeviceSize size;
-};
-
-/*
- * Data Structure overview
- * There are 3 global STL maps:
- * cbMap -- map of command Buffer (CB) objects to MT_CB_INFO structures
- * Each MT_CB_INFO struct has an stl list container with
- * memory objects that are referenced by this CB
- * memObjMap -- map of Memory Objects to MT_MEM_OBJ_INFO structures
- * Each MT_MEM_OBJ_INFO has two stl list containers with:
- * -- all CBs referencing this mem obj
- * -- all VK Objects that are bound to this memory
- * objectMap -- map of objects to MT_OBJ_INFO structures
- *
- * Algorithm overview
- * These are the primary events that should happen related to different objects
- * 1. Command buffers
- * CREATION - Add object,structure to map
- * CMD BIND - If mem associated, add mem reference to list container
- * DESTROY - Remove from map, decrement (and report) mem references
- * 2. Mem Objects
- * CREATION - Add object,structure to map
- * OBJ BIND - Add obj structure to list container for that mem node
- *             CMD BIND     - If mem-related, add CB structure to list container for that mem node
- * DESTROY - Flag as errors any remaining refs and remove from map
- * 3. Generic Objects
- * MEM BIND - DESTROY any previous binding, Add obj node w/ ref to map, add obj ref to list container for that mem node
- * DESTROY - If mem bound, remove reference list container for that memInfo, remove object ref from map
- */
-// TODO : Is there a way to track when Cmd Buffer finishes & remove mem references at that point?
-// TODO : Could potentially store a list of freed mem allocs to flag when they're incorrectly used
-
-// Simple struct to hold handle and type of object so they can be uniquely identified and looked up in appropriate map
-struct MT_OBJ_HANDLE_TYPE {
- uint64_t handle;
- VkDebugReportObjectTypeEXT type;
-};
-
-// Data struct for tracking memory object
-struct MT_MEM_OBJ_INFO {
-    void*               object;                     // Dispatchable object used to create this memory (device or swapchain)
- uint32_t refCount; // Count of references (obj bindings or CB use)
- bool valid; // Stores if the memory has valid data or not
- VkDeviceMemory mem;
- VkMemoryAllocateInfo allocInfo;
- list<MT_OBJ_HANDLE_TYPE> pObjBindings; // list container of objects bound to this memory
- list<VkCommandBuffer> pCommandBufferBindings; // list container of cmd buffers that reference this mem object
- MemRange memRange;
- void *pData, *pDriverData;
-};
-
-// This only applies to Buffers and Images, which can have memory bound to them
-struct MT_OBJ_BINDING_INFO {
- VkDeviceMemory mem;
-    bool valid; // If this is a swapchain image, its backing memory is not a MT_MEM_OBJ_INFO, so store it here.
- union create_info {
- VkImageCreateInfo image;
- VkBufferCreateInfo buffer;
- } create_info;
-};
-
-// Track all command buffers
-typedef struct _MT_CB_INFO {
- VkCommandBufferAllocateInfo createInfo;
- VkPipeline pipelines[VK_PIPELINE_BIND_POINT_RANGE_SIZE];
- uint32_t attachmentCount;
- VkCommandBuffer commandBuffer;
- uint64_t fenceId;
- VkFence lastSubmittedFence;
- VkQueue lastSubmittedQueue;
- VkRenderPass pass;
- vector<VkDescriptorSet> activeDescriptorSets;
- vector<std::function<VkBool32()> > validate_functions;
- // Order dependent, stl containers must be at end of struct
- list<VkDeviceMemory> pMemObjList; // List container of Mem objs referenced by this CB
- // Constructor
- _MT_CB_INFO():createInfo{},pipelines{},attachmentCount(0),fenceId(0),lastSubmittedFence{},lastSubmittedQueue{} {};
-} MT_CB_INFO;
-
-// Track command pools and their command buffers
-typedef struct _MT_CMD_POOL_INFO {
- VkCommandPoolCreateFlags createFlags;
- list<VkCommandBuffer> pCommandBuffers; // list container of cmd buffers allocated from this pool
-} MT_CMD_POOL_INFO;
-
-struct MT_IMAGE_VIEW_INFO {
- VkImage image;
-};
-
-struct MT_FB_ATTACHMENT_INFO {
- VkImage image;
- VkDeviceMemory mem;
-};
-
-struct MT_FB_INFO {
- std::vector<MT_FB_ATTACHMENT_INFO> attachments;
-};
-
-struct MT_PASS_ATTACHMENT_INFO {
- uint32_t attachment;
- VkAttachmentLoadOp load_op;
- VkAttachmentStoreOp store_op;
-};
-
-struct MT_PASS_INFO {
- VkFramebuffer fb;
- std::vector<MT_PASS_ATTACHMENT_INFO> attachments;
- std::unordered_map<uint32_t, bool> attachment_first_read;
- std::unordered_map<uint32_t, VkImageLayout> attachment_first_layout;
-};
-
-// Associate fenceId with a fence object
-struct MT_FENCE_INFO {
- uint64_t fenceId; // Sequence number for fence at last submit
- VkQueue queue; // Queue that this fence is submitted against or NULL
- VkSwapchainKHR
- swapchain; // Swapchain that this fence is submitted against or NULL
- VkBool32 firstTimeFlag; // Fence was created in signaled state, avoid warnings for first use
- VkFenceCreateInfo createInfo;
-};
-
-// Track Queue information
-struct MT_QUEUE_INFO {
- uint64_t lastRetiredId;
- uint64_t lastSubmittedId;
- list<VkCommandBuffer> pQueueCommandBuffers;
- list<VkDeviceMemory> pMemRefList;
-};
-
-struct MT_DESCRIPTOR_SET_INFO {
- std::vector<VkImageView> images;
- std::vector<VkBuffer> buffers;
-};
-
-// Track Swapchain Information
-struct MT_SWAP_CHAIN_INFO {
- VkSwapchainCreateInfoKHR createInfo;
- std::vector<VkImage> images;
-};
-
-struct MEMORY_RANGE {
- uint64_t handle;
- VkDeviceMemory memory;
- VkDeviceSize start;
- VkDeviceSize end;
-};
-
-#ifdef __cplusplus
-}
-#endif
diff --git a/layers/object_tracker.h b/layers/object_tracker.h
index 29a445023..664bf617f 100644
--- a/layers/object_tracker.h
+++ b/layers/object_tracker.h
@@ -31,40 +31,40 @@
#include "vk_layer_extension_utils.h"
#include "vk_enum_string_helper.h"
#include "vk_layer_table.h"
+#include "vk_layer_utils.h"
// Object Tracker ERROR codes
-typedef enum _OBJECT_TRACK_ERROR
-{
- OBJTRACK_NONE, // Used for INFO & other non-error messages
- OBJTRACK_UNKNOWN_OBJECT, // Updating uses of object that's not in global object list
- OBJTRACK_INTERNAL_ERROR, // Bug with data tracking within the layer
- OBJTRACK_DESTROY_OBJECT_FAILED, // Couldn't find object to be destroyed
- OBJTRACK_OBJECT_LEAK, // OBJECT was not correctly freed/destroyed
- OBJTRACK_OBJCOUNT_MAX_EXCEEDED, // Request for Object data in excess of max obj count
- OBJTRACK_INVALID_OBJECT, // Object used that has never been created
- OBJTRACK_DESCRIPTOR_POOL_MISMATCH, // Descriptor Pools specified incorrectly
- OBJTRACK_COMMAND_POOL_MISMATCH, // Command Pools specified incorrectly
+typedef enum _OBJECT_TRACK_ERROR {
+ OBJTRACK_NONE, // Used for INFO & other non-error messages
+ OBJTRACK_UNKNOWN_OBJECT, // Updating uses of object that's not in global object list
+ OBJTRACK_INTERNAL_ERROR, // Bug with data tracking within the layer
+ OBJTRACK_DESTROY_OBJECT_FAILED, // Couldn't find object to be destroyed
+ OBJTRACK_OBJECT_LEAK, // OBJECT was not correctly freed/destroyed
+ OBJTRACK_OBJCOUNT_MAX_EXCEEDED, // Request for Object data in excess of max obj count
+ OBJTRACK_INVALID_OBJECT, // Object used that has never been created
+ OBJTRACK_DESCRIPTOR_POOL_MISMATCH, // Descriptor Pools specified incorrectly
+ OBJTRACK_COMMAND_POOL_MISMATCH, // Command Pools specified incorrectly
} OBJECT_TRACK_ERROR;
// Object Status -- used to track state of individual objects
typedef VkFlags ObjectStatusFlags;
-typedef enum _ObjectStatusFlagBits
-{
- OBJSTATUS_NONE = 0x00000000, // No status is set
- OBJSTATUS_FENCE_IS_SUBMITTED = 0x00000001, // Fence has been submitted
- OBJSTATUS_VIEWPORT_BOUND = 0x00000002, // Viewport state object has been bound
- OBJSTATUS_RASTER_BOUND = 0x00000004, // Viewport state object has been bound
- OBJSTATUS_COLOR_BLEND_BOUND = 0x00000008, // Viewport state object has been bound
- OBJSTATUS_DEPTH_STENCIL_BOUND = 0x00000010, // Viewport state object has been bound
- OBJSTATUS_GPU_MEM_MAPPED = 0x00000020, // Memory object is currently mapped
- OBJSTATUS_COMMAND_BUFFER_SECONDARY = 0x00000040, // Command Buffer is of type SECONDARY
+typedef enum _ObjectStatusFlagBits {
+ OBJSTATUS_NONE = 0x00000000, // No status is set
+ OBJSTATUS_FENCE_IS_SUBMITTED = 0x00000001, // Fence has been submitted
+ OBJSTATUS_VIEWPORT_BOUND = 0x00000002, // Viewport state object has been bound
+ OBJSTATUS_RASTER_BOUND = 0x00000004, // Viewport state object has been bound
+ OBJSTATUS_COLOR_BLEND_BOUND = 0x00000008, // Viewport state object has been bound
+ OBJSTATUS_DEPTH_STENCIL_BOUND = 0x00000010, // Viewport state object has been bound
+ OBJSTATUS_GPU_MEM_MAPPED = 0x00000020, // Memory object is currently mapped
+ OBJSTATUS_COMMAND_BUFFER_SECONDARY = 0x00000040, // Command Buffer is of type SECONDARY
} ObjectStatusFlagBits;
typedef struct _OBJTRACK_NODE {
- uint64_t vkObj; // Object handle
- VkDebugReportObjectTypeEXT objType; // Object type identifier
- ObjectStatusFlags status; // Object state
- uint64_t parentObj; // Parent object
+ uint64_t vkObj; // Object handle
+ VkDebugReportObjectTypeEXT objType; // Object type identifier
+ ObjectStatusFlags status; // Object state
+ uint64_t parentObj; // Parent object
+ uint64_t belongsTo; // Object Scope -- owning device/instance
} OBJTRACK_NODE;
// prototype for extension functions
@@ -77,17 +77,12 @@ typedef uint64_t (*OBJ_TRACK_GET_OBJECTS_OF_TYPE_COUNT)(VkDevice, VkDebugReportO
struct layer_data {
debug_report_data *report_data;
- //TODO: put instance data here
- VkDebugReportCallbackEXT logging_callback;
+ // TODO: put instance data here
+ std::vector<VkDebugReportCallbackEXT> logging_callback;
bool wsi_enabled;
bool objtrack_extensions_enabled;
- layer_data() :
- report_data(nullptr),
- logging_callback(VK_NULL_HANDLE),
- wsi_enabled(false),
- objtrack_extensions_enabled(false)
- {};
+ layer_data() : report_data(nullptr), wsi_enabled(false), objtrack_extensions_enabled(false){};
};
struct instExts {
@@ -95,13 +90,13 @@ struct instExts {
};
static std::unordered_map<void *, struct instExts> instanceExtMap;
-static std::unordered_map<void*, layer_data *> layer_data_map;
-static device_table_map object_tracker_device_table_map;
-static instance_table_map object_tracker_instance_table_map;
+static std::unordered_map<void *, layer_data *> layer_data_map;
+static device_table_map object_tracker_device_table_map;
+static instance_table_map object_tracker_instance_table_map;
// We need additionally validate image usage using a separate map
// of swapchain-created images
-static unordered_map<uint64_t, OBJTRACK_NODE*> swapchainImageMap;
+static unordered_map<uint64_t, OBJTRACK_NODE *> swapchainImageMap;
static long long unsigned int object_track_index = 0;
static int objLockInitialized = 0;
@@ -110,31 +105,28 @@ static loader_platform_thread_mutex objLock;
// Objects stored in a global map w/ struct containing basic info
// unordered_map<const void*, OBJTRACK_NODE*> objMap;
-#define NUM_OBJECT_TYPES (VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT+1)
-
-static uint64_t numObjs[NUM_OBJECT_TYPES] = {0};
-static uint64_t numTotalObjs = 0;
-static VkQueueFamilyProperties *queueInfo = NULL;
-static uint32_t queueCount = 0;
+#define NUM_OBJECT_TYPES (VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT + 1)
-template layer_data *get_my_data_ptr<layer_data>(
- void *data_key, std::unordered_map<void *, layer_data *> &data_map);
+static uint64_t numObjs[NUM_OBJECT_TYPES] = {0};
+static uint64_t numTotalObjs = 0;
+static VkQueueFamilyProperties *queueInfo = NULL;
+static uint32_t queueCount = 0;
+template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);
//
// Internal Object Tracker Functions
//
-static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo, VkDevice device)
-{
+static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
VkLayerDispatchTable *pDisp = get_dispatch_table(object_tracker_device_table_map, device);
PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
- pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR) gpa(device, "vkCreateSwapchainKHR");
- pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR) gpa(device, "vkDestroySwapchainKHR");
- pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR) gpa(device, "vkGetSwapchainImagesKHR");
- pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR) gpa(device, "vkAcquireNextImageKHR");
- pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR) gpa(device, "vkQueuePresentKHR");
+ pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
+ pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
+ pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
+ pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
+ pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");
my_device_data->wsi_enabled = false;
for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
@@ -145,64 +137,70 @@ static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo
}
}
-static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreateInfo, VkInstance instance)
-{
+static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) {
uint32_t i;
VkLayerInstanceDispatchTable *pDisp = get_dispatch_table(object_tracker_instance_table_map, instance);
PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr;
- pDisp->GetPhysicalDeviceSurfaceSupportKHR = (PFN_vkGetPhysicalDeviceSurfaceSupportKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
- pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR = (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
- pDisp->GetPhysicalDeviceSurfaceFormatsKHR = (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
- pDisp->GetPhysicalDeviceSurfacePresentModesKHR = (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR) gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
+
+ pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR");
+ pDisp->GetPhysicalDeviceSurfaceSupportKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
+ pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
+ pDisp->GetPhysicalDeviceSurfaceFormatsKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
+ pDisp->GetPhysicalDeviceSurfacePresentModesKHR =
+ (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
#if VK_USE_PLATFORM_WIN32_KHR
- pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR) gpa(instance, "vkCreateWin32SurfaceKHR");
- pDisp->GetPhysicalDeviceWin32PresentationSupportKHR = (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
+ pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR");
+ pDisp->GetPhysicalDeviceWin32PresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
- pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR) gpa(instance, "vkCreateXcbSurfaceKHR");
- pDisp->GetPhysicalDeviceXcbPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
+ pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR");
+ pDisp->GetPhysicalDeviceXcbPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
- pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR) gpa(instance, "vkCreateXlibSurfaceKHR");
- pDisp->GetPhysicalDeviceXlibPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
+ pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR");
+ pDisp->GetPhysicalDeviceXlibPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XLIB_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
- pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR) gpa(instance, "vkCreateMirSurfaceKHR");
- pDisp->GetPhysicalDeviceMirPresentationSupportKHR = (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
+ pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR");
+ pDisp->GetPhysicalDeviceMirPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR) gpa(instance, "vkCreateWaylandSurfaceKHR");
- pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR = (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
+ pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR");
+ pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR) gpa(instance, "vkCreateAndroidSurfaceKHR");
+ pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR");
#endif // VK_USE_PLATFORM_ANDROID_KHR
instanceExtMap[pDisp].wsi_enabled = false;
for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SURFACE_EXTENSION_NAME) == 0)
instanceExtMap[pDisp].wsi_enabled = true;
-
}
}
// Indicate device or instance dispatch table type
-typedef enum _DispTableType
-{
+typedef enum _DispTableType {
DISP_TBL_TYPE_INSTANCE,
DISP_TBL_TYPE_DEVICE,
} DispTableType;
-debug_report_data *mdd(const void* object)
-{
+debug_report_data *mdd(const void *object) {
dispatch_key key = get_dispatch_key(object);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
return my_data->report_data;
}
-debug_report_data *mid(VkInstance object)
-{
+debug_report_data *mid(VkInstance object) {
dispatch_key key = get_dispatch_key(object);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
return my_data->report_data;
@@ -210,7 +208,7 @@ debug_report_data *mid(VkInstance object)
// For each Queue's doubly linked-list of mem refs
typedef struct _OT_MEM_INFO {
- VkDeviceMemory mem;
+ VkDeviceMemory mem;
struct _OT_MEM_INFO *pNextMI;
struct _OT_MEM_INFO *pPrevMI;
@@ -218,51 +216,42 @@ typedef struct _OT_MEM_INFO {
// Track Queue information
typedef struct _OT_QUEUE_INFO {
- OT_MEM_INFO *pMemRefList;
- struct _OT_QUEUE_INFO *pNextQI;
- uint32_t queueNodeIndex;
- VkQueue queue;
- uint32_t refCount;
+ OT_MEM_INFO *pMemRefList;
+ struct _OT_QUEUE_INFO *pNextQI;
+ uint32_t queueNodeIndex;
+ VkQueue queue;
+ uint32_t refCount;
} OT_QUEUE_INFO;
// Global list of QueueInfo structures, one per queue
static OT_QUEUE_INFO *g_pQueueInfo = NULL;
// Convert an object type enum to an object type array index
-static uint32_t
-objTypeToIndex(
- uint32_t objType)
-{
+static uint32_t objTypeToIndex(uint32_t objType) {
uint32_t index = objType;
return index;
}
// Add new queue to head of global queue list
-static void
-addQueueInfo(
- uint32_t queueNodeIndex,
- VkQueue queue)
-{
+static void addQueueInfo(uint32_t queueNodeIndex, VkQueue queue) {
OT_QUEUE_INFO *pQueueInfo = new OT_QUEUE_INFO;
if (pQueueInfo != NULL) {
memset(pQueueInfo, 0, sizeof(OT_QUEUE_INFO));
- pQueueInfo->queue = queue;
+ pQueueInfo->queue = queue;
pQueueInfo->queueNodeIndex = queueNodeIndex;
- pQueueInfo->pNextQI = g_pQueueInfo;
- g_pQueueInfo = pQueueInfo;
- }
- else {
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast<uint64_t>(queue), __LINE__, OBJTRACK_INTERNAL_ERROR, "OBJTRACK",
- "ERROR: VK_ERROR_OUT_OF_HOST_MEMORY -- could not allocate memory for Queue Information");
+ pQueueInfo->pNextQI = g_pQueueInfo;
+ g_pQueueInfo = pQueueInfo;
+ } else {
+ log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast<uint64_t>(queue),
+ __LINE__, OBJTRACK_INTERNAL_ERROR, "OBJTRACK",
+ "ERROR: VK_ERROR_OUT_OF_HOST_MEMORY -- could not allocate memory for Queue Information");
}
}
// Destroy memRef lists and free all memory
-static void
-destroyQueueMemRefLists(void)
-{
- OT_QUEUE_INFO *pQueueInfo = g_pQueueInfo;
+static void destroyQueueMemRefLists(void) {
+ OT_QUEUE_INFO *pQueueInfo = g_pQueueInfo;
OT_QUEUE_INFO *pDelQueueInfo = NULL;
while (pQueueInfo != NULL) {
OT_MEM_INFO *pMemInfo = pQueueInfo->pMemRefList;
@@ -272,41 +261,31 @@ destroyQueueMemRefLists(void)
delete pDelMemInfo;
}
pDelQueueInfo = pQueueInfo;
- pQueueInfo = pQueueInfo->pNextQI;
+ pQueueInfo = pQueueInfo->pNextQI;
delete pDelQueueInfo;
}
g_pQueueInfo = pQueueInfo;
}
-static void
-setGpuQueueInfoState(
- uint32_t count,
- void *pData)
-{
+static void setGpuQueueInfoState(uint32_t count, void *pData) {
queueCount = count;
- queueInfo = (VkQueueFamilyProperties*)realloc((void*)queueInfo, count * sizeof(VkQueueFamilyProperties));
+ queueInfo = (VkQueueFamilyProperties *)realloc((void *)queueInfo, count * sizeof(VkQueueFamilyProperties));
if (queueInfo != NULL) {
memcpy(queueInfo, pData, count * sizeof(VkQueueFamilyProperties));
}
}
// Check Queue type flags for selected queue operations
-static void
-validateQueueFlags(
- VkQueue queue,
- const char *function)
-{
+static void validateQueueFlags(VkQueue queue, const char *function) {
OT_QUEUE_INFO *pQueueInfo = g_pQueueInfo;
while ((pQueueInfo != NULL) && (pQueueInfo->queue != queue)) {
pQueueInfo = pQueueInfo->pNextQI;
}
if (pQueueInfo != NULL) {
if ((queueInfo != NULL) && (queueInfo[pQueueInfo->queueNodeIndex].queueFlags & VK_QUEUE_SPARSE_BINDING_BIT) == 0) {
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast<uint64_t>(queue), __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK",
- "Attempting %s on a non-memory-management capable queue -- VK_QUEUE_SPARSE_BINDING_BIT not set", function);
- } else {
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast<uint64_t>(queue), __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK",
- "Attempting %s on a possibly non-memory-management capable queue -- VK_QUEUE_SPARSE_BINDING_BIT not known", function);
+ log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT,
+ reinterpret_cast<uint64_t>(queue), __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK",
+ "Attempting %s on a non-memory-management capable queue -- VK_QUEUE_SPARSE_BINDING_BIT not set", function);
}
}
}
@@ -347,34 +326,12 @@ validate_status(
#endif
#include "vk_dispatch_table_helper.h"
-static void
-initObjectTracker(
- layer_data *my_data,
- const VkAllocationCallbacks *pAllocator)
-{
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *option_str;
- // initialize ObjectTracker options
- report_flags = getLayerOptionFlags("ObjectTrackerReportFlags", 0);
- getLayerOptionEnum("ObjectTrackerDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- option_str = getLayerOption("ObjectTrackerLogFilename");
- log_output = getLayerLogOutput(option_str, "ObjectTracker");
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = log_callback;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &my_data->logging_callback);
- }
- if (!objLockInitialized)
- {
+static void init_object_tracker(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
+
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_object_tracker");
+
+ if (!objLockInitialized) {
// TODO/TBD: Need to delete this mutex sometime. How??? One
// suggestion is to call this during vkCreateInstance(), and then we
// can clean it up during vkDestroyInstance(). However, that requires
@@ -386,117 +343,133 @@ initObjectTracker(
}
//
-// Forward declares of generated routines
+// Forward declarations
//
static void create_physical_device(VkInstance dispatchable_object, VkPhysicalDevice vkObj, VkDebugReportObjectTypeEXT objType);
static void create_instance(VkInstance dispatchable_object, VkInstance object, VkDebugReportObjectTypeEXT objType);
static void create_device(VkDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType);
+static void create_device(VkPhysicalDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType);
static void create_queue(VkDevice dispatchable_object, VkQueue vkObj, VkDebugReportObjectTypeEXT objType);
static VkBool32 validate_image(VkQueue dispatchable_object, VkImage object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_instance(VkInstance dispatchable_object, VkInstance object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_device(VkDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_descriptor_pool(VkDevice dispatchable_object, VkDescriptorPool object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_descriptor_set_layout(VkDevice dispatchable_object, VkDescriptorSetLayout object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_command_pool(VkDevice dispatchable_object, VkCommandPool object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_buffer(VkQueue dispatchable_object, VkBuffer object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
+static VkBool32 validate_instance(VkInstance dispatchable_object, VkInstance object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_device(VkDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_descriptor_pool(VkDevice dispatchable_object, VkDescriptorPool object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_descriptor_set_layout(VkDevice dispatchable_object, VkDescriptorSetLayout object,
+ VkDebugReportObjectTypeEXT objType, bool null_allowed);
+static VkBool32 validate_command_pool(VkDevice dispatchable_object, VkCommandPool object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_buffer(VkQueue dispatchable_object, VkBuffer object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
static void create_pipeline(VkDevice dispatchable_object, VkPipeline vkObj, VkDebugReportObjectTypeEXT objType);
-static VkBool32 validate_pipeline_cache(VkDevice dispatchable_object, VkPipelineCache object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_render_pass(VkDevice dispatchable_object, VkRenderPass object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_shader_module(VkDevice dispatchable_object, VkShaderModule object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_pipeline_layout(VkDevice dispatchable_object, VkPipelineLayout object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
-static VkBool32 validate_pipeline(VkDevice dispatchable_object, VkPipeline object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
+static VkBool32 validate_pipeline_cache(VkDevice dispatchable_object, VkPipelineCache object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_render_pass(VkDevice dispatchable_object, VkRenderPass object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_shader_module(VkDevice dispatchable_object, VkShaderModule object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_pipeline_layout(VkDevice dispatchable_object, VkPipelineLayout object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
+static VkBool32 validate_pipeline(VkDevice dispatchable_object, VkPipeline object, VkDebugReportObjectTypeEXT objType,
+ bool null_allowed);
static void destroy_command_pool(VkDevice dispatchable_object, VkCommandPool object);
static void destroy_command_buffer(VkCommandBuffer dispatchable_object, VkCommandBuffer object);
static void destroy_descriptor_pool(VkDevice dispatchable_object, VkDescriptorPool object);
static void destroy_descriptor_set(VkDevice dispatchable_object, VkDescriptorSet object);
static void destroy_device_memory(VkDevice dispatchable_object, VkDeviceMemory object);
static void destroy_swapchain_khr(VkDevice dispatchable_object, VkSwapchainKHR object);
-static VkBool32 set_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType, ObjectStatusFlags status_flag);
-static VkBool32 reset_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType, ObjectStatusFlags status_flag);
+static VkBool32 set_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType,
+ ObjectStatusFlags status_flag);
+static VkBool32 reset_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType,
+ ObjectStatusFlags status_flag);
#if 0
static VkBool32 validate_status(VkDevice dispatchable_object, VkFence object, VkDebugReportObjectTypeEXT objType,
ObjectStatusFlags status_mask, ObjectStatusFlags status_flag, VkFlags msg_flags, OBJECT_TRACK_ERROR error_code,
const char *fail_msg);
#endif
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkPhysicalDeviceMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkImageMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkQueueMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkDescriptorSetMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkBufferMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkFenceMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkSemaphoreMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkCommandPoolMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkCommandBufferMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkSwapchainKHRMap;
-extern unordered_map<uint64_t, OBJTRACK_NODE*> VkSurfaceKHRMap;
-
-static void create_physical_device(VkInstance dispatchable_object, VkPhysicalDevice vkObj, VkDebugReportObjectTypeEXT objType)
-{
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
- reinterpret_cast<uint64_t>(vkObj));
-
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkPhysicalDeviceMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkDeviceMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkImageMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkQueueMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkDescriptorSetMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkBufferMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkFenceMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSemaphoreMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkCommandPoolMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkCommandBufferMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSwapchainKHRMap;
+extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSurfaceKHRMap;
+
+static void create_physical_device(VkInstance dispatchable_object, VkPhysicalDevice vkObj, VkDebugReportObjectTypeEXT objType) {
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__,
+ OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));
+
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
pNewObjNode->objType = objType;
- pNewObjNode->status = OBJSTATUS_NONE;
- pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
+ pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
VkPhysicalDeviceMap[reinterpret_cast<uint64_t>(vkObj)] = pNewObjNode;
uint32_t objIndex = objTypeToIndex(objType);
numObjs[objIndex]++;
numTotalObjs++;
}
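The create/destroy helpers in this diff all follow one pattern: allocate an `OBJTRACK_NODE`, record its type, owning dispatchable (`belongsTo`, the field this change adds), and raw 64-bit handle, then index it by that handle and bump the counters. A minimal standalone sketch of that bookkeeping, with no Vulkan dependency — `Tracker`, `Node`, and these member names are illustrative, not the layer's real API:

```cpp
#include <cstdint>
#include <unordered_map>

// Simplified stand-in for OBJTRACK_NODE: type, owning dispatchable, handle.
struct Node {
    int objType;
    uint64_t belongsTo;
    uint64_t handle;
};

struct Tracker {
    std::unordered_map<uint64_t, Node *> objects;
    uint64_t numTotalObjs = 0;

    void create(uint64_t owner, uint64_t handle, int type) {
        Node *n = new Node{type, owner, handle};
        objects[handle] = n; // keyed by the raw handle, like VkPhysicalDeviceMap
        numTotalObjs++;
    }

    // Returns false when the handle was never created (or was already
    // destroyed), mirroring the layer's "Unable to remove obj" error path.
    bool destroy(uint64_t handle) {
        auto it = objects.find(handle);
        if (it == objects.end())
            return false;
        delete it->second;
        objects.erase(it);
        numTotalObjs--;
        return true;
    }
};
```

The per-type maps in the real layer (`VkQueueMap`, `VkFenceMap`, ...) are each one instance of this map-plus-counters scheme.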
-static void create_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR vkObj, VkDebugReportObjectTypeEXT objType)
-{
+static void create_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR vkObj, VkDebugReportObjectTypeEXT objType) {
// TODO: Add tracking of surface objects
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
- (uint64_t)(vkObj));
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ string_VkDebugReportObjectTypeEXT(objType), (uint64_t)(vkObj));
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
pNewObjNode->objType = objType;
- pNewObjNode->status = OBJSTATUS_NONE;
- pNewObjNode->vkObj = (uint64_t)(vkObj);
+ pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = (uint64_t)(vkObj);
VkSurfaceKHRMap[(uint64_t)vkObj] = pNewObjNode;
uint32_t objIndex = objTypeToIndex(objType);
numObjs[objIndex]++;
numTotalObjs++;
}
-static void destroy_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR object)
-{
+static void destroy_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR object) {
uint64_t object_handle = (uint64_t)(object);
if (VkSurfaceKHRMap.find(object_handle) != VkSurfaceKHRMap.end()) {
- OBJTRACK_NODE* pNode = VkSurfaceKHRMap[(uint64_t)object];
+ OBJTRACK_NODE *pNode = VkSurfaceKHRMap[(uint64_t)object];
uint32_t objIndex = objTypeToIndex(pNode->objType);
assert(numTotalObjs > 0);
numTotalObjs--;
assert(numObjs[objIndex] > 0);
numObjs[objIndex]--;
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
- string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(object), numTotalObjs, numObjs[objIndex],
- string_VkDebugReportObjectTypeEXT(pNode->objType));
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__,
+ OBJTRACK_NONE, "OBJTRACK",
+ "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
+ string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(object), numTotalObjs, numObjs[objIndex],
+ string_VkDebugReportObjectTypeEXT(pNode->objType));
delete pNode;
VkSurfaceKHRMap.erase(object_handle);
} else {
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT ) 0, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
- object_handle);
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__,
+ OBJTRACK_NONE, "OBJTRACK",
+ "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?", object_handle);
}
}
-static void alloc_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer vkObj, VkDebugReportObjectTypeEXT objType, VkCommandBufferLevel level)
-{
- log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
- reinterpret_cast<uint64_t>(vkObj));
-
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
- pNewObjNode->objType = objType;
- pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
- pNewObjNode->parentObj = (uint64_t) commandPool;
+static void alloc_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer vkObj,
+ VkDebugReportObjectTypeEXT objType, VkCommandBufferLevel level) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));
+
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
+ pNewObjNode->objType = objType;
+ pNewObjNode->belongsTo = (uint64_t)device;
+ pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
+ pNewObjNode->parentObj = (uint64_t)commandPool;
if (level == VK_COMMAND_BUFFER_LEVEL_SECONDARY) {
pNewObjNode->status = OBJSTATUS_COMMAND_BUFFER_SECONDARY;
} else {
@@ -508,127 +481,142 @@ static void alloc_command_buffer(VkDevice device, VkCommandPool commandPool, VkC
numTotalObjs++;
}
-static void free_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer commandBuffer)
-{
+static void free_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer commandBuffer) {
uint64_t object_handle = reinterpret_cast<uint64_t>(commandBuffer);
if (VkCommandBufferMap.find(object_handle) != VkCommandBufferMap.end()) {
- OBJTRACK_NODE* pNode = VkCommandBufferMap[(uint64_t)commandBuffer];
-
- if (pNode->parentObj != (uint64_t)(commandPool)) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_COMMAND_POOL_MISMATCH, "OBJTRACK",
- "FreeCommandBuffers is attempting to free Command Buffer 0x%" PRIxLEAST64 " belonging to Command Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ").",
- reinterpret_cast<uint64_t>(commandBuffer), pNode->parentObj, (uint64_t)(commandPool));
- } else {
+ OBJTRACK_NODE *pNode = VkCommandBufferMap[(uint64_t)commandBuffer];
+
+ if (pNode->parentObj != (uint64_t)(commandPool)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__,
+ OBJTRACK_COMMAND_POOL_MISMATCH, "OBJTRACK",
+ "FreeCommandBuffers is attempting to free Command Buffer 0x%" PRIxLEAST64
+                    " belonging to Command Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ".",
+ reinterpret_cast<uint64_t>(commandBuffer), pNode->parentObj, (uint64_t)(commandPool));
+ } else {
uint32_t objIndex = objTypeToIndex(pNode->objType);
assert(numTotalObjs > 0);
numTotalObjs--;
assert(numObjs[objIndex] > 0);
numObjs[objIndex]--;
- log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
- string_VkDebugReportObjectTypeEXT(pNode->objType), reinterpret_cast<uint64_t>(commandBuffer), numTotalObjs, numObjs[objIndex],
- string_VkDebugReportObjectTypeEXT(pNode->objType));
+ log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
+ string_VkDebugReportObjectTypeEXT(pNode->objType), reinterpret_cast<uint64_t>(commandBuffer), numTotalObjs,
+ numObjs[objIndex], string_VkDebugReportObjectTypeEXT(pNode->objType));
delete pNode;
VkCommandBufferMap.erase(object_handle);
}
} else {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
- object_handle);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
+ object_handle);
}
}
-static void alloc_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet vkObj, VkDebugReportObjectTypeEXT objType)
-{
+static void alloc_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet vkObj,
+ VkDebugReportObjectTypeEXT objType) {
log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
- (uint64_t)(vkObj));
-
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
- pNewObjNode->objType = objType;
- pNewObjNode->status = OBJSTATUS_NONE;
- pNewObjNode->vkObj = (uint64_t)(vkObj);
- pNewObjNode->parentObj = (uint64_t) descriptorPool;
+ "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
+ (uint64_t)(vkObj));
+
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
+ pNewObjNode->objType = objType;
+ pNewObjNode->belongsTo = (uint64_t)device;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = (uint64_t)(vkObj);
+ pNewObjNode->parentObj = (uint64_t)descriptorPool;
VkDescriptorSetMap[(uint64_t)vkObj] = pNewObjNode;
uint32_t objIndex = objTypeToIndex(objType);
numObjs[objIndex]++;
numTotalObjs++;
}
-static void free_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet descriptorSet)
-{
+static void free_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet descriptorSet) {
uint64_t object_handle = (uint64_t)(descriptorSet);
if (VkDescriptorSetMap.find(object_handle) != VkDescriptorSetMap.end()) {
- OBJTRACK_NODE* pNode = VkDescriptorSetMap[(uint64_t)descriptorSet];
+ OBJTRACK_NODE *pNode = VkDescriptorSetMap[(uint64_t)descriptorSet];
if (pNode->parentObj != (uint64_t)(descriptorPool)) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_DESCRIPTOR_POOL_MISMATCH, "OBJTRACK",
- "FreeDescriptorSets is attempting to free descriptorSet 0x%" PRIxLEAST64 " belonging to Descriptor Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ").",
- (uint64_t)(descriptorSet), pNode->parentObj, (uint64_t)(descriptorPool));
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__,
+ OBJTRACK_DESCRIPTOR_POOL_MISMATCH, "OBJTRACK",
+ "FreeDescriptorSets is attempting to free descriptorSet 0x%" PRIxLEAST64
+                    " belonging to Descriptor Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ".",
+ (uint64_t)(descriptorSet), pNode->parentObj, (uint64_t)(descriptorPool));
} else {
uint32_t objIndex = objTypeToIndex(pNode->objType);
assert(numTotalObjs > 0);
numTotalObjs--;
assert(numObjs[objIndex] > 0);
numObjs[objIndex]--;
- log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
- string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(descriptorSet), numTotalObjs, numObjs[objIndex],
- string_VkDebugReportObjectTypeEXT(pNode->objType));
+ log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
+ string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(descriptorSet), numTotalObjs, numObjs[objIndex],
+ string_VkDebugReportObjectTypeEXT(pNode->objType));
delete pNode;
VkDescriptorSetMap.erase(object_handle);
}
} else {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT) 0, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
- object_handle);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
+ object_handle);
}
}
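`free_command_buffer` and `free_descriptor_set` above both validate that the pool recorded at allocation time (`parentObj`) matches the pool named at free time, and report a mismatch instead of erasing the node. A hedged sketch of that parent check — `Child`, `free_from_pool`, and the result enum are illustrative names, not the layer's:

```cpp
#include <cstdint>
#include <unordered_map>

// Each tracked child remembers the pool it was allocated from,
// like OBJTRACK_NODE::parentObj.
struct Child {
    uint64_t parentPool;
};

std::unordered_map<uint64_t, Child> children;

enum FreeResult { FREED, POOL_MISMATCH, UNKNOWN_HANDLE };

// Unknown handles and wrong-pool frees are reported, not silently erased,
// matching OBJTRACK_COMMAND_POOL_MISMATCH / OBJTRACK_DESCRIPTOR_POOL_MISMATCH.
FreeResult free_from_pool(uint64_t handle, uint64_t pool) {
    auto it = children.find(handle);
    if (it == children.end())
        return UNKNOWN_HANDLE;
    if (it->second.parentPool != pool)
        return POOL_MISMATCH;
    children.erase(it);
    return FREED;
}
```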
-static void create_queue(VkDevice dispatchable_object, VkQueue vkObj, VkDebugReportObjectTypeEXT objType)
-{
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
- reinterpret_cast<uint64_t>(vkObj));
+static void create_queue(VkDevice dispatchable_object, VkQueue vkObj, VkDebugReportObjectTypeEXT objType) {
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__,
+ OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
pNewObjNode->objType = objType;
- pNewObjNode->status = OBJSTATUS_NONE;
- pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
+ pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
VkQueueMap[reinterpret_cast<uint64_t>(vkObj)] = pNewObjNode;
uint32_t objIndex = objTypeToIndex(objType);
numObjs[objIndex]++;
numTotalObjs++;
}
-static void create_swapchain_image_obj(VkDevice dispatchable_object, VkImage vkObj, VkSwapchainKHR swapchain)
-{
- log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t) vkObj, __LINE__, OBJTRACK_NONE, "OBJTRACK",
- "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64 , object_track_index++, "SwapchainImage",
- (uint64_t)(vkObj));
-
- OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;
- pNewObjNode->objType = VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT;
- pNewObjNode->status = OBJSTATUS_NONE;
- pNewObjNode->vkObj = (uint64_t) vkObj;
- pNewObjNode->parentObj = (uint64_t) swapchain;
+static void create_swapchain_image_obj(VkDevice dispatchable_object, VkImage vkObj, VkSwapchainKHR swapchain) {
+ log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)vkObj,
+ __LINE__, OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ "SwapchainImage", (uint64_t)(vkObj));
+
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
+ pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
+ pNewObjNode->objType = VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = (uint64_t)vkObj;
+ pNewObjNode->parentObj = (uint64_t)swapchain;
swapchainImageMap[(uint64_t)(vkObj)] = pNewObjNode;
}
+static void create_device(VkInstance dispatchable_object, VkDevice vkObj, VkDebugReportObjectTypeEXT objType) {
+ log_msg(mid(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE,
+ "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
+ string_VkDebugReportObjectTypeEXT(objType), (uint64_t)(vkObj));
+
+ OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
+ pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
+ pNewObjNode->objType = objType;
+ pNewObjNode->status = OBJSTATUS_NONE;
+ pNewObjNode->vkObj = (uint64_t)(vkObj);
+ VkDeviceMap[(uint64_t)vkObj] = pNewObjNode;
+ uint32_t objIndex = objTypeToIndex(objType);
+ numObjs[objIndex]++;
+ numTotalObjs++;
+}
+
//
// Non-auto-generated API functions called by generated code
//
-VkResult
-explicit_CreateInstance(
- const VkInstanceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkInstance *pInstance)
-{
+VkResult explicit_CreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -645,13 +633,10 @@ explicit_CreateInstance(
initInstanceTable(*pInstance, fpGetInstanceProcAddr, object_tracker_instance_table_map);
VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(object_tracker_instance_table_map, *pInstance);
- my_data->report_data = debug_report_create_instance(
- pInstanceTable,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
+ my_data->report_data = debug_report_create_instance(pInstanceTable, *pInstance, pCreateInfo->enabledExtensionCount,
+ pCreateInfo->ppEnabledExtensionNames);
- initObjectTracker(my_data, pAllocator);
+ init_object_tracker(my_data, pAllocator);
createInstanceRegisterExtensions(pCreateInfo, *pInstance);
create_instance(*pInstance, *pInstance, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT);
@@ -659,12 +644,7 @@ explicit_CreateInstance(
return result;
}
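`explicit_CreateInstance` shows the standard layer-chaining step: pull the next link's `GetInstanceProcAddr` out of `chain_info`, resolve the downstream entry point, and fail with `VK_ERROR_INITIALIZATION_FAILED` if it cannot be resolved before dispatching down. A toy model of that call-down pattern, under the assumption of a single function-pointer link — `NextFn`, `layer_entry`, and `driver_entry` are invented for illustration:

```cpp
// Each layer holds a pointer to the next link's entry point and calls
// down after doing its own work, as the layer does via fpCreateInstance.
typedef int (*NextFn)(int);

static int driver_entry(int x) { return x + 1; } // terminates the chain

static NextFn g_next = nullptr;

static int layer_entry(int x) {
    // A real layer validates/tracks here, then dispatches down the chain.
    if (g_next == nullptr)
        return -1; // analogous to returning VK_ERROR_INITIALIZATION_FAILED
    return g_next(x);
}
```

The same resolve-then-dispatch shape reappears in `explicit_CreateDevice` below with `fpGetDeviceProcAddr` and `fpCreateDevice`.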
-void
-explicit_GetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice gpu,
- uint32_t* pCount,
- VkQueueFamilyProperties* pProperties)
-{
+void explicit_GetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice gpu, uint32_t *pCount, VkQueueFamilyProperties *pProperties) {
get_dispatch_table(object_tracker_instance_table_map, gpu)->GetPhysicalDeviceQueueFamilyProperties(gpu, pCount, pProperties);
loader_platform_thread_lock_mutex(&objLock);
@@ -673,20 +653,15 @@ explicit_GetPhysicalDeviceQueueFamilyProperties(
loader_platform_thread_unlock_mutex(&objLock);
}
-VkResult
-explicit_CreateDevice(
- VkPhysicalDevice gpu,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice)
-{
+VkResult explicit_CreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkDevice *pDevice) {
loader_platform_thread_lock_mutex(&objLock);
VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
loader_platform_thread_unlock_mutex(&objLock);
return VK_ERROR_INITIALIZATION_FAILED;
@@ -709,21 +684,25 @@ explicit_CreateDevice(
createDeviceRegisterExtensions(pCreateInfo, *pDevice);
- create_device(*pDevice, *pDevice, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT);
+ if (VkPhysicalDeviceMap.find((uint64_t)gpu) != VkPhysicalDeviceMap.end()) {
+ OBJTRACK_NODE *pNewObjNode = VkPhysicalDeviceMap[(uint64_t)gpu];
+ create_device((VkInstance)pNewObjNode->belongsTo, *pDevice, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT);
+ }
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-VkResult explicit_EnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices)
-{
+VkResult explicit_EnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount,
+ VkPhysicalDevice *pPhysicalDevices) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_instance(instance, instance, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, false);
loader_platform_thread_unlock_mutex(&objLock);
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_instance_table_map, instance)->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
+ VkResult result = get_dispatch_table(object_tracker_instance_table_map, instance)
+ ->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
loader_platform_thread_lock_mutex(&objLock);
if (result == VK_SUCCESS) {
if (pPhysicalDevices) {
@@ -736,13 +715,7 @@ VkResult explicit_EnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysi
return result;
}
-void
-explicit_GetDeviceQueue(
- VkDevice device,
- uint32_t queueNodeIndex,
- uint32_t queueIndex,
- VkQueue *pQueue)
-{
+void explicit_GetDeviceQueue(VkDevice device, uint32_t queueNodeIndex, uint32_t queueIndex, VkQueue *pQueue) {
loader_platform_thread_lock_mutex(&objLock);
validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
loader_platform_thread_unlock_mutex(&objLock);
@@ -755,15 +728,8 @@ explicit_GetDeviceQueue(
loader_platform_thread_unlock_mutex(&objLock);
}
-VkResult
-explicit_MapMemory(
- VkDevice device,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size,
- VkFlags flags,
- void **ppData)
-{
+VkResult explicit_MapMemory(VkDevice device, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size, VkFlags flags,
+ void **ppData) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= set_device_memory_status(device, mem, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, OBJSTATUS_GPU_MEM_MAPPED);
@@ -772,16 +738,13 @@ explicit_MapMemory(
if (skipCall == VK_TRUE)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->MapMemory(device, mem, offset, size, flags, ppData);
+ VkResult result =
+ get_dispatch_table(object_tracker_device_table_map, device)->MapMemory(device, mem, offset, size, flags, ppData);
return result;
}
-void
-explicit_UnmapMemory(
- VkDevice device,
- VkDeviceMemory mem)
-{
+void explicit_UnmapMemory(VkDevice device, VkDeviceMemory mem) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= reset_device_memory_status(device, mem, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, OBJSTATUS_GPU_MEM_MAPPED);
@@ -793,13 +756,7 @@ explicit_UnmapMemory(
get_dispatch_table(object_tracker_device_table_map, device)->UnmapMemory(device, mem);
}
-VkResult
-explicit_QueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo* pBindInfo,
- VkFence fence)
-{
+VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
loader_platform_thread_lock_mutex(&objLock);
validateQueueFlags(queue, "QueueBindSparse");
@@ -814,16 +771,13 @@ explicit_QueueBindSparse(
loader_platform_thread_unlock_mutex(&objLock);
- VkResult result = get_dispatch_table(object_tracker_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
+ VkResult result =
+ get_dispatch_table(object_tracker_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
return result;
}
-VkResult
-explicit_AllocateCommandBuffers(
- VkDevice device,
- const VkCommandBufferAllocateInfo *pAllocateInfo,
- VkCommandBuffer* pCommandBuffers)
-{
+VkResult explicit_AllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo,
+ VkCommandBuffer *pCommandBuffers) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
@@ -834,42 +788,42 @@ explicit_AllocateCommandBuffers(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->AllocateCommandBuffers(
- device, pAllocateInfo, pCommandBuffers);
+ VkResult result =
+ get_dispatch_table(object_tracker_device_table_map, device)->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers);
loader_platform_thread_lock_mutex(&objLock);
for (uint32_t i = 0; i < pAllocateInfo->commandBufferCount; i++) {
- alloc_command_buffer(device, pAllocateInfo->commandPool, pCommandBuffers[i], VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, pAllocateInfo->level);
+ alloc_command_buffer(device, pAllocateInfo->commandPool, pCommandBuffers[i], VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
+ pAllocateInfo->level);
}
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-VkResult
-explicit_AllocateDescriptorSets(
- VkDevice device,
- const VkDescriptorSetAllocateInfo *pAllocateInfo,
- VkDescriptorSet *pDescriptorSets)
-{
+VkResult explicit_AllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo,
+ VkDescriptorSet *pDescriptorSets) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
- skipCall |= validate_descriptor_pool(device, pAllocateInfo->descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
+ skipCall |=
+ validate_descriptor_pool(device, pAllocateInfo->descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
- skipCall |= validate_descriptor_set_layout(device, pAllocateInfo->pSetLayouts[i], VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, false);
+ skipCall |= validate_descriptor_set_layout(device, pAllocateInfo->pSetLayouts[i],
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, false);
}
loader_platform_thread_unlock_mutex(&objLock);
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->AllocateDescriptorSets(
- device, pAllocateInfo, pDescriptorSets);
+ VkResult result =
+ get_dispatch_table(object_tracker_device_table_map, device)->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&objLock);
for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
- alloc_descriptor_set(device, pAllocateInfo->descriptorPool, pDescriptorSets[i], VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT);
+ alloc_descriptor_set(device, pAllocateInfo->descriptorPool, pDescriptorSets[i],
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT);
}
loader_platform_thread_unlock_mutex(&objLock);
}
@@ -877,46 +831,35 @@ explicit_AllocateDescriptorSets(
return result;
}
-void
-explicit_FreeCommandBuffers(
- VkDevice device,
- VkCommandPool commandPool,
- uint32_t commandBufferCount,
- const VkCommandBuffer *pCommandBuffers)
-{
+void explicit_FreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount,
+ const VkCommandBuffer *pCommandBuffers) {
loader_platform_thread_lock_mutex(&objLock);
validate_command_pool(device, commandPool, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT, false);
validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
loader_platform_thread_unlock_mutex(&objLock);
- get_dispatch_table(object_tracker_device_table_map, device)->FreeCommandBuffers(device,
- commandPool, commandBufferCount, pCommandBuffers);
+ get_dispatch_table(object_tracker_device_table_map, device)
+ ->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
loader_platform_thread_lock_mutex(&objLock);
- for (uint32_t i = 0; i < commandBufferCount; i++)
- {
+ for (uint32_t i = 0; i < commandBufferCount; i++) {
free_command_buffer(device, commandPool, *pCommandBuffers);
pCommandBuffers++;
}
loader_platform_thread_unlock_mutex(&objLock);
}
-void
-explicit_DestroySwapchainKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- const VkAllocationCallbacks *pAllocator)
-{
+void explicit_DestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks *pAllocator) {
loader_platform_thread_lock_mutex(&objLock);
// A swapchain's images are implicitly deleted when the swapchain is deleted.
// Remove this swapchain's images from our map of such images.
- unordered_map<uint64_t, OBJTRACK_NODE*>::iterator itr = swapchainImageMap.begin();
+ unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = swapchainImageMap.begin();
while (itr != swapchainImageMap.end()) {
- OBJTRACK_NODE* pNode = (*itr).second;
+ OBJTRACK_NODE *pNode = (*itr).second;
if (pNode->parentObj == (uint64_t)(swapchain)) {
- swapchainImageMap.erase(itr++);
+ swapchainImageMap.erase(itr++);
} else {
- ++itr;
+ ++itr;
}
}
destroy_swapchain_khr(device, swapchain);
@@ -925,12 +868,7 @@ explicit_DestroySwapchainKHR(
get_dispatch_table(object_tracker_device_table_map, device)->DestroySwapchainKHR(device, swapchain, pAllocator);
}
-void
-explicit_FreeMemory(
- VkDevice device,
- VkDeviceMemory mem,
- const VkAllocationCallbacks* pAllocator)
-{
+void explicit_FreeMemory(VkDevice device, VkDeviceMemory mem, const VkAllocationCallbacks *pAllocator) {
loader_platform_thread_lock_mutex(&objLock);
validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
loader_platform_thread_unlock_mutex(&objLock);
@@ -942,34 +880,24 @@ explicit_FreeMemory(
loader_platform_thread_unlock_mutex(&objLock);
}
-VkResult
-explicit_FreeDescriptorSets(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- uint32_t count,
- const VkDescriptorSet *pDescriptorSets)
-{
+VkResult explicit_FreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count,
+ const VkDescriptorSet *pDescriptorSets) {
loader_platform_thread_lock_mutex(&objLock);
validate_descriptor_pool(device, descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
loader_platform_thread_unlock_mutex(&objLock);
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets);
+ VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
+ ->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets);
loader_platform_thread_lock_mutex(&objLock);
- for (uint32_t i=0; i<count; i++)
- {
+ for (uint32_t i = 0; i < count; i++) {
free_descriptor_set(device, descriptorPool, *pDescriptorSets++);
}
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-void
-explicit_DestroyDescriptorPool(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- const VkAllocationCallbacks *pAllocator)
-{
+void explicit_DestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
@@ -981,9 +909,9 @@ explicit_DestroyDescriptorPool(
// A DescriptorPool's descriptor sets are implicitly deleted when the pool is deleted.
// Remove this pool's descriptor sets from our descriptorSet map.
loader_platform_thread_lock_mutex(&objLock);
- unordered_map<uint64_t, OBJTRACK_NODE*>::iterator itr = VkDescriptorSetMap.begin();
+ unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = VkDescriptorSetMap.begin();
while (itr != VkDescriptorSetMap.end()) {
- OBJTRACK_NODE* pNode = (*itr).second;
+ OBJTRACK_NODE *pNode = (*itr).second;
auto del_itr = itr++;
if (pNode->parentObj == (uint64_t)(descriptorPool)) {
destroy_descriptor_set(device, (VkDescriptorSet)((*del_itr).first));
@@ -994,12 +922,7 @@ explicit_DestroyDescriptorPool(
get_dispatch_table(object_tracker_device_table_map, device)->DestroyDescriptorPool(device, descriptorPool, pAllocator);
}
-void
-explicit_DestroyCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- const VkAllocationCallbacks *pAllocator)
-{
+void explicit_DestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
@@ -1011,10 +934,10 @@ explicit_DestroyCommandPool(
loader_platform_thread_lock_mutex(&objLock);
// A CommandPool's command buffers are implicitly deleted when the pool is deleted.
// Remove this pool's cmdBuffers from our cmd buffer map.
- unordered_map<uint64_t, OBJTRACK_NODE*>::iterator itr = VkCommandBufferMap.begin();
- unordered_map<uint64_t, OBJTRACK_NODE*>::iterator del_itr;
+ unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = VkCommandBufferMap.begin();
+ unordered_map<uint64_t, OBJTRACK_NODE *>::iterator del_itr;
while (itr != VkCommandBufferMap.end()) {
- OBJTRACK_NODE* pNode = (*itr).second;
+ OBJTRACK_NODE *pNode = (*itr).second;
del_itr = itr++;
if (pNode->parentObj == (uint64_t)(commandPool)) {
destroy_command_buffer(reinterpret_cast<VkCommandBuffer>((*del_itr).first),
@@ -1026,13 +949,7 @@ explicit_DestroyCommandPool(
get_dispatch_table(object_tracker_device_table_map, device)->DestroyCommandPool(device, commandPool, pAllocator);
}
-VkResult
-explicit_GetSwapchainImagesKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint32_t *pCount,
- VkImage *pSwapchainImages)
-{
+VkResult explicit_GetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pCount, VkImage *pSwapchainImages) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
@@ -1040,7 +957,8 @@ explicit_GetSwapchainImagesKHR(
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);
+ VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
+ ->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);
if (pSwapchainImages != NULL) {
loader_platform_thread_lock_mutex(&objLock);
@@ -1053,35 +971,33 @@ explicit_GetSwapchainImagesKHR(
}
// TODO: Add special case to codegen to cover validating all the pipelines instead of just the first
-VkResult
-explicit_CreateGraphicsPipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t createInfoCount,
- const VkGraphicsPipelineCreateInfo *pCreateInfos,
- const VkAllocationCallbacks *pAllocator,
- VkPipeline *pPipelines)
-{
+VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
if (pCreateInfos) {
- for (uint32_t idx0=0; idx0<createInfoCount; ++idx0) {
+ for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) {
if (pCreateInfos[idx0].basePipelineHandle) {
- skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true);
+ skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true);
}
if (pCreateInfos[idx0].layout) {
- skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false);
+ skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false);
}
if (pCreateInfos[idx0].pStages) {
- for (uint32_t idx1=0; idx1<pCreateInfos[idx0].stageCount; ++idx1) {
+ for (uint32_t idx1 = 0; idx1 < pCreateInfos[idx0].stageCount; ++idx1) {
if (pCreateInfos[idx0].pStages[idx1].module) {
- skipCall |= validate_shader_module(device, pCreateInfos[idx0].pStages[idx1].module, VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false);
+ skipCall |= validate_shader_module(device, pCreateInfos[idx0].pStages[idx1].module,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false);
}
}
}
if (pCreateInfos[idx0].renderPass) {
- skipCall |= validate_render_pass(device, pCreateInfos[idx0].renderPass, VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT, false);
+ skipCall |=
+ validate_render_pass(device, pCreateInfos[idx0].renderPass, VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT, false);
}
}
}
@@ -1091,7 +1007,8 @@ explicit_CreateGraphicsPipelines(
loader_platform_thread_unlock_mutex(&objLock);
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+ VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
+ ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
loader_platform_thread_lock_mutex(&objLock);
if (result == VK_SUCCESS) {
for (uint32_t idx2 = 0; idx2 < createInfoCount; ++idx2) {
@@ -1103,28 +1020,25 @@ explicit_CreateGraphicsPipelines(
}
// TODO: Add special case to codegen to cover validating all the pipelines instead of just the first
-VkResult
-explicit_CreateComputePipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t createInfoCount,
- const VkComputePipelineCreateInfo *pCreateInfos,
- const VkAllocationCallbacks *pAllocator,
- VkPipeline *pPipelines)
-{
+VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_lock_mutex(&objLock);
skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
if (pCreateInfos) {
- for (uint32_t idx0=0; idx0<createInfoCount; ++idx0) {
+ for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) {
if (pCreateInfos[idx0].basePipelineHandle) {
- skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true);
+ skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true);
}
if (pCreateInfos[idx0].layout) {
- skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false);
+ skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false);
}
if (pCreateInfos[idx0].stage.module) {
- skipCall |= validate_shader_module(device, pCreateInfos[idx0].stage.module, VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false);
+ skipCall |= validate_shader_module(device, pCreateInfos[idx0].stage.module,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false);
}
}
}
@@ -1134,7 +1048,8 @@ explicit_CreateComputePipelines(
loader_platform_thread_unlock_mutex(&objLock);
if (skipCall)
return VK_ERROR_VALIDATION_FAILED_EXT;
- VkResult result = get_dispatch_table(object_tracker_device_table_map, device)->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+ VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
+ ->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
loader_platform_thread_lock_mutex(&objLock);
if (result == VK_SUCCESS) {
for (uint32_t idx1 = 0; idx1 < createInfoCount; ++idx1) {
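Every `explicit_*` hook touched by this diff follows the same shape: take `objLock`, validate the handles, release the lock, call down the dispatch chain, then re-lock to update the tracker's bookkeeping. A minimal self-contained sketch of that pattern (all names here — `objLock`, `liveObjects`, `explicit_DestroyThing` — are illustrative stand-ins, not the layer's real types):

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <unordered_map>

// Hypothetical stand-ins for the layer's mutex and object map.
static std::mutex objLock;
static std::unordered_map<uint64_t, bool> liveObjects; // handle -> tracked

static bool validate_handle(uint64_t handle) {
    // In the real layer this reports a debug message for unknown handles.
    return liveObjects.count(handle) != 0;
}

// The shared shape of the explicit_* hooks: lock, validate, unlock,
// dispatch down the chain, then re-lock to update bookkeeping.
static bool explicit_DestroyThing(uint64_t handle) {
    bool valid;
    {
        std::lock_guard<std::mutex> guard(objLock);
        valid = validate_handle(handle);
    }
    // The call into the next layer/driver happens here, outside the lock,
    // so the tracker never holds its mutex across a dispatch.
    {
        std::lock_guard<std::mutex> guard(objLock);
        liveObjects.erase(handle); // bookkeeping: forget the handle
    }
    return valid;
}
```

Keeping the dispatch call outside the critical section mirrors the layer's own lock/unlock bracketing around `get_dispatch_table(...)->...` calls.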
diff --git a/layers/param_checker.cpp b/layers/param_checker.cpp
deleted file mode 100644
index 70099b6ca..000000000
--- a/layers/param_checker.cpp
+++ /dev/null
@@ -1,7759 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Jeremy Hayes <jeremy@lunarg.com>
- * Author: Tony Barbour <tony@LunarG.com>
- * Author: Mark Lobodzinski <mark@LunarG.com>
- * Author: Dustin Graves <dustin@lunarg.com>
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <iostream>
-#include <string>
-#include <sstream>
-#include <unordered_map>
-#include <unordered_set>
-#include <vector>
-
-#include "vk_loader_platform.h"
-#include "vulkan/vk_layer.h"
-#include "vk_layer_config.h"
-#include "vk_enum_validate_helper.h"
-#include "vk_struct_validate_helper.h"
-
-#include "vk_layer_table.h"
-#include "vk_layer_data.h"
-#include "vk_layer_logging.h"
-#include "vk_layer_extension_utils.h"
-#include "vk_layer_utils.h"
-
-#include "param_check.h"
-
-struct layer_data {
- debug_report_data *report_data;
- std::vector<VkDebugReportCallbackEXT> logging_callback;
-
- //TODO: Split instance/device structs
- //Device Data
- //Map for queue family index to queue count
- std::unordered_map<uint32_t, uint32_t> queueFamilyIndexMap;
-
- layer_data() :
- report_data(nullptr)
- {};
-};
-
-static std::unordered_map<void*, layer_data*> layer_data_map;
-static device_table_map pc_device_table_map;
-static instance_table_map pc_instance_table_map;
-
-// "my instance data"
-debug_report_data *mid(VkInstance object)
-{
- dispatch_key key = get_dispatch_key(object);
- layer_data *data = get_my_data_ptr(key, layer_data_map);
-#if DISPATCH_MAP_DEBUG
- fprintf(stderr, "MID: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
-#endif
- assert(data != NULL);
-
- return data->report_data;
-}
-
-// "my device data"
-debug_report_data *mdd(void* object)
-{
- dispatch_key key = get_dispatch_key(object);
- layer_data *data = get_my_data_ptr(key, layer_data_map);
-#if DISPATCH_MAP_DEBUG
- fprintf(stderr, "MDD: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
-#endif
- assert(data != NULL);
- return data->report_data;
-}
-
-static void InitParamChecker(layer_data *data, const VkAllocationCallbacks *pAllocator)
-{
- VkDebugReportCallbackEXT callback;
- uint32_t report_flags = getLayerOptionFlags("ParamCheckerReportFlags", 0);
-
- uint32_t debug_action = 0;
- getLayerOptionEnum("ParamCheckerDebugAction", (uint32_t *) &debug_action);
- if(debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- FILE *log_output = NULL;
- const char* option_str = getLayerOption("ParamCheckerLogFilename");
- log_output = getLayerLogOutput(option_str, "ParamChecker");
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = log_callback;
- dbgCreateInfo.pUserData = log_output;
-
- layer_create_msg_callback(data->report_data, &dbgCreateInfo, pAllocator, &callback);
- data->logging_callback.push_back(callback);
- }
-
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = win32_debug_output_msg;
- dbgCreateInfo.pUserData = NULL;
-
- layer_create_msg_callback(data->report_data, &dbgCreateInfo, pAllocator, &callback);
- data->logging_callback.push_back(callback);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
- VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
- VkResult result = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
-
- if (result == VK_SUCCESS)
- {
- layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- result = layer_create_msg_callback(data->report_data, pCreateInfo, pAllocator, pMsgCallback);
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT msgCallback,
- const VkAllocationCallbacks *pAllocator)
-{
- VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
- pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
-
- layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- layer_destroy_msg_callback(data->report_data, msgCallback, pAllocator);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
- VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
- pTable->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
-}
-
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
- return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
-}
-
-static const VkLayerProperties pc_global_layers[] = {
- {
- "VK_LAYER_LUNARG_param_checker",
- VK_API_VERSION,
- 1,
- "LunarG Validation Layer",
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties* pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers),
- pc_global_layers,
- pCount, pProperties);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
- VkPhysicalDevice physicalDevice,
- const char* pLayerName,
- uint32_t* pCount,
- VkExtensionProperties* pProperties)
-{
- /* ParamChecker does not have any physical device extensions */
- if (pLayerName == NULL) {
- return get_dispatch_table(pc_instance_table_map, physicalDevice)->EnumerateDeviceExtensionProperties(
- physicalDevice,
- NULL,
- pCount,
- pProperties);
- } else {
- return util_GetExtensionProperties(0, NULL, pCount, pProperties);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pCount,
- VkLayerProperties* pProperties)
-{
-
- /* ParamChecker's physical device layers are the same as global */
- return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers,
- pCount, pProperties);
-}
-
-static
-std::string EnumeratorString(VkResult const& enumerator)
-{
- switch(enumerator)
- {
- case VK_RESULT_MAX_ENUM:
- {
- return "VK_RESULT_MAX_ENUM";
- break;
- }
- case VK_ERROR_LAYER_NOT_PRESENT:
- {
- return "VK_ERROR_LAYER_NOT_PRESENT";
- break;
- }
- case VK_ERROR_INCOMPATIBLE_DRIVER:
- {
- return "VK_ERROR_INCOMPATIBLE_DRIVER";
- break;
- }
- case VK_ERROR_MEMORY_MAP_FAILED:
- {
- return "VK_ERROR_MEMORY_MAP_FAILED";
- break;
- }
- case VK_INCOMPLETE:
- {
- return "VK_INCOMPLETE";
- break;
- }
- case VK_ERROR_OUT_OF_HOST_MEMORY:
- {
- return "VK_ERROR_OUT_OF_HOST_MEMORY";
- break;
- }
- case VK_ERROR_INITIALIZATION_FAILED:
- {
- return "VK_ERROR_INITIALIZATION_FAILED";
- break;
- }
- case VK_NOT_READY:
- {
- return "VK_NOT_READY";
- break;
- }
- case VK_ERROR_OUT_OF_DEVICE_MEMORY:
- {
- return "VK_ERROR_OUT_OF_DEVICE_MEMORY";
- break;
- }
- case VK_EVENT_SET:
- {
- return "VK_EVENT_SET";
- break;
- }
- case VK_TIMEOUT:
- {
- return "VK_TIMEOUT";
- break;
- }
- case VK_EVENT_RESET:
- {
- return "VK_EVENT_RESET";
- break;
- }
- case VK_SUCCESS:
- {
- return "VK_SUCCESS";
- break;
- }
- case VK_ERROR_EXTENSION_NOT_PRESENT:
- {
- return "VK_ERROR_EXTENSION_NOT_PRESENT";
- break;
- }
- case VK_ERROR_DEVICE_LOST:
- {
- return "VK_ERROR_DEVICE_LOST";
- break;
- }
- default:
- {
- return "unrecognized enumerator";
- break;
- }
- }
-}
-
-static
-bool ValidateEnumerator(VkFormatFeatureFlagBits const& enumerator)
-{
- VkFormatFeatureFlagBits allFlags = (VkFormatFeatureFlagBits)(
- VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT |
- VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT |
- VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT |
- VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT |
- VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT |
- VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT |
- VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT |
- VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT |
- VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT |
- VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT |
- VK_FORMAT_FEATURE_BLIT_SRC_BIT |
- VK_FORMAT_FEATURE_BLIT_DST_BIT |
- VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkFormatFeatureFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_BLIT_SRC_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_BLIT_SRC_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_BLIT_DST_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_BLIT_DST_BIT");
- }
- if(enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT)
- {
- strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
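The deleted param_checker repeats this `ValidateEnumerator`/`EnumeratorString` pair once per flag type, each time collecting bit names into a vector and joining with `'|'` (note the original joins by comparing against `strings.back()` by value, which only works because flag names are unique). A compact table-driven sketch of the same idea, joining by position instead — `FlagsToString` and its table are illustrative, not part of the layer:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Generic version of the repeated EnumeratorString() bodies: test each
// known bit, collect its name, and join the names with '|'.
static std::string FlagsToString(uint32_t flags,
                                 const std::vector<std::pair<uint32_t, const char *>> &names) {
    std::string out;
    for (const auto &entry : names) {
        if (flags & entry.first) {
            if (!out.empty())
                out += '|'; // separator decided by position, not by value
            out += entry.second;
        }
    }
    return out;
}
```

Unknown bits are simply skipped here; the original instead returns "unrecognized enumerator" when any bit falls outside `allFlags`.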
-
-static
-bool ValidateEnumerator(VkImageUsageFlagBits const& enumerator)
-{
- VkImageUsageFlagBits allFlags = (VkImageUsageFlagBits)(VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT |
- VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
- VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
- VK_IMAGE_USAGE_STORAGE_BIT |
- VK_IMAGE_USAGE_SAMPLED_BIT |
- VK_IMAGE_USAGE_TRANSFER_DST_BIT |
- VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT |
- VK_IMAGE_USAGE_TRANSFER_SRC_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkImageUsageFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_STORAGE_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_STORAGE_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_SAMPLED_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_SAMPLED_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_TRANSFER_DST_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_TRANSFER_DST_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT");
- }
- if(enumerator & VK_IMAGE_USAGE_TRANSFER_SRC_BIT)
- {
- strings.push_back("VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkQueueFlagBits const& enumerator)
-{
- VkQueueFlagBits allFlags = (VkQueueFlagBits)(
- VK_QUEUE_TRANSFER_BIT |
- VK_QUEUE_COMPUTE_BIT |
- VK_QUEUE_SPARSE_BINDING_BIT |
- VK_QUEUE_GRAPHICS_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkQueueFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_QUEUE_TRANSFER_BIT)
- {
- strings.push_back("VK_QUEUE_TRANSFER_BIT");
- }
- if(enumerator & VK_QUEUE_COMPUTE_BIT)
- {
- strings.push_back("VK_QUEUE_COMPUTE_BIT");
- }
- if(enumerator & VK_QUEUE_SPARSE_BINDING_BIT)
- {
- strings.push_back("VK_QUEUE_SPARSE_BINDING_BIT");
- }
- if(enumerator & VK_QUEUE_GRAPHICS_BIT)
- {
- strings.push_back("VK_QUEUE_GRAPHICS_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkMemoryPropertyFlagBits const& enumerator)
-{
- VkMemoryPropertyFlagBits allFlags = (VkMemoryPropertyFlagBits)(VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT |
- VK_MEMORY_PROPERTY_HOST_COHERENT_BIT |
- VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
- VK_MEMORY_PROPERTY_HOST_CACHED_BIT |
- VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkMemoryPropertyFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
- {
- strings.push_back("VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT");
- }
- if(enumerator & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)
- {
- strings.push_back("VK_MEMORY_PROPERTY_HOST_COHERENT_BIT");
- }
- if(enumerator & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
- {
- strings.push_back("VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT");
- }
- if(enumerator & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)
- {
- strings.push_back("VK_MEMORY_PROPERTY_HOST_CACHED_BIT");
- }
- if(enumerator & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)
- {
- strings.push_back("VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkMemoryHeapFlagBits const& enumerator)
-{
- VkMemoryHeapFlagBits allFlags = (VkMemoryHeapFlagBits)(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkMemoryHeapFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
- {
- strings.push_back("VK_MEMORY_HEAP_DEVICE_LOCAL_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkSparseImageFormatFlagBits const& enumerator)
-{
- VkSparseImageFormatFlagBits allFlags = (VkSparseImageFormatFlagBits)(VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT |
- VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT |
- VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkSparseImageFormatFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT)
- {
- strings.push_back("VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT");
- }
- if(enumerator & VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT)
- {
- strings.push_back("VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT");
- }
- if(enumerator & VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT)
- {
- strings.push_back("VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkFenceCreateFlagBits const& enumerator)
-{
- VkFenceCreateFlagBits allFlags = (VkFenceCreateFlagBits)(VK_FENCE_CREATE_SIGNALED_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkFenceCreateFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_FENCE_CREATE_SIGNALED_BIT)
- {
- strings.push_back("VK_FENCE_CREATE_SIGNALED_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkQueryPipelineStatisticFlagBits const& enumerator)
-{
- VkQueryPipelineStatisticFlagBits allFlags = (VkQueryPipelineStatisticFlagBits)(VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT |
- VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT |
- VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT |
- VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT |
- VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT |
- VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT |
- VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT |
- VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT |
- VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT |
- VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT |
- VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkQueryPipelineStatisticFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT");
- }
- if(enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT)
- {
- strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkQueryResultFlagBits const& enumerator)
-{
- VkQueryResultFlagBits allFlags = (VkQueryResultFlagBits)(VK_QUERY_RESULT_PARTIAL_BIT |
- VK_QUERY_RESULT_WITH_AVAILABILITY_BIT |
- VK_QUERY_RESULT_WAIT_BIT |
- VK_QUERY_RESULT_64_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkQueryResultFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_QUERY_RESULT_PARTIAL_BIT)
- {
- strings.push_back("VK_QUERY_RESULT_PARTIAL_BIT");
- }
- if(enumerator & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)
- {
- strings.push_back("VK_QUERY_RESULT_WITH_AVAILABILITY_BIT");
- }
- if(enumerator & VK_QUERY_RESULT_WAIT_BIT)
- {
- strings.push_back("VK_QUERY_RESULT_WAIT_BIT");
- }
- if(enumerator & VK_QUERY_RESULT_64_BIT)
- {
- strings.push_back("VK_QUERY_RESULT_64_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkBufferUsageFlagBits const& enumerator)
-{
- VkBufferUsageFlagBits allFlags = (VkBufferUsageFlagBits)(VK_BUFFER_USAGE_VERTEX_BUFFER_BIT |
- VK_BUFFER_USAGE_INDEX_BUFFER_BIT |
- VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT |
- VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT |
- VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
- VK_BUFFER_USAGE_TRANSFER_DST_BIT |
- VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT |
- VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
- VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkBufferUsageFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_BUFFER_USAGE_VERTEX_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_VERTEX_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_INDEX_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_INDEX_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_STORAGE_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_TRANSFER_DST_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_TRANSFER_DST_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_TRANSFER_SRC_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
- }
- if(enumerator & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT)
- {
- strings.push_back("VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkBufferCreateFlagBits const& enumerator)
-{
- VkBufferCreateFlagBits allFlags = (VkBufferCreateFlagBits)(VK_BUFFER_CREATE_SPARSE_ALIASED_BIT |
- VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT |
- VK_BUFFER_CREATE_SPARSE_BINDING_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkBufferCreateFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_BUFFER_CREATE_SPARSE_ALIASED_BIT)
- {
- strings.push_back("VK_BUFFER_CREATE_SPARSE_ALIASED_BIT");
- }
- if(enumerator & VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT)
- {
- strings.push_back("VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT");
- }
- if(enumerator & VK_BUFFER_CREATE_SPARSE_BINDING_BIT)
- {
- strings.push_back("VK_BUFFER_CREATE_SPARSE_BINDING_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkImageCreateFlagBits const& enumerator)
-{
- VkImageCreateFlagBits allFlags = (VkImageCreateFlagBits)(VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT |
- VK_IMAGE_CREATE_SPARSE_ALIASED_BIT |
- VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT |
- VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT |
- VK_IMAGE_CREATE_SPARSE_BINDING_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkImageCreateFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT)
- {
- strings.push_back("VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT");
- }
- if(enumerator & VK_IMAGE_CREATE_SPARSE_ALIASED_BIT)
- {
- strings.push_back("VK_IMAGE_CREATE_SPARSE_ALIASED_BIT");
- }
- if(enumerator & VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT)
- {
- strings.push_back("VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT");
- }
- if(enumerator & VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT)
- {
- strings.push_back("VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT");
- }
- if(enumerator & VK_IMAGE_CREATE_SPARSE_BINDING_BIT)
- {
- strings.push_back("VK_IMAGE_CREATE_SPARSE_BINDING_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkColorComponentFlagBits const& enumerator)
-{
- VkColorComponentFlagBits allFlags = (VkColorComponentFlagBits)(VK_COLOR_COMPONENT_A_BIT |
- VK_COLOR_COMPONENT_B_BIT |
- VK_COLOR_COMPONENT_G_BIT |
- VK_COLOR_COMPONENT_R_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkColorComponentFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_COLOR_COMPONENT_A_BIT)
- {
- strings.push_back("VK_COLOR_COMPONENT_A_BIT");
- }
- if(enumerator & VK_COLOR_COMPONENT_B_BIT)
- {
- strings.push_back("VK_COLOR_COMPONENT_B_BIT");
- }
- if(enumerator & VK_COLOR_COMPONENT_G_BIT)
- {
- strings.push_back("VK_COLOR_COMPONENT_G_BIT");
- }
- if(enumerator & VK_COLOR_COMPONENT_R_BIT)
- {
- strings.push_back("VK_COLOR_COMPONENT_R_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkPipelineCreateFlagBits const& enumerator)
-{
- VkPipelineCreateFlagBits allFlags = (VkPipelineCreateFlagBits)(VK_PIPELINE_CREATE_DERIVATIVE_BIT |
- VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT |
- VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkPipelineCreateFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_PIPELINE_CREATE_DERIVATIVE_BIT)
- {
- strings.push_back("VK_PIPELINE_CREATE_DERIVATIVE_BIT");
- }
- if(enumerator & VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT)
- {
- strings.push_back("VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT");
- }
- if(enumerator & VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT)
- {
- strings.push_back("VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkShaderStageFlagBits const& enumerator)
-{
- VkShaderStageFlagBits allFlags = (VkShaderStageFlagBits)(VK_SHADER_STAGE_ALL |
- VK_SHADER_STAGE_FRAGMENT_BIT |
- VK_SHADER_STAGE_GEOMETRY_BIT |
- VK_SHADER_STAGE_COMPUTE_BIT |
- VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT |
- VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT |
- VK_SHADER_STAGE_VERTEX_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkShaderStageFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_SHADER_STAGE_ALL)
- {
- strings.push_back("VK_SHADER_STAGE_ALL");
- }
- if(enumerator & VK_SHADER_STAGE_FRAGMENT_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_FRAGMENT_BIT");
- }
- if(enumerator & VK_SHADER_STAGE_GEOMETRY_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_GEOMETRY_BIT");
- }
- if(enumerator & VK_SHADER_STAGE_COMPUTE_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_COMPUTE_BIT");
- }
- if(enumerator & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT");
- }
- if(enumerator & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT");
- }
- if(enumerator & VK_SHADER_STAGE_VERTEX_BIT)
- {
- strings.push_back("VK_SHADER_STAGE_VERTEX_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkPipelineStageFlagBits const& enumerator)
-{
- VkPipelineStageFlagBits allFlags = (VkPipelineStageFlagBits)(
- VK_PIPELINE_STAGE_ALL_COMMANDS_BIT |
- VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT |
- VK_PIPELINE_STAGE_HOST_BIT |
- VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT |
- VK_PIPELINE_STAGE_TRANSFER_BIT |
- VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT |
- VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT |
- VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT |
- VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT |
- VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT |
- VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT |
- VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT |
- VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT |
- VK_PIPELINE_STAGE_VERTEX_SHADER_BIT |
- VK_PIPELINE_STAGE_VERTEX_INPUT_BIT |
- VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT |
- VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkPipelineStageFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_PIPELINE_STAGE_ALL_COMMANDS_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_ALL_COMMANDS_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_HOST_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_HOST_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_TRANSFER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_TRANSFER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_VERTEX_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_VERTEX_SHADER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_VERTEX_INPUT_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_VERTEX_INPUT_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT");
- }
- if(enumerator & VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT)
- {
- strings.push_back("VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkAccessFlagBits const& enumerator)
-{
- VkAccessFlagBits allFlags = (VkAccessFlagBits)(
- VK_ACCESS_INDIRECT_COMMAND_READ_BIT |
- VK_ACCESS_INDEX_READ_BIT |
- VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT |
- VK_ACCESS_UNIFORM_READ_BIT |
- VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
- VK_ACCESS_SHADER_READ_BIT |
- VK_ACCESS_SHADER_WRITE_BIT |
- VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
- VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
- VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
- VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT |
- VK_ACCESS_TRANSFER_READ_BIT |
- VK_ACCESS_TRANSFER_WRITE_BIT |
- VK_ACCESS_HOST_READ_BIT |
- VK_ACCESS_HOST_WRITE_BIT |
- VK_ACCESS_MEMORY_READ_BIT |
- VK_ACCESS_MEMORY_WRITE_BIT);
-
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkAccessFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_ACCESS_INDIRECT_COMMAND_READ_BIT)
- {
- strings.push_back("VK_ACCESS_INDIRECT_COMMAND_READ_BIT");
- }
- if(enumerator & VK_ACCESS_INDEX_READ_BIT)
- {
- strings.push_back("VK_ACCESS_INDEX_READ_BIT");
- }
- if(enumerator & VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT)
- {
- strings.push_back("VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT");
- }
- if(enumerator & VK_ACCESS_UNIFORM_READ_BIT)
- {
- strings.push_back("VK_ACCESS_UNIFORM_READ_BIT");
- }
- if(enumerator & VK_ACCESS_INPUT_ATTACHMENT_READ_BIT)
- {
- strings.push_back("VK_ACCESS_INPUT_ATTACHMENT_READ_BIT");
- }
- if(enumerator & VK_ACCESS_SHADER_READ_BIT)
- {
- strings.push_back("VK_ACCESS_SHADER_READ_BIT");
- }
- if(enumerator & VK_ACCESS_SHADER_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_SHADER_WRITE_BIT");
- }
- if(enumerator & VK_ACCESS_COLOR_ATTACHMENT_READ_BIT)
- {
- strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_READ_BIT");
- }
- if(enumerator & VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT");
- }
- if(enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT)
- {
- strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT");
- }
- if(enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT");
- }
- if(enumerator & VK_ACCESS_TRANSFER_READ_BIT)
- {
- strings.push_back("VK_ACCESS_TRANSFER_READ_BIT");
- }
- if(enumerator & VK_ACCESS_TRANSFER_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_TRANSFER_WRITE_BIT");
- }
- if(enumerator & VK_ACCESS_HOST_READ_BIT)
- {
- strings.push_back("VK_ACCESS_HOST_READ_BIT");
- }
- if(enumerator & VK_ACCESS_HOST_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_HOST_WRITE_BIT");
- }
- if(enumerator & VK_ACCESS_MEMORY_READ_BIT)
- {
- strings.push_back("VK_ACCESS_MEMORY_READ_BIT");
- }
- if(enumerator & VK_ACCESS_MEMORY_WRITE_BIT)
- {
- strings.push_back("VK_ACCESS_MEMORY_WRITE_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkCommandPoolCreateFlagBits const& enumerator)
-{
- VkCommandPoolCreateFlagBits allFlags = (VkCommandPoolCreateFlagBits)(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT |
- VK_COMMAND_POOL_CREATE_TRANSIENT_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkCommandPoolCreateFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT)
- {
- strings.push_back("VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT");
- }
- if(enumerator & VK_COMMAND_POOL_CREATE_TRANSIENT_BIT)
- {
- strings.push_back("VK_COMMAND_POOL_CREATE_TRANSIENT_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkCommandPoolResetFlagBits const& enumerator)
-{
- VkCommandPoolResetFlagBits allFlags = (VkCommandPoolResetFlagBits)(VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkCommandPoolResetFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT)
- {
- strings.push_back("VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkCommandBufferUsageFlags const& enumerator)
-{
- VkCommandBufferUsageFlags allFlags = (VkCommandBufferUsageFlags)(VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT |
- VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT |
- VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkCommandBufferUsageFlags const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)
- {
- strings.push_back("VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT");
- }
- if(enumerator & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT)
- {
- strings.push_back("VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT");
- }
- if(enumerator & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT)
- {
- strings.push_back("VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkCommandBufferResetFlagBits const& enumerator)
-{
- VkCommandBufferResetFlagBits allFlags = (VkCommandBufferResetFlagBits)(VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkCommandBufferResetFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT)
- {
- strings.push_back("VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkImageAspectFlagBits const& enumerator)
-{
- VkImageAspectFlagBits allFlags = (VkImageAspectFlagBits)(VK_IMAGE_ASPECT_METADATA_BIT |
- VK_IMAGE_ASPECT_STENCIL_BIT |
- VK_IMAGE_ASPECT_DEPTH_BIT |
- VK_IMAGE_ASPECT_COLOR_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkImageAspectFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_IMAGE_ASPECT_METADATA_BIT)
- {
- strings.push_back("VK_IMAGE_ASPECT_METADATA_BIT");
- }
- if(enumerator & VK_IMAGE_ASPECT_STENCIL_BIT)
- {
- strings.push_back("VK_IMAGE_ASPECT_STENCIL_BIT");
- }
- if(enumerator & VK_IMAGE_ASPECT_DEPTH_BIT)
- {
- strings.push_back("VK_IMAGE_ASPECT_DEPTH_BIT");
- }
- if(enumerator & VK_IMAGE_ASPECT_COLOR_BIT)
- {
- strings.push_back("VK_IMAGE_ASPECT_COLOR_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static
-bool ValidateEnumerator(VkQueryControlFlagBits const& enumerator)
-{
- VkQueryControlFlagBits allFlags = (VkQueryControlFlagBits)(VK_QUERY_CONTROL_PRECISE_BIT);
- if(enumerator & (~allFlags))
- {
- return false;
- }
-
- return true;
-}
-
-static
-std::string EnumeratorString(VkQueryControlFlagBits const& enumerator)
-{
- if(!ValidateEnumerator(enumerator))
- {
- return "unrecognized enumerator";
- }
-
- std::vector<std::string> strings;
- if(enumerator & VK_QUERY_CONTROL_PRECISE_BIT)
- {
- strings.push_back("VK_QUERY_CONTROL_PRECISE_BIT");
- }
-
- std::string enumeratorString;
- for(auto const& string : strings)
- {
- enumeratorString += string;
-
- if(string != strings.back())
- {
- enumeratorString += '|';
- }
- }
-
- return enumeratorString;
-}
-
-static const int MaxParamCheckerStringLength = 256;
-
-static
-VkBool32 validate_string(layer_data *my_data, const char *apiName, const char *stringName, const char *validateString)
-{
- VkBool32 skipCall = VK_FALSE;
-
- VkStringErrorFlags result = vk_string_validate(MaxParamCheckerStringLength, validateString);
-
- if (result == VK_STRING_ERROR_NONE) {
- return skipCall;
- } else if (result & VK_STRING_ERROR_LENGTH) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "%s: string %s exceeds max length %d", apiName, stringName, MaxParamCheckerStringLength);
- } else if (result & VK_STRING_ERROR_BAD_DATA) {
- skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "%s: string %s contains invalid characters or is badly formed", apiName, stringName);
- }
- return skipCall;
-}
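`vk_string_validate` is defined elsewhere in the layer utilities; a rough standalone sketch of the kind of checks it performs follows. The names, error codes, and the printable-ASCII rule here are assumptions for illustration, not the layer's actual implementation:

```cpp
#include <cstddef>

enum StringError { STRING_OK = 0, STRING_ERROR_LENGTH = 1, STRING_ERROR_BAD_DATA = 2 };

// Reject strings that exceed max_length or contain non-printable bytes.
static StringError check_string(const char* s, size_t max_length) {
    if (s == nullptr) return STRING_ERROR_BAD_DATA;
    // Scan at most max_length + 1 bytes looking for the terminator.
    size_t len = 0;
    while (len <= max_length && s[len] != '\0') ++len;
    if (len > max_length) return STRING_ERROR_LENGTH;
    for (size_t i = 0; i < len; ++i) {
        unsigned char c = static_cast<unsigned char>(s[i]);
        // Simplistic stand-in for the real "bad data" (UTF-8) validation.
        if (c < 0x20 || c > 0x7e) return STRING_ERROR_BAD_DATA;
    }
    return STRING_OK;
}
```

The `validate_string` wrapper above maps these two failure modes onto the `VK_STRING_ERROR_LENGTH` and `VK_STRING_ERROR_BAD_DATA` log messages.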
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(
- const VkInstanceCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkInstance* pInstance)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
-
- if (skipCall == VK_FALSE) {
- VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
- if (fpCreateInstance == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
- if (result != VK_SUCCESS)
- return result;
-
- layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
- VkLayerInstanceDispatchTable *pTable = initInstanceTable(*pInstance, fpGetInstanceProcAddr, pc_instance_table_map);
-
- my_data->report_data = debug_report_create_instance(
- pTable,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
-
- InitParamChecker(my_data, pAllocator);
- }
-
-    // Ordinarily we'd check these before calling down the chain, but none of the
-    // layer support is in place until now; if the call survives, report issues here.
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
- if (pCreateInfo->pApplicationInfo) {
- if (pCreateInfo->pApplicationInfo->pApplicationName) {
- skipCall |= validate_string(my_device_data, "vkCreateInstance()", "VkInstanceCreateInfo->VkApplicationInfo->pApplicationName",
- pCreateInfo->pApplicationInfo->pApplicationName);
- }
-
- if (pCreateInfo->pApplicationInfo->pEngineName) {
- skipCall |= validate_string(my_device_data, "vkCreateInstance()", "VkInstanceCreateInfo->VkApplicationInfo->pEngineName",
- pCreateInfo->pApplicationInfo->pEngineName);
- }
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(
- VkInstance instance,
- const VkAllocationCallbacks* pAllocator)
-{
- // Grab the key before the instance is destroyed.
- dispatch_key key = get_dispatch_key(instance);
- VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
- pTable->DestroyInstance(instance, pAllocator);
-
- // Clean up logging callback, if any
- layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- while (my_data->logging_callback.size() > 0) {
- VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
- layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
- my_data->logging_callback.pop_back();
- }
-
- layer_debug_report_destroy_instance(mid(instance));
-    layer_data_map.erase(key); // erase by dispatch key; erasing by table pointer never matches
-
- pc_instance_table_map.erase(key);
-}
-
-bool PostEnumeratePhysicalDevices(
- VkInstance instance,
- uint32_t* pPhysicalDeviceCount,
- VkPhysicalDevice* pPhysicalDevices,
- VkResult result)
-{
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkEnumeratePhysicalDevices parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mid(instance), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(
- VkInstance instance,
- uint32_t* pPhysicalDeviceCount,
- VkPhysicalDevice* pPhysicalDevices)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkEnumeratePhysicalDevices(
- my_data->report_data,
- pPhysicalDeviceCount,
- pPhysicalDevices);
-
- if (skipCall == VK_FALSE) {
- result = get_dispatch_table(pc_instance_table_map, instance)->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
-
- PostEnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices, result);
- }
-
- return result;
-}
-
-bool PostGetPhysicalDeviceFeatures(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceFeatures* pFeatures)
-{
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFeatures(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceFeatures* pFeatures)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceFeatures(
- my_data->report_data,
- pFeatures);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceFeatures(physicalDevice, pFeatures);
-
- PostGetPhysicalDeviceFeatures(physicalDevice, pFeatures);
- }
-}
-
-bool PostGetPhysicalDeviceFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkFormatProperties* pFormatProperties)
-{
-
- if(format < VK_FORMAT_BEGIN_RANGE ||
- format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkFormatProperties* pFormatProperties)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceFormatProperties(
- my_data->report_data,
- format,
- pFormatProperties);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
-
- PostGetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
- }
-}
-
-bool PostGetPhysicalDeviceImageFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkImageType type,
- VkImageTiling tiling,
- VkImageUsageFlags usage,
- VkImageCreateFlags flags,
- VkImageFormatProperties* pImageFormatProperties,
- VkResult result)
-{
-
- if(format < VK_FORMAT_BEGIN_RANGE ||
- format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceImageFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
- return false;
- }
-
- if(type < VK_IMAGE_TYPE_BEGIN_RANGE ||
- type > VK_IMAGE_TYPE_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceImageFormatProperties parameter, VkImageType type, is an unrecognized enumerator");
- return false;
- }
-
- if(tiling < VK_IMAGE_TILING_BEGIN_RANGE ||
- tiling > VK_IMAGE_TILING_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceImageFormatProperties parameter, VkImageTiling tiling, is an unrecognized enumerator");
- return false;
- }
-
-
- if(pImageFormatProperties != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkGetPhysicalDeviceImageFormatProperties parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkImageType type,
- VkImageTiling tiling,
- VkImageUsageFlags usage,
- VkImageCreateFlags flags,
- VkImageFormatProperties* pImageFormatProperties)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceImageFormatProperties(
- my_data->report_data,
- format,
- type,
- tiling,
- usage,
- flags,
- pImageFormatProperties);
-
- if (skipCall == VK_FALSE) {
- result = get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties);
-
- PostGetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties, result);
- }
-
- return result;
-}
-
-bool PostGetPhysicalDeviceProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceProperties* pProperties)
-{
-
- if(pProperties != nullptr)
- {
- if(pProperties->deviceType < VK_PHYSICAL_DEVICE_TYPE_BEGIN_RANGE ||
- pProperties->deviceType > VK_PHYSICAL_DEVICE_TYPE_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceProperties parameter, VkPhysicalDeviceType pProperties->deviceType, is an unrecognized enumerator");
- return false;
- }
-
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceProperties* pProperties)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceProperties(
- my_data->report_data,
- pProperties);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceProperties(physicalDevice, pProperties);
-
- PostGetPhysicalDeviceProperties(physicalDevice, pProperties);
- }
-}
-
-bool PostGetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pCount,
- VkQueueFamilyProperties* pQueueProperties)
-{
-
- if(pQueueProperties == nullptr && pCount != nullptr)
- {
- }
-
- if(pQueueProperties != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pQueueFamilyPropertyCount,
- VkQueueFamilyProperties* pQueueFamilyProperties)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceQueueFamilyProperties(
- my_data->report_data,
- pQueueFamilyPropertyCount,
- pQueueFamilyProperties);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount, pQueueFamilyProperties);
-
- PostGetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount, pQueueFamilyProperties);
- }
-}
-
-bool PostGetPhysicalDeviceMemoryProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceMemoryProperties* pMemoryProperties)
-{
-
- if(pMemoryProperties != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceMemoryProperties* pMemoryProperties)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceMemoryProperties(
- my_data->report_data,
- pMemoryProperties);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
-
- PostGetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
- }
-}
-
-void validateDeviceCreateInfo(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const std::vector<VkQueueFamilyProperties> properties) {
- std::unordered_set<uint32_t> set;
- for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) {
- if (set.count(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex)) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex, is not unique within this structure.", i);
- } else {
- set.insert(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex);
- }
- if (pCreateInfo->pQueueCreateInfos[i].queueCount == 0) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount, cannot be zero.", i);
- }
- for (uint32_t j = 0; j < pCreateInfo->pQueueCreateInfos[i].queueCount; ++j) {
- if (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities == nullptr) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->pQueuePriorities, must not be NULL.", i);
- } else if (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] < 0.f || pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] > 1.f) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->pQueuePriorities[%d], must be between 0 and 1. Actual value is %f", i, j, pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j]);
- }
- }
- if (pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex >= properties.size()) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex cannot be more than the number of queue families.", i);
- } else if (pCreateInfo->pQueueCreateInfos[i].queueCount > properties[pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex].queueCount) {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount cannot be more than the number of queues for the given family index.", i);
- }
- }
-}
-
-void storeCreateDeviceData(VkDevice device, const VkDeviceCreateInfo* pCreateInfo) {
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) {
- my_device_data->queueFamilyIndexMap.insert(
- std::make_pair(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex, pCreateInfo->pQueueCreateInfos[i].queueCount));
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(
- VkPhysicalDevice physicalDevice,
- const VkDeviceCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDevice* pDevice)
-{
- /*
- * NOTE: We do not validate physicalDevice or any dispatchable
- * object as the first parameter. We couldn't get here if it was wrong!
- */
-
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
-
- if ((pCreateInfo->enabledLayerCount > 0) && (pCreateInfo->ppEnabledLayerNames != NULL)) {
- for (auto i = 0; i < pCreateInfo->enabledLayerCount; i++) {
- skipCall |= validate_string(my_instance_data, "vkCreateDevice()", "VkDeviceCreateInfo->ppEnabledLayerNames",
- pCreateInfo->ppEnabledLayerNames[i]);
- }
- }
-
- if ((pCreateInfo->enabledExtensionCount > 0) && (pCreateInfo->ppEnabledExtensionNames != NULL)) {
- for (auto i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
- skipCall |= validate_string(my_instance_data, "vkCreateDevice()", "VkDeviceCreateInfo->ppEnabledExtensionNames",
- pCreateInfo->ppEnabledExtensionNames[i]);
- }
- }
-
- if (skipCall == VK_FALSE) {
- VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
- assert(chain_info->u.pLayerInfo);
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
- if (fpCreateDevice == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- // Advance the link info for the next element on the chain
- chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
-
- result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice);
- if (result != VK_SUCCESS) {
- return result;
- }
-
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
- my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
- initDeviceTable(*pDevice, fpGetDeviceProcAddr, pc_device_table_map);
-
- uint32_t count;
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr);
- std::vector<VkQueueFamilyProperties> properties(count);
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, &properties[0]);
-
- validateDeviceCreateInfo(physicalDevice, pCreateInfo, properties);
- storeCreateDeviceData(*pDevice, pCreateInfo);
- }
-
- return result;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(
- VkDevice device,
- const VkAllocationCallbacks* pAllocator)
-{
- layer_debug_report_destroy_device(device);
-
- dispatch_key key = get_dispatch_key(device);
-#if DISPATCH_MAP_DEBUG
- fprintf(stderr, "Device: %p, key: %p\n", device, key);
-#endif
-
- get_dispatch_table(pc_device_table_map, device)->DestroyDevice(device, pAllocator);
- pc_device_table_map.erase(key);
-}
-
-bool PreGetDeviceQueue(
- VkDevice device,
- uint32_t queueFamilyIndex,
- uint32_t queueIndex)
-{
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- auto queue_data = my_device_data->queueFamilyIndexMap.find(queueFamilyIndex);
- if (queue_data == my_device_data->queueFamilyIndexMap.end()) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkGetDeviceQueue parameter, uint32_t queueFamilyIndex %d, must have been given when the device was created.", queueFamilyIndex);
- return false;
- }
- if (queue_data->second <= queueIndex) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "VkGetDeviceQueue parameter, uint32_t queueIndex %d, must be less than the number of queues given when the device was created.", queueIndex);
- return false;
- }
- return true;
-}
-
-bool PostGetDeviceQueue(
- VkDevice device,
- uint32_t queueFamilyIndex,
- uint32_t queueIndex,
- VkQueue* pQueue)
-{
-
-
-
- if(pQueue != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(
- VkDevice device,
- uint32_t queueFamilyIndex,
- uint32_t queueIndex,
- VkQueue* pQueue)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetDeviceQueue(
- my_data->report_data,
- queueFamilyIndex,
- queueIndex,
- pQueue);
-
- if (skipCall == VK_FALSE) {
- PreGetDeviceQueue(device, queueFamilyIndex, queueIndex);
-
- get_dispatch_table(pc_device_table_map, device)->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
-
- PostGetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
- }
-}
-
-bool PreQueueSubmit(
- VkQueue queue,
- const VkSubmitInfo* submit)
-{
- if(submit->sType != VK_STRUCTURE_TYPE_SUBMIT_INFO) {
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkQueueSubmit parameter, VkStructureType pSubmits->sType, is an invalid enumerator");
- return false;
- }
-
- if(submit->pCommandBuffers != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostQueueSubmit(
- VkQueue queue,
- uint32_t commandBufferCount,
- VkFence fence,
- VkResult result)
-{
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkQueueSubmit parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit(
- VkQueue queue,
- uint32_t submitCount,
- const VkSubmitInfo* pSubmits,
- VkFence fence)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkQueueSubmit(
- my_data->report_data,
- submitCount,
- pSubmits,
- fence);
-
- if (skipCall == VK_FALSE) {
- for (uint32_t i = 0; i < submitCount; i++) {
- PreQueueSubmit(queue, &pSubmits[i]);
- }
-
- result = get_dispatch_table(pc_device_table_map, queue)->QueueSubmit(queue, submitCount, pSubmits, fence);
-
- PostQueueSubmit(queue, submitCount, fence, result);
- }
-
- return result;
-}
-
-bool PostQueueWaitIdle(
- VkQueue queue,
- VkResult result)
-{
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkQueueWaitIdle parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(
- VkQueue queue)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, queue)->QueueWaitIdle(queue);
-
- PostQueueWaitIdle(queue, result);
-
- return result;
-}
-
-bool PostDeviceWaitIdle(
- VkDevice device,
- VkResult result)
-{
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkDeviceWaitIdle parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(
- VkDevice device)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->DeviceWaitIdle(device);
-
- PostDeviceWaitIdle(device, result);
-
- return result;
-}
-
-bool PreAllocateMemory(
- VkDevice device,
- const VkMemoryAllocateInfo* pAllocateInfo)
-{
- if(pAllocateInfo != nullptr)
- {
- if(pAllocateInfo->sType != VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkAllocateMemory parameter, VkStructureType pAllocateInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostAllocateMemory(
- VkDevice device,
- VkDeviceMemory* pMemory,
- VkResult result)
-{
-
- if(pMemory != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkAllocateMemory parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(
- VkDevice device,
- const VkMemoryAllocateInfo* pAllocateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDeviceMemory* pMemory)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkAllocateMemory(
- my_data->report_data,
- pAllocateInfo,
- pAllocator,
- pMemory);
-
- if (skipCall == VK_FALSE) {
- PreAllocateMemory(device, pAllocateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory);
-
- PostAllocateMemory(device, pMemory, result);
- }
-
- return result;
-}
-
-bool PostMapMemory(
- VkDevice device,
- VkDeviceMemory mem,
- VkDeviceSize offset,
- VkDeviceSize size,
- VkMemoryMapFlags flags,
- void** ppData,
- VkResult result)
-{
-
-
-
-
-
- if(ppData != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkMapMemory parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(
- VkDevice device,
- VkDeviceMemory memory,
- VkDeviceSize offset,
- VkDeviceSize size,
- VkMemoryMapFlags flags,
- void** ppData)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkMapMemory(
- my_data->report_data,
- memory,
- offset,
- size,
- flags,
- ppData);
-
- if (skipCall == VK_FALSE) {
- result = get_dispatch_table(pc_device_table_map, device)->MapMemory(device, memory, offset, size, flags, ppData);
-
- PostMapMemory(device, memory, offset, size, flags, ppData, result);
- }
-
- return result;
-}
-
-bool PreFlushMappedMemoryRanges(
- VkDevice device,
- const VkMappedMemoryRange* pMemoryRanges)
-{
- if(pMemoryRanges != nullptr)
- {
- if(pMemoryRanges->sType != VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkFlushMappedMemoryRanges parameter, VkStructureType pMemoryRanges->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostFlushMappedMemoryRanges(
- VkDevice device,
- uint32_t memoryRangeCount,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkFlushMappedMemoryRanges parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFlushMappedMemoryRanges(
- VkDevice device,
- uint32_t memoryRangeCount,
- const VkMappedMemoryRange* pMemoryRanges)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkFlushMappedMemoryRanges(
- my_data->report_data,
- memoryRangeCount,
- pMemoryRanges);
-
- if (skipCall == VK_FALSE) {
- PreFlushMappedMemoryRanges(device, pMemoryRanges);
-
- result = get_dispatch_table(pc_device_table_map, device)->FlushMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
-
- PostFlushMappedMemoryRanges(device, memoryRangeCount, result);
- }
-
- return result;
-}
-
-bool PreInvalidateMappedMemoryRanges(
- VkDevice device,
- const VkMappedMemoryRange* pMemoryRanges)
-{
- if(pMemoryRanges != nullptr)
- {
- if(pMemoryRanges->sType != VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkInvalidateMappedMemoryRanges parameter, VkStructureType pMemoryRanges->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostInvalidateMappedMemoryRanges(
- VkDevice device,
- uint32_t memoryRangeCount,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkInvalidateMappedMemoryRanges parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges(
- VkDevice device,
- uint32_t memoryRangeCount,
- const VkMappedMemoryRange* pMemoryRanges)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkInvalidateMappedMemoryRanges(
- my_data->report_data,
- memoryRangeCount,
- pMemoryRanges);
-
- if (skipCall == VK_FALSE) {
- PreInvalidateMappedMemoryRanges(device, pMemoryRanges);
-
- result = get_dispatch_table(pc_device_table_map, device)->InvalidateMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
-
- PostInvalidateMappedMemoryRanges(device, memoryRangeCount, result);
- }
-
- return result;
-}
-
-bool PostGetDeviceMemoryCommitment(
- VkDevice device,
- VkDeviceMemory memory,
- VkDeviceSize* pCommittedMemoryInBytes)
-{
-
-
- if(pCommittedMemoryInBytes != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceMemoryCommitment(
- VkDevice device,
- VkDeviceMemory memory,
- VkDeviceSize* pCommittedMemoryInBytes)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetDeviceMemoryCommitment(
- my_data->report_data,
- memory,
- pCommittedMemoryInBytes);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, device)->GetDeviceMemoryCommitment(device, memory, pCommittedMemoryInBytes);
-
- PostGetDeviceMemoryCommitment(device, memory, pCommittedMemoryInBytes);
- }
-}
-
-bool PostBindBufferMemory(
- VkDevice device,
- VkBuffer buffer,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset,
- VkResult result)
-{
-
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkBindBufferMemory parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(
- VkDevice device,
- VkBuffer buffer,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->BindBufferMemory(device, buffer, mem, memoryOffset);
-
- PostBindBufferMemory(device, buffer, mem, memoryOffset, result);
-
- return result;
-}
-
-bool PostBindImageMemory(
- VkDevice device,
- VkImage image,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset,
- VkResult result)
-{
-
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkBindImageMemory parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(
- VkDevice device,
- VkImage image,
- VkDeviceMemory mem,
- VkDeviceSize memoryOffset)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->BindImageMemory(device, image, mem, memoryOffset);
-
- PostBindImageMemory(device, image, mem, memoryOffset, result);
-
- return result;
-}
-
-bool PostGetBufferMemoryRequirements(
- VkDevice device,
- VkBuffer buffer,
- VkMemoryRequirements* pMemoryRequirements)
-{
-
-
- if(pMemoryRequirements != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements(
- VkDevice device,
- VkBuffer buffer,
- VkMemoryRequirements* pMemoryRequirements)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetBufferMemoryRequirements(
- my_data->report_data,
- buffer,
- pMemoryRequirements);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, device)->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
-
- PostGetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
- }
-}
-
-bool PostGetImageMemoryRequirements(
- VkDevice device,
- VkImage image,
- VkMemoryRequirements* pMemoryRequirements)
-{
-
-
- if(pMemoryRequirements != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements(
- VkDevice device,
- VkImage image,
- VkMemoryRequirements* pMemoryRequirements)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetImageMemoryRequirements(
- my_data->report_data,
- image,
- pMemoryRequirements);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, device)->GetImageMemoryRequirements(device, image, pMemoryRequirements);
-
- PostGetImageMemoryRequirements(device, image, pMemoryRequirements);
- }
-}
-
-bool PostGetImageSparseMemoryRequirements(
- VkDevice device,
- VkImage image,
- uint32_t* pNumRequirements,
- VkSparseImageMemoryRequirements* pSparseMemoryRequirements)
-{
-
-
- if(pNumRequirements != nullptr)
- {
- }
-
- if(pSparseMemoryRequirements != nullptr)
- {
- if ((pSparseMemoryRequirements->formatProperties.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetImageSparseMemoryRequirements parameter, VkImageAspect pSparseMemoryRequirements->formatProperties.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSparseMemoryRequirements(
- VkDevice device,
- VkImage image,
- uint32_t* pSparseMemoryRequirementCount,
- VkSparseImageMemoryRequirements* pSparseMemoryRequirements)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetImageSparseMemoryRequirements(
- my_data->report_data,
- image,
- pSparseMemoryRequirementCount,
- pSparseMemoryRequirements);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, device)->GetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements);
-
- PostGetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements);
- }
-}
-
-bool PostGetPhysicalDeviceSparseImageFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkImageType type,
- VkSampleCountFlagBits samples,
- VkImageUsageFlags usage,
- VkImageTiling tiling,
- uint32_t* pNumProperties,
- VkSparseImageFormatProperties* pProperties)
-{
-
- if(format < VK_FORMAT_BEGIN_RANGE ||
- format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
- return false;
- }
-
- if(type < VK_IMAGE_TYPE_BEGIN_RANGE ||
- type > VK_IMAGE_TYPE_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageType type, is an unrecognized enumerator");
- return false;
- }
-
-
-
- if(tiling < VK_IMAGE_TILING_BEGIN_RANGE ||
- tiling > VK_IMAGE_TILING_END_RANGE)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageTiling tiling, is an unrecognized enumerator");
- return false;
- }
-
- if(pNumProperties != nullptr)
- {
- }
-
- if(pProperties != nullptr)
- {
- if ((pProperties->aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageAspect pProperties->aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties(
- VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkImageType type,
- VkSampleCountFlagBits samples,
- VkImageUsageFlags usage,
- VkImageTiling tiling,
- uint32_t* pPropertyCount,
- VkSparseImageFormatProperties* pProperties)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPhysicalDeviceSparseImageFormatProperties(
- my_data->report_data,
- format,
- type,
- samples,
- usage,
- tiling,
- pPropertyCount,
- pProperties);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount, pProperties);
-
- PostGetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount, pProperties);
- }
-}
-
-bool PreQueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo* pBindInfo)
-{
- if(pBindInfo != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostQueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo* pBindInfo,
- VkFence fence,
- VkResult result)
-{
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkQueueBindSparse parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(
- VkQueue queue,
- uint32_t bindInfoCount,
- const VkBindSparseInfo* pBindInfo,
- VkFence fence)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkQueueBindSparse(
- my_data->report_data,
- bindInfoCount,
- pBindInfo,
- fence);
-
- if (skipCall == VK_FALSE) {
- PreQueueBindSparse(queue, bindInfoCount, pBindInfo);
-
- result = get_dispatch_table(pc_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
-
- PostQueueBindSparse(queue, bindInfoCount, pBindInfo, fence, result);
- }
-
- return result;
-}
-
-bool PreCreateFence(
- VkDevice device,
- const VkFenceCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_FENCE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateFence parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateFence(
- VkDevice device,
- VkFence* pFence,
- VkResult result)
-{
-
- if(pFence != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateFence parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFence(
- VkDevice device,
- const VkFenceCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkFence* pFence)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateFence(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pFence);
-
- if (skipCall == VK_FALSE) {
- PreCreateFence(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateFence(device, pCreateInfo, pAllocator, pFence);
-
- PostCreateFence(device, pFence, result);
- }
-
- return result;
-}
-
-bool PreResetFences(
- VkDevice device,
- const VkFence* pFences)
-{
- if(pFences != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostResetFences(
- VkDevice device,
- uint32_t fenceCount,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkResetFences parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(
- VkDevice device,
- uint32_t fenceCount,
- const VkFence* pFences)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkResetFences(
- my_data->report_data,
- fenceCount,
- pFences);
-
- if (skipCall == VK_FALSE) {
- PreResetFences(device, pFences);
-
- result = get_dispatch_table(pc_device_table_map, device)->ResetFences(device, fenceCount, pFences);
-
- PostResetFences(device, fenceCount, result);
- }
-
- return result;
-}
-
-bool PostGetFenceStatus(
- VkDevice device,
- VkFence fence,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkGetFenceStatus parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(
- VkDevice device,
- VkFence fence)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->GetFenceStatus(device, fence);
-
- PostGetFenceStatus(device, fence, result);
-
- return result;
-}
-
-bool PreWaitForFences(
- VkDevice device,
- const VkFence* pFences)
-{
- if(pFences != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostWaitForFences(
- VkDevice device,
- uint32_t fenceCount,
- VkBool32 waitAll,
- uint64_t timeout,
- VkResult result)
-{
-
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkWaitForFences parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(
- VkDevice device,
- uint32_t fenceCount,
- const VkFence* pFences,
- VkBool32 waitAll,
- uint64_t timeout)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkWaitForFences(
- my_data->report_data,
- fenceCount,
- pFences,
- waitAll,
- timeout);
-
- if (skipCall == VK_FALSE) {
- PreWaitForFences(device, pFences);
-
- result = get_dispatch_table(pc_device_table_map, device)->WaitForFences(device, fenceCount, pFences, waitAll, timeout);
-
- PostWaitForFences(device, fenceCount, waitAll, timeout, result);
- }
-
- return result;
-}
-
-bool PreCreateSemaphore(
- VkDevice device,
- const VkSemaphoreCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSemaphore parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateSemaphore(
- VkDevice device,
- VkSemaphore* pSemaphore,
- VkResult result)
-{
-
- if(pSemaphore != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateSemaphore parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(
- VkDevice device,
- const VkSemaphoreCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSemaphore* pSemaphore)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateSemaphore(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pSemaphore);
-
- if (skipCall == VK_FALSE) {
- PreCreateSemaphore(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
-
- PostCreateSemaphore(device, pSemaphore, result);
- }
-
- return result;
-}
-
-bool PreCreateEvent(
- VkDevice device,
- const VkEventCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_EVENT_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateEvent parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateEvent(
- VkDevice device,
- VkEvent* pEvent,
- VkResult result)
-{
-
- if(pEvent != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateEvent parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateEvent(
- VkDevice device,
- const VkEventCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkEvent* pEvent)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateEvent(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pEvent);
-
- if (skipCall == VK_FALSE) {
- PreCreateEvent(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateEvent(device, pCreateInfo, pAllocator, pEvent);
-
- PostCreateEvent(device, pEvent, result);
- }
-
- return result;
-}
-
-bool PostGetEventStatus(
- VkDevice device,
- VkEvent event,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkGetEventStatus parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetEventStatus(
- VkDevice device,
- VkEvent event)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->GetEventStatus(device, event);
-
- PostGetEventStatus(device, event, result);
-
- return result;
-}
-
-bool PostSetEvent(
- VkDevice device,
- VkEvent event,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkSetEvent parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(
- VkDevice device,
- VkEvent event)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->SetEvent(device, event);
-
- PostSetEvent(device, event, result);
-
- return result;
-}
-
-bool PostResetEvent(
- VkDevice device,
- VkEvent event,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkResetEvent parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetEvent(
- VkDevice device,
- VkEvent event)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetEvent(device, event);
-
- PostResetEvent(device, event, result);
-
- return result;
-}
-
-bool PreCreateQueryPool(
- VkDevice device,
- const VkQueryPoolCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateQueryPool parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->queryType < VK_QUERY_TYPE_BEGIN_RANGE ||
- pCreateInfo->queryType > VK_QUERY_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateQueryPool parameter, VkQueryType pCreateInfo->queryType, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateQueryPool(
- VkDevice device,
- VkQueryPool* pQueryPool,
- VkResult result)
-{
-
- if(pQueryPool != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateQueryPool parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(
- VkDevice device,
- const VkQueryPoolCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkQueryPool* pQueryPool)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateQueryPool(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pQueryPool);
-
- if (skipCall == VK_FALSE) {
- PreCreateQueryPool(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool);
-
- PostCreateQueryPool(device, pQueryPool, result);
- }
-
- return result;
-}
-
-bool PostGetQueryPoolResults(
- VkDevice device,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount,
- size_t dataSize,
- void* pData,
- VkDeviceSize stride,
- VkQueryResultFlags flags,
- VkResult result)
-{
-
-
-
-
- if(pData != nullptr)
- {
- }
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkGetQueryPoolResults parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(
- VkDevice device,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount,
- size_t dataSize,
- void* pData,
- VkDeviceSize stride,
- VkQueryResultFlags flags)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetQueryPoolResults(
- my_data->report_data,
- queryPool,
- firstQuery,
- queryCount,
- dataSize,
- pData,
- stride,
- flags);
-
- if (skipCall == VK_FALSE) {
- result = get_dispatch_table(pc_device_table_map, device)->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags);
-
- PostGetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags, result);
- }
-
- return result;
-}
-
-bool PreCreateBuffer(
- VkDevice device,
- const VkBufferCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateBuffer parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->sharingMode < VK_SHARING_MODE_BEGIN_RANGE ||
- pCreateInfo->sharingMode > VK_SHARING_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateBuffer parameter, VkSharingMode pCreateInfo->sharingMode, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pQueueFamilyIndices != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreateBuffer(
- VkDevice device,
- VkBuffer* pBuffer,
- VkResult result)
-{
-
- if(pBuffer != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateBuffer parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(
- VkDevice device,
- const VkBufferCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkBuffer* pBuffer)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateBuffer(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pBuffer);
-
- if (skipCall == VK_FALSE) {
- PreCreateBuffer(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
-
- PostCreateBuffer(device, pBuffer, result);
- }
-
- return result;
-}
-
-bool PreCreateBufferView(
- VkDevice device,
- const VkBufferViewCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateBufferView parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->format < VK_FORMAT_BEGIN_RANGE ||
- pCreateInfo->format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateBufferView parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateBufferView(
- VkDevice device,
- VkBufferView* pView,
- VkResult result)
-{
-
- if(pView != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateBufferView parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(
- VkDevice device,
- const VkBufferViewCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkBufferView* pView)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateBufferView(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pView);
-
- if (skipCall == VK_FALSE) {
- PreCreateBufferView(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateBufferView(device, pCreateInfo, pAllocator, pView);
-
- PostCreateBufferView(device, pView, result);
- }
-
- return result;
-}
-
-bool PreCreateImage(
- VkDevice device,
- const VkImageCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImage parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->imageType < VK_IMAGE_TYPE_BEGIN_RANGE ||
- pCreateInfo->imageType > VK_IMAGE_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImage parameter, VkImageType pCreateInfo->imageType, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->format < VK_FORMAT_BEGIN_RANGE ||
- pCreateInfo->format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImage parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->tiling < VK_IMAGE_TILING_BEGIN_RANGE ||
- pCreateInfo->tiling > VK_IMAGE_TILING_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImage parameter, VkImageTiling pCreateInfo->tiling, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->sharingMode < VK_SHARING_MODE_BEGIN_RANGE ||
- pCreateInfo->sharingMode > VK_SHARING_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImage parameter, VkSharingMode pCreateInfo->sharingMode, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pQueueFamilyIndices != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreateImage(
- VkDevice device,
- VkImage* pImage,
- VkResult result)
-{
-
- if(pImage != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateImage parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(
- VkDevice device,
- const VkImageCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkImage* pImage)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateImage(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pImage);
-
- if (skipCall == VK_FALSE) {
- PreCreateImage(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateImage(device, pCreateInfo, pAllocator, pImage);
-
- PostCreateImage(device, pImage, result);
- }
-
- return result;
-}
-
-bool PreGetImageSubresourceLayout(
- VkDevice device,
- const VkImageSubresource* pSubresource)
-{
- if(pSubresource != nullptr)
- {
- if ((pSubresource->aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkGetImageSubresourceLayout parameter, VkImageAspect pSubresource->aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostGetImageSubresourceLayout(
- VkDevice device,
- VkImage image,
- VkSubresourceLayout* pLayout)
-{
-
-
- if(pLayout != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(
- VkDevice device,
- VkImage image,
- const VkImageSubresource* pSubresource,
- VkSubresourceLayout* pLayout)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetImageSubresourceLayout(
- my_data->report_data,
- image,
- pSubresource,
- pLayout);
-
- if (skipCall == VK_FALSE) {
- PreGetImageSubresourceLayout(device, pSubresource);
-
- get_dispatch_table(pc_device_table_map, device)->GetImageSubresourceLayout(device, image, pSubresource, pLayout);
-
- PostGetImageSubresourceLayout(device, image, pLayout);
- }
-}
-
-bool PreCreateImageView(
- VkDevice device,
- const VkImageViewCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->viewType < VK_IMAGE_VIEW_TYPE_BEGIN_RANGE ||
- pCreateInfo->viewType > VK_IMAGE_VIEW_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkImageViewType pCreateInfo->viewType, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->format < VK_FORMAT_BEGIN_RANGE ||
- pCreateInfo->format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->components.r < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
- pCreateInfo->components.r > VK_COMPONENT_SWIZZLE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.r, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->components.g < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
- pCreateInfo->components.g > VK_COMPONENT_SWIZZLE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.g, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->components.b < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
- pCreateInfo->components.b > VK_COMPONENT_SWIZZLE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.b, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->components.a < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
- pCreateInfo->components.a > VK_COMPONENT_SWIZZLE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.a, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateImageView(
- VkDevice device,
- VkImageView* pView,
- VkResult result)
-{
-
- if(pView != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateImageView parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(
- VkDevice device,
- const VkImageViewCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkImageView* pView)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateImageView(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pView);
-
- if (skipCall == VK_FALSE) {
- PreCreateImageView(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateImageView(device, pCreateInfo, pAllocator, pView);
-
- PostCreateImageView(device, pView, result);
- }
-
- return result;
-}
-
-bool PreCreateShaderModule(
- VkDevice device,
- const VkShaderModuleCreateInfo* pCreateInfo)
-{
- if(pCreateInfo) {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateShaderModule parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(!pCreateInfo->pCode) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateShaderModule parameter, void* pCreateInfo->pCode, is null");
- return false;
- }
- } else {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateShaderModule parameter, VkShaderModuleCreateInfo pCreateInfo, is null");
- return false;
- }
-
- return true;
-}
-
-bool PostCreateShaderModule(
- VkDevice device,
- VkShaderModule* pShaderModule,
- VkResult result)
-{
- if(result < VK_SUCCESS) {
- std::string reason = "vkCreateShaderModule parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(
- VkDevice device,
- const VkShaderModuleCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkShaderModule* pShaderModule)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateShaderModule(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pShaderModule);
-
- if (skipCall == VK_FALSE) {
- PreCreateShaderModule(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);
-
- PostCreateShaderModule(device, pShaderModule, result);
- }
-
- return result;
-}
-
-bool PreCreatePipelineCache(
- VkDevice device,
- const VkPipelineCacheCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreatePipelineCache parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pInitialData != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreatePipelineCache(
- VkDevice device,
- VkPipelineCache* pPipelineCache,
- VkResult result)
-{
-
- if(pPipelineCache != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreatePipelineCache parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(
- VkDevice device,
- const VkPipelineCacheCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkPipelineCache* pPipelineCache)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreatePipelineCache(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pPipelineCache);
-
- if (skipCall == VK_FALSE) {
- PreCreatePipelineCache(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache);
-
- PostCreatePipelineCache(device, pPipelineCache, result);
- }
-
- return result;
-}
-
-bool PostGetPipelineCacheData(
- VkDevice device,
- VkPipelineCache pipelineCache,
- size_t* pDataSize,
- void* pData,
- VkResult result)
-{
-
-
- if(pDataSize != nullptr)
- {
- }
-
- if(pData != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkGetPipelineCacheData parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPipelineCacheData(
- VkDevice device,
- VkPipelineCache pipelineCache,
- size_t* pDataSize,
- void* pData)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetPipelineCacheData(
- my_data->report_data,
- pipelineCache,
- pDataSize,
- pData);
-
- if (skipCall == VK_FALSE) {
- result = get_dispatch_table(pc_device_table_map, device)->GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
-
- PostGetPipelineCacheData(device, pipelineCache, pDataSize, pData, result);
- }
-
- return result;
-}
-
-bool PreMergePipelineCaches(
- VkDevice device,
- const VkPipelineCache* pSrcCaches)
-{
- if(pSrcCaches != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostMergePipelineCaches(
- VkDevice device,
- VkPipelineCache dstCache,
- uint32_t srcCacheCount,
- VkResult result)
-{
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkMergePipelineCaches parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(
- VkDevice device,
- VkPipelineCache dstCache,
- uint32_t srcCacheCount,
- const VkPipelineCache* pSrcCaches)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkMergePipelineCaches(
- my_data->report_data,
- dstCache,
- srcCacheCount,
- pSrcCaches);
-
- if (skipCall == VK_FALSE) {
- PreMergePipelineCaches(device, pSrcCaches);
-
- result = get_dispatch_table(pc_device_table_map, device)->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
-
- PostMergePipelineCaches(device, dstCache, srcCacheCount, result);
- }
-
- return result;
-}
-
-bool PreCreateGraphicsPipelines(
- VkDevice device,
- const VkGraphicsPipelineCreateInfo* pCreateInfos)
-{
- layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- // TODO: Handle count
- if(pCreateInfos != nullptr)
- {
- if(pCreateInfos->sType != VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pStages != nullptr)
- {
- if(pCreateInfos->pStages->sType != VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pStages->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pStages->pSpecializationInfo != nullptr)
- {
- if(pCreateInfos->pStages->pSpecializationInfo->pMapEntries != nullptr)
- {
- }
- if(pCreateInfos->pStages->pSpecializationInfo->pData != nullptr)
- {
- }
- }
- }
- if(pCreateInfos->pVertexInputState != nullptr)
- {
- if(pCreateInfos->pVertexInputState->sType != VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pVertexInputState->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pVertexInputState->pVertexBindingDescriptions != nullptr)
- {
- if(pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate < VK_VERTEX_INPUT_RATE_BEGIN_RANGE ||
- pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate > VK_VERTEX_INPUT_RATE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkVertexInputRate pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfos->pVertexInputState->pVertexAttributeDescriptions != nullptr)
- {
- if(pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format < VK_FORMAT_BEGIN_RANGE ||
- pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkFormat pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format, is an unrecognized enumerator");
- return false;
- }
- }
- }
- if(pCreateInfos->pInputAssemblyState != nullptr)
- {
- if(pCreateInfos->pInputAssemblyState->sType != VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pInputAssemblyState->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pInputAssemblyState->topology < VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE ||
- pCreateInfos->pInputAssemblyState->topology > VK_PRIMITIVE_TOPOLOGY_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkPrimitiveTopology pCreateInfos->pInputAssemblyState->topology, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfos->pTessellationState != nullptr)
- {
- if(pCreateInfos->pTessellationState->sType != VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pTessellationState->sType, is an invalid enumerator");
- return false;
- }
- }
- if(pCreateInfos->pViewportState != nullptr)
- {
- if(pCreateInfos->pViewportState->sType != VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pViewportState->sType, is an invalid enumerator");
- return false;
- }
- }
- if(pCreateInfos->pRasterizationState != nullptr)
- {
- if(pCreateInfos->pRasterizationState->sType != VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pRasterizationState->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pRasterizationState->polygonMode < VK_POLYGON_MODE_BEGIN_RANGE ||
- pCreateInfos->pRasterizationState->polygonMode > VK_POLYGON_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkPolygonMode pCreateInfos->pRasterizationState->polygonMode, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pRasterizationState->cullMode & ~VK_CULL_MODE_FRONT_AND_BACK)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkCullMode pCreateInfos->pRasterizationState->cullMode, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pRasterizationState->frontFace < VK_FRONT_FACE_BEGIN_RANGE ||
- pCreateInfos->pRasterizationState->frontFace > VK_FRONT_FACE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkFrontFace pCreateInfos->pRasterizationState->frontFace, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfos->pMultisampleState != nullptr)
- {
- if(pCreateInfos->pMultisampleState->sType != VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pMultisampleState->sType, is an invalid enumerator");
- return false;
- }
- }
- if(pCreateInfos->pDepthStencilState != nullptr)
- {
- if(pCreateInfos->pDepthStencilState->sType != VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pDepthStencilState->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->depthCompareOp < VK_COMPARE_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->depthCompareOp > VK_COMPARE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->depthCompareOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->front.failOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->front.failOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.failOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->front.passOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->front.passOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.passOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->front.depthFailOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->front.depthFailOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.depthFailOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->front.compareOp < VK_COMPARE_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->front.compareOp > VK_COMPARE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->front.compareOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->back.failOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->back.failOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.failOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->back.passOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->back.passOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.passOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->back.depthFailOp < VK_STENCIL_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->back.depthFailOp > VK_STENCIL_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.depthFailOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pDepthStencilState->back.compareOp < VK_COMPARE_OP_BEGIN_RANGE ||
- pCreateInfos->pDepthStencilState->back.compareOp > VK_COMPARE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->back.compareOp, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfos->pColorBlendState != nullptr)
- {
- if(pCreateInfos->pColorBlendState->sType != VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkStructureType pCreateInfos->pColorBlendState->sType, is an invalid enumerator");
- return false;
- }
- if (pCreateInfos->pColorBlendState->logicOpEnable == VK_TRUE &&
- (pCreateInfos->pColorBlendState->logicOp < VK_LOGIC_OP_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->logicOp > VK_LOGIC_OP_END_RANGE)) {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkLogicOp pCreateInfos->pColorBlendState->logicOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments != nullptr && pCreateInfos->pColorBlendState->pAttachments->blendEnable == VK_TRUE)
- {
- if(pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor > VK_BLEND_FACTOR_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendFactor pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor > VK_BLEND_FACTOR_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendFactor pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments->colorBlendOp < VK_BLEND_OP_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->colorBlendOp > VK_BLEND_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendOp pCreateInfos->pColorBlendState->pAttachments->colorBlendOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendFactor pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendFactor pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp < VK_BLEND_OP_BEGIN_RANGE ||
- pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp > VK_BLEND_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateGraphicsPipelines parameter, VkBlendOp pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp, is an unrecognized enumerator");
- return false;
- }
- }
- }
- if(pCreateInfos->renderPass == VK_NULL_HANDLE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateGraphicsPipelines parameter, VkRenderPass pCreateInfos->renderPass, is VK_NULL_HANDLE");
- }
-
- int i = 0;
-        for (uint32_t j = 0; j < pCreateInfos[i].stageCount; j++) {
- validate_string(data, "vkCreateGraphicsPipelines()", "pCreateInfos[i].pStages[j].pName", pCreateInfos[i].pStages[j].pName);
- }
-
- }
-
- return true;
-}
-
-bool PostCreateGraphicsPipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t count,
- VkPipeline* pPipelines,
- VkResult result)
-{
-
-
-
- if(pPipelines != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateGraphicsPipelines parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateGraphicsPipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t createInfoCount,
- const VkGraphicsPipelineCreateInfo* pCreateInfos,
- const VkAllocationCallbacks* pAllocator,
- VkPipeline* pPipelines)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateGraphicsPipelines(
- my_data->report_data,
- pipelineCache,
- createInfoCount,
- pCreateInfos,
- pAllocator,
- pPipelines);
-
- if (skipCall == VK_FALSE) {
- PreCreateGraphicsPipelines(device, pCreateInfos);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
-
- PostCreateGraphicsPipelines(device, pipelineCache, createInfoCount, pPipelines, result);
- }
-
- return result;
-}
-
-bool PreCreateComputePipelines(
- VkDevice device,
- const VkComputePipelineCreateInfo* pCreateInfos)
-{
- layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- if(pCreateInfos != nullptr)
- {
- // TODO: Handle count!
- if(pCreateInfos->sType != VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateComputePipelines parameter, VkStructureType pCreateInfos->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->stage.sType != VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateComputePipelines parameter, VkStructureType pCreateInfos->stage.sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfos->stage.pSpecializationInfo != nullptr)
- {
- if(pCreateInfos->stage.pSpecializationInfo->pMapEntries != nullptr)
- {
- }
- if(pCreateInfos->stage.pSpecializationInfo->pData != nullptr)
- {
- }
- }
-
- int i = 0;
- validate_string(data, "vkCreateComputePipelines()", "pCreateInfos[i].stage.pName", pCreateInfos[i].stage.pName);
- }
-
- return true;
-}
-
-bool PostCreateComputePipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t count,
- VkPipeline* pPipelines,
- VkResult result)
-{
-
-
-
- if(pPipelines != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateComputePipelines parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateComputePipelines(
- VkDevice device,
- VkPipelineCache pipelineCache,
- uint32_t createInfoCount,
- const VkComputePipelineCreateInfo* pCreateInfos,
- const VkAllocationCallbacks* pAllocator,
- VkPipeline* pPipelines)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateComputePipelines(
- my_data->report_data,
- pipelineCache,
- createInfoCount,
- pCreateInfos,
- pAllocator,
- pPipelines);
-
- if (skipCall == VK_FALSE) {
- PreCreateComputePipelines(device, pCreateInfos);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
-
- PostCreateComputePipelines(device, pipelineCache, createInfoCount, pPipelines, result);
- }
-
- return result;
-}
-
-bool PreCreatePipelineLayout(
- VkDevice device,
- const VkPipelineLayoutCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreatePipelineLayout parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pSetLayouts != nullptr)
- {
- }
- if(pCreateInfo->pPushConstantRanges != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreatePipelineLayout(
- VkDevice device,
- VkPipelineLayout* pPipelineLayout,
- VkResult result)
-{
-
- if(pPipelineLayout != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreatePipelineLayout parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(
- VkDevice device,
- const VkPipelineLayoutCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkPipelineLayout* pPipelineLayout)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreatePipelineLayout(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pPipelineLayout);
-
- if (skipCall == VK_FALSE) {
- PreCreatePipelineLayout(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
-
- PostCreatePipelineLayout(device, pPipelineLayout, result);
- }
-
- return result;
-}
-
-bool PreCreateSampler(
- VkDevice device,
- const VkSamplerCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->magFilter < VK_FILTER_BEGIN_RANGE ||
- pCreateInfo->magFilter > VK_FILTER_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkFilter pCreateInfo->magFilter, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->minFilter < VK_FILTER_BEGIN_RANGE ||
- pCreateInfo->minFilter > VK_FILTER_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkFilter pCreateInfo->minFilter, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->mipmapMode < VK_SAMPLER_MIPMAP_MODE_BEGIN_RANGE ||
- pCreateInfo->mipmapMode > VK_SAMPLER_MIPMAP_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkSamplerMipmapMode pCreateInfo->mipmapMode, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->addressModeU < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
- pCreateInfo->addressModeU > VK_SAMPLER_ADDRESS_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeU, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->addressModeV < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
- pCreateInfo->addressModeV > VK_SAMPLER_ADDRESS_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeV, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->addressModeW < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
- pCreateInfo->addressModeW > VK_SAMPLER_ADDRESS_MODE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeW, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->compareEnable)
- {
- if(pCreateInfo->compareOp < VK_COMPARE_OP_BEGIN_RANGE ||
- pCreateInfo->compareOp > VK_COMPARE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkCompareOp pCreateInfo->compareOp, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->borderColor < VK_BORDER_COLOR_BEGIN_RANGE ||
- pCreateInfo->borderColor > VK_BORDER_COLOR_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateSampler parameter, VkBorderColor pCreateInfo->borderColor, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateSampler(
- VkDevice device,
- VkSampler* pSampler,
- VkResult result)
-{
-
- if(pSampler != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateSampler parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(
- VkDevice device,
- const VkSamplerCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSampler* pSampler)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateSampler(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pSampler);
-
- if (skipCall == VK_FALSE) {
- PreCreateSampler(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateSampler(device, pCreateInfo, pAllocator, pSampler);
-
- PostCreateSampler(device, pSampler, result);
- }
-
- return result;
-}
-
-bool PreCreateDescriptorSetLayout(
- VkDevice device,
- const VkDescriptorSetLayoutCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateDescriptorSetLayout parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pBindings != nullptr)
- {
- if(pCreateInfo->pBindings->descriptorType < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
- pCreateInfo->pBindings->descriptorType > VK_DESCRIPTOR_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateDescriptorSetLayout parameter, VkDescriptorType pCreateInfo->pBindings->descriptorType, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pBindings->pImmutableSamplers != nullptr)
- {
- }
- }
- }
-
- return true;
-}
-
-bool PostCreateDescriptorSetLayout(
- VkDevice device,
- VkDescriptorSetLayout* pSetLayout,
- VkResult result)
-{
-
- if(pSetLayout != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateDescriptorSetLayout parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorSetLayout(
- VkDevice device,
- const VkDescriptorSetLayoutCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDescriptorSetLayout* pSetLayout)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateDescriptorSetLayout(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pSetLayout);
-
- if (skipCall == VK_FALSE) {
- PreCreateDescriptorSetLayout(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
-
- PostCreateDescriptorSetLayout(device, pSetLayout, result);
- }
-
- return result;
-}
-
-bool PreCreateDescriptorPool(
- VkDevice device,
- const VkDescriptorPoolCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateDescriptorPool parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pPoolSizes != nullptr)
- {
- if(pCreateInfo->pPoolSizes->type < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
- pCreateInfo->pPoolSizes->type > VK_DESCRIPTOR_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                "vkCreateDescriptorPool parameter, VkDescriptorType pCreateInfo->pPoolSizes->type, is an unrecognized enumerator");
- return false;
- }
- }
- }
-
- return true;
-}
-
-bool PostCreateDescriptorPool(
- VkDevice device,
- uint32_t maxSets,
- VkDescriptorPool* pDescriptorPool,
- VkResult result)
-{
-
- /* TODOVV: How do we validate maxSets? Probably belongs in the limits layer? */
-
- if(pDescriptorPool != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateDescriptorPool parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(
- VkDevice device,
- const VkDescriptorPoolCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDescriptorPool* pDescriptorPool)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateDescriptorPool(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pDescriptorPool);
-
- if (skipCall == VK_FALSE) {
- PreCreateDescriptorPool(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
-
- PostCreateDescriptorPool(device, pCreateInfo->maxSets, pDescriptorPool, result);
- }
-
- return result;
-}
-
-bool PostResetDescriptorPool(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkResetDescriptorPool parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetDescriptorPool(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- VkDescriptorPoolResetFlags flags)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetDescriptorPool(device, descriptorPool, flags);
-
- PostResetDescriptorPool(device, descriptorPool, result);
-
- return result;
-}
-
-bool PreAllocateDescriptorSets(
- VkDevice device,
- const VkDescriptorSetLayout* pSetLayouts)
-{
- if(pSetLayouts != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostAllocateDescriptorSets(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- uint32_t count,
- VkDescriptorSet* pDescriptorSets,
- VkResult result)
-{
-
-
- if(pDescriptorSets != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkAllocateDescriptorSets parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateDescriptorSets(
- VkDevice device,
- const VkDescriptorSetAllocateInfo* pAllocateInfo,
- VkDescriptorSet* pDescriptorSets)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkAllocateDescriptorSets(
- my_data->report_data,
- pAllocateInfo,
- pDescriptorSets);
-
- if (skipCall == VK_FALSE) {
- PreAllocateDescriptorSets(device, pAllocateInfo->pSetLayouts);
-
- result = get_dispatch_table(pc_device_table_map, device)->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
-
- PostAllocateDescriptorSets(device, pAllocateInfo->descriptorPool, pAllocateInfo->descriptorSetCount, pDescriptorSets, result);
- }
-
- return result;
-}
-
-bool PreFreeDescriptorSets(
- VkDevice device,
- const VkDescriptorSet* pDescriptorSets)
-{
- if(pDescriptorSets != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostFreeDescriptorSets(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- uint32_t count,
- VkResult result)
-{
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkFreeDescriptorSets parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(
- VkDevice device,
- VkDescriptorPool descriptorPool,
- uint32_t descriptorSetCount,
- const VkDescriptorSet* pDescriptorSets)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkFreeDescriptorSets(
- my_data->report_data,
- descriptorPool,
- descriptorSetCount,
- pDescriptorSets);
-
- if (skipCall == VK_FALSE) {
- PreFreeDescriptorSets(device, pDescriptorSets);
-
- result = get_dispatch_table(pc_device_table_map, device)->FreeDescriptorSets(device, descriptorPool, descriptorSetCount, pDescriptorSets);
-
- PostFreeDescriptorSets(device, descriptorPool, descriptorSetCount, result);
- }
-
- return result;
-}
-
-bool PreUpdateDescriptorSets(
- VkDevice device,
- const VkWriteDescriptorSet* pDescriptorWrites,
- const VkCopyDescriptorSet* pDescriptorCopies)
-{
- if(pDescriptorWrites != nullptr)
- {
- if(pDescriptorWrites->sType != VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkUpdateDescriptorSets parameter, VkStructureType pDescriptorWrites->sType, is an invalid enumerator");
- return false;
- }
- if(pDescriptorWrites->descriptorType < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
- pDescriptorWrites->descriptorType > VK_DESCRIPTOR_TYPE_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkUpdateDescriptorSets parameter, VkDescriptorType pDescriptorWrites->descriptorType, is an unrecognized enumerator");
- return false;
- }
- /* TODO: Validate other parts of pImageInfo, pBufferInfo, pTexelBufferView? */
- /* TODO: This test should probably only be done if descriptorType is correct type of descriptor */
- if(pDescriptorWrites->pImageInfo != nullptr)
- {
- if (((pDescriptorWrites->pImageInfo->imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pDescriptorWrites->pImageInfo->imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pDescriptorWrites->pImageInfo->imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
-                    "vkUpdateDescriptorSets parameter, VkImageLayout pDescriptorWrites->pImageInfo->imageLayout, is an unrecognized enumerator");
- return false;
- }
- }
- }
-
- if(pDescriptorCopies != nullptr)
- {
- if(pDescriptorCopies->sType != VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkUpdateDescriptorSets parameter, VkStructureType pDescriptorCopies->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(
- VkDevice device,
- uint32_t descriptorWriteCount,
- const VkWriteDescriptorSet* pDescriptorWrites,
- uint32_t descriptorCopyCount,
- const VkCopyDescriptorSet* pDescriptorCopies)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkUpdateDescriptorSets(
- my_data->report_data,
- descriptorWriteCount,
- pDescriptorWrites,
- descriptorCopyCount,
- pDescriptorCopies);
-
- if (skipCall == VK_FALSE) {
- PreUpdateDescriptorSets(device, pDescriptorWrites, pDescriptorCopies);
-
- get_dispatch_table(pc_device_table_map, device)->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
- }
-}
-
-bool PreCreateFramebuffer(
- VkDevice device,
- const VkFramebufferCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateFramebuffer parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreateFramebuffer(
- VkDevice device,
- VkFramebuffer* pFramebuffer,
- VkResult result)
-{
-
- if(pFramebuffer != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateFramebuffer parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(
- VkDevice device,
- const VkFramebufferCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkFramebuffer* pFramebuffer)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateFramebuffer(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pFramebuffer);
-
- if (skipCall == VK_FALSE) {
- PreCreateFramebuffer(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
-
- PostCreateFramebuffer(device, pFramebuffer, result);
- }
-
- return result;
-}
-
-bool PreCreateRenderPass(
- VkDevice device,
- const VkRenderPassCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments != nullptr)
- {
- if(pCreateInfo->pAttachments->format < VK_FORMAT_BEGIN_RANGE ||
- pCreateInfo->pAttachments->format > VK_FORMAT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkFormat pCreateInfo->pAttachments->format, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments->loadOp < VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE ||
- pCreateInfo->pAttachments->loadOp > VK_ATTACHMENT_LOAD_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkAttachmentLoadOp pCreateInfo->pAttachments->loadOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments->storeOp < VK_ATTACHMENT_STORE_OP_BEGIN_RANGE ||
- pCreateInfo->pAttachments->storeOp > VK_ATTACHMENT_STORE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkAttachmentStoreOp pCreateInfo->pAttachments->storeOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments->stencilLoadOp < VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE ||
- pCreateInfo->pAttachments->stencilLoadOp > VK_ATTACHMENT_LOAD_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkAttachmentLoadOp pCreateInfo->pAttachments->stencilLoadOp, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pAttachments->stencilStoreOp < VK_ATTACHMENT_STORE_OP_BEGIN_RANGE ||
- pCreateInfo->pAttachments->stencilStoreOp > VK_ATTACHMENT_STORE_OP_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkAttachmentStoreOp pCreateInfo->pAttachments->stencilStoreOp, is an unrecognized enumerator");
- return false;
- }
- if (((pCreateInfo->pAttachments->initialLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pCreateInfo->pAttachments->initialLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pCreateInfo->pAttachments->initialLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pAttachments->initialLayout, is an unrecognized enumerator");
- return false;
- }
-        if (((pCreateInfo->pAttachments->finalLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
-            (pCreateInfo->pAttachments->finalLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
-            (pCreateInfo->pAttachments->finalLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pAttachments->finalLayout, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->pSubpasses != nullptr)
- {
- if(pCreateInfo->pSubpasses->pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE ||
- pCreateInfo->pSubpasses->pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkPipelineBindPoint pCreateInfo->pSubpasses->pipelineBindPoint, is an unrecognized enumerator");
- return false;
- }
- if(pCreateInfo->pSubpasses->pInputAttachments != nullptr)
- {
- if (((pCreateInfo->pSubpasses->pInputAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pCreateInfo->pSubpasses->pInputAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pCreateInfo->pSubpasses->pInputAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pInputAttachments->layout, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->pSubpasses->pColorAttachments != nullptr)
- {
- if (((pCreateInfo->pSubpasses->pColorAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pCreateInfo->pSubpasses->pColorAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pCreateInfo->pSubpasses->pColorAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pColorAttachments->layout, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->pSubpasses->pResolveAttachments != nullptr)
- {
- if (((pCreateInfo->pSubpasses->pResolveAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pCreateInfo->pSubpasses->pResolveAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pCreateInfo->pSubpasses->pResolveAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pResolveAttachments->layout, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->pSubpasses->pDepthStencilAttachment &&
- ((pCreateInfo->pSubpasses->pDepthStencilAttachment->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (pCreateInfo->pSubpasses->pDepthStencilAttachment->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (pCreateInfo->pSubpasses->pDepthStencilAttachment->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pDepthStencilAttachment->layout, is an unrecognized enumerator");
- return false;
- }
- }
- if(pCreateInfo->pDependencies != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCreateRenderPass(
- VkDevice device,
- VkRenderPass* pRenderPass,
- VkResult result)
-{
-
- if(pRenderPass != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateRenderPass parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(
- VkDevice device,
- const VkRenderPassCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkRenderPass* pRenderPass)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateRenderPass(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pRenderPass);
-
- if (skipCall == VK_FALSE) {
- PreCreateRenderPass(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
-
- PostCreateRenderPass(device, pRenderPass, result);
- }
-
- return result;
-}
-
-bool PostGetRenderAreaGranularity(
- VkDevice device,
- VkRenderPass renderPass,
- VkExtent2D* pGranularity)
-{
-
-
- if(pGranularity != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetRenderAreaGranularity(
- VkDevice device,
- VkRenderPass renderPass,
- VkExtent2D* pGranularity)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkGetRenderAreaGranularity(
- my_data->report_data,
- renderPass,
- pGranularity);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, device)->GetRenderAreaGranularity(device, renderPass, pGranularity);
-
- PostGetRenderAreaGranularity(device, renderPass, pGranularity);
- }
-}
-
-bool PreCreateCommandPool(
- VkDevice device,
- const VkCommandPoolCreateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCreateCommandPool parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateCommandPool(
- VkDevice device,
- VkCommandPool* pCommandPool,
- VkResult result)
-{
-
- if(pCommandPool != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkCreateCommandPool parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(
- VkDevice device,
- const VkCommandPoolCreateInfo* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkCommandPool* pCommandPool)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCreateCommandPool(
- my_data->report_data,
- pCreateInfo,
- pAllocator,
- pCommandPool);
-
- if (skipCall == VK_FALSE) {
- PreCreateCommandPool(device, pCreateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
-
- PostCreateCommandPool(device, pCommandPool, result);
- }
-
- return result;
-}
-
-bool PostResetCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- VkCommandPoolResetFlags flags,
- VkResult result)
-{
-
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkResetCommandPool parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(
- VkDevice device,
- VkCommandPool commandPool,
- VkCommandPoolResetFlags flags)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetCommandPool(device, commandPool, flags);
-
- PostResetCommandPool(device, commandPool, flags, result);
-
- return result;
-}
-
-bool PreCreateCommandBuffer(
- VkDevice device,
- const VkCommandBufferAllocateInfo* pCreateInfo)
-{
- if(pCreateInfo != nullptr)
- {
- if(pCreateInfo->sType != VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkAllocateCommandBuffers parameter, VkStructureType pCreateInfo->sType, is an invalid enumerator");
- return false;
- }
- if(pCreateInfo->level < VK_COMMAND_BUFFER_LEVEL_BEGIN_RANGE ||
- pCreateInfo->level > VK_COMMAND_BUFFER_LEVEL_END_RANGE)
- {
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkAllocateCommandBuffers parameter, VkCommandBufferLevel pCreateInfo->level, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCreateCommandBuffer(
- VkDevice device,
- VkCommandBuffer* pCommandBuffer,
- VkResult result)
-{
-
- if(pCommandBuffer != nullptr)
- {
- }
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkAllocateCommandBuffers parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(
- VkDevice device,
- const VkCommandBufferAllocateInfo* pAllocateInfo,
- VkCommandBuffer* pCommandBuffers)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkAllocateCommandBuffers(
- my_data->report_data,
- pAllocateInfo,
- pCommandBuffers);
-
- if (skipCall == VK_FALSE) {
- PreCreateCommandBuffer(device, pAllocateInfo);
-
- result = get_dispatch_table(pc_device_table_map, device)->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers);
-
- PostCreateCommandBuffer(device, pCommandBuffers, result);
- }
-
- return result;
-}
-
-bool PreBeginCommandBuffer(
- VkCommandBuffer commandBuffer,
- const VkCommandBufferBeginInfo* pBeginInfo)
-{
- if(pBeginInfo != nullptr)
- {
- if(pBeginInfo->sType != VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkBeginCommandBuffer parameter, VkStructureType pBeginInfo->sType, is an invalid enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostBeginCommandBuffer(
- VkCommandBuffer commandBuffer,
- VkResult result)
-{
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkBeginCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(
- VkCommandBuffer commandBuffer,
- const VkCommandBufferBeginInfo* pBeginInfo)
-{
- VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkBeginCommandBuffer(
- my_data->report_data,
- pBeginInfo);
-
- if (skipCall == VK_FALSE) {
- PreBeginCommandBuffer(commandBuffer, pBeginInfo);
-
- result = get_dispatch_table(pc_device_table_map, commandBuffer)->BeginCommandBuffer(commandBuffer, pBeginInfo);
-
- PostBeginCommandBuffer(commandBuffer, result);
- }
-
- return result;
-}
-
-bool PostEndCommandBuffer(
- VkCommandBuffer commandBuffer,
- VkResult result)
-{
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkEndCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(
- VkCommandBuffer commandBuffer)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->EndCommandBuffer(commandBuffer);
-
- PostEndCommandBuffer(commandBuffer, result);
-
- return result;
-}
-
-bool PostResetCommandBuffer(
- VkCommandBuffer commandBuffer,
- VkCommandBufferResetFlags flags,
- VkResult result)
-{
-
-
- if(result < VK_SUCCESS)
- {
- std::string reason = "vkResetCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s", reason.c_str());
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(
- VkCommandBuffer commandBuffer,
- VkCommandBufferResetFlags flags)
-{
- VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->ResetCommandBuffer(commandBuffer, flags);
-
- PostResetCommandBuffer(commandBuffer, flags, result);
-
- return result;
-}
-
-bool PostCmdBindPipeline(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipeline pipeline)
-{
-
- if(pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE ||
- pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBindPipeline parameter, VkPipelineBindPoint pipelineBindPoint, is an unrecognized enumerator");
- return false;
- }
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipeline pipeline)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
-
- PostCmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdSetViewport(
- my_data->report_data,
- firstViewport,
- viewportCount,
- pViewports);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdSetScissor(
- my_data->report_data,
- firstScissor,
- scissorCount,
- pScissors);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetLineWidth(commandBuffer, lineWidth);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4])
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdSetBlendConstants(
- my_data->report_data,
- blendConstants);
-
- if (skipCall == VK_FALSE) {
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetBlendConstants(commandBuffer, blendConstants);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilReference(commandBuffer, faceMask, reference);
-}
-
-bool PreCmdBindDescriptorSets(
- VkCommandBuffer commandBuffer,
- const VkDescriptorSet* pDescriptorSets,
- const uint32_t* pDynamicOffsets)
-{
- if(pDescriptorSets != nullptr)
- {
- }
-
- if(pDynamicOffsets != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdBindDescriptorSets(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipelineLayout layout,
- uint32_t firstSet,
- uint32_t setCount,
- uint32_t dynamicOffsetCount)
-{
-
- if(pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE ||
- pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBindDescriptorSets parameter, VkPipelineBindPoint pipelineBindPoint, is an unrecognized enumerator");
- return false;
- }
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(
- VkCommandBuffer commandBuffer,
- VkPipelineBindPoint pipelineBindPoint,
- VkPipelineLayout layout,
- uint32_t firstSet,
- uint32_t descriptorSetCount,
- const VkDescriptorSet* pDescriptorSets,
- uint32_t dynamicOffsetCount,
- const uint32_t* pDynamicOffsets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdBindDescriptorSets(
- my_data->report_data,
- pipelineBindPoint,
- layout,
- firstSet,
- descriptorSetCount,
- pDescriptorSets,
- dynamicOffsetCount,
- pDynamicOffsets);
-
- if (skipCall == VK_FALSE) {
- PreCmdBindDescriptorSets(commandBuffer, pDescriptorSets, pDynamicOffsets);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
-
- PostCmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, dynamicOffsetCount);
- }
-}
-
-bool PostCmdBindIndexBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- VkIndexType indexType)
-{
-
-
-
- if(indexType < VK_INDEX_TYPE_BEGIN_RANGE ||
- indexType > VK_INDEX_TYPE_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBindIndexBuffer parameter, VkIndexType indexType, is an unrecognized enumerator");
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- VkIndexType indexType)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
-
- PostCmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
-}
-
-bool PreCmdBindVertexBuffers(
- VkCommandBuffer commandBuffer,
- const VkBuffer* pBuffers,
- const VkDeviceSize* pOffsets)
-{
- if(pBuffers != nullptr)
- {
- }
-
- if(pOffsets != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdBindVertexBuffers(
- VkCommandBuffer commandBuffer,
- uint32_t firstBinding,
- uint32_t bindingCount)
-{
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(
- VkCommandBuffer commandBuffer,
- uint32_t firstBinding,
- uint32_t bindingCount,
- const VkBuffer* pBuffers,
- const VkDeviceSize* pOffsets)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdBindVertexBuffers(
- my_data->report_data,
- firstBinding,
- bindingCount,
- pBuffers,
- pOffsets);
-
- if (skipCall == VK_FALSE) {
- PreCmdBindVertexBuffers(commandBuffer, pBuffers, pOffsets);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
-
- PostCmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount);
- }
-}
-
-bool PreCmdDraw(
- VkCommandBuffer commandBuffer,
- uint32_t vertexCount,
- uint32_t instanceCount,
- uint32_t firstVertex,
- uint32_t firstInstance)
-{
- if (vertexCount == 0) {
- // TODO: Verify against the Valid Usage section. A non-zero vertexCount requirement is not listed there;
- // we may need to add one and make this an error, or leave this as a warning.
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdDraw parameter, uint32_t vertexCount, is 0");
- return false;
- }
-
- if (instanceCount == 0) {
- // TODO: Verify against the Valid Usage section. A non-zero instanceCount requirement is not listed there;
- // we may need to add one and make this an error, or leave this as a warning.
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdDraw parameter, uint32_t instanceCount, is 0");
- return false;
- }
-
- return true;
-}
-
-bool PostCmdDraw(
- VkCommandBuffer commandBuffer,
- uint32_t firstVertex,
- uint32_t vertexCount,
- uint32_t firstInstance,
- uint32_t instanceCount)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(
- VkCommandBuffer commandBuffer,
- uint32_t vertexCount,
- uint32_t instanceCount,
- uint32_t firstVertex,
- uint32_t firstInstance)
-{
- PreCmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
-
- PostCmdDraw(commandBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
-}
-
-bool PostCmdDrawIndexed(
- VkCommandBuffer commandBuffer,
- uint32_t firstIndex,
- uint32_t indexCount,
- int32_t vertexOffset,
- uint32_t firstInstance,
- uint32_t instanceCount)
-{
-
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(
- VkCommandBuffer commandBuffer,
- uint32_t indexCount,
- uint32_t instanceCount,
- uint32_t firstIndex,
- int32_t vertexOffset,
- uint32_t firstInstance)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance);
-
- PostCmdDrawIndexed(commandBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
-}
-
-bool PostCmdDrawIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
-
- PostCmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
-}
-
-bool PostCmdDrawIndexedIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset,
- uint32_t count,
- uint32_t stride)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
-
- PostCmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
-}
-
-bool PostCmdDispatch(
- VkCommandBuffer commandBuffer,
- uint32_t x,
- uint32_t y,
- uint32_t z)
-{
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(
- VkCommandBuffer commandBuffer,
- uint32_t x,
- uint32_t y,
- uint32_t z)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDispatch(commandBuffer, x, y, z);
-
- PostCmdDispatch(commandBuffer, x, y, z);
-}
-
-bool PostCmdDispatchIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset)
-{
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(
- VkCommandBuffer commandBuffer,
- VkBuffer buffer,
- VkDeviceSize offset)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDispatchIndirect(commandBuffer, buffer, offset);
-
- PostCmdDispatchIndirect(commandBuffer, buffer, offset);
-}
-
-bool PreCmdCopyBuffer(
- VkCommandBuffer commandBuffer,
- const VkBufferCopy* pRegions)
-{
- if(pRegions != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdCopyBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkBuffer dstBuffer,
- uint32_t regionCount)
-{
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkBuffer dstBuffer,
- uint32_t regionCount,
- const VkBufferCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdCopyBuffer(
- my_data->report_data,
- srcBuffer,
- dstBuffer,
- regionCount,
- pRegions);
-
- if (skipCall == VK_FALSE) {
- PreCmdCopyBuffer(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
-
- PostCmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount);
- }
-}
-
-bool PreCmdCopyImage(
- VkCommandBuffer commandBuffer,
- const VkImageCopy* pRegions)
-{
- if(pRegions != nullptr)
- {
- if ((pRegions->srcSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- if ((pRegions->dstSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCmdCopyImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount)
-{
- if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdCopyImage(
- my_data->report_data,
- srcImage,
- srcImageLayout,
- dstImage,
- dstImageLayout,
- regionCount,
- pRegions);
-
- if (skipCall == VK_FALSE) {
- PreCmdCopyImage(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
-
- PostCmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount);
- }
-}
-
-bool PreCmdBlitImage(
- VkCommandBuffer commandBuffer,
- const VkImageBlit* pRegions)
-{
- if(pRegions != nullptr)
- {
- if ((pRegions->srcSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBlitImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- if ((pRegions->dstSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBlitImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCmdBlitImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- VkFilter filter)
-{
-
-
- if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBlitImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBlitImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- if(filter < VK_FILTER_BEGIN_RANGE ||
- filter > VK_FILTER_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBlitImage parameter, VkFilter filter, is an unrecognized enumerator");
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageBlit* pRegions,
- VkFilter filter)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdBlitImage(
- my_data->report_data,
- srcImage,
- srcImageLayout,
- dstImage,
- dstImageLayout,
- regionCount,
- pRegions,
- filter);
-
- if (skipCall == VK_FALSE) {
- PreCmdBlitImage(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
-
- PostCmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, filter);
- }
-}
-
-bool PreCmdCopyBufferToImage(
- VkCommandBuffer commandBuffer,
- const VkBufferImageCopy* pRegions)
-{
- if(pRegions != nullptr)
- {
- if ((pRegions->imageSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyBufferToImage parameter, VkImageAspect pRegions->imageSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCmdCopyBufferToImage(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount)
-{
-
-
-
- if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyBufferToImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(
- VkCommandBuffer commandBuffer,
- VkBuffer srcBuffer,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkBufferImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdCopyBufferToImage(
- my_data->report_data,
- srcBuffer,
- dstImage,
- dstImageLayout,
- regionCount,
- pRegions);
-
- if (skipCall == VK_FALSE) {
- PreCmdCopyBufferToImage(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
-
- PostCmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount);
- }
-}
-
-bool PreCmdCopyImageToBuffer(
- VkCommandBuffer commandBuffer,
- const VkBufferImageCopy* pRegions)
-{
- if(pRegions != nullptr)
- {
- if ((pRegions->imageSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImageToBuffer parameter, VkImageAspect pRegions->imageSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCmdCopyImageToBuffer(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkBuffer dstBuffer,
- uint32_t regionCount)
-{
-
-
- if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdCopyImageToBuffer parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkBuffer dstBuffer,
- uint32_t regionCount,
- const VkBufferImageCopy* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdCopyImageToBuffer(
- my_data->report_data,
- srcImage,
- srcImageLayout,
- dstBuffer,
- regionCount,
- pRegions);
-
- if (skipCall == VK_FALSE) {
- PreCmdCopyImageToBuffer(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
-
- PostCmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount);
- }
-}
-
-bool PreCmdUpdateBuffer(
- VkCommandBuffer commandBuffer,
- const uint32_t* pData)
-{
- if(pData != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdUpdateBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize dataSize)
-{
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize dataSize,
- const uint32_t* pData)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdUpdateBuffer(
- my_data->report_data,
- dstBuffer,
- dstOffset,
- dataSize,
- pData);
-
- if (skipCall == VK_FALSE) {
- PreCmdUpdateBuffer(commandBuffer, pData);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
-
- PostCmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize);
- }
-}
-
-bool PostCmdFillBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize size,
- uint32_t data)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
- VkCommandBuffer commandBuffer,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize size,
- uint32_t data)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
-
- PostCmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
-}
-
-bool PreCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- const VkClearColorValue* pColor,
- const VkImageSubresourceRange* pRanges)
-{
- if(pColor != nullptr)
- {
- }
-
- if(pRanges != nullptr)
- {
- /* TODO: How should we validate pRanges->aspectMask */
- }
-
- return true;
-}
-
-bool PostCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- uint32_t rangeCount)
-{
-
-
- if (((imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdClearColorImage parameter, VkImageLayout imageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearColorValue* pColor,
- uint32_t rangeCount,
- const VkImageSubresourceRange* pRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdClearColorImage(
- my_data->report_data,
- image,
- imageLayout,
- pColor,
- rangeCount,
- pRanges);
-
- if (skipCall == VK_FALSE) {
- PreCmdClearColorImage(commandBuffer, pColor, pRanges);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
-
- PostCmdClearColorImage(commandBuffer, image, imageLayout, rangeCount);
- }
-}
-
-bool PreCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- const VkImageSubresourceRange* pRanges)
-{
- if(pRanges != nullptr)
- {
- /*
- * TODO: How do we validate pRanges->aspectMask?
- * Allowed values are: VK_IMAGE_ASPECT_DEPTH_BIT and
- * VK_IMAGE_ASPECT_STENCIL_BIT.
- */
- }
-
- return true;
-}
-
-bool PostCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearDepthStencilValue* pDepthStencil,
- uint32_t rangeCount)
-{
-
-
- if (((imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdClearDepthStencilImage parameter, VkImageLayout imageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(
- VkCommandBuffer commandBuffer,
- VkImage image,
- VkImageLayout imageLayout,
- const VkClearDepthStencilValue* pDepthStencil,
- uint32_t rangeCount,
- const VkImageSubresourceRange* pRanges)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdClearDepthStencilImage(
- my_data->report_data,
- image,
- imageLayout,
- pDepthStencil,
- rangeCount,
- pRanges);
-
- if (skipCall == VK_FALSE) {
- PreCmdClearDepthStencilImage(commandBuffer, pRanges);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges);
-
- PostCmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount);
- }
-}
-
-bool PreCmdClearAttachments(
- VkCommandBuffer commandBuffer,
- const VkClearColorValue* pColor,
- const VkClearRect* pRects)
-{
- if(pColor != nullptr)
- {
- }
-
- if(pRects != nullptr)
- {
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(
- VkCommandBuffer commandBuffer,
- uint32_t attachmentCount,
- const VkClearAttachment* pAttachments,
- uint32_t rectCount,
- const VkClearRect* pRects)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdClearAttachments(
- my_data->report_data,
- attachmentCount,
- pAttachments,
- rectCount,
- pRects);
-
- if (skipCall == VK_FALSE) {
- for (uint32_t i = 0; i < attachmentCount; i++) {
- PreCmdClearAttachments(commandBuffer, &pAttachments[i].clearValue.color, pRects);
- }
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
- }
-}
-
-bool PreCmdResolveImage(
- VkCommandBuffer commandBuffer,
- const VkImageResolve* pRegions)
-{
- if(pRegions != nullptr)
- {
- if ((pRegions->srcSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdResolveImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- if ((pRegions->dstSubresource.aspectMask &
- (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdResolveImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
- return false;
- }
- }
-
- return true;
-}
-
-bool PostCmdResolveImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount)
-{
-
-
- if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdResolveImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
- (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
- (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR))
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdResolveImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
- return false;
- }
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(
- VkCommandBuffer commandBuffer,
- VkImage srcImage,
- VkImageLayout srcImageLayout,
- VkImage dstImage,
- VkImageLayout dstImageLayout,
- uint32_t regionCount,
- const VkImageResolve* pRegions)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdResolveImage(
- my_data->report_data,
- srcImage,
- srcImageLayout,
- dstImage,
- dstImageLayout,
- regionCount,
- pRegions);
-
- if (skipCall == VK_FALSE) {
- PreCmdResolveImage(commandBuffer, pRegions);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
-
- PostCmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount);
- }
-}
-
-bool PostCmdSetEvent(
- VkCommandBuffer commandBuffer,
- VkEvent event,
- VkPipelineStageFlags stageMask)
-{
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(
- VkCommandBuffer commandBuffer,
- VkEvent event,
- VkPipelineStageFlags stageMask)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetEvent(commandBuffer, event, stageMask);
-
- PostCmdSetEvent(commandBuffer, event, stageMask);
-}
-
-bool PostCmdResetEvent(
- VkCommandBuffer commandBuffer,
- VkEvent event,
- VkPipelineStageFlags stageMask)
-{
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(
- VkCommandBuffer commandBuffer,
- VkEvent event,
- VkPipelineStageFlags stageMask)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetEvent(commandBuffer, event, stageMask);
-
- PostCmdResetEvent(commandBuffer, event, stageMask);
-}
-
-bool PreCmdWaitEvents(
- VkCommandBuffer commandBuffer,
- const VkEvent* pEvents,
- uint32_t memoryBarrierCount,
- const VkMemoryBarrier *pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount,
- const VkImageMemoryBarrier *pImageMemoryBarriers)
-{
- if(pEvents != nullptr)
- {
- }
-
- if(pMemoryBarriers != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdWaitEvents(
- VkCommandBuffer commandBuffer,
- uint32_t eventCount,
- VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask,
- uint32_t memoryBarrierCount)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(
- VkCommandBuffer commandBuffer,
- uint32_t eventCount,
- const VkEvent *pEvents,
- VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask,
- uint32_t memoryBarrierCount,
- const VkMemoryBarrier *pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount,
- const VkImageMemoryBarrier *pImageMemoryBarriers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdWaitEvents(
- my_data->report_data,
- eventCount,
- pEvents,
- srcStageMask,
- dstStageMask,
- memoryBarrierCount,
- pMemoryBarriers,
- bufferMemoryBarrierCount,
- pBufferMemoryBarriers,
- imageMemoryBarrierCount,
- pImageMemoryBarriers);
-
- if (skipCall == VK_FALSE) {
- PreCmdWaitEvents(commandBuffer, pEvents, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdWaitEvents(commandBuffer, eventCount, pEvents, srcStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
-
- PostCmdWaitEvents(commandBuffer, eventCount, srcStageMask, dstStageMask, memoryBarrierCount);
- }
-}
-
-bool PreCmdPipelineBarrier(
- VkCommandBuffer commandBuffer,
- uint32_t memoryBarrierCount,
- const VkMemoryBarrier *pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount,
- const VkImageMemoryBarrier *pImageMemoryBarriers)
-{
- if(pMemoryBarriers != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdPipelineBarrier(
- VkCommandBuffer commandBuffer,
- VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask,
- VkDependencyFlags dependencyFlags,
- uint32_t memoryBarrierCount)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(
- VkCommandBuffer commandBuffer,
- VkPipelineStageFlags srcStageMask,
- VkPipelineStageFlags dstStageMask,
- VkDependencyFlags dependencyFlags,
- uint32_t memoryBarrierCount,
- const VkMemoryBarrier *pMemoryBarriers,
- uint32_t bufferMemoryBarrierCount,
- const VkBufferMemoryBarrier *pBufferMemoryBarriers,
- uint32_t imageMemoryBarrierCount,
- const VkImageMemoryBarrier *pImageMemoryBarriers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdPipelineBarrier(
- my_data->report_data,
- srcStageMask,
- dstStageMask,
- dependencyFlags,
- memoryBarrierCount,
- pMemoryBarriers,
- bufferMemoryBarrierCount,
- pBufferMemoryBarriers,
- imageMemoryBarrierCount,
- pImageMemoryBarriers);
-
- if (skipCall == VK_FALSE) {
- PreCmdPipelineBarrier(commandBuffer, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
-
- PostCmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount);
- }
-}
-
-bool PostCmdBeginQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot,
- VkQueryControlFlags flags)
-{
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot,
- VkQueryControlFlags flags)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
-
- PostCmdBeginQuery(commandBuffer, queryPool, slot, flags);
-}
-
-bool PostCmdEndQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot)
-{
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t slot)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndQuery(commandBuffer, queryPool, slot);
-
- PostCmdEndQuery(commandBuffer, queryPool, slot);
-}
-
-bool PostCmdResetQueryPool(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount)
-{
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
-
- PostCmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
-}
-
-bool PostCmdWriteTimestamp(
- VkCommandBuffer commandBuffer,
- VkPipelineStageFlagBits pipelineStage,
- VkQueryPool queryPool,
- uint32_t slot)
-{
-
- ValidateEnumerator(pipelineStage);
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(
- VkCommandBuffer commandBuffer,
- VkPipelineStageFlagBits pipelineStage,
- VkQueryPool queryPool,
- uint32_t slot)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
-
- PostCmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
-}
-
-bool PostCmdCopyQueryPoolResults(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize stride,
- VkQueryResultFlags flags)
-{
-
-
-
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(
- VkCommandBuffer commandBuffer,
- VkQueryPool queryPool,
- uint32_t firstQuery,
- uint32_t queryCount,
- VkBuffer dstBuffer,
- VkDeviceSize dstOffset,
- VkDeviceSize stride,
- VkQueryResultFlags flags)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, stride, flags);
-
- PostCmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, stride, flags);
-}
-
-bool PreCmdPushConstants(
- VkCommandBuffer commandBuffer,
- const void* pValues)
-{
- if(pValues != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdPushConstants(
- VkCommandBuffer commandBuffer,
- VkPipelineLayout layout,
- VkShaderStageFlags stageFlags,
- uint32_t offset,
- uint32_t size)
-{
-
-
-
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(
- VkCommandBuffer commandBuffer,
- VkPipelineLayout layout,
- VkShaderStageFlags stageFlags,
- uint32_t offset,
- uint32_t size,
- const void* pValues)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdPushConstants(
- my_data->report_data,
- layout,
- stageFlags,
- offset,
- size,
- pValues);
-
- if (skipCall == VK_FALSE) {
- PreCmdPushConstants(commandBuffer, pValues);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues);
-
- PostCmdPushConstants(commandBuffer, layout, stageFlags, offset, size);
- }
-}
-
-bool PreCmdBeginRenderPass(
- VkCommandBuffer commandBuffer,
- const VkRenderPassBeginInfo* pRenderPassBegin)
-{
- if(pRenderPassBegin != nullptr)
- {
- if(pRenderPassBegin->sType != VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBeginRenderPass parameter, VkStructureType pRenderPassBegin->sType, is an invalid enumerator");
- return false;
- }
- if(pRenderPassBegin->pClearValues != nullptr)
- {
- }
- }
-
- return true;
-}
-
-bool PostCmdBeginRenderPass(
- VkCommandBuffer commandBuffer,
- VkSubpassContents contents)
-{
-
- if(contents < VK_SUBPASS_CONTENTS_BEGIN_RANGE ||
- contents > VK_SUBPASS_CONTENTS_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdBeginRenderPass parameter, VkSubpassContents contents, is an unrecognized enumerator");
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(
- VkCommandBuffer commandBuffer,
- const VkRenderPassBeginInfo* pRenderPassBegin,
- VkSubpassContents contents)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdBeginRenderPass(
- my_data->report_data,
- pRenderPassBegin,
- contents);
-
- if (skipCall == VK_FALSE) {
- PreCmdBeginRenderPass(commandBuffer, pRenderPassBegin);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents);
-
- PostCmdBeginRenderPass(commandBuffer, contents);
- }
-}
-
-bool PostCmdNextSubpass(
- VkCommandBuffer commandBuffer,
- VkSubpassContents contents)
-{
-
- if(contents < VK_SUBPASS_CONTENTS_BEGIN_RANGE ||
- contents > VK_SUBPASS_CONTENTS_END_RANGE)
- {
- log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
- "vkCmdNextSubpass parameter, VkSubpassContents contents, is an unrecognized enumerator");
- return false;
- }
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(
- VkCommandBuffer commandBuffer,
- VkSubpassContents contents)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdNextSubpass(commandBuffer, contents);
-
- PostCmdNextSubpass(commandBuffer, contents);
-}
-
-bool PostCmdEndRenderPass(
- VkCommandBuffer commandBuffer)
-{
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(
- VkCommandBuffer commandBuffer)
-{
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndRenderPass(commandBuffer);
-
- PostCmdEndRenderPass(commandBuffer);
-}
-
-bool PreCmdExecuteCommands(
- VkCommandBuffer commandBuffer,
- const VkCommandBuffer* pCommandBuffers)
-{
- if(pCommandBuffers != nullptr)
- {
- }
-
- return true;
-}
-
-bool PostCmdExecuteCommands(
- VkCommandBuffer commandBuffer,
- uint32_t commandBuffersCount)
-{
-
-
- return true;
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(
- VkCommandBuffer commandBuffer,
- uint32_t commandBufferCount,
- const VkCommandBuffer* pCommandBuffers)
-{
- VkBool32 skipCall = VK_FALSE;
- layer_data* my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
- assert(my_data != NULL);
-
- skipCall |= param_check_vkCmdExecuteCommands(
- my_data->report_data,
- commandBufferCount,
- pCommandBuffers);
-
- if (skipCall == VK_FALSE) {
- PreCmdExecuteCommands(commandBuffer, pCommandBuffers);
-
- get_dispatch_table(pc_device_table_map, commandBuffer)->CmdExecuteCommands(commandBuffer, commandBufferCount, pCommandBuffers);
-
- PostCmdExecuteCommands(commandBuffer, commandBufferCount);
- }
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char* funcName)
-{
- layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
-
- if (validate_string(data, "vkGetDeviceProcAddr()", "funcName", funcName) == VK_TRUE) {
- return NULL;
- }
-
- if (!strcmp(funcName, "vkGetDeviceProcAddr"))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
- if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
- if (!strcmp(funcName, "vkGetDeviceQueue"))
- return (PFN_vkVoidFunction) vkGetDeviceQueue;
- if (!strcmp(funcName, "vkQueueSubmit"))
- return (PFN_vkVoidFunction) vkQueueSubmit;
- if (!strcmp(funcName, "vkQueueWaitIdle"))
- return (PFN_vkVoidFunction) vkQueueWaitIdle;
- if (!strcmp(funcName, "vkDeviceWaitIdle"))
- return (PFN_vkVoidFunction) vkDeviceWaitIdle;
- if (!strcmp(funcName, "vkAllocateMemory"))
- return (PFN_vkVoidFunction) vkAllocateMemory;
- if (!strcmp(funcName, "vkMapMemory"))
- return (PFN_vkVoidFunction) vkMapMemory;
- if (!strcmp(funcName, "vkFlushMappedMemoryRanges"))
- return (PFN_vkVoidFunction) vkFlushMappedMemoryRanges;
- if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges"))
- return (PFN_vkVoidFunction) vkInvalidateMappedMemoryRanges;
- if (!strcmp(funcName, "vkCreateFence"))
- return (PFN_vkVoidFunction) vkCreateFence;
- if (!strcmp(funcName, "vkResetFences"))
- return (PFN_vkVoidFunction) vkResetFences;
- if (!strcmp(funcName, "vkGetFenceStatus"))
- return (PFN_vkVoidFunction) vkGetFenceStatus;
- if (!strcmp(funcName, "vkWaitForFences"))
- return (PFN_vkVoidFunction) vkWaitForFences;
- if (!strcmp(funcName, "vkCreateSemaphore"))
- return (PFN_vkVoidFunction) vkCreateSemaphore;
- if (!strcmp(funcName, "vkCreateEvent"))
- return (PFN_vkVoidFunction) vkCreateEvent;
- if (!strcmp(funcName, "vkGetEventStatus"))
- return (PFN_vkVoidFunction) vkGetEventStatus;
- if (!strcmp(funcName, "vkSetEvent"))
- return (PFN_vkVoidFunction) vkSetEvent;
- if (!strcmp(funcName, "vkResetEvent"))
- return (PFN_vkVoidFunction) vkResetEvent;
- if (!strcmp(funcName, "vkCreateQueryPool"))
- return (PFN_vkVoidFunction) vkCreateQueryPool;
- if (!strcmp(funcName, "vkGetQueryPoolResults"))
- return (PFN_vkVoidFunction) vkGetQueryPoolResults;
- if (!strcmp(funcName, "vkCreateBuffer"))
- return (PFN_vkVoidFunction) vkCreateBuffer;
- if (!strcmp(funcName, "vkCreateBufferView"))
- return (PFN_vkVoidFunction) vkCreateBufferView;
- if (!strcmp(funcName, "vkCreateImage"))
- return (PFN_vkVoidFunction) vkCreateImage;
- if (!strcmp(funcName, "vkGetImageSubresourceLayout"))
- return (PFN_vkVoidFunction) vkGetImageSubresourceLayout;
- if (!strcmp(funcName, "vkCreateImageView"))
- return (PFN_vkVoidFunction) vkCreateImageView;
- if (!strcmp(funcName, "vkCreateShaderModule"))
- return (PFN_vkVoidFunction) vkCreateShaderModule;
- if (!strcmp(funcName, "vkCreateGraphicsPipelines"))
- return (PFN_vkVoidFunction) vkCreateGraphicsPipelines;
- if (!strcmp(funcName, "vkCreateComputePipelines"))
- return (PFN_vkVoidFunction) vkCreateComputePipelines;
- if (!strcmp(funcName, "vkCreatePipelineLayout"))
- return (PFN_vkVoidFunction) vkCreatePipelineLayout;
- if (!strcmp(funcName, "vkCreateSampler"))
- return (PFN_vkVoidFunction) vkCreateSampler;
- if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
- return (PFN_vkVoidFunction) vkCreateDescriptorSetLayout;
- if (!strcmp(funcName, "vkCreateDescriptorPool"))
- return (PFN_vkVoidFunction) vkCreateDescriptorPool;
- if (!strcmp(funcName, "vkResetDescriptorPool"))
- return (PFN_vkVoidFunction) vkResetDescriptorPool;
- if (!strcmp(funcName, "vkAllocateDescriptorSets"))
- return (PFN_vkVoidFunction) vkAllocateDescriptorSets;
- if (!strcmp(funcName, "vkCmdSetViewport"))
- return (PFN_vkVoidFunction) vkCmdSetViewport;
- if (!strcmp(funcName, "vkCmdSetScissor"))
- return (PFN_vkVoidFunction) vkCmdSetScissor;
- if (!strcmp(funcName, "vkCmdSetLineWidth"))
- return (PFN_vkVoidFunction) vkCmdSetLineWidth;
- if (!strcmp(funcName, "vkCmdSetDepthBias"))
- return (PFN_vkVoidFunction) vkCmdSetDepthBias;
- if (!strcmp(funcName, "vkCmdSetBlendConstants"))
- return (PFN_vkVoidFunction) vkCmdSetBlendConstants;
- if (!strcmp(funcName, "vkCmdSetDepthBounds"))
- return (PFN_vkVoidFunction) vkCmdSetDepthBounds;
- if (!strcmp(funcName, "vkCmdSetStencilCompareMask"))
- return (PFN_vkVoidFunction) vkCmdSetStencilCompareMask;
- if (!strcmp(funcName, "vkCmdSetStencilWriteMask"))
- return (PFN_vkVoidFunction) vkCmdSetStencilWriteMask;
- if (!strcmp(funcName, "vkCmdSetStencilReference"))
- return (PFN_vkVoidFunction) vkCmdSetStencilReference;
- if (!strcmp(funcName, "vkAllocateCommandBuffers"))
- return (PFN_vkVoidFunction) vkAllocateCommandBuffers;
- if (!strcmp(funcName, "vkBeginCommandBuffer"))
- return (PFN_vkVoidFunction) vkBeginCommandBuffer;
- if (!strcmp(funcName, "vkEndCommandBuffer"))
- return (PFN_vkVoidFunction) vkEndCommandBuffer;
- if (!strcmp(funcName, "vkResetCommandBuffer"))
- return (PFN_vkVoidFunction) vkResetCommandBuffer;
- if (!strcmp(funcName, "vkCmdBindPipeline"))
- return (PFN_vkVoidFunction) vkCmdBindPipeline;
- if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
- return (PFN_vkVoidFunction) vkCmdBindDescriptorSets;
- if (!strcmp(funcName, "vkCmdBindVertexBuffers"))
- return (PFN_vkVoidFunction) vkCmdBindVertexBuffers;
- if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
- return (PFN_vkVoidFunction) vkCmdBindIndexBuffer;
- if (!strcmp(funcName, "vkCmdDraw"))
- return (PFN_vkVoidFunction) vkCmdDraw;
- if (!strcmp(funcName, "vkCmdDrawIndexed"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexed;
- if (!strcmp(funcName, "vkCmdDrawIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndirect;
- if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
- return (PFN_vkVoidFunction) vkCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "vkCmdDispatch"))
- return (PFN_vkVoidFunction) vkCmdDispatch;
- if (!strcmp(funcName, "vkCmdDispatchIndirect"))
- return (PFN_vkVoidFunction) vkCmdDispatchIndirect;
- if (!strcmp(funcName, "vkCmdCopyBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyBuffer;
- if (!strcmp(funcName, "vkCmdCopyImage"))
- return (PFN_vkVoidFunction) vkCmdCopyImage;
- if (!strcmp(funcName, "vkCmdBlitImage"))
- return (PFN_vkVoidFunction) vkCmdBlitImage;
- if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
- return (PFN_vkVoidFunction) vkCmdCopyBufferToImage;
- if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
- return (PFN_vkVoidFunction) vkCmdCopyImageToBuffer;
- if (!strcmp(funcName, "vkCmdUpdateBuffer"))
- return (PFN_vkVoidFunction) vkCmdUpdateBuffer;
- if (!strcmp(funcName, "vkCmdFillBuffer"))
- return (PFN_vkVoidFunction) vkCmdFillBuffer;
- if (!strcmp(funcName, "vkCmdClearColorImage"))
- return (PFN_vkVoidFunction) vkCmdClearColorImage;
- if (!strcmp(funcName, "vkCmdResolveImage"))
- return (PFN_vkVoidFunction) vkCmdResolveImage;
- if (!strcmp(funcName, "vkCmdSetEvent"))
- return (PFN_vkVoidFunction) vkCmdSetEvent;
- if (!strcmp(funcName, "vkCmdResetEvent"))
- return (PFN_vkVoidFunction) vkCmdResetEvent;
- if (!strcmp(funcName, "vkCmdWaitEvents"))
- return (PFN_vkVoidFunction) vkCmdWaitEvents;
- if (!strcmp(funcName, "vkCmdPipelineBarrier"))
- return (PFN_vkVoidFunction) vkCmdPipelineBarrier;
- if (!strcmp(funcName, "vkCmdBeginQuery"))
- return (PFN_vkVoidFunction) vkCmdBeginQuery;
- if (!strcmp(funcName, "vkCmdEndQuery"))
- return (PFN_vkVoidFunction) vkCmdEndQuery;
- if (!strcmp(funcName, "vkCmdResetQueryPool"))
- return (PFN_vkVoidFunction) vkCmdResetQueryPool;
- if (!strcmp(funcName, "vkCmdWriteTimestamp"))
- return (PFN_vkVoidFunction) vkCmdWriteTimestamp;
- if (!strcmp(funcName, "vkCmdCopyQueryPoolResults"))
- return (PFN_vkVoidFunction) vkCmdCopyQueryPoolResults;
- if (!strcmp(funcName, "vkCreateFramebuffer"))
- return (PFN_vkVoidFunction) vkCreateFramebuffer;
- if (!strcmp(funcName, "vkCreateRenderPass"))
- return (PFN_vkVoidFunction) vkCreateRenderPass;
- if (!strcmp(funcName, "vkCmdBeginRenderPass"))
- return (PFN_vkVoidFunction) vkCmdBeginRenderPass;
- if (!strcmp(funcName, "vkCmdNextSubpass"))
- return (PFN_vkVoidFunction) vkCmdNextSubpass;
-
- if (device == NULL) {
- return NULL;
- }
-
- if (get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr == NULL)
- return NULL;
- return get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr(device, funcName);
-}
-
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
- if (!strcmp(funcName, "vkGetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
- if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
- if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
- if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
- if (!strcmp(funcName, "vkEnumeratePhysicalDevices"))
- return (PFN_vkVoidFunction) vkEnumeratePhysicalDevices;
- if (!strcmp(funcName, "vkGetPhysicalDeviceProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceProperties;
- if (!strcmp(funcName, "vkGetPhysicalDeviceFeatures"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceFeatures;
- if (!strcmp(funcName, "vkGetPhysicalDeviceFormatProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceFormatProperties;
- if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceLayerProperties;
- if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceExtensionProperties;
-
- if (instance == NULL) {
- return NULL;
- }
-
- layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
-
- PFN_vkVoidFunction fptr = debug_report_get_instance_proc_addr(data->report_data, funcName);
- if(fptr)
- return fptr;
-
- if (get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr == NULL)
- return NULL;
- return get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr(instance, funcName);
-}
diff --git a/layers/param_checker_utils.h b/layers/param_checker_utils.h
deleted file mode 100644
index bef82578e..000000000
--- a/layers/param_checker_utils.h
+++ /dev/null
@@ -1,332 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (C) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * The Materials are Confidential Information as defined by the Khronos
- * Membership Agreement until designated non-confidential by Khronos, at which
- * point this condition clause shall be removed.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Dustin Graves <dustin@lunarg.com>
- */
-
-#ifndef PARAM_CHECKER_UTILS_H
-#define PARAM_CHECKER_UTILS_H
-
-#include "vulkan/vulkan.h"
-#include "vk_layer_logging.h"
-
-/**
- * Validate a required pointer.
- *
- * Verify that a required pointer is not NULL.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param parameterName Name of parameter being validated.
- * @param value Pointer to validate.
- * @return Boolean value indicating that the call should be skipped.
- */
-static
-VkBool32 validate_required_pointer(
- debug_report_data* report_data,
- const char* apiName,
- const char* parameterName,
- const void* value)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (value == NULL) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, parameterName);
- }
-
- return skipCall;
-}
-
-/**
- * Validate pointer to array count and pointer to array.
- *
- * Verify that required count and array parameters are not NULL. If count
- * is not NULL and its value is not optional, verify that it is not 0.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param countName Name of count parameter.
- * @param arrayName Name of array parameter.
- * @param count Pointer to the number of elements in the array.
- * @param array Array to validate.
- * @param countPtrRequired The 'count' parameter may not be NULL when true.
- * @param countValueRequired The '*count' value may not be 0 when true.
- * @param arrayRequired The 'array' parameter may not be NULL when true.
- * @return Boolean value indicating that the call should be skipped.
- */
-template <typename T>
-VkBool32 validate_array(
- debug_report_data* report_data,
- const char* apiName,
- const char* countName,
- const char* arrayName,
- const T* count,
- const void* array,
- VkBool32 countPtrRequired,
- VkBool32 countValueRequired,
- VkBool32 arrayRequired)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (count == NULL) {
- if (countPtrRequired == VK_TRUE) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, countName);
- }
- } else {
- skipCall |= validate_array(
- report_data, apiName, countName, arrayName, (*count), array,
- countValueRequired, arrayRequired);
- }
-
- return skipCall;
-}
-
-/**
- * Validate array count and pointer to array.
- *
- * Verify that required count and array parameters are not 0 or NULL.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param countName Name of count parameter.
- * @param arrayName Name of array parameter.
- * @param count Number of elements in the array.
- * @param array Array to validate.
- * @param countRequired The 'count' parameter may not be 0 when true.
- * @param arrayRequired The 'array' parameter may not be NULL when true.
- * @return Boolean value indicating that the call should be skipped.
- */
-template <typename T>
-VkBool32 validate_array(
- debug_report_data* report_data,
- const char* apiName,
- const char* countName,
- const char* arrayName,
- T count,
- const void* array,
- VkBool32 countRequired,
- VkBool32 arrayRequired)
-{
- VkBool32 skipCall = VK_FALSE;
-
- // Count parameters not tagged as optional cannot be 0
- if ((count == 0) && (countRequired == VK_TRUE)) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: parameter %s must be greater than 0",
- apiName, countName);
- }
-
- // Array parameters not tagged as optional cannot be NULL,
- // unless the count is 0
- if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, arrayName);
- }
-
- return skipCall;
-}
-
-/**
- * Validate a Vulkan structure type.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param parameterName Name of struct parameter being validated.
- * @param sTypeName Name of expected VkStructureType value.
- * @param value Pointer to the struct to validate.
- * @param sType VkStructureType for structure validation.
- * @param required The parameter may not be NULL when true.
- * @return Boolean value indicating that the call should be skipped.
- */
-template <typename T>
-VkBool32 validate_struct_type(
- debug_report_data* report_data,
- const char* apiName,
- const char* parameterName,
- const char* sTypeName,
- const T* value,
- VkStructureType sType,
- VkBool32 required)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (value == NULL) {
- if (required == VK_TRUE) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, parameterName);
- }
- } else if (value->sType != sType) {
- skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: parameter %s->sType must be %s",
- apiName, parameterName, sTypeName);
- }
-
- return skipCall;
-}
-
-/**
- * Validate an array of Vulkan structures.
- *
- * Verify that required count and array parameters are not NULL. If count
- * is not NULL and its value is not optional, verify that it is not 0.
- * If the array contains 1 or more structures, verify that each structure's
- * sType field is set to the correct VkStructureType value.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param countName Name of count parameter.
- * @param arrayName Name of array parameter.
- * @param sTypeName Name of expected VkStructureType value.
- * @param count Pointer to the number of elements in the array.
- * @param array Array to validate.
- * @param sType VkStructureType for structure validation.
- * @param countPtrRequired The 'count' parameter may not be NULL when true.
- * @param countValueRequired The '*count' value may not be 0 when true.
- * @param arrayRequired The 'array' parameter may not be NULL when true.
- * @return Boolean value indicating that the call should be skipped.
- */
-template <typename T>
-VkBool32 validate_struct_type_array(
- debug_report_data* report_data,
- const char* apiName,
- const char* countName,
- const char* arrayName,
- const char* sTypeName,
- const uint32_t* count,
- const T* array,
- VkStructureType sType,
- VkBool32 countPtrRequired,
- VkBool32 countValueRequired,
- VkBool32 arrayRequired)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if (count == NULL) {
- if (countPtrRequired == VK_TRUE) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, countName);
- }
- } else {
- skipCall |= validate_struct_type_array(
- report_data, apiName, countName, arrayName, sTypeName,
- (*count), array, sType, countValueRequired, arrayRequired);
- }
-
- return skipCall;
-}
-
-/**
- * Validate an array of Vulkan structures.
- *
- * Verify that required count and array parameters are not 0 or NULL. If
- * the array contains 1 or more structures, verify that each structure's
- * sType field is set to the correct VkStructureType value.
- *
- * @param report_data debug_report_data object for routing validation messages.
- * @param apiName Name of API call being validated.
- * @param countName Name of count parameter.
- * @param arrayName Name of array parameter.
- * @param sTypeName Name of expected VkStructureType value.
- * @param count Number of elements in the array.
- * @param array Array to validate.
- * @param sType VkStructureType for structure validation.
- * @param countRequired The 'count' parameter may not be 0 when true.
- * @param arrayRequired The 'array' parameter may not be NULL when true.
- * @return Boolean value indicating that the call should be skipped.
- */
-template <typename T>
-VkBool32 validate_struct_type_array(
- debug_report_data* report_data,
- const char* apiName,
- const char* countName,
- const char* arrayName,
- const char* sTypeName,
- uint32_t count,
- const T* array,
- VkStructureType sType,
- VkBool32 countRequired,
- VkBool32 arrayRequired)
-{
- VkBool32 skipCall = VK_FALSE;
-
- if ((count == 0) || (array == NULL)) {
- // Count parameters not tagged as optional cannot be 0
- if ((count == 0) && (countRequired == VK_TRUE)) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: parameter %s must be greater than 0",
- apiName, countName);
- }
-
- // Array parameters not tagged as optional cannot be NULL,
- // unless the count is 0
- if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: required parameter %s specified as NULL",
- apiName, arrayName);
- }
- } else {
- // Verify that all structs in the array have the correct type
- for (uint32_t i = 0; i < count; ++i) {
- if (array[i].sType != sType) {
- skipCall |= log_msg(
- report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
- "PARAMCHECK", "%s: parameter %s[%d].sType must be %s",
- apiName, arrayName, i, sTypeName);
- }
- }
- }
-
- return skipCall;
-}
-
-#endif // PARAM_CHECKER_UTILS_H
diff --git a/layers/parameter_validation.cpp b/layers/parameter_validation.cpp
new file mode 100644
index 000000000..e66a117a0
--- /dev/null
+++ b/layers/parameter_validation.cpp
@@ -0,0 +1,5164 @@
+/* Copyright (c) 2015-2016 The Khronos Group Inc.
+ * Copyright (c) 2015-2016 Valve Corporation
+ * Copyright (c) 2015-2016 LunarG, Inc.
+ * Copyright (C) 2015-2016 Google Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and/or associated documentation files (the "Materials"), to
+ * deal in the Materials without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Materials, and to permit persons to whom the Materials
+ * are furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ *
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
+ * USE OR OTHER DEALINGS IN THE MATERIALS
+ *
+ * Author: Jeremy Hayes <jeremy@lunarg.com>
+ * Author: Tony Barbour <tony@LunarG.com>
+ * Author: Mark Lobodzinski <mark@LunarG.com>
+ * Author: Dustin Graves <dustin@lunarg.com>
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <iostream>
+#include <string>
+#include <sstream>
+#include <unordered_map>
+#include <unordered_set>
+#include <vector>
+
+#include "vk_loader_platform.h"
+#include "vulkan/vk_layer.h"
+#include "vk_layer_config.h"
+#include "vk_enum_validate_helper.h"
+#include "vk_struct_validate_helper.h"
+
+#include "vk_layer_table.h"
+#include "vk_layer_data.h"
+#include "vk_layer_logging.h"
+#include "vk_layer_extension_utils.h"
+#include "vk_layer_utils.h"
+
+#include "parameter_validation.h"
+
+struct layer_data {
+ debug_report_data *report_data;
+ std::vector<VkDebugReportCallbackEXT> logging_callback;
+
+ // TODO: Split instance/device structs
+ // Device Data
+ // Map for queue family index to queue count
+ std::unordered_map<uint32_t, uint32_t> queueFamilyIndexMap;
+
+    layer_data() : report_data(nullptr) {}
+};
+
+static std::unordered_map<void *, layer_data *> layer_data_map;
+static device_table_map pc_device_table_map;
+static instance_table_map pc_instance_table_map;
+
+// "mid" = my instance data: returns the debug_report_data recorded for this instance's dispatch key
+debug_report_data *mid(VkInstance object) {
+ dispatch_key key = get_dispatch_key(object);
+ layer_data *data = get_my_data_ptr(key, layer_data_map);
+#if DISPATCH_MAP_DEBUG
+ fprintf(stderr, "MID: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
+#endif
+ assert(data != NULL);
+
+ return data->report_data;
+}
+
+// "mdd" = my device data: returns the debug_report_data recorded for a device-level object's dispatch key
+debug_report_data *mdd(void *object) {
+ dispatch_key key = get_dispatch_key(object);
+ layer_data *data = get_my_data_ptr(key, layer_data_map);
+#if DISPATCH_MAP_DEBUG
+ fprintf(stderr, "MDD: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
+#endif
+ assert(data != NULL);
+ return data->report_data;
+}
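The `mid()`/`mdd()` helpers above share one bookkeeping idea: each dispatchable Vulkan handle yields a dispatch key, and that key indexes a lazily created `layer_data` entry in `layer_data_map`. A minimal, Vulkan-free sketch of that get-or-create lookup follows; `ObjectData` and `GetOrCreateData` are hypothetical names standing in for `layer_data` and `get_my_data_ptr`, and unlike the real layer this sketch never frees the entries (the layer erases them on instance/device destruction).

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical per-object state, standing in for layer_data.
struct ObjectData {
    int report_count = 0;
};

// Standing in for layer_data_map: keyed by the object's dispatch key.
static std::unordered_map<void *, ObjectData *> g_data_map;

// Mirrors get_my_data_ptr(): return the data for a key, creating it on
// first use so every dispatchable object gets exactly one data block.
ObjectData *GetOrCreateData(void *key) {
    auto it = g_data_map.find(key);
    if (it != g_data_map.end()) {
        return it->second;
    }
    ObjectData *data = new ObjectData();
    g_data_map[key] = data;
    return data;
}
```

Because the map is keyed by dispatch key rather than by handle, an instance and all of its physical devices resolve to the same `layer_data` entry, which is why `mid()` and `mdd()` can share one map.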
+
+static void init_parameter_validation(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
+    layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_parameter_validation");
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
+ VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
+ VkResult result = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
+
+ if (result == VK_SUCCESS) {
+ layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ result = layer_create_msg_callback(data->report_data, pCreateInfo, pAllocator, pMsgCallback);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
+ VkDebugReportCallbackEXT msgCallback,
+ const VkAllocationCallbacks *pAllocator) {
+ VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
+ pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
+
+ layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ layer_destroy_msg_callback(data->report_data, msgCallback, pAllocator);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
+ size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
+ VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
+ pTable->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
+}
+
+static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
+ return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
+}
+
+static const VkLayerProperties pc_global_layers[] = {{
+ "VK_LAYER_LUNARG_parameter_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
+}};
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
+ /* parameter_validation does not have any physical device extensions */
+ if (pLayerName == NULL) {
+ return get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
+ } else {
+ return util_GetExtensionProperties(0, NULL, pCount, pProperties);
+ }
+}
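The enumeration entry points above funnel into `util_GetExtensionProperties()` / `util_GetLayerProperties()`, which implement Vulkan's standard two-call query idiom. A rough stand-in under assumed names (`Prop` and `GetProperties` are illustrative, not the layer's actual helpers) shows the shape: a null output pointer returns only the count, and a non-null one copies up to `*count` entries and signals truncation, the way `VK_INCOMPLETE` does.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical property record, standing in for VkExtensionProperties.
struct Prop {
    char name[64];
};

// Returns 0 on success and 1 when the caller's buffer was too small
// (standing in for VK_SUCCESS / VK_INCOMPLETE).
int GetProperties(uint32_t src_count, const Prop *src, uint32_t *count, Prop *dst) {
    if (dst == nullptr) {
        *count = src_count;  // First call: report how many entries exist.
        return 0;
    }
    uint32_t n = std::min(*count, src_count);  // Second call: copy what fits.
    std::memcpy(dst, src, n * sizeof(Prop));
    *count = n;
    return (n < src_count) ? 1 : 0;
}
```

Callers typically invoke this twice: once with `dst == nullptr` to size a buffer, then again to fill it.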
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
+ /* parameter_validation's physical device layers are the same as global */
+ return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
+}
+
+static std::string EnumeratorString(VkResult const &enumerator) {
+    switch (enumerator) {
+    case VK_RESULT_MAX_ENUM:
+        return "VK_RESULT_MAX_ENUM";
+    case VK_ERROR_LAYER_NOT_PRESENT:
+        return "VK_ERROR_LAYER_NOT_PRESENT";
+    case VK_ERROR_INCOMPATIBLE_DRIVER:
+        return "VK_ERROR_INCOMPATIBLE_DRIVER";
+    case VK_ERROR_MEMORY_MAP_FAILED:
+        return "VK_ERROR_MEMORY_MAP_FAILED";
+    case VK_INCOMPLETE:
+        return "VK_INCOMPLETE";
+    case VK_ERROR_OUT_OF_HOST_MEMORY:
+        return "VK_ERROR_OUT_OF_HOST_MEMORY";
+    case VK_ERROR_INITIALIZATION_FAILED:
+        return "VK_ERROR_INITIALIZATION_FAILED";
+    case VK_NOT_READY:
+        return "VK_NOT_READY";
+    case VK_ERROR_OUT_OF_DEVICE_MEMORY:
+        return "VK_ERROR_OUT_OF_DEVICE_MEMORY";
+    case VK_EVENT_SET:
+        return "VK_EVENT_SET";
+    case VK_TIMEOUT:
+        return "VK_TIMEOUT";
+    case VK_EVENT_RESET:
+        return "VK_EVENT_RESET";
+    case VK_SUCCESS:
+        return "VK_SUCCESS";
+    case VK_ERROR_EXTENSION_NOT_PRESENT:
+        return "VK_ERROR_EXTENSION_NOT_PRESENT";
+    case VK_ERROR_DEVICE_LOST:
+        return "VK_ERROR_DEVICE_LOST";
+    default:
+        return "unrecognized enumerator";
+    }
+}
+
+static bool ValidateEnumerator(VkFormatFeatureFlagBits const &enumerator) {
+ VkFormatFeatureFlagBits allFlags = (VkFormatFeatureFlagBits)(
+ VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT | VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT |
+ VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT | VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT |
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT | VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT | VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT |
+ VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT | VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT |
+ VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_FORMAT_FEATURE_BLIT_SRC_BIT | VK_FORMAT_FEATURE_BLIT_DST_BIT |
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkFormatFeatureFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_BLIT_SRC_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_BLIT_SRC_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_BLIT_DST_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_BLIT_DST_BIT");
+ }
+ if (enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT) {
+ strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
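Every `EnumeratorString()` overload in this file repeats the same collect-and-join pattern over a different flag type. As a table-driven sketch (the `FlagName` table and `JoinFlagNames` helper are hypothetical, not part of this layer), the joining can be done by checking whether anything has been emitted yet, which sidesteps the value comparison against `strings.back()` used above — that comparison would misplace a separator if two bits ever mapped to the same name.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical bit-to-name table; the generated code above instead
// branches on each Vulkan flag bit explicitly.
struct FlagName {
    uint32_t bit;
    const char *name;
};

// Collect the names of the set bits and join them with '|'.
std::string JoinFlagNames(uint32_t flags, const std::vector<FlagName> &table) {
    std::string result;
    for (const FlagName &entry : table) {
        if ((flags & entry.bit) == 0) {
            continue;
        }
        if (!result.empty()) {
            result += '|';  // Separator only between entries, never trailing.
        }
        result += entry.name;
    }
    return result;
}
```

With a table per flag type, one helper like this could replace each hand-unrolled overload while producing the same `"A_BIT|B_BIT"` style output.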
+
+static bool ValidateEnumerator(VkImageUsageFlagBits const &enumerator) {
+ VkImageUsageFlagBits allFlags = (VkImageUsageFlagBits)(
+ VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
+ VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT |
+ VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkImageUsageFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_STORAGE_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_STORAGE_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_SAMPLED_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_SAMPLED_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_TRANSFER_DST_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_TRANSFER_DST_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT");
+ }
+ if (enumerator & VK_IMAGE_USAGE_TRANSFER_SRC_BIT) {
+ strings.push_back("VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkQueueFlagBits const &enumerator) {
+ VkQueueFlagBits allFlags =
+ (VkQueueFlagBits)(VK_QUEUE_TRANSFER_BIT | VK_QUEUE_COMPUTE_BIT | VK_QUEUE_SPARSE_BINDING_BIT | VK_QUEUE_GRAPHICS_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkQueueFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_QUEUE_TRANSFER_BIT) {
+ strings.push_back("VK_QUEUE_TRANSFER_BIT");
+ }
+ if (enumerator & VK_QUEUE_COMPUTE_BIT) {
+ strings.push_back("VK_QUEUE_COMPUTE_BIT");
+ }
+ if (enumerator & VK_QUEUE_SPARSE_BINDING_BIT) {
+ strings.push_back("VK_QUEUE_SPARSE_BINDING_BIT");
+ }
+ if (enumerator & VK_QUEUE_GRAPHICS_BIT) {
+ strings.push_back("VK_QUEUE_GRAPHICS_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkMemoryPropertyFlagBits const &enumerator) {
+ VkMemoryPropertyFlagBits allFlags = (VkMemoryPropertyFlagBits)(
+ VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
+ VK_MEMORY_PROPERTY_HOST_CACHED_BIT | VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkMemoryPropertyFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) {
+ strings.push_back("VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT");
+ }
+ if (enumerator & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) {
+ strings.push_back("VK_MEMORY_PROPERTY_HOST_COHERENT_BIT");
+ }
+ if (enumerator & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) {
+ strings.push_back("VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT");
+ }
+ if (enumerator & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) {
+ strings.push_back("VK_MEMORY_PROPERTY_HOST_CACHED_BIT");
+ }
+ if (enumerator & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) {
+ strings.push_back("VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkMemoryHeapFlagBits const &enumerator) {
+ VkMemoryHeapFlagBits allFlags = (VkMemoryHeapFlagBits)(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkMemoryHeapFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
+ strings.push_back("VK_MEMORY_HEAP_DEVICE_LOCAL_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkSparseImageFormatFlagBits const &enumerator) {
+ VkSparseImageFormatFlagBits allFlags =
+ (VkSparseImageFormatFlagBits)(VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT |
+ VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT | VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkSparseImageFormatFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT) {
+ strings.push_back("VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT");
+ }
+ if (enumerator & VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT) {
+ strings.push_back("VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT");
+ }
+ if (enumerator & VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT) {
+ strings.push_back("VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkFenceCreateFlagBits const &enumerator) {
+ VkFenceCreateFlagBits allFlags = (VkFenceCreateFlagBits)(VK_FENCE_CREATE_SIGNALED_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkFenceCreateFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_FENCE_CREATE_SIGNALED_BIT) {
+ strings.push_back("VK_FENCE_CREATE_SIGNALED_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkQueryPipelineStatisticFlagBits const &enumerator) {
+ VkQueryPipelineStatisticFlagBits allFlags = (VkQueryPipelineStatisticFlagBits)(
+ VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT | VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT | VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT | VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT | VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT |
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkQueryPipelineStatisticFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT");
+ }
+ if (enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT) {
+ strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkQueryResultFlagBits const &enumerator) {
+ VkQueryResultFlagBits allFlags = (VkQueryResultFlagBits)(VK_QUERY_RESULT_PARTIAL_BIT | VK_QUERY_RESULT_WITH_AVAILABILITY_BIT |
+ VK_QUERY_RESULT_WAIT_BIT | VK_QUERY_RESULT_64_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkQueryResultFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_QUERY_RESULT_PARTIAL_BIT) {
+ strings.push_back("VK_QUERY_RESULT_PARTIAL_BIT");
+ }
+ if (enumerator & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT) {
+ strings.push_back("VK_QUERY_RESULT_WITH_AVAILABILITY_BIT");
+ }
+ if (enumerator & VK_QUERY_RESULT_WAIT_BIT) {
+ strings.push_back("VK_QUERY_RESULT_WAIT_BIT");
+ }
+ if (enumerator & VK_QUERY_RESULT_64_BIT) {
+ strings.push_back("VK_QUERY_RESULT_64_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkBufferUsageFlagBits const &enumerator) {
+ VkBufferUsageFlagBits allFlags = (VkBufferUsageFlagBits)(
+ VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_INDEX_BUFFER_BIT | VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT |
+ VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT |
+ VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkBufferUsageFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_BUFFER_USAGE_VERTEX_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_VERTEX_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_INDEX_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_INDEX_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_STORAGE_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_TRANSFER_DST_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_TRANSFER_DST_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_TRANSFER_SRC_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
+ }
+ if (enumerator & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) {
+ strings.push_back("VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkBufferCreateFlagBits const &enumerator) {
+ VkBufferCreateFlagBits allFlags = (VkBufferCreateFlagBits)(
+ VK_BUFFER_CREATE_SPARSE_ALIASED_BIT | VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT | VK_BUFFER_CREATE_SPARSE_BINDING_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkBufferCreateFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_BUFFER_CREATE_SPARSE_ALIASED_BIT) {
+ strings.push_back("VK_BUFFER_CREATE_SPARSE_ALIASED_BIT");
+ }
+ if (enumerator & VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT) {
+ strings.push_back("VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT");
+ }
+ if (enumerator & VK_BUFFER_CREATE_SPARSE_BINDING_BIT) {
+ strings.push_back("VK_BUFFER_CREATE_SPARSE_BINDING_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkImageCreateFlagBits const &enumerator) {
+ VkImageCreateFlagBits allFlags = (VkImageCreateFlagBits)(
+ VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT | VK_IMAGE_CREATE_SPARSE_ALIASED_BIT | VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT |
+ VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT | VK_IMAGE_CREATE_SPARSE_BINDING_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkImageCreateFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT) {
+ strings.push_back("VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT");
+ }
+ if (enumerator & VK_IMAGE_CREATE_SPARSE_ALIASED_BIT) {
+ strings.push_back("VK_IMAGE_CREATE_SPARSE_ALIASED_BIT");
+ }
+ if (enumerator & VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT) {
+ strings.push_back("VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT");
+ }
+ if (enumerator & VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT) {
+ strings.push_back("VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT");
+ }
+ if (enumerator & VK_IMAGE_CREATE_SPARSE_BINDING_BIT) {
+ strings.push_back("VK_IMAGE_CREATE_SPARSE_BINDING_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkColorComponentFlagBits const &enumerator) {
+ VkColorComponentFlagBits allFlags = (VkColorComponentFlagBits)(VK_COLOR_COMPONENT_A_BIT | VK_COLOR_COMPONENT_B_BIT |
+ VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_R_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkColorComponentFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_COLOR_COMPONENT_A_BIT) {
+ strings.push_back("VK_COLOR_COMPONENT_A_BIT");
+ }
+ if (enumerator & VK_COLOR_COMPONENT_B_BIT) {
+ strings.push_back("VK_COLOR_COMPONENT_B_BIT");
+ }
+ if (enumerator & VK_COLOR_COMPONENT_G_BIT) {
+ strings.push_back("VK_COLOR_COMPONENT_G_BIT");
+ }
+ if (enumerator & VK_COLOR_COMPONENT_R_BIT) {
+ strings.push_back("VK_COLOR_COMPONENT_R_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkPipelineCreateFlagBits const &enumerator) {
+ VkPipelineCreateFlagBits allFlags = (VkPipelineCreateFlagBits)(
+ VK_PIPELINE_CREATE_DERIVATIVE_BIT | VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT | VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkPipelineCreateFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_PIPELINE_CREATE_DERIVATIVE_BIT) {
+ strings.push_back("VK_PIPELINE_CREATE_DERIVATIVE_BIT");
+ }
+ if (enumerator & VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT) {
+ strings.push_back("VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT");
+ }
+ if (enumerator & VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT) {
+ strings.push_back("VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkShaderStageFlagBits const &enumerator) {
+ VkShaderStageFlagBits allFlags = (VkShaderStageFlagBits)(
+ VK_SHADER_STAGE_ALL | VK_SHADER_STAGE_FRAGMENT_BIT | VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_COMPUTE_BIT |
+ VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_VERTEX_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkShaderStageFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_SHADER_STAGE_ALL) {
+ strings.push_back("VK_SHADER_STAGE_ALL");
+ }
+ if (enumerator & VK_SHADER_STAGE_FRAGMENT_BIT) {
+ strings.push_back("VK_SHADER_STAGE_FRAGMENT_BIT");
+ }
+ if (enumerator & VK_SHADER_STAGE_GEOMETRY_BIT) {
+ strings.push_back("VK_SHADER_STAGE_GEOMETRY_BIT");
+ }
+ if (enumerator & VK_SHADER_STAGE_COMPUTE_BIT) {
+ strings.push_back("VK_SHADER_STAGE_COMPUTE_BIT");
+ }
+ if (enumerator & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) {
+ strings.push_back("VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT");
+ }
+ if (enumerator & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) {
+ strings.push_back("VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT");
+ }
+ if (enumerator & VK_SHADER_STAGE_VERTEX_BIT) {
+ strings.push_back("VK_SHADER_STAGE_VERTEX_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkPipelineStageFlagBits const &enumerator) {
+ VkPipelineStageFlagBits allFlags = (VkPipelineStageFlagBits)(
+ VK_PIPELINE_STAGE_ALL_COMMANDS_BIT | VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT | VK_PIPELINE_STAGE_HOST_BIT |
+ VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT | VK_PIPELINE_STAGE_TRANSFER_BIT | VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT |
+ VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT |
+ VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT | VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT |
+ VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT | VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT |
+ VK_PIPELINE_STAGE_VERTEX_SHADER_BIT | VK_PIPELINE_STAGE_VERTEX_INPUT_BIT | VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT |
+ VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkPipelineStageFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_PIPELINE_STAGE_ALL_COMMANDS_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_ALL_COMMANDS_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_HOST_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_HOST_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_TRANSFER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_TRANSFER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_VERTEX_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_VERTEX_SHADER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_VERTEX_INPUT_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_VERTEX_INPUT_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT");
+ }
+ if (enumerator & VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT) {
+ strings.push_back("VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkAccessFlagBits const &enumerator) {
+ VkAccessFlagBits allFlags = (VkAccessFlagBits)(
+ VK_ACCESS_INDIRECT_COMMAND_READ_BIT | VK_ACCESS_INDEX_READ_BIT | VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT |
+ VK_ACCESS_UNIFORM_READ_BIT | VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT |
+ VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT |
+ VK_ACCESS_HOST_READ_BIT | VK_ACCESS_HOST_WRITE_BIT | VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT);
+
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkAccessFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_ACCESS_INDIRECT_COMMAND_READ_BIT) {
+ strings.push_back("VK_ACCESS_INDIRECT_COMMAND_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_INDEX_READ_BIT) {
+ strings.push_back("VK_ACCESS_INDEX_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT) {
+ strings.push_back("VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_UNIFORM_READ_BIT) {
+ strings.push_back("VK_ACCESS_UNIFORM_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_INPUT_ATTACHMENT_READ_BIT) {
+ strings.push_back("VK_ACCESS_INPUT_ATTACHMENT_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_SHADER_READ_BIT) {
+ strings.push_back("VK_ACCESS_SHADER_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_SHADER_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_SHADER_WRITE_BIT");
+ }
+ if (enumerator & VK_ACCESS_COLOR_ATTACHMENT_READ_BIT) {
+ strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT");
+ }
+ if (enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT) {
+ strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT");
+ }
+ if (enumerator & VK_ACCESS_TRANSFER_READ_BIT) {
+ strings.push_back("VK_ACCESS_TRANSFER_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_TRANSFER_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_TRANSFER_WRITE_BIT");
+ }
+ if (enumerator & VK_ACCESS_HOST_READ_BIT) {
+ strings.push_back("VK_ACCESS_HOST_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_HOST_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_HOST_WRITE_BIT");
+ }
+ if (enumerator & VK_ACCESS_MEMORY_READ_BIT) {
+ strings.push_back("VK_ACCESS_MEMORY_READ_BIT");
+ }
+ if (enumerator & VK_ACCESS_MEMORY_WRITE_BIT) {
+ strings.push_back("VK_ACCESS_MEMORY_WRITE_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkCommandPoolCreateFlagBits const &enumerator) {
+ VkCommandPoolCreateFlagBits allFlags =
+ (VkCommandPoolCreateFlagBits)(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT | VK_COMMAND_POOL_CREATE_TRANSIENT_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkCommandPoolCreateFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT) {
+ strings.push_back("VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT");
+ }
+ if (enumerator & VK_COMMAND_POOL_CREATE_TRANSIENT_BIT) {
+ strings.push_back("VK_COMMAND_POOL_CREATE_TRANSIENT_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkCommandPoolResetFlagBits const &enumerator) {
+ VkCommandPoolResetFlagBits allFlags = (VkCommandPoolResetFlagBits)(VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkCommandPoolResetFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT) {
+ strings.push_back("VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkCommandBufferUsageFlags const &enumerator) {
+ VkCommandBufferUsageFlags allFlags =
+ (VkCommandBufferUsageFlags)(VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT | VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT |
+ VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkCommandBufferUsageFlags const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT) {
+ strings.push_back("VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT");
+ }
+ if (enumerator & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT) {
+ strings.push_back("VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT");
+ }
+ if (enumerator & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) {
+ strings.push_back("VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkCommandBufferResetFlagBits const &enumerator) {
+ VkCommandBufferResetFlagBits allFlags = (VkCommandBufferResetFlagBits)(VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkCommandBufferResetFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT) {
+ strings.push_back("VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static bool ValidateEnumerator(VkImageAspectFlagBits const &enumerator) {
+ VkImageAspectFlagBits allFlags = (VkImageAspectFlagBits)(VK_IMAGE_ASPECT_METADATA_BIT | VK_IMAGE_ASPECT_STENCIL_BIT |
+ VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_COLOR_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkImageAspectFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_IMAGE_ASPECT_METADATA_BIT) {
+ strings.push_back("VK_IMAGE_ASPECT_METADATA_BIT");
+ }
+ if (enumerator & VK_IMAGE_ASPECT_STENCIL_BIT) {
+ strings.push_back("VK_IMAGE_ASPECT_STENCIL_BIT");
+ }
+ if (enumerator & VK_IMAGE_ASPECT_DEPTH_BIT) {
+ strings.push_back("VK_IMAGE_ASPECT_DEPTH_BIT");
+ }
+ if (enumerator & VK_IMAGE_ASPECT_COLOR_BIT) {
+ strings.push_back("VK_IMAGE_ASPECT_COLOR_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
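The hand-written if-chains above repeat the same decompose-and-join pattern for every flag type. They could be collapsed into one table-driven helper; a minimal sketch under that assumption (`FlagsToString` and its name table are illustrative, not part of the layer):

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Generic flag decomposition: collect the names of the set bits from a name
// table and join them with '|'. Bits outside the table are rejected, matching
// the ValidateEnumerator checks above.
static std::string FlagsToString(uint32_t flags,
                                 const std::vector<std::pair<uint32_t, const char *>> &names) {
    uint32_t known = 0;
    for (auto const &n : names) known |= n.first;
    if (flags & ~known) return "unrecognized enumerator";

    std::string out;
    for (auto const &n : names) {
        if (flags & n.first) {
            if (!out.empty()) out += '|';
            out += n.second;
        }
    }
    return out;
}
```

Each `EnumeratorString` overload would then shrink to a single call with its flag table; the join logic also avoids the per-iteration `strings.back()` comparison used above.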
+
+static bool validate_queue_family_indices(VkDevice device, const char *function_name, const uint32_t count,
+                                          const uint32_t *indices) {
+    bool skipCall = false;
+    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+    for (auto i = 0u; i < count; i++) {
+        if (indices[i] == VK_QUEUE_FAMILY_IGNORED) {
+            skipCall |=
+                log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                        "%s: the specified queueFamilyIndex cannot be VK_QUEUE_FAMILY_IGNORED.", function_name);
+        } else {
+            const auto &queue_data = my_device_data->queueFamilyIndexMap.find(indices[i]);
+            if (queue_data == my_device_data->queueFamilyIndexMap.end()) {
+                // Accumulate the skip flag instead of returning early, so that every
+                // invalid index is reported rather than only the first one.
+                skipCall |= log_msg(
+                    mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "%s parameter, uint32_t queueFamilyIndex %d, must have been given when the device was created.",
+                    function_name, indices[i]);
+            }
+        }
+    }
+    return skipCall;
+}
+
+static bool ValidateEnumerator(VkQueryControlFlagBits const &enumerator) {
+ VkQueryControlFlagBits allFlags = (VkQueryControlFlagBits)(VK_QUERY_CONTROL_PRECISE_BIT);
+ if (enumerator & (~allFlags)) {
+ return false;
+ }
+
+ return true;
+}
+
+static std::string EnumeratorString(VkQueryControlFlagBits const &enumerator) {
+ if (!ValidateEnumerator(enumerator)) {
+ return "unrecognized enumerator";
+ }
+
+ std::vector<std::string> strings;
+ if (enumerator & VK_QUERY_CONTROL_PRECISE_BIT) {
+ strings.push_back("VK_QUERY_CONTROL_PRECISE_BIT");
+ }
+
+ std::string enumeratorString;
+ for (auto const &string : strings) {
+ enumeratorString += string;
+
+ if (string != strings.back()) {
+ enumeratorString += '|';
+ }
+ }
+
+ return enumeratorString;
+}
+
+static const int MaxParamCheckerStringLength = 256;
+
+static VkBool32 validate_string(debug_report_data *report_data, const char *apiName, const char *stringName,
+ const char *validateString) {
+ assert(apiName != nullptr);
+ assert(stringName != nullptr);
+ assert(validateString != nullptr);
+
+ VkBool32 skipCall = VK_FALSE;
+
+ VkStringErrorFlags result = vk_string_validate(MaxParamCheckerStringLength, validateString);
+
+ if (result == VK_STRING_ERROR_NONE) {
+ return skipCall;
+ } else if (result & VK_STRING_ERROR_LENGTH) {
+ skipCall = log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: string %s exceeds max length %d", apiName, stringName, MaxParamCheckerStringLength);
+ } else if (result & VK_STRING_ERROR_BAD_DATA) {
+ skipCall = log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: string %s contains invalid characters or is badly formed", apiName, stringName);
+ }
+ return skipCall;
+}
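`validate_string` reports two failure classes: over-length strings and malformed data. The length case can be sketched standalone (a sketch only; `CheckLength` is illustrative, and the layer's real `vk_string_validate` additionally walks UTF-8 sequences for the bad-data case):

```cpp
#include <cstddef>

// Bounded length check: true only when the string is NUL-terminated within
// max_len bytes, mirroring the VK_STRING_ERROR_LENGTH case above.
static bool CheckLength(const char *s, size_t max_len) {
    for (size_t i = 0; i < max_len; ++i) {
        if (s[i] == '\0') return true;
    }
    return false;  // no terminator found within the bound
}
```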
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+
+ VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+ assert(chain_info != nullptr);
+ assert(chain_info->u.pLayerInfo != nullptr);
+
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ if (fpCreateInstance == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Advance the link info for the next element on the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
+ if (result != VK_SUCCESS) {
+ return result;
+ }
+
+ layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
+ assert(my_instance_data != nullptr);
+
+ VkLayerInstanceDispatchTable *pTable = initInstanceTable(*pInstance, fpGetInstanceProcAddr, pc_instance_table_map);
+
+ my_instance_data->report_data =
+ debug_report_create_instance(pTable, *pInstance, pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
+
+ init_parameter_validation(my_instance_data, pAllocator);
+
+    // Ordinarily we would check these before calling down the chain, but none of the
+    // layer's reporting support is in place until now; having survived the call, we
+    // can report any issues here.
+ parameter_validation_vkCreateInstance(my_instance_data->report_data, pCreateInfo, pAllocator, pInstance);
+
+ if (pCreateInfo->pApplicationInfo) {
+ if (pCreateInfo->pApplicationInfo->pApplicationName) {
+            validate_string(my_instance_data->report_data, "vkCreateInstance", "pCreateInfo->pApplicationInfo->pApplicationName",
+ pCreateInfo->pApplicationInfo->pApplicationName);
+ }
+
+ if (pCreateInfo->pApplicationInfo->pEngineName) {
+            validate_string(my_instance_data->report_data, "vkCreateInstance", "pCreateInfo->pApplicationInfo->pEngineName",
+ pCreateInfo->pApplicationInfo->pEngineName);
+ }
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
+ // Grab the key before the instance is destroyed.
+ dispatch_key key = get_dispatch_key(instance);
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(key, layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyInstance(my_data->report_data, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
+ pTable->DestroyInstance(instance, pAllocator);
+
+ // Clean up logging callback, if any
+ while (my_data->logging_callback.size() > 0) {
+ VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
+ layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
+ my_data->logging_callback.pop_back();
+ }
+
+ layer_debug_report_destroy_instance(mid(instance));
+
+ pc_instance_table_map.erase(key);
+ layer_data_map.erase(key);
+ }
+}
+
+bool PostEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices,
+ VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkEnumeratePhysicalDevices parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mid(instance), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkEnumeratePhysicalDevices(my_data->report_data, pPhysicalDeviceCount, pPhysicalDevices);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_instance_table_map, instance)
+ ->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
+
+ PostEnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures *pFeatures) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceFeatures(my_data->report_data, pFeatures);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceFeatures(physicalDevice, pFeatures);
+ }
+}
+
+bool PostGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format,
+ VkFormatProperties *pFormatProperties) {
+
+ if (format < VK_FORMAT_BEGIN_RANGE || format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties *pFormatProperties) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceFormatProperties(my_data->report_data, format, pFormatProperties);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
+
+ PostGetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
+ }
+}
+
+bool PostGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+ VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags,
+ VkImageFormatProperties *pImageFormatProperties, VkResult result) {
+
+ if (format < VK_FORMAT_BEGIN_RANGE || format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceImageFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (type < VK_IMAGE_TYPE_BEGIN_RANGE || type > VK_IMAGE_TYPE_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceImageFormatProperties parameter, VkImageType type, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (tiling < VK_IMAGE_TILING_BEGIN_RANGE || tiling > VK_IMAGE_TILING_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceImageFormatProperties parameter, VkImageTiling tiling, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkGetPhysicalDeviceImageFormatProperties parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s", reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling,
+ VkImageUsageFlags usage, VkImageCreateFlags flags,
+ VkImageFormatProperties *pImageFormatProperties) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceImageFormatProperties(my_data->report_data, format, type, tiling, usage, flags,
+ pImageFormatProperties);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags,
+ pImageFormatProperties);
+
+ PostGetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties,
+ result);
+ }
+
+ return result;
+}
+
+bool PostGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) {
+
+ if (pProperties != nullptr) {
+ if (pProperties->deviceType < VK_PHYSICAL_DEVICE_TYPE_BEGIN_RANGE ||
+ pProperties->deviceType > VK_PHYSICAL_DEVICE_TYPE_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceProperties parameter, VkPhysicalDeviceType pProperties->deviceType, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceProperties(my_data->report_data, pProperties);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceProperties(physicalDevice, pProperties);
+
+ PostGetPhysicalDeviceProperties(physicalDevice, pProperties);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount,
+ VkQueueFamilyProperties *pQueueFamilyProperties) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceQueueFamilyProperties(my_data->report_data, pQueueFamilyPropertyCount,
+ pQueueFamilyProperties);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount, pQueueFamilyProperties);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceMemoryProperties(my_data->report_data, pMemoryProperties);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
+ }
+}
+
+void validateDeviceCreateInfo(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo,
+                              const std::vector<VkQueueFamilyProperties> &properties) {
+ std::unordered_set<uint32_t> set;
+
+ if ((pCreateInfo != nullptr) && (pCreateInfo->pQueueCreateInfos != nullptr)) {
+ for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) {
+ if (set.count(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex)) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK",
+ "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex, is not unique within this "
+ "structure.",
+ i);
+ } else {
+ set.insert(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex);
+ }
+
+ if (pCreateInfo->pQueueCreateInfos[i].queueCount == 0) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount, cannot be zero.",
+ i);
+ }
+
+ if (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities != nullptr) {
+ for (uint32_t j = 0; j < pCreateInfo->pQueueCreateInfos[i].queueCount; ++j) {
+ if ((pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] < 0.f) ||
+ (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] > 1.f)) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK",
+ "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->pQueuePriorities[%d], must be "
+ "between 0 and 1. Actual value is %f",
+ i, j, pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j]);
+ }
+ }
+ }
+
+ if (pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex >= properties.size()) {
+ log_msg(
+ mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex cannot be more than the number "
+ "of queue families.",
+ i);
+ } else if (pCreateInfo->pQueueCreateInfos[i].queueCount >
+ properties[pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex].queueCount) {
+ log_msg(
+ mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount cannot be more than the number of "
+ "queues for the given family index.",
+ i);
+ }
+ }
+ }
+}
+
+void storeCreateDeviceData(VkDevice device, const VkDeviceCreateInfo *pCreateInfo) {
+ layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ if ((pCreateInfo != nullptr) && (pCreateInfo->pQueueCreateInfos != nullptr)) {
+ for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) {
+ my_device_data->queueFamilyIndexMap.insert(
+ std::make_pair(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex, pCreateInfo->pQueueCreateInfos[i].queueCount));
+ }
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
+ /*
+ * NOTE: We do not validate physicalDevice or any dispatchable
+ * object as the first parameter. We couldn't get here if it was wrong!
+ */
+
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_instance_data != nullptr);
+
+ skipCall |= parameter_validation_vkCreateDevice(my_instance_data->report_data, pCreateInfo, pAllocator, pDevice);
+
+ if (pCreateInfo != NULL) {
+ if ((pCreateInfo->enabledLayerCount > 0) && (pCreateInfo->ppEnabledLayerNames != NULL)) {
+            for (uint32_t i = 0; i < pCreateInfo->enabledLayerCount; i++) {
+ skipCall |= validate_string(my_instance_data->report_data, "vkCreateDevice", "pCreateInfo->ppEnabledLayerNames",
+ pCreateInfo->ppEnabledLayerNames[i]);
+ }
+ }
+
+ if ((pCreateInfo->enabledExtensionCount > 0) && (pCreateInfo->ppEnabledExtensionNames != NULL)) {
+            for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
+ skipCall |= validate_string(my_instance_data->report_data, "vkCreateDevice", "pCreateInfo->ppEnabledExtensionNames",
+ pCreateInfo->ppEnabledExtensionNames[i]);
+ }
+ }
+ }
+
+ if (skipCall == VK_FALSE) {
+ VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+ assert(chain_info != nullptr);
+ assert(chain_info->u.pLayerInfo != nullptr);
+
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ if (fpCreateDevice == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Advance the link info for the next element on the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice);
+ if (result != VK_SUCCESS) {
+ return result;
+ }
+
+ layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
+ assert(my_device_data != nullptr);
+
+ my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
+ initDeviceTable(*pDevice, fpGetDeviceProcAddr, pc_device_table_map);
+
+ uint32_t count;
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr);
+ std::vector<VkQueueFamilyProperties> properties(count);
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+            ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, properties.data());
+
+ validateDeviceCreateInfo(physicalDevice, pCreateInfo, properties);
+ storeCreateDeviceData(*pDevice, pCreateInfo);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
+ dispatch_key key = get_dispatch_key(device);
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(key, layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyDevice(my_data->report_data, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ layer_debug_report_destroy_device(device);
+
+#if DISPATCH_MAP_DEBUG
+ fprintf(stderr, "Device: %p, key: %p\n", device, key);
+#endif
+
+ get_dispatch_table(pc_device_table_map, device)->DestroyDevice(device, pAllocator);
+        pc_device_table_map.erase(key);
+        layer_data_map.erase(key);
+ }
+}
+
+bool PreGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex) {
+    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+    assert(my_device_data != nullptr);
+
+    validate_queue_family_indices(device, "vkGetDeviceQueue", 1, &queueFamilyIndex);
+
+    // find() returns end() for a family index that was never given at device
+    // creation; guard the lookup before dereferencing the iterator.
+    const auto &queue_data = my_device_data->queueFamilyIndexMap.find(queueFamilyIndex);
+    if (queue_data != my_device_data->queueFamilyIndexMap.end() && queue_data->second <= queueIndex) {
+        log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                "vkGetDeviceQueue parameter, uint32_t queueIndex %d, must be less than the number of queues given when the device "
+                "was created.",
+                queueIndex);
+        return false;
+    }
+
+    return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetDeviceQueue(my_data->report_data, queueFamilyIndex, queueIndex, pQueue);
+
+ if (skipCall == VK_FALSE) {
+ PreGetDeviceQueue(device, queueFamilyIndex, queueIndex);
+
+ get_dispatch_table(pc_device_table_map, device)->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
+ }
+}
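The lookup-then-bound check behind `vkGetDeviceQueue` can be phrased as a total map query: an unknown family maps to a count of zero, so any `queueIndex` fails the bound. A sketch under that assumption (`QueueCountFor` is illustrative, not layer code):

```cpp
#include <cstdint>
#include <unordered_map>

// Safe lookup over the family-index -> queue-count map built at device
// creation: an unknown family yields 0, so the caller's
// `queueIndex < count` check rejects every index for it.
static uint32_t QueueCountFor(const std::unordered_map<uint32_t, uint32_t> &family_map,
                              uint32_t family_index) {
    auto it = family_map.find(family_index);
    return (it == family_map.end()) ? 0u : it->second;
}
```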
+
+bool PostQueueSubmit(VkQueue queue, uint32_t commandBufferCount, VkFence fence, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkQueueSubmit parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkQueueSubmit(my_data->report_data, submitCount, pSubmits, fence);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, queue)->QueueSubmit(queue, submitCount, pSubmits, fence);
+
+ PostQueueSubmit(queue, submitCount, fence, result);
+ }
+
+ return result;
+}
+
+bool PostQueueWaitIdle(VkQueue queue, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkQueueWaitIdle parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(VkQueue queue) {
+ VkResult result = get_dispatch_table(pc_device_table_map, queue)->QueueWaitIdle(queue);
+
+ PostQueueWaitIdle(queue, result);
+
+ return result;
+}
+
+bool PostDeviceWaitIdle(VkDevice device, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkDeviceWaitIdle parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(VkDevice device) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->DeviceWaitIdle(device);
+
+ PostDeviceWaitIdle(device, result);
+
+ return result;
+}
+
+bool PostAllocateMemory(VkDevice device, VkDeviceMemory *pMemory, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkAllocateMemory parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(VkDevice device, const VkMemoryAllocateInfo *pAllocateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDeviceMemory *pMemory) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkAllocateMemory(my_data->report_data, pAllocateInfo, pAllocator, pMemory);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory);
+
+ PostAllocateMemory(device, pMemory, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkFreeMemory(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkFreeMemory(my_data->report_data, memory, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->FreeMemory(device, memory, pAllocator);
+ }
+}
+
+bool PostMapMemory(VkDevice device, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags,
+ void **ppData, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkMapMemory parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkMapMemory(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void **ppData) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkMapMemory(my_data->report_data, memory, offset, size, flags, ppData);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->MapMemory(device, memory, offset, size, flags, ppData);
+
+ PostMapMemory(device, memory, offset, size, flags, ppData, result);
+ }
+
+ return result;
+}
+
+bool PostFlushMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkFlushMappedMemoryRanges parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkFlushMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange *pMemoryRanges) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkFlushMappedMemoryRanges(my_data->report_data, memoryRangeCount, pMemoryRanges);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->FlushMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
+
+ PostFlushMappedMemoryRanges(device, memoryRangeCount, result);
+ }
+
+ return result;
+}
+
+bool PostInvalidateMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkInvalidateMappedMemoryRanges parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkInvalidateMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange *pMemoryRanges) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkInvalidateMappedMemoryRanges(my_data->report_data, memoryRangeCount, pMemoryRanges);
+
+ if (skipCall == VK_FALSE) {
+ result =
+ get_dispatch_table(pc_device_table_map, device)->InvalidateMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
+
+ PostInvalidateMappedMemoryRanges(device, memoryRangeCount, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetDeviceMemoryCommitment(VkDevice device, VkDeviceMemory memory, VkDeviceSize *pCommittedMemoryInBytes) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetDeviceMemoryCommitment(my_data->report_data, memory, pCommittedMemoryInBytes);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->GetDeviceMemoryCommitment(device, memory, pCommittedMemoryInBytes);
+ }
+}
+
+bool PostBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memoryOffset, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkBindBufferMemory parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->BindBufferMemory(device, buffer, mem, memoryOffset);
+
+ PostBindBufferMemory(device, buffer, mem, memoryOffset, result);
+
+ return result;
+}
+
+bool PostBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memoryOffset, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkBindImageMemory parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->BindImageMemory(device, image, mem, memoryOffset);
+
+ PostBindImageMemory(device, image, mem, memoryOffset, result);
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements *pMemoryRequirements) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetBufferMemoryRequirements(my_data->report_data, buffer, pMemoryRequirements);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements *pMemoryRequirements) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetImageMemoryRequirements(my_data->report_data, image, pMemoryRequirements);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->GetImageMemoryRequirements(device, image, pMemoryRequirements);
+ }
+}
+
+bool PostGetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t *pSparseMemoryRequirementCount,
+                                          VkSparseImageMemoryRequirements *pSparseMemoryRequirements) {
+ if (pSparseMemoryRequirements != nullptr) {
+ if ((pSparseMemoryRequirements->formatProperties.aspectMask &
+ (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT |
+ VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkGetImageSparseMemoryRequirements parameter, VkImageAspectFlags "
+                    "pSparseMemoryRequirements->formatProperties.aspectMask, has no recognized aspect bits set");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t *pSparseMemoryRequirementCount,
+ VkSparseImageMemoryRequirements *pSparseMemoryRequirements) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetImageSparseMemoryRequirements(my_data->report_data, image, pSparseMemoryRequirementCount,
+ pSparseMemoryRequirements);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)
+ ->GetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements);
+
+ PostGetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements);
+ }
+}
+
+bool PostGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+                                                      VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling,
+                                                      uint32_t *pPropertyCount, VkSparseImageFormatProperties *pProperties) {
+
+ if (format < VK_FORMAT_BEGIN_RANGE || format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkFormat format, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (type < VK_IMAGE_TYPE_BEGIN_RANGE || type > VK_IMAGE_TYPE_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageType type, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (tiling < VK_IMAGE_TILING_BEGIN_RANGE || tiling > VK_IMAGE_TILING_END_RANGE) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageTiling tiling, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (pProperties != nullptr) {
+ if ((pProperties->aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT |
+ VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageAspectFlags pProperties->aspectMask, has "
+                    "no recognized aspect bits set");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+ VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling,
+ uint32_t *pPropertyCount, VkSparseImageFormatProperties *pProperties) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPhysicalDeviceSparseImageFormatProperties(my_data->report_data, format, type, samples, usage,
+ tiling, pPropertyCount, pProperties);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_instance_table_map, physicalDevice)
+ ->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount,
+ pProperties);
+
+ PostGetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount,
+ pProperties);
+ }
+}
+
+bool PostQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkQueueBindSparse parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkQueueBindSparse(my_data->report_data, bindInfoCount, pBindInfo, fence);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
+
+ PostQueueBindSparse(queue, bindInfoCount, pBindInfo, fence, result);
+ }
+
+ return result;
+}
+
+bool PostCreateFence(VkDevice device, VkFence *pFence, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateFence parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateFence(VkDevice device, const VkFenceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkFence *pFence) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateFence(my_data->report_data, pCreateInfo, pAllocator, pFence);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->CreateFence(device, pCreateInfo, pAllocator, pFence);
+
+ PostCreateFence(device, pFence, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyFence(my_data->report_data, fence, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyFence(device, fence, pAllocator);
+ }
+}
+
+bool PostResetFences(VkDevice device, uint32_t fenceCount, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkResetFences parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkResetFences(my_data->report_data, fenceCount, pFences);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->ResetFences(device, fenceCount, pFences);
+
+ PostResetFences(device, fenceCount, result);
+ }
+
+ return result;
+}
+
+bool PostGetFenceStatus(VkDevice device, VkFence fence, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkGetFenceStatus parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(VkDevice device, VkFence fence) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->GetFenceStatus(device, fence);
+
+ PostGetFenceStatus(device, fence, result);
+
+ return result;
+}
+
+bool PostWaitForFences(VkDevice device, uint32_t fenceCount, VkBool32 waitAll, uint64_t timeout, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkWaitForFences parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkWaitForFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences, VkBool32 waitAll, uint64_t timeout) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkWaitForFences(my_data->report_data, fenceCount, pFences, waitAll, timeout);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+
+ PostWaitForFences(device, fenceCount, waitAll, timeout, result);
+ }
+
+ return result;
+}
+
+bool PostCreateSemaphore(VkDevice device, VkSemaphore *pSemaphore, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateSemaphore parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSemaphore *pSemaphore) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateSemaphore(my_data->report_data, pCreateInfo, pAllocator, pSemaphore);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
+
+ PostCreateSemaphore(device, pSemaphore, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroySemaphore(my_data->report_data, semaphore, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroySemaphore(device, semaphore, pAllocator);
+ }
+}
+
+bool PostCreateEvent(VkDevice device, VkEvent *pEvent, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateEvent parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateEvent(VkDevice device, const VkEventCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkEvent *pEvent) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateEvent(my_data->report_data, pCreateInfo, pAllocator, pEvent);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->CreateEvent(device, pCreateInfo, pAllocator, pEvent);
+
+ PostCreateEvent(device, pEvent, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyEvent(my_data->report_data, event, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyEvent(device, event, pAllocator);
+ }
+}
+
+bool PostGetEventStatus(VkDevice device, VkEvent event, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkGetEventStatus parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetEventStatus(VkDevice device, VkEvent event) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->GetEventStatus(device, event);
+
+ PostGetEventStatus(device, event, result);
+
+ return result;
+}
+
+bool PostSetEvent(VkDevice device, VkEvent event, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkSetEvent parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(VkDevice device, VkEvent event) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->SetEvent(device, event);
+
+ PostSetEvent(device, event, result);
+
+ return result;
+}
+
+bool PostResetEvent(VkDevice device, VkEvent event, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkResetEvent parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetEvent(VkDevice device, VkEvent event) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetEvent(device, event);
+
+ PostResetEvent(device, event, result);
+
+ return result;
+}
+
+bool PreCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->queryType < VK_QUERY_TYPE_BEGIN_RANGE || pCreateInfo->queryType > VK_QUERY_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateQueryPool parameter, VkQueryType pCreateInfo->queryType, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
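PreCreateQueryPool above uses the layer's recurring BEGIN_RANGE/END_RANGE bounds check: a core enum's valid values form a contiguous range, so anything outside it is flagged as unrecognized. A minimal sketch of that check with stand-in enum values (not the real Vulkan enumerants):

```cpp
#include <cassert>

// Illustrative sketch of the BEGIN_RANGE/END_RANGE pattern used by the Pre*
// hooks. Core Vulkan 1.0 enums expose synthetic *_BEGIN_RANGE/*_END_RANGE
// values bracketing the contiguous run of legal enumerants; a value outside
// that run (e.g. garbage or an uninitialized field) is rejected.
enum ExampleQueryType {
    EXAMPLE_QUERY_TYPE_OCCLUSION = 0,
    EXAMPLE_QUERY_TYPE_PIPELINE_STATISTICS = 1,
    EXAMPLE_QUERY_TYPE_TIMESTAMP = 2,
    EXAMPLE_QUERY_TYPE_BEGIN_RANGE = EXAMPLE_QUERY_TYPE_OCCLUSION,
    EXAMPLE_QUERY_TYPE_END_RANGE = EXAMPLE_QUERY_TYPE_TIMESTAMP,
};

bool query_type_in_range(int value) {
    return value >= EXAMPLE_QUERY_TYPE_BEGIN_RANGE && value <= EXAMPLE_QUERY_TYPE_END_RANGE;
}
```

Note this pattern only works for contiguous core enums; extension-added enumerants sit outside the range and need explicit handling.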
+
+bool PostCreateQueryPool(VkDevice device, VkQueryPool *pQueryPool, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateQueryPool parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkQueryPool *pQueryPool) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateQueryPool(my_data->report_data, pCreateInfo, pAllocator, pQueryPool);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateQueryPool(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool);
+
+ PostCreateQueryPool(device, pQueryPool, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyQueryPool(my_data->report_data, queryPool, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyQueryPool(device, queryPool, pAllocator);
+ }
+}
+
+bool PostGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize,
+ void *pData, VkDeviceSize stride, VkQueryResultFlags flags, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkGetQueryPoolResults parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery,
+ uint32_t queryCount, size_t dataSize, void *pData,
+ VkDeviceSize stride, VkQueryResultFlags flags) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |=
+ parameter_validation_vkGetQueryPoolResults(my_data->report_data, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)
+ ->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags);
+
+ PostGetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags, result);
+ }
+
+ return result;
+}
+
+bool PreCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->sharingMode < VK_SHARING_MODE_BEGIN_RANGE || pCreateInfo->sharingMode > VK_SHARING_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateBuffer parameter, VkSharingMode pCreateInfo->sharingMode, is an unrecognized enumerator");
+ return false;
+ } else if (pCreateInfo->sharingMode == VK_SHARING_MODE_CONCURRENT) {
+            return validate_queue_family_indices(device, "vkCreateBuffer", pCreateInfo->queueFamilyIndexCount,
+                                                 pCreateInfo->pQueueFamilyIndices);
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateBuffer(VkDevice device, VkBuffer *pBuffer, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateBuffer parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkBuffer *pBuffer) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateBuffer(my_data->report_data, pCreateInfo, pAllocator, pBuffer);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateBuffer(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
+
+ PostCreateBuffer(device, pBuffer, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyBuffer(my_data->report_data, buffer, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyBuffer(device, buffer, pAllocator);
+ }
+}
+
+bool PreCreateBufferView(VkDevice device, const VkBufferViewCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->format < VK_FORMAT_BEGIN_RANGE || pCreateInfo->format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateBufferView parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateBufferView(VkDevice device, VkBufferView *pView, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateBufferView parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkBufferView *pView) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateBufferView(my_data->report_data, pCreateInfo, pAllocator, pView);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateBufferView(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateBufferView(device, pCreateInfo, pAllocator, pView);
+
+ PostCreateBufferView(device, pView, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyBufferView(my_data->report_data, bufferView, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyBufferView(device, bufferView, pAllocator);
+ }
+}
+
+bool PreCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->imageType < VK_IMAGE_TYPE_BEGIN_RANGE || pCreateInfo->imageType > VK_IMAGE_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImage parameter, VkImageType pCreateInfo->imageType, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->format < VK_FORMAT_BEGIN_RANGE || pCreateInfo->format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImage parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->tiling < VK_IMAGE_TILING_BEGIN_RANGE || pCreateInfo->tiling > VK_IMAGE_TILING_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImage parameter, VkImageTiling pCreateInfo->tiling, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->sharingMode < VK_SHARING_MODE_BEGIN_RANGE || pCreateInfo->sharingMode > VK_SHARING_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImage parameter, VkSharingMode pCreateInfo->sharingMode, is an unrecognized enumerator");
+ return false;
+ } else if (pCreateInfo->sharingMode == VK_SHARING_MODE_CONCURRENT) {
+ validate_queue_family_indices(device, "vkCreateImage", pCreateInfo->queueFamilyIndexCount,
+ pCreateInfo->pQueueFamilyIndices);
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateImage(VkDevice device, VkImage *pImage, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateImage parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkImage *pImage) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateImage(my_data->report_data, pCreateInfo, pAllocator, pImage);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateImage(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateImage(device, pCreateInfo, pAllocator, pImage);
+
+ PostCreateImage(device, pImage, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyImage(my_data->report_data, image, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyImage(device, image, pAllocator);
+ }
+}
+
+bool PreGetImageSubresourceLayout(VkDevice device, const VkImageSubresource *pSubresource) {
+ if (pSubresource != nullptr) {
+ if ((pSubresource->aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT |
+ VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkGetImageSubresourceLayout parameter, VkImageAspectFlags pSubresource->aspectMask, must contain at least "
+                    "one recognized VkImageAspectFlagBits value");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource *pSubresource, VkSubresourceLayout *pLayout) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetImageSubresourceLayout(my_data->report_data, image, pSubresource, pLayout);
+
+ if (skipCall == VK_FALSE) {
+ PreGetImageSubresourceLayout(device, pSubresource);
+
+ get_dispatch_table(pc_device_table_map, device)->GetImageSubresourceLayout(device, image, pSubresource, pLayout);
+ }
+}
+
+bool PreCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->viewType < VK_IMAGE_VIEW_TYPE_BEGIN_RANGE || pCreateInfo->viewType > VK_IMAGE_VIEW_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkImageViewType pCreateInfo->viewType, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->format < VK_FORMAT_BEGIN_RANGE || pCreateInfo->format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkFormat pCreateInfo->format, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->components.r < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
+ pCreateInfo->components.r > VK_COMPONENT_SWIZZLE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.r, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->components.g < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
+ pCreateInfo->components.g > VK_COMPONENT_SWIZZLE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.g, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->components.b < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
+ pCreateInfo->components.b > VK_COMPONENT_SWIZZLE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.b, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->components.a < VK_COMPONENT_SWIZZLE_BEGIN_RANGE ||
+ pCreateInfo->components.a > VK_COMPONENT_SWIZZLE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateImageView parameter, VkComponentSwizzle pCreateInfo->components.a, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateImageView(VkDevice device, VkImageView *pView, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateImageView parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkImageView *pView) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateImageView(my_data->report_data, pCreateInfo, pAllocator, pView);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateImageView(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateImageView(device, pCreateInfo, pAllocator, pView);
+
+ PostCreateImageView(device, pView, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyImageView(my_data->report_data, imageView, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyImageView(device, imageView, pAllocator);
+ }
+}
+
+bool PostCreateShaderModule(VkDevice device, VkShaderModule *pShaderModule, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateShaderModule parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkShaderModule *pShaderModule) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateShaderModule(my_data->report_data, pCreateInfo, pAllocator, pShaderModule);
+
+ if (skipCall == VK_FALSE) {
+ result =
+ get_dispatch_table(pc_device_table_map, device)->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);
+
+ PostCreateShaderModule(device, pShaderModule, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyShaderModule(my_data->report_data, shaderModule, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyShaderModule(device, shaderModule, pAllocator);
+ }
+}
+
+bool PostCreatePipelineCache(VkDevice device, VkPipelineCache *pPipelineCache, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreatePipelineCache parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkPipelineCache *pPipelineCache) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreatePipelineCache(my_data->report_data, pCreateInfo, pAllocator, pPipelineCache);
+
+ if (skipCall == VK_FALSE) {
+ result =
+ get_dispatch_table(pc_device_table_map, device)->CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache);
+
+ PostCreatePipelineCache(device, pPipelineCache, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyPipelineCache(my_data->report_data, pipelineCache, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyPipelineCache(device, pipelineCache, pAllocator);
+ }
+}
+
+bool PostGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t *pDataSize, void *pData, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkGetPipelineCacheData parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t *pDataSize, void *pData) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetPipelineCacheData(my_data->report_data, pipelineCache, pDataSize, pData);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
+
+ PostGetPipelineCacheData(device, pipelineCache, pDataSize, pData, result);
+ }
+
+ return result;
+}
+
+bool PostMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, VkResult result) {
+    if (result < VK_SUCCESS) {
+ std::string reason = "vkMergePipelineCaches parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache *pSrcCaches) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkMergePipelineCaches(my_data->report_data, dstCache, srcCacheCount, pSrcCaches);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
+
+ PostMergePipelineCaches(device, dstCache, srcCacheCount, result);
+ }
+
+ return result;
+}
+
+bool PreCreateGraphicsPipelines(VkDevice device, const VkGraphicsPipelineCreateInfo *pCreateInfos) {
+ layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ // TODO: Handle count
+ if (pCreateInfos != nullptr) {
+        if (pCreateInfos->flags & VK_PIPELINE_CREATE_DERIVATIVE_BIT) {
+ if (pCreateInfos->basePipelineIndex != -1) {
+ if (pCreateInfos->basePipelineHandle != VK_NULL_HANDLE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, pCreateInfos->basePipelineHandle, must be VK_NULL_HANDLE if "
+ "pCreateInfos->flags "
+ "contains the VK_PIPELINE_CREATE_DERIVATIVE_BIT flag and pCreateInfos->basePipelineIndex is not -1");
+ return false;
+ }
+ }
+
+ if (pCreateInfos->basePipelineHandle != VK_NULL_HANDLE) {
+ if (pCreateInfos->basePipelineIndex != -1) {
+ log_msg(
+ mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, pCreateInfos->basePipelineIndex, must be -1 if pCreateInfos->flags "
+ "contains the VK_PIPELINE_CREATE_DERIVATIVE_BIT flag and pCreateInfos->basePipelineHandle is not "
+ "VK_NULL_HANDLE");
+ return false;
+ }
+ }
+ }
+
+ if (pCreateInfos->pVertexInputState != nullptr) {
+ if (pCreateInfos->pVertexInputState->pVertexBindingDescriptions != nullptr) {
+ if (pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate < VK_VERTEX_INPUT_RATE_BEGIN_RANGE ||
+ pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate > VK_VERTEX_INPUT_RATE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkVertexInputRate "
+ "pCreateInfos->pVertexInputState->pVertexBindingDescriptions->inputRate, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfos->pVertexInputState->pVertexAttributeDescriptions != nullptr) {
+ if (pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format < VK_FORMAT_BEGIN_RANGE ||
+ pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkFormat "
+ "pCreateInfos->pVertexInputState->pVertexAttributeDescriptions->format, is an unrecognized enumerator");
+ return false;
+ }
+ }
+ }
+ if (pCreateInfos->pInputAssemblyState != nullptr) {
+ if (pCreateInfos->pInputAssemblyState->topology < VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE ||
+ pCreateInfos->pInputAssemblyState->topology > VK_PRIMITIVE_TOPOLOGY_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkPrimitiveTopology pCreateInfos->pInputAssemblyState->topology, is "
+ "an unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfos->pRasterizationState != nullptr) {
+ if (pCreateInfos->pRasterizationState->polygonMode < VK_POLYGON_MODE_BEGIN_RANGE ||
+ pCreateInfos->pRasterizationState->polygonMode > VK_POLYGON_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkPolygonMode pCreateInfos->pRasterizationState->polygonMode, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pRasterizationState->cullMode & ~VK_CULL_MODE_FRONT_AND_BACK) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                        "vkCreateGraphicsPipelines parameter, VkCullModeFlags pCreateInfos->pRasterizationState->cullMode, "
+                        "contains an unrecognized VkCullModeFlagBits value");
+ return false;
+ }
+ if (pCreateInfos->pRasterizationState->frontFace < VK_FRONT_FACE_BEGIN_RANGE ||
+ pCreateInfos->pRasterizationState->frontFace > VK_FRONT_FACE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkFrontFace pCreateInfos->pRasterizationState->frontFace, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfos->pDepthStencilState != nullptr) {
+ if (pCreateInfos->pDepthStencilState->depthCompareOp < VK_COMPARE_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->depthCompareOp > VK_COMPARE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->depthCompareOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->front.failOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->front.failOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.failOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->front.passOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->front.passOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.passOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->front.depthFailOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->front.depthFailOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->front.depthFailOp, is "
+ "an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->front.compareOp < VK_COMPARE_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->front.compareOp > VK_COMPARE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->front.compareOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->back.failOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->back.failOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.failOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->back.passOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->back.passOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.passOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->back.depthFailOp < VK_STENCIL_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->back.depthFailOp > VK_STENCIL_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkStencilOp pCreateInfos->pDepthStencilState->back.depthFailOp, is "
+ "an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pDepthStencilState->back.compareOp < VK_COMPARE_OP_BEGIN_RANGE ||
+ pCreateInfos->pDepthStencilState->back.compareOp > VK_COMPARE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkCompareOp pCreateInfos->pDepthStencilState->back.compareOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfos->pColorBlendState != nullptr) {
+ if (pCreateInfos->pColorBlendState->logicOpEnable == VK_TRUE &&
+ (pCreateInfos->pColorBlendState->logicOp < VK_LOGIC_OP_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->logicOp > VK_LOGIC_OP_END_RANGE)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkLogicOp pCreateInfos->pColorBlendState->logicOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments != nullptr &&
+ pCreateInfos->pColorBlendState->pAttachments->blendEnable == VK_TRUE) {
+ if (pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor > VK_BLEND_FACTOR_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendFactor "
+ "pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor > VK_BLEND_FACTOR_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendFactor "
+ "pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments->colorBlendOp < VK_BLEND_OP_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->colorBlendOp > VK_BLEND_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendOp "
+ "pCreateInfos->pColorBlendState->pAttachments->colorBlendOp, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendFactor "
+ "pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendFactor "
+ "pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp < VK_BLEND_OP_BEGIN_RANGE ||
+ pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp > VK_BLEND_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateGraphicsPipelines parameter, VkBlendOp "
+ "pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp, is an unrecognized enumerator");
+ return false;
+ }
+ }
+ }
+ if (pCreateInfos->renderPass == VK_NULL_HANDLE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCreateGraphicsPipelines parameter, VkRenderPass pCreateInfos->renderPass, must not be VK_NULL_HANDLE");
+ }
+
+ int i = 0;
+        for (uint32_t j = 0; j < pCreateInfos[i].stageCount; j++) {
+ validate_string(data->report_data, "vkCreateGraphicsPipelines", "pCreateInfos[i].pStages[j].pName",
+ pCreateInfos[i].pStages[j].pName);
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count, VkPipeline *pPipelines,
+ VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateGraphicsPipelines parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateGraphicsPipelines(my_data->report_data, pipelineCache, createInfoCount, pCreateInfos,
+ pAllocator, pPipelines);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateGraphicsPipelines(device, pCreateInfos);
+
+ result = get_dispatch_table(pc_device_table_map, device)
+ ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+
+ PostCreateGraphicsPipelines(device, pipelineCache, createInfoCount, pPipelines, result);
+ }
+
+ return result;
+}
+
+bool PreCreateComputePipelines(VkDevice device, const VkComputePipelineCreateInfo *pCreateInfos) {
+ layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ if (pCreateInfos != nullptr) {
+ // TODO: Handle count!
+ int i = 0;
+ validate_string(data->report_data, "vkCreateComputePipelines", "pCreateInfos[i].stage.pName", pCreateInfos[i].stage.pName);
+ }
+
+ return true;
+}
+
+bool PostCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count, VkPipeline *pPipelines,
+ VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateComputePipelines parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+    skipCall |= parameter_validation_vkCreateComputePipelines(my_data->report_data, pipelineCache, createInfoCount, pCreateInfos,
+                                                              pAllocator, pPipelines);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateComputePipelines(device, pCreateInfos);
+
+ result = get_dispatch_table(pc_device_table_map, device)
+ ->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+
+ PostCreateComputePipelines(device, pipelineCache, createInfoCount, pPipelines, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyPipeline(my_data->report_data, pipeline, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyPipeline(device, pipeline, pAllocator);
+ }
+}
+
+bool PostCreatePipelineLayout(VkDevice device, VkPipelineLayout *pPipelineLayout, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreatePipelineLayout parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkPipelineLayout *pPipelineLayout) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreatePipelineLayout(my_data->report_data, pCreateInfo, pAllocator, pPipelineLayout);
+
+ if (skipCall == VK_FALSE) {
+ result =
+ get_dispatch_table(pc_device_table_map, device)->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
+
+ PostCreatePipelineLayout(device, pPipelineLayout, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyPipelineLayout(my_data->report_data, pipelineLayout, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyPipelineLayout(device, pipelineLayout, pAllocator);
+ }
+}
+
+bool PreCreateSampler(VkDevice device, const VkSamplerCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->magFilter < VK_FILTER_BEGIN_RANGE || pCreateInfo->magFilter > VK_FILTER_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkFilter pCreateInfo->magFilter, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->minFilter < VK_FILTER_BEGIN_RANGE || pCreateInfo->minFilter > VK_FILTER_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkFilter pCreateInfo->minFilter, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->mipmapMode < VK_SAMPLER_MIPMAP_MODE_BEGIN_RANGE ||
+ pCreateInfo->mipmapMode > VK_SAMPLER_MIPMAP_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkSamplerMipmapMode pCreateInfo->mipmapMode, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->addressModeU < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
+ pCreateInfo->addressModeU > VK_SAMPLER_ADDRESS_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeU, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->addressModeV < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
+ pCreateInfo->addressModeV > VK_SAMPLER_ADDRESS_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeV, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->addressModeW < VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE ||
+ pCreateInfo->addressModeW > VK_SAMPLER_ADDRESS_MODE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCreateSampler parameter, VkSamplerAddressMode pCreateInfo->addressModeW, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->anisotropyEnable > VK_TRUE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkBool32 pCreateInfo->anisotropyEnable, is an unrecognized boolean");
+ return false;
+ }
+ if (pCreateInfo->compareEnable > VK_TRUE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkBool32 pCreateInfo->compareEnable, is an unrecognized boolean");
+ return false;
+ }
+ if (pCreateInfo->compareEnable) {
+ if (pCreateInfo->compareOp < VK_COMPARE_OP_BEGIN_RANGE || pCreateInfo->compareOp > VK_COMPARE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkCompareOp pCreateInfo->compareOp, is an unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfo->borderColor < VK_BORDER_COLOR_BEGIN_RANGE || pCreateInfo->borderColor > VK_BORDER_COLOR_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkBorderColor pCreateInfo->borderColor, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->unnormalizedCoordinates > VK_TRUE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateSampler parameter, VkBool32 pCreateInfo->unnormalizedCoordinates, is an unrecognized boolean");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateSampler(VkDevice device, VkSampler *pSampler, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateSampler parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSampler *pSampler) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateSampler(my_data->report_data, pCreateInfo, pAllocator, pSampler);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateSampler(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateSampler(device, pCreateInfo, pAllocator, pSampler);
+
+ PostCreateSampler(device, pSampler, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroySampler(my_data->report_data, sampler, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroySampler(device, sampler, pAllocator);
+ }
+}
+
+bool PreCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->pBindings != nullptr) {
+ if (pCreateInfo->pBindings->descriptorType < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
+ pCreateInfo->pBindings->descriptorType > VK_DESCRIPTOR_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateDescriptorSetLayout parameter, VkDescriptorType pCreateInfo->pBindings->descriptorType, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout *pSetLayout, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateDescriptorSetLayout parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDescriptorSetLayout *pSetLayout) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateDescriptorSetLayout(my_data->report_data, pCreateInfo, pAllocator, pSetLayout);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateDescriptorSetLayout(device, pCreateInfo);
+
+ result =
+ get_dispatch_table(pc_device_table_map, device)->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
+
+ PostCreateDescriptorSetLayout(device, pSetLayout, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyDescriptorSetLayout(my_data->report_data, descriptorSetLayout, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator);
+ }
+}
+
+bool PreCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->pPoolSizes != nullptr) {
+ if (pCreateInfo->pPoolSizes->type < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
+ pCreateInfo->pPoolSizes->type > VK_DESCRIPTOR_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                        "vkCreateDescriptorPool parameter, VkDescriptorType pCreateInfo->pPoolSizes->type, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateDescriptorPool(VkDevice device, uint32_t maxSets, VkDescriptorPool *pDescriptorPool, VkResult result) {
+
+ /* TODOVV: How do we validate maxSets? Probably belongs in the limits layer? */
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateDescriptorPool parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkDescriptorPool *pDescriptorPool) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateDescriptorPool(my_data->report_data, pCreateInfo, pAllocator, pDescriptorPool);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateDescriptorPool(device, pCreateInfo);
+
+ result =
+ get_dispatch_table(pc_device_table_map, device)->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
+
+ PostCreateDescriptorPool(device, pCreateInfo->maxSets, pDescriptorPool, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyDescriptorPool(my_data->report_data, descriptorPool, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyDescriptorPool(device, descriptorPool, pAllocator);
+ }
+}
+
+bool PostResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkResetDescriptorPool parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetDescriptorPool(device, descriptorPool, flags);
+
+ PostResetDescriptorPool(device, descriptorPool, result);
+
+ return result;
+}
+
+bool PostAllocateDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, VkDescriptorSet *pDescriptorSets,
+ VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkAllocateDescriptorSets parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo, VkDescriptorSet *pDescriptorSets) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkAllocateDescriptorSets(my_data->report_data, pAllocateInfo, pDescriptorSets);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
+
+ PostAllocateDescriptorSets(device, pAllocateInfo->descriptorPool, pAllocateInfo->descriptorSetCount, pDescriptorSets,
+ result);
+ }
+
+ return result;
+}
+
+bool PostFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkFreeDescriptorSets parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool,
+ uint32_t descriptorSetCount,
+ const VkDescriptorSet *pDescriptorSets) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkFreeDescriptorSets(my_data->report_data, descriptorPool, descriptorSetCount, pDescriptorSets);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)
+ ->FreeDescriptorSets(device, descriptorPool, descriptorSetCount, pDescriptorSets);
+
+ PostFreeDescriptorSets(device, descriptorPool, descriptorSetCount, result);
+ }
+
+ return result;
+}
+
+bool PreUpdateDescriptorSets(VkDevice device, const VkWriteDescriptorSet *pDescriptorWrites,
+ const VkCopyDescriptorSet *pDescriptorCopies) {
+ if (pDescriptorWrites != nullptr) {
+ if (pDescriptorWrites->descriptorType < VK_DESCRIPTOR_TYPE_BEGIN_RANGE ||
+ pDescriptorWrites->descriptorType > VK_DESCRIPTOR_TYPE_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkUpdateDescriptorSets parameter, VkDescriptorType pDescriptorWrites->descriptorType, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ /* TODO: Validate other parts of pImageInfo, pBufferInfo, pTexelBufferView? */
+ /* TODO: This test should probably only be done if descriptorType is correct type of descriptor */
+ if (pDescriptorWrites->pImageInfo != nullptr) {
+ if (((pDescriptorWrites->pImageInfo->imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pDescriptorWrites->pImageInfo->imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pDescriptorWrites->pImageInfo->imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                        "vkUpdateDescriptorSets parameter, VkImageLayout pDescriptorWrites->pImageInfo->imageLayout, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pDescriptorWrites,
+ uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pDescriptorCopies) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkUpdateDescriptorSets(my_data->report_data, descriptorWriteCount, pDescriptorWrites,
+ descriptorCopyCount, pDescriptorCopies);
+
+ if (skipCall == VK_FALSE) {
+ PreUpdateDescriptorSets(device, pDescriptorWrites, pDescriptorCopies);
+
+ get_dispatch_table(pc_device_table_map, device)
+ ->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
+ }
+}
+
+bool PostCreateFramebuffer(VkDevice device, VkFramebuffer *pFramebuffer, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateFramebuffer parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkFramebuffer *pFramebuffer) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateFramebuffer(my_data->report_data, pCreateInfo, pAllocator, pFramebuffer);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, device)->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
+
+ PostCreateFramebuffer(device, pFramebuffer, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyFramebuffer(my_data->report_data, framebuffer, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyFramebuffer(device, framebuffer, pAllocator);
+ }
+}
+
+bool PreCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->pAttachments != nullptr) {
+ if (pCreateInfo->pAttachments->format < VK_FORMAT_BEGIN_RANGE ||
+ pCreateInfo->pAttachments->format > VK_FORMAT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkFormat pCreateInfo->pAttachments->format, is an unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->pAttachments->loadOp < VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE ||
+ pCreateInfo->pAttachments->loadOp > VK_ATTACHMENT_LOAD_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkAttachmentLoadOp pCreateInfo->pAttachments->loadOp, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ if (pCreateInfo->pAttachments->storeOp < VK_ATTACHMENT_STORE_OP_BEGIN_RANGE ||
+ pCreateInfo->pAttachments->storeOp > VK_ATTACHMENT_STORE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkAttachmentStoreOp pCreateInfo->pAttachments->storeOp, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ if (pCreateInfo->pAttachments->stencilLoadOp < VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE ||
+ pCreateInfo->pAttachments->stencilLoadOp > VK_ATTACHMENT_LOAD_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkAttachmentLoadOp pCreateInfo->pAttachments->stencilLoadOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->pAttachments->stencilStoreOp < VK_ATTACHMENT_STORE_OP_BEGIN_RANGE ||
+ pCreateInfo->pAttachments->stencilStoreOp > VK_ATTACHMENT_STORE_OP_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkAttachmentStoreOp pCreateInfo->pAttachments->stencilStoreOp, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (((pCreateInfo->pAttachments->initialLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pCreateInfo->pAttachments->initialLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pCreateInfo->pAttachments->initialLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pAttachments->initialLayout, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+            if (((pCreateInfo->pAttachments->finalLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+                 (pCreateInfo->pAttachments->finalLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+                (pCreateInfo->pAttachments->finalLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pAttachments->finalLayout, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfo->pSubpasses != nullptr) {
+ if (pCreateInfo->pSubpasses->pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE ||
+ pCreateInfo->pSubpasses->pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkPipelineBindPoint pCreateInfo->pSubpasses->pipelineBindPoint, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ if (pCreateInfo->pSubpasses->pInputAttachments != nullptr) {
+ if (((pCreateInfo->pSubpasses->pInputAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pCreateInfo->pSubpasses->pInputAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pCreateInfo->pSubpasses->pInputAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pInputAttachments->layout, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfo->pSubpasses->pColorAttachments != nullptr) {
+ if (((pCreateInfo->pSubpasses->pColorAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pCreateInfo->pSubpasses->pColorAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pCreateInfo->pSubpasses->pColorAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pColorAttachments->layout, is an "
+ "unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfo->pSubpasses->pResolveAttachments != nullptr) {
+ if (((pCreateInfo->pSubpasses->pResolveAttachments->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pCreateInfo->pSubpasses->pResolveAttachments->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pCreateInfo->pSubpasses->pResolveAttachments->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pResolveAttachments->layout, is "
+ "an unrecognized enumerator");
+ return false;
+ }
+ }
+ if (pCreateInfo->pSubpasses->pDepthStencilAttachment &&
+ ((pCreateInfo->pSubpasses->pDepthStencilAttachment->layout < VK_IMAGE_LAYOUT_BEGIN_RANGE) ||
+ (pCreateInfo->pSubpasses->pDepthStencilAttachment->layout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (pCreateInfo->pSubpasses->pDepthStencilAttachment->layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCreateRenderPass parameter, VkImageLayout pCreateInfo->pSubpasses->pDepthStencilAttachment->layout, is "
+ "an unrecognized enumerator");
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateRenderPass(VkDevice device, VkRenderPass *pRenderPass, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateRenderPass parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkRenderPass *pRenderPass) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCreateRenderPass(my_data->report_data, pCreateInfo, pAllocator, pRenderPass);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateRenderPass(device, pCreateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
+
+ PostCreateRenderPass(device, pRenderPass, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyRenderPass(my_data->report_data, renderPass, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyRenderPass(device, renderPass, pAllocator);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass, VkExtent2D *pGranularity) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkGetRenderAreaGranularity(my_data->report_data, renderPass, pGranularity);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->GetRenderAreaGranularity(device, renderPass, pGranularity);
+ }
+}
+
+bool PostCreateCommandPool(VkDevice device, VkCommandPool *pCommandPool, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkCreateCommandPool parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkCommandPool *pCommandPool) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ bool skipCall = false;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+    skipCall |= parameter_validation_vkCreateCommandPool(my_data->report_data, pCreateInfo, pAllocator, pCommandPool);
+
+    // Only dereference pCreateInfo after the generated null check above has run
+    if (pCreateInfo != nullptr) {
+        skipCall |= validate_queue_family_indices(device, "vkCreateCommandPool", 1, &(pCreateInfo->queueFamilyIndex));
+    }
+
+    if (!skipCall) {
+ result = get_dispatch_table(pc_device_table_map, device)->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
+
+ PostCreateCommandPool(device, pCommandPool, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkDestroyCommandPool(my_data->report_data, commandPool, pAllocator);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)->DestroyCommandPool(device, commandPool, pAllocator);
+ }
+}
+
+bool PostResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags, VkResult result) {
+
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkResetCommandPool parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) {
+ VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetCommandPool(device, commandPool, flags);
+
+ PostResetCommandPool(device, commandPool, flags, result);
+
+ return result;
+}
+
+bool PreCreateCommandBuffer(VkDevice device, const VkCommandBufferAllocateInfo *pCreateInfo) {
+ if (pCreateInfo != nullptr) {
+ if (pCreateInfo->level < VK_COMMAND_BUFFER_LEVEL_BEGIN_RANGE || pCreateInfo->level > VK_COMMAND_BUFFER_LEVEL_END_RANGE) {
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkAllocateCommandBuffers parameter, VkCommandBufferLevel pCreateInfo->level, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCreateCommandBuffer(VkDevice device, VkCommandBuffer *pCommandBuffer, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkAllocateCommandBuffers parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s",
+ reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo, VkCommandBuffer *pCommandBuffers) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkAllocateCommandBuffers(my_data->report_data, pAllocateInfo, pCommandBuffers);
+
+ if (skipCall == VK_FALSE) {
+ PreCreateCommandBuffer(device, pAllocateInfo);
+
+ result = get_dispatch_table(pc_device_table_map, device)->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers);
+
+ PostCreateCommandBuffer(device, pCommandBuffers, result);
+ }
+
+ return result;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool,
+ uint32_t commandBufferCount,
+ const VkCommandBuffer *pCommandBuffers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkFreeCommandBuffers(my_data->report_data, commandPool, commandBufferCount, pCommandBuffers);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, device)
+ ->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
+ }
+}
+
+bool PostBeginCommandBuffer(VkCommandBuffer commandBuffer, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkBeginCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s", reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) {
+ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkBeginCommandBuffer(my_data->report_data, pBeginInfo);
+
+ if (skipCall == VK_FALSE) {
+ result = get_dispatch_table(pc_device_table_map, commandBuffer)->BeginCommandBuffer(commandBuffer, pBeginInfo);
+
+ PostBeginCommandBuffer(commandBuffer, result);
+ }
+
+ return result;
+}
+
+bool PostEndCommandBuffer(VkCommandBuffer commandBuffer, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkEndCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s", reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer) {
+ VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->EndCommandBuffer(commandBuffer);
+
+ PostEndCommandBuffer(commandBuffer, result);
+
+ return result;
+}
+
+bool PostResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags, VkResult result) {
+ if (result < VK_SUCCESS) {
+ std::string reason = "vkResetCommandBuffer parameter, VkResult result, is " + EnumeratorString(result);
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s", reason.c_str());
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags) {
+ VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->ResetCommandBuffer(commandBuffer, flags);
+
+ PostResetCommandBuffer(commandBuffer, flags, result);
+
+ return result;
+}
+
+bool PostCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) {
+ if (pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE || pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBindPipeline parameter, VkPipelineBindPoint pipelineBindPoint, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
+
+ PostCmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport *pViewports) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdSetViewport(my_data->report_data, firstViewport, viewportCount, pViewports);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdSetScissor(my_data->report_data, firstScissor, scissorCount, pScissors);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetLineWidth(commandBuffer, lineWidth);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdSetBlendConstants(my_data->report_data, blendConstants);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetBlendConstants(commandBuffer, blendConstants);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilReference(commandBuffer, faceMask, reference);
+}
+
+bool PostCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout,
+ uint32_t firstSet, uint32_t setCount, uint32_t dynamicOffsetCount) {
+ if (pipelineBindPoint < VK_PIPELINE_BIND_POINT_BEGIN_RANGE || pipelineBindPoint > VK_PIPELINE_BIND_POINT_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBindDescriptorSets parameter, VkPipelineBindPoint pipelineBindPoint, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout,
+ uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet *pDescriptorSets,
+ uint32_t dynamicOffsetCount, const uint32_t *pDynamicOffsets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdBindDescriptorSets(my_data->report_data, pipelineBindPoint, layout, firstSet, descriptorSetCount,
+ pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, pDescriptorSets,
+ dynamicOffsetCount, pDynamicOffsets);
+
+ PostCmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, dynamicOffsetCount);
+ }
+}
+
+bool PostCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) {
+ if (indexType < VK_INDEX_TYPE_BEGIN_RANGE || indexType > VK_INDEX_TYPE_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBindIndexBuffer parameter, VkIndexType indexType, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
+
+ PostCmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding,
+ uint32_t bindingCount, const VkBuffer *pBuffers,
+ const VkDeviceSize *pOffsets) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdBindVertexBuffers(my_data->report_data, firstBinding, bindingCount, pBuffers, pOffsets);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
+ }
+}
+
+bool PreCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex,
+ uint32_t firstInstance) {
+ if (vertexCount == 0) {
+        // TODO: Verify against the Valid Usage section. A non-zero vertexCount requirement is not listed there; if one is
+        // added, promote this warning to an error.
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdDraw parameter, uint32_t vertexCount, is 0");
+ return false;
+ }
+
+ if (instanceCount == 0) {
+        // TODO: Verify against the Valid Usage section. A non-zero instanceCount requirement is not listed there; if one is
+        // added, promote this warning to an error.
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdDraw parameter, uint32_t instanceCount, is 0");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount,
+ uint32_t firstVertex, uint32_t firstInstance) {
+ PreCmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount,
+ uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset,
+ uint32_t firstInstance) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDispatch(commandBuffer, x, y, z);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDispatchIndirect(commandBuffer, buffer, offset);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer,
+ uint32_t regionCount, const VkBufferCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdCopyBuffer(my_data->report_data, srcBuffer, dstBuffer, regionCount, pRegions);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
+ }
+}
+
+bool PreCmdCopyImage(VkCommandBuffer commandBuffer, const VkImageCopy *pRegions) {
+ if (pRegions != nullptr) {
+ if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdCopyImage parameter, VkImageAspectFlags pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdCopyImage parameter, VkImageAspectFlags pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount) {
+ if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdCopyImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdCopyImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |=
+ parameter_validation_vkCmdCopyImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+
+ if (skipCall == VK_FALSE) {
+ PreCmdCopyImage(commandBuffer, pRegions);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+
+ PostCmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount);
+ }
+}
+
+bool PreCmdBlitImage(VkCommandBuffer commandBuffer, const VkImageBlit *pRegions) {
+ if (pRegions != nullptr) {
+ if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdBlitImage parameter, VkImageAspectFlags pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdBlitImage parameter, VkImageAspectFlags pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, VkFilter filter) {
+ if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBlitImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBlitImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (filter < VK_FILTER_BEGIN_RANGE || filter > VK_FILTER_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBlitImage parameter, VkFilter filter, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit *pRegions, VkFilter filter) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdBlitImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount,
+ pRegions, filter);
+
+ if (skipCall == VK_FALSE) {
+ PreCmdBlitImage(commandBuffer, pRegions);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
+
+ PostCmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, filter);
+ }
+}
+
+bool PreCmdCopyBufferToImage(VkCommandBuffer commandBuffer, const VkBufferImageCopy *pRegions) {
+ if (pRegions != nullptr) {
+ if ((pRegions->imageSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdCopyBufferToImage parameter, VkImageAspectFlags pRegions->imageSubresource.aspectMask, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout,
+ uint32_t regionCount) {
+ if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdCopyBufferToImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer,
+ VkImage dstImage, VkImageLayout dstImageLayout,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |=
+ parameter_validation_vkCmdCopyBufferToImage(my_data->report_data, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
+
+ if (skipCall == VK_FALSE) {
+ PreCmdCopyBufferToImage(commandBuffer, pRegions);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
+
+ PostCmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount);
+ }
+}
+
+bool PreCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, const VkBufferImageCopy *pRegions) {
+ if (pRegions != nullptr) {
+ if ((pRegions->imageSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                    "vkCmdCopyImageToBuffer parameter, VkImageAspectFlags pRegions->imageSubresource.aspectMask, is an unrecognized "
+ "enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer,
+ uint32_t regionCount) {
+ if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdCopyImageToBuffer parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage,
+ VkImageLayout srcImageLayout, VkBuffer dstBuffer,
+ uint32_t regionCount, const VkBufferImageCopy *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |=
+ parameter_validation_vkCmdCopyImageToBuffer(my_data->report_data, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
+
+ if (skipCall == VK_FALSE) {
+ PreCmdCopyImageToBuffer(commandBuffer, pRegions);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
+
+ PostCmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
+ VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t *pData) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdUpdateBuffer(my_data->report_data, dstBuffer, dstOffset, dataSize, pData);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
+}
+
+bool PostCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, uint32_t rangeCount) {
+ if (((imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdClearColorImage parameter, VkImageLayout imageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image,
+ VkImageLayout imageLayout, const VkClearColorValue *pColor,
+ uint32_t rangeCount, const VkImageSubresourceRange *pRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdClearColorImage(my_data->report_data, image, imageLayout, pColor, rangeCount, pRanges);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
+
+ PostCmdClearColorImage(commandBuffer, image, imageLayout, rangeCount);
+ }
+}
+
+bool PostCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout,
+ const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount) {
+ if (((imageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (imageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (imageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdClearDepthStencilImage parameter, VkImageLayout imageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout,
+ const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount,
+ const VkImageSubresourceRange *pRanges) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |=
+ parameter_validation_vkCmdClearDepthStencilImage(my_data->report_data, image, imageLayout, pDepthStencil, rangeCount, pRanges);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges);
+
+ PostCmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount,
+ const VkClearAttachment *pAttachments, uint32_t rectCount,
+ const VkClearRect *pRects) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdClearAttachments(my_data->report_data, attachmentCount, pAttachments, rectCount, pRects);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
+ }
+}
+
+bool PreCmdResolveImage(VkCommandBuffer commandBuffer, const VkImageResolve *pRegions) {
+ if (pRegions != nullptr) {
+ if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(
+ mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                "vkCmdResolveImage parameter, VkImageAspectFlags pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
+ VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
+ log_msg(
+ mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+                "vkCmdResolveImage parameter, VkImageAspectFlags pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
+ return false;
+ }
+ }
+
+ return true;
+}
+
+bool PostCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount) {
+ if (((srcImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (srcImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (srcImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdResolveImage parameter, VkImageLayout srcImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ if (((dstImageLayout < VK_IMAGE_LAYOUT_BEGIN_RANGE) || (dstImageLayout > VK_IMAGE_LAYOUT_END_RANGE)) &&
+ (dstImageLayout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR)) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdResolveImage parameter, VkImageLayout dstImageLayout, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage,
+ VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdResolveImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount,
+ pRegions);
+
+ if (skipCall == VK_FALSE) {
+ PreCmdResolveImage(commandBuffer, pRegions);
+
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+
+ PostCmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetEvent(commandBuffer, event, stageMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetEvent(commandBuffer, event, stageMask);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent *pEvents, VkPipelineStageFlags srcStageMask,
+ VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdWaitEvents(my_data->report_data, eventCount, pEvents, srcStageMask, dstStageMask,
+ memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers,
+ imageMemoryBarrierCount, pImageMemoryBarriers);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdWaitEvents(commandBuffer, eventCount, pEvents, srcStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers,
+ bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask,
+ VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdPipelineBarrier(my_data->report_data, srcStageMask, dstStageMask, dependencyFlags,
+ memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
+ pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers,
+ bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot, VkQueryControlFlags flags) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndQuery(commandBuffer, queryPool, slot);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
+}
+
+bool PostCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool,
+ uint32_t slot) {
+
+ ValidateEnumerator(pipelineStage);
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
+
+ PostCmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount,
+ VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, stride, flags);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout,
+ VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size,
+ const void *pValues) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdPushConstants(my_data->report_data, layout, stageFlags, offset, size, pValues);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues);
+ }
+}
+
+bool PostCmdBeginRenderPass(VkCommandBuffer commandBuffer, VkSubpassContents contents) {
+
+ if (contents < VK_SUBPASS_CONTENTS_BEGIN_RANGE || contents > VK_SUBPASS_CONTENTS_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdBeginRenderPass parameter, VkSubpassContents contents, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, VkSubpassContents contents) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdBeginRenderPass(my_data->report_data, pRenderPassBegin, contents);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents);
+
+ PostCmdBeginRenderPass(commandBuffer, contents);
+ }
+}
+
+bool PostCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) {
+
+ if (contents < VK_SUBPASS_CONTENTS_BEGIN_RANGE || contents > VK_SUBPASS_CONTENTS_END_RANGE) {
+ log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "vkCmdNextSubpass parameter, VkSubpassContents contents, is an unrecognized enumerator");
+ return false;
+ }
+
+ return true;
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdNextSubpass(commandBuffer, contents);
+
+ PostCmdNextSubpass(commandBuffer, contents);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndRenderPass(commandBuffer);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer *pCommandBuffers) {
+ VkBool32 skipCall = VK_FALSE;
+ layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
+ assert(my_data != NULL);
+
+ skipCall |= parameter_validation_vkCmdExecuteCommands(my_data->report_data, commandBufferCount, pCommandBuffers);
+
+ if (skipCall == VK_FALSE) {
+ get_dispatch_table(pc_device_table_map, commandBuffer)
+ ->CmdExecuteCommands(commandBuffer, commandBufferCount, pCommandBuffers);
+ }
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
+ layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+
+ if (validate_string(data->report_data, "vkGetDeviceProcAddr", "funcName", funcName) == VK_TRUE) {
+ return NULL;
+ }
+
+ if (!strcmp(funcName, "vkGetDeviceProcAddr"))
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (PFN_vkVoidFunction)vkDestroyDevice;
+ if (!strcmp(funcName, "vkGetDeviceQueue"))
+ return (PFN_vkVoidFunction)vkGetDeviceQueue;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (PFN_vkVoidFunction)vkQueueSubmit;
+ if (!strcmp(funcName, "vkQueueWaitIdle"))
+ return (PFN_vkVoidFunction)vkQueueWaitIdle;
+ if (!strcmp(funcName, "vkDeviceWaitIdle"))
+ return (PFN_vkVoidFunction)vkDeviceWaitIdle;
+ if (!strcmp(funcName, "vkAllocateMemory"))
+ return (PFN_vkVoidFunction)vkAllocateMemory;
+ if (!strcmp(funcName, "vkFreeMemory"))
+ return (PFN_vkVoidFunction)vkFreeMemory;
+ if (!strcmp(funcName, "vkMapMemory"))
+ return (PFN_vkVoidFunction)vkMapMemory;
+ if (!strcmp(funcName, "vkFlushMappedMemoryRanges"))
+ return (PFN_vkVoidFunction)vkFlushMappedMemoryRanges;
+ if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges"))
+ return (PFN_vkVoidFunction)vkInvalidateMappedMemoryRanges;
+ if (!strcmp(funcName, "vkCreateFence"))
+ return (PFN_vkVoidFunction)vkCreateFence;
+ if (!strcmp(funcName, "vkDestroyFence"))
+ return (PFN_vkVoidFunction)vkDestroyFence;
+ if (!strcmp(funcName, "vkResetFences"))
+ return (PFN_vkVoidFunction)vkResetFences;
+ if (!strcmp(funcName, "vkGetFenceStatus"))
+ return (PFN_vkVoidFunction)vkGetFenceStatus;
+ if (!strcmp(funcName, "vkWaitForFences"))
+ return (PFN_vkVoidFunction)vkWaitForFences;
+ if (!strcmp(funcName, "vkCreateSemaphore"))
+ return (PFN_vkVoidFunction)vkCreateSemaphore;
+ if (!strcmp(funcName, "vkDestroySemaphore"))
+ return (PFN_vkVoidFunction)vkDestroySemaphore;
+ if (!strcmp(funcName, "vkCreateEvent"))
+ return (PFN_vkVoidFunction)vkCreateEvent;
+ if (!strcmp(funcName, "vkDestroyEvent"))
+ return (PFN_vkVoidFunction)vkDestroyEvent;
+ if (!strcmp(funcName, "vkGetEventStatus"))
+ return (PFN_vkVoidFunction)vkGetEventStatus;
+ if (!strcmp(funcName, "vkSetEvent"))
+ return (PFN_vkVoidFunction)vkSetEvent;
+ if (!strcmp(funcName, "vkResetEvent"))
+ return (PFN_vkVoidFunction)vkResetEvent;
+ if (!strcmp(funcName, "vkCreateQueryPool"))
+ return (PFN_vkVoidFunction)vkCreateQueryPool;
+ if (!strcmp(funcName, "vkDestroyQueryPool"))
+ return (PFN_vkVoidFunction)vkDestroyQueryPool;
+ if (!strcmp(funcName, "vkGetQueryPoolResults"))
+ return (PFN_vkVoidFunction)vkGetQueryPoolResults;
+ if (!strcmp(funcName, "vkCreateBuffer"))
+ return (PFN_vkVoidFunction)vkCreateBuffer;
+ if (!strcmp(funcName, "vkDestroyBuffer"))
+ return (PFN_vkVoidFunction)vkDestroyBuffer;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (PFN_vkVoidFunction)vkCreateBufferView;
+ if (!strcmp(funcName, "vkDestroyBufferView"))
+ return (PFN_vkVoidFunction)vkDestroyBufferView;
+ if (!strcmp(funcName, "vkCreateImage"))
+ return (PFN_vkVoidFunction)vkCreateImage;
+ if (!strcmp(funcName, "vkDestroyImage"))
+ return (PFN_vkVoidFunction)vkDestroyImage;
+ if (!strcmp(funcName, "vkGetImageSubresourceLayout"))
+ return (PFN_vkVoidFunction)vkGetImageSubresourceLayout;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (PFN_vkVoidFunction)vkCreateImageView;
+ if (!strcmp(funcName, "vkDestroyImageView"))
+ return (PFN_vkVoidFunction)vkDestroyImageView;
+ if (!strcmp(funcName, "vkCreateShaderModule"))
+ return (PFN_vkVoidFunction)vkCreateShaderModule;
+ if (!strcmp(funcName, "vkDestroyShaderModule"))
+ return (PFN_vkVoidFunction)vkDestroyShaderModule;
+ if (!strcmp(funcName, "vkCreatePipelineCache"))
+ return (PFN_vkVoidFunction)vkCreatePipelineCache;
+ if (!strcmp(funcName, "vkDestroyPipelineCache"))
+ return (PFN_vkVoidFunction)vkDestroyPipelineCache;
+ if (!strcmp(funcName, "vkGetPipelineCacheData"))
+ return (PFN_vkVoidFunction)vkGetPipelineCacheData;
+ if (!strcmp(funcName, "vkMergePipelineCaches"))
+ return (PFN_vkVoidFunction)vkMergePipelineCaches;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelines"))
+ return (PFN_vkVoidFunction)vkCreateGraphicsPipelines;
+ if (!strcmp(funcName, "vkCreateComputePipelines"))
+ return (PFN_vkVoidFunction)vkCreateComputePipelines;
+ if (!strcmp(funcName, "vkDestroyPipeline"))
+ return (PFN_vkVoidFunction)vkDestroyPipeline;
+ if (!strcmp(funcName, "vkCreatePipelineLayout"))
+ return (PFN_vkVoidFunction)vkCreatePipelineLayout;
+ if (!strcmp(funcName, "vkDestroyPipelineLayout"))
+ return (PFN_vkVoidFunction)vkDestroyPipelineLayout;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (PFN_vkVoidFunction)vkCreateSampler;
+ if (!strcmp(funcName, "vkDestroySampler"))
+ return (PFN_vkVoidFunction)vkDestroySampler;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
+ return (PFN_vkVoidFunction)vkCreateDescriptorSetLayout;
+ if (!strcmp(funcName, "vkDestroyDescriptorSetLayout"))
+ return (PFN_vkVoidFunction)vkDestroyDescriptorSetLayout;
+ if (!strcmp(funcName, "vkCreateDescriptorPool"))
+ return (PFN_vkVoidFunction)vkCreateDescriptorPool;
+ if (!strcmp(funcName, "vkDestroyDescriptorPool"))
+ return (PFN_vkVoidFunction)vkDestroyDescriptorPool;
+ if (!strcmp(funcName, "vkResetDescriptorPool"))
+ return (PFN_vkVoidFunction)vkResetDescriptorPool;
+ if (!strcmp(funcName, "vkAllocateDescriptorSets"))
+ return (PFN_vkVoidFunction)vkAllocateDescriptorSets;
+ if (!strcmp(funcName, "vkCmdSetViewport"))
+ return (PFN_vkVoidFunction)vkCmdSetViewport;
+ if (!strcmp(funcName, "vkCmdSetScissor"))
+ return (PFN_vkVoidFunction)vkCmdSetScissor;
+ if (!strcmp(funcName, "vkCmdSetLineWidth"))
+ return (PFN_vkVoidFunction)vkCmdSetLineWidth;
+ if (!strcmp(funcName, "vkCmdSetDepthBias"))
+ return (PFN_vkVoidFunction)vkCmdSetDepthBias;
+ if (!strcmp(funcName, "vkCmdSetBlendConstants"))
+ return (PFN_vkVoidFunction)vkCmdSetBlendConstants;
+ if (!strcmp(funcName, "vkCmdSetDepthBounds"))
+ return (PFN_vkVoidFunction)vkCmdSetDepthBounds;
+ if (!strcmp(funcName, "vkCmdSetStencilCompareMask"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilCompareMask;
+ if (!strcmp(funcName, "vkCmdSetStencilWriteMask"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilWriteMask;
+ if (!strcmp(funcName, "vkCmdSetStencilReference"))
+ return (PFN_vkVoidFunction)vkCmdSetStencilReference;
+ if (!strcmp(funcName, "vkAllocateCommandBuffers"))
+ return (PFN_vkVoidFunction)vkAllocateCommandBuffers;
+ if (!strcmp(funcName, "vkFreeCommandBuffers"))
+ return (PFN_vkVoidFunction)vkFreeCommandBuffers;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (PFN_vkVoidFunction)vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (PFN_vkVoidFunction)vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (PFN_vkVoidFunction)vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (PFN_vkVoidFunction)vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (PFN_vkVoidFunction)vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffers"))
+ return (PFN_vkVoidFunction)vkCmdBindVertexBuffers;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (PFN_vkVoidFunction)vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDraw"))
+ return (PFN_vkVoidFunction)vkCmdDraw;
+ if (!strcmp(funcName, "vkCmdDrawIndexed"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndexed;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatch"))
+ return (PFN_vkVoidFunction)vkCmdDispatch;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (PFN_vkVoidFunction)vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (PFN_vkVoidFunction)vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (PFN_vkVoidFunction)vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdBlitImage"))
+ return (PFN_vkVoidFunction)vkCmdBlitImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (PFN_vkVoidFunction)vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (PFN_vkVoidFunction)vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (PFN_vkVoidFunction)vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (PFN_vkVoidFunction)vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (PFN_vkVoidFunction)vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdSetEvent"))
+ return (PFN_vkVoidFunction)vkCmdSetEvent;
+ if (!strcmp(funcName, "vkCmdResetEvent"))
+ return (PFN_vkVoidFunction)vkCmdResetEvent;
+ if (!strcmp(funcName, "vkCmdWaitEvents"))
+ return (PFN_vkVoidFunction)vkCmdWaitEvents;
+ if (!strcmp(funcName, "vkCmdPipelineBarrier"))
+ return (PFN_vkVoidFunction)vkCmdPipelineBarrier;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (PFN_vkVoidFunction)vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (PFN_vkVoidFunction)vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (PFN_vkVoidFunction)vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkCmdWriteTimestamp"))
+ return (PFN_vkVoidFunction)vkCmdWriteTimestamp;
+ if (!strcmp(funcName, "vkCmdCopyQueryPoolResults"))
+ return (PFN_vkVoidFunction)vkCmdCopyQueryPoolResults;
+ if (!strcmp(funcName, "vkCreateFramebuffer"))
+ return (PFN_vkVoidFunction)vkCreateFramebuffer;
+ if (!strcmp(funcName, "vkDestroyFramebuffer"))
+ return (PFN_vkVoidFunction)vkDestroyFramebuffer;
+ if (!strcmp(funcName, "vkCreateRenderPass"))
+ return (PFN_vkVoidFunction)vkCreateRenderPass;
+ if (!strcmp(funcName, "vkDestroyRenderPass"))
+ return (PFN_vkVoidFunction)vkDestroyRenderPass;
+ if (!strcmp(funcName, "vkGetRenderAreaGranularity"))
+ return (PFN_vkVoidFunction)vkGetRenderAreaGranularity;
+ if (!strcmp(funcName, "vkCreateCommandPool"))
+ return (PFN_vkVoidFunction)vkCreateCommandPool;
+ if (!strcmp(funcName, "vkDestroyCommandPool"))
+ return (PFN_vkVoidFunction)vkDestroyCommandPool;
+ if (!strcmp(funcName, "vkCmdBeginRenderPass"))
+ return (PFN_vkVoidFunction)vkCmdBeginRenderPass;
+ if (!strcmp(funcName, "vkCmdNextSubpass"))
+ return (PFN_vkVoidFunction)vkCmdNextSubpass;
+
+ if (device == NULL) {
+ return NULL;
+ }
+
+ if (get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr == NULL)
+ return NULL;
+ return get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr(device, funcName);
+}
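The long `strcmp` chain above is the layer's name-to-function lookup: known entry points resolve to the layer's own interception, and anything else falls through to the next layer's `GetDeviceProcAddr`. A toy reduction of that shape (names hypothetical; the fall-through returns null here rather than forwarding):

```cpp
#include <cassert>
#include <cstring>

using VoidFn = void (*)();

static void intercepted_destroy_device() {}  // the layer's own wrapper

// Known names resolve to the interception; unknown names fall through
// (null here; the real code forwards down the dispatch chain instead).
static VoidFn get_proc(const char *name) {
    if (!std::strcmp(name, "vkDestroyDevice"))
        return intercepted_destroy_device;
    return nullptr;
}
```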
+
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
+ if (!strcmp(funcName, "vkGetInstanceProcAddr"))
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
+ if (!strcmp(funcName, "vkCreateInstance"))
+ return (PFN_vkVoidFunction)vkCreateInstance;
+ if (!strcmp(funcName, "vkDestroyInstance"))
+ return (PFN_vkVoidFunction)vkDestroyInstance;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (PFN_vkVoidFunction)vkCreateDevice;
+ if (!strcmp(funcName, "vkEnumeratePhysicalDevices"))
+ return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices;
+ if (!strcmp(funcName, "vkGetPhysicalDeviceProperties"))
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties;
+ if (!strcmp(funcName, "vkGetPhysicalDeviceFeatures"))
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceFeatures;
+ if (!strcmp(funcName, "vkGetPhysicalDeviceFormatProperties"))
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceFormatProperties;
+ if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
+ if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
+ if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
+ if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
+ return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
+
+ if (instance == NULL) {
+ return NULL;
+ }
+
+ layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
+
+ PFN_vkVoidFunction fptr = debug_report_get_instance_proc_addr(data->report_data, funcName);
+ if (fptr)
+ return fptr;
+
+ if (get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr == NULL)
+ return NULL;
+ return get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr(instance, funcName);
+}
diff --git a/layers/parameter_validation_utils.h b/layers/parameter_validation_utils.h
new file mode 100644
index 000000000..7cb80c06a
--- /dev/null
+++ b/layers/parameter_validation_utils.h
@@ -0,0 +1,377 @@
+/* Copyright (c) 2015-2016 The Khronos Group Inc.
+ * Copyright (c) 2015-2016 Valve Corporation
+ * Copyright (c) 2015-2016 LunarG, Inc.
+ * Copyright (C) 2015-2016 Google Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and/or associated documentation files (the "Materials"), to
+ * deal in the Materials without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Materials, and to permit persons to whom the Materials
+ * are furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * The Materials are Confidential Information as defined by the Khronos
+ * Membership Agreement until designated non-confidential by Khronos, at which
+ * point this condition clause shall be removed.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ *
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
+ * USE OR OTHER DEALINGS IN THE MATERIALS
+ *
+ * Author: Dustin Graves <dustin@lunarg.com>
+ */
+
+#ifndef PARAMETER_VALIDATION_UTILS_H
+#define PARAMETER_VALIDATION_UTILS_H
+
+#include <algorithm>
+#include <string>
+
+#include "vulkan/vulkan.h"
+#include "vk_enum_string_helper.h"
+#include "vk_layer_logging.h"
+
+namespace {
+struct GenericHeader {
+ VkStructureType sType;
+ const void *pNext;
+};
+}
+
+// String returned by string_VkStructureType for an unrecognized type
+const std::string UnsupportedStructureTypeString = "Unhandled VkStructureType";
+
+/**
+ * Validate a required pointer.
+ *
+ * Verify that a required pointer is not NULL.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param parameterName Name of parameter being validated.
+ * @param value Pointer to validate.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+static VkBool32 validate_required_pointer(debug_report_data *report_data, const char *apiName, const char *parameterName,
+ const void *value) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (value == NULL) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: required parameter %s specified as NULL", apiName, parameterName);
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate pointer to array count and pointer to array.
+ *
+ * Verify that required count and array parameters are not NULL. If count
+ * is not NULL and its value is not optional, verify that it is not 0. If the
+ * array parameter is NULL, and it is not optional, verify that count is 0.
+ * The array parameter will typically be optional for this case (where count is
+ * a pointer), allowing the caller to retrieve the available count.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param countName Name of count parameter.
+ * @param arrayName Name of array parameter.
+ * @param count Pointer to the number of elements in the array.
+ * @param array Array to validate.
+ * @param countPtrRequired The 'count' parameter may not be NULL when true.
+ * @param countValueRequired The '*count' value may not be 0 when true.
+ * @param arrayRequired The 'array' parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+template <typename T>
+VkBool32 validate_array(debug_report_data *report_data, const char *apiName, const char *countName, const char *arrayName,
+ const T *count, const void *array, VkBool32 countPtrRequired, VkBool32 countValueRequired,
+ VkBool32 arrayRequired) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (count == NULL) {
+ if (countPtrRequired == VK_TRUE) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s specified as NULL", apiName, countName);
+ }
+ } else {
+ skipCall |= validate_array(report_data, apiName, countName, arrayName, (*count), array, countValueRequired, arrayRequired);
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate array count and pointer to array.
+ *
+ * Verify that required count and array parameters are not 0 or NULL. If the
+ * count parameter is not optional, verify that it is not 0. If the array
+ * parameter is NULL, and it is not optional, verify that count is 0.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param countName Name of count parameter.
+ * @param arrayName Name of array parameter.
+ * @param count Number of elements in the array.
+ * @param array Array to validate.
+ * @param countRequired The 'count' parameter may not be 0 when true.
+ * @param arrayRequired The 'array' parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+template <typename T>
+VkBool32 validate_array(debug_report_data *report_data, const char *apiName, const char *countName, const char *arrayName, T count,
+ const void *array, VkBool32 countRequired, VkBool32 arrayRequired) {
+ VkBool32 skipCall = VK_FALSE;
+
+ // Count parameters not tagged as optional cannot be 0
+ if ((count == 0) && (countRequired == VK_TRUE)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: value of %s must be greater than 0", apiName, countName);
+ }
+
+ // Array parameters not tagged as optional cannot be NULL,
+ // unless the count is 0
+ if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: required parameter %s specified as NULL", apiName, arrayName);
+ }
+
+ return skipCall;
+}
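The two rules in this overload reduce to: a required count may not be 0, and a required array may not be NULL unless the count is 0. A compact sketch with the `VkBool32` flags replaced by plain bools (function name is a stand-in):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Returns true when the call should be skipped, per the two rules above.
static bool array_check_fails(uint32_t count, const void *array,
                              bool countRequired, bool arrayRequired) {
    bool skip = false;
    if ((count == 0) && countRequired) {
        skip = true;  // "value of count must be greater than 0"
    }
    if ((array == nullptr) && arrayRequired && (count != 0)) {
        skip = true;  // "required parameter array specified as NULL"
    }
    return skip;
}
```

Note the `count != 0` guard on the array rule: a null array with a zero count passes, which is what lets callers query an available count before allocating.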
+
+/**
+ * Validate a Vulkan structure type.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param parameterName Name of struct parameter being validated.
+ * @param sTypeName Name of expected VkStructureType value.
+ * @param value Pointer to the struct to validate.
+ * @param sType VkStructureType for structure validation.
+ * @param required The parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+template <typename T>
+VkBool32 validate_struct_type(debug_report_data *report_data, const char *apiName, const char *parameterName, const char *sTypeName,
+ const T *value, VkStructureType sType, VkBool32 required) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (value == NULL) {
+ if (required == VK_TRUE) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s specified as NULL", apiName, parameterName);
+ }
+ } else if (value->sType != sType) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: parameter %s->sType must be %s", apiName, parameterName, sTypeName);
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate an array of Vulkan structures.
+ *
+ * Verify that required count and array parameters are not NULL. If count
+ * is not NULL and its value is not optional, verify that it is not 0.
+ * If the array contains 1 or more structures, verify that each structure's
+ * sType field is set to the correct VkStructureType value.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param countName Name of count parameter.
+ * @param arrayName Name of array parameter.
+ * @param sTypeName Name of expected VkStructureType value.
+ * @param count Pointer to the number of elements in the array.
+ * @param array Array to validate.
+ * @param sType VkStructureType for structure validation.
+ * @param countPtrRequired The 'count' parameter may not be NULL when true.
+ * @param countValueRequired The '*count' value may not be 0 when true.
+ * @param arrayRequired The 'array' parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+template <typename T>
+VkBool32 validate_struct_type_array(debug_report_data *report_data, const char *apiName, const char *countName,
+ const char *arrayName, const char *sTypeName, const uint32_t *count, const T *array,
+ VkStructureType sType, VkBool32 countPtrRequired, VkBool32 countValueRequired,
+ VkBool32 arrayRequired) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (count == NULL) {
+ if (countPtrRequired == VK_TRUE) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s specified as NULL", apiName, countName);
+ }
+ } else {
+ skipCall |= validate_struct_type_array(report_data, apiName, countName, arrayName, sTypeName, (*count), array, sType,
+ countValueRequired, arrayRequired);
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate an array of Vulkan structures.
+ *
+ * Verify that required count and array parameters are not 0 or NULL. If
+ * the array contains 1 or more structures, verify that each structure's
+ * sType field is set to the correct VkStructureType value.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param countName Name of count parameter.
+ * @param arrayName Name of array parameter.
+ * @param sTypeName Name of expected VkStructureType value.
+ * @param count Number of elements in the array.
+ * @param array Array to validate.
+ * @param sType VkStructureType for structure validation.
+ * @param countRequired The 'count' parameter may not be 0 when true.
+ * @param arrayRequired The 'array' parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+template <typename T>
+VkBool32 validate_struct_type_array(debug_report_data *report_data, const char *apiName, const char *countName,
+ const char *arrayName, const char *sTypeName, uint32_t count, const T *array,
+ VkStructureType sType, VkBool32 countRequired, VkBool32 arrayRequired) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if ((count == 0) || (array == NULL)) {
+ // Count parameters not tagged as optional cannot be 0
+ if ((count == 0) && (countRequired == VK_TRUE)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: parameter %s must be greater than 0", apiName, countName);
+ }
+
+ // Array parameters not tagged as optional cannot be NULL,
+ // unless the count is 0
+ if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s specified as NULL", apiName, arrayName);
+ }
+ } else {
+ // Verify that all structs in the array have the correct type
+ for (uint32_t i = 0; i < count; ++i) {
+ if (array[i].sType != sType) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: parameter %s[%d].sType must be %s", apiName, arrayName, i, sTypeName);
+ }
+ }
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate string array count and content.
+ *
+ * Verify that required count and array parameters are not 0 or NULL. If the
+ * count parameter is not optional, verify that it is not 0. If the array
+ * parameter is NULL, and it is not optional, verify that count is 0. If the
+ * array parameter is not NULL, verify that none of the strings are NULL.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param countName Name of count parameter.
+ * @param arrayName Name of array parameter.
+ * @param count Number of strings in the array.
+ * @param array Array of strings to validate.
+ * @param countRequired The 'count' parameter may not be 0 when true.
+ * @param arrayRequired The 'array' parameter may not be NULL when true.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+static VkBool32 validate_string_array(debug_report_data *report_data, const char *apiName, const char *countName,
+ const char *arrayName, uint32_t count, const char *const *array, VkBool32 countRequired,
+ VkBool32 arrayRequired) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if ((count == 0) || (array == NULL)) {
+ // Count parameters not tagged as optional cannot be 0
+ if ((count == 0) && (countRequired == VK_TRUE)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: parameter %s must be greater than 0", apiName, countName);
+ }
+
+ // Array parameters not tagged as optional cannot be NULL,
+ // unless the count is 0
+ if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s specified as NULL", apiName, arrayName);
+ }
+ } else {
+        // Verify that strings in the array are not NULL
+ for (uint32_t i = 0; i < count; ++i) {
+ if (array[i] == NULL) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: required parameter %s[%d] specified as NULL", apiName, arrayName, i);
+ }
+ }
+ }
+
+ return skipCall;
+}
+
+/**
+ * Validate a structure's pNext member.
+ *
+ * Verify that the specified pNext value points to the head of a list of
+ * allowed extension structures. If no extension structures are allowed,
+ * verify that pNext is null.
+ *
+ * @param report_data debug_report_data object for routing validation messages.
+ * @param apiName Name of API call being validated.
+ * @param parameterName Name of parameter being validated.
+ * @param allowedStructNames Names of allowed structs.
+ * @param next Pointer to validate.
+ * @param allowedTypeCount Total number of allowed structure types.
+ * @param allowedTypes Array of structure types allowed for pNext.
+ * @return Boolean value indicating that the call should be skipped.
+ */
+static VkBool32 validate_struct_pnext(debug_report_data *report_data, const char *apiName, const char *parameterName,
+ const char *allowedStructNames, const void *next, size_t allowedTypeCount,
+ const VkStructureType *allowedTypes) {
+ VkBool32 skipCall = VK_FALSE;
+
+ if (next != NULL) {
+ if (allowedTypeCount == 0) {
+ skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
+ "PARAMCHECK", "%s: value of %s must be NULL", apiName, parameterName);
+ } else {
+ const VkStructureType *start = allowedTypes;
+ const VkStructureType *end = allowedTypes + allowedTypeCount;
+ const GenericHeader *current = reinterpret_cast<const GenericHeader *>(next);
+
+ while (current != NULL) {
+ if (std::find(start, end, current->sType) == end) {
+ std::string typeName = string_VkStructureType(current->sType);
+
+ if (typeName == UnsupportedStructureTypeString) {
+ skipCall |= log_msg(
+ report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: %s chain includes a structure with unexpected VkStructureType (%d); Allowed structures are [%s]",
+ apiName, parameterName, current->sType, allowedStructNames);
+ } else {
+ skipCall |= log_msg(
+ report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
+ "%s: %s chain includes a structure with unexpected VkStructureType %s; Allowed structures are [%s]",
+ apiName, parameterName, typeName.c_str(), allowedStructNames);
+ }
+ }
+
+ current = reinterpret_cast<const GenericHeader *>(current->pNext);
+ }
+ }
+ }
+
+ return skipCall;
+}
+
+#endif // PARAMETER_VALIDATION_UTILS_H
diff --git a/layers/swapchain.cpp b/layers/swapchain.cpp
index af22511b2..1feabfdd8 100644
--- a/layers/swapchain.cpp
+++ b/layers/swapchain.cpp
@@ -33,6 +33,7 @@
#include "swapchain.h"
#include "vk_layer_extension_utils.h"
#include "vk_enum_string_helper.h"
+#include "vk_layer_utils.h"
static int globalLockInitialized = 0;
static loader_platform_thread_mutex globalLock;
@@ -40,76 +41,56 @@ static loader_platform_thread_mutex globalLock;
// The following is for logging error messages:
static std::unordered_map<void *, layer_data *> layer_data_map;
-template layer_data *get_my_data_ptr<layer_data>(
- void *data_key,
- std::unordered_map<void *, layer_data *> &data_map);
-
-static const VkExtensionProperties instance_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
- return util_GetExtensionProperties(ARRAY_SIZE(instance_extensions),
- instance_extensions, pCount,
- pProperties);
-}
+template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);
+
+static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
- const char *pLayerName, uint32_t *pCount,
- VkExtensionProperties *pProperties) {
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
+ return util_GetExtensionProperties(ARRAY_SIZE(instance_extensions), instance_extensions, pCount, pProperties);
+}
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
if (pLayerName == NULL) {
dispatch_key key = get_dispatch_key(physicalDevice);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- return my_data->instance_dispatch_table
- ->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount,
- pProperties);
+ return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
} else {
return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
}
}
static const VkLayerProperties swapchain_layers[] = {{
- "VK_LAYER_LUNARG_swapchain", VK_API_VERSION, 1, "LunarG Validation Layer",
+ "VK_LAYER_LUNARG_swapchain", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
- uint32_t *pCount,
- VkLayerProperties* pProperties)
-{
- return util_GetLayerProperties(ARRAY_SIZE(swapchain_layers),
- swapchain_layers, pCount, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(swapchain_layers), swapchain_layers, pCount, pProperties);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
- VkPhysicalDevice physicalDevice, uint32_t *pCount,
- VkLayerProperties *pProperties) {
- return util_GetLayerProperties(ARRAY_SIZE(swapchain_layers),
- swapchain_layers, pCount, pProperties);
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
+ return util_GetLayerProperties(ARRAY_SIZE(swapchain_layers), swapchain_layers, pCount, pProperties);
}
-static void createDeviceRegisterExtensions(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, VkDevice device)
-{
+static void createDeviceRegisterExtensions(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo,
+ VkDevice device) {
uint32_t i;
- layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
+ layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
- VkLayerDispatchTable *pDisp = my_device_data->device_dispatch_table;
- PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
+ VkLayerDispatchTable *pDisp = my_device_data->device_dispatch_table;
+ PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
- pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR) gpa(device, "vkCreateSwapchainKHR");
- pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR) gpa(device, "vkDestroySwapchainKHR");
- pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR) gpa(device, "vkGetSwapchainImagesKHR");
- pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR) gpa(device, "vkAcquireNextImageKHR");
- pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR) gpa(device, "vkQueuePresentKHR");
- pDisp->GetDeviceQueue = (PFN_vkGetDeviceQueue) gpa(device, "vkGetDeviceQueue");
+ pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
+ pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
+ pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
+ pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
+ pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");
+ pDisp->GetDeviceQueue = (PFN_vkGetDeviceQueue)gpa(device, "vkGetDeviceQueue");
SwpPhysicalDevice *pPhysicalDevice = &my_instance_data->physicalDeviceMap[physicalDevice];
if (pPhysicalDevice) {
@@ -119,7 +100,7 @@ static void createDeviceRegisterExtensions(VkPhysicalDevice physicalDevice, cons
// TBD: Should we leave error in (since Swapchain really needs this
// link)?
log_msg(my_instance_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- (uint64_t)physicalDevice , __LINE__, SWAPCHAIN_INVALID_HANDLE, "Swapchain",
+ (uint64_t)physicalDevice, __LINE__, SWAPCHAIN_INVALID_HANDLE, "Swapchain",
"vkCreateDevice() called with a non-valid VkPhysicalDevice.");
}
my_device_data->deviceMap[device].device = device;
@@ -136,40 +117,48 @@ static void createDeviceRegisterExtensions(VkPhysicalDevice physicalDevice, cons
}
}
-static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreateInfo, VkInstance instance)
-{
+static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) {
uint32_t i;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable *pDisp = my_data->instance_dispatch_table;
+ VkLayerInstanceDispatchTable *pDisp = my_data->instance_dispatch_table;
PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr;
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR) gpa(instance, "vkCreateAndroidSurfaceKHR");
+ pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR");
#endif // VK_USE_PLATFORM_ANDROID_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
- pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR) gpa(instance, "vkCreateMirSurfaceKHR");
- pDisp->GetPhysicalDeviceMirPresentationSupportKHR = (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
+ pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR");
+ pDisp->GetPhysicalDeviceMirPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR) gpa(instance, "vkCreateWaylandSurfaceKHR");
- pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR = (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
+ pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR");
+ pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_WIN32_KHR
- pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR) gpa(instance, "vkCreateWin32SurfaceKHR");
- pDisp->GetPhysicalDeviceWin32PresentationSupportKHR = (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
+ pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR");
+ pDisp->GetPhysicalDeviceWin32PresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
- pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR) gpa(instance, "vkCreateXcbSurfaceKHR");
- pDisp->GetPhysicalDeviceXcbPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
+ pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR");
+ pDisp->GetPhysicalDeviceXcbPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
- pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR) gpa(instance, "vkCreateXlibSurfaceKHR");
- pDisp->GetPhysicalDeviceXlibPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
+ pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR");
+ pDisp->GetPhysicalDeviceXlibPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XLIB_KHR
- pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR) gpa(instance, "vkDestroySurfaceKHR");
- pDisp->GetPhysicalDeviceSurfaceSupportKHR = (PFN_vkGetPhysicalDeviceSurfaceSupportKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
- pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR = (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
- pDisp->GetPhysicalDeviceSurfaceFormatsKHR = (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
- pDisp->GetPhysicalDeviceSurfacePresentModesKHR = (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR) gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
+ pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR");
+ pDisp->GetPhysicalDeviceSurfaceSupportKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
+ pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
+ pDisp->GetPhysicalDeviceSurfaceFormatsKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
+ pDisp->GetPhysicalDeviceSurfacePresentModesKHR =
+ (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
// Remember this instance, and whether the VK_KHR_surface extension
// was enabled for it:
@@ -194,7 +183,6 @@ static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreate
my_data->instanceMap[instance].xlibSurfaceExtensionEnabled = false;
#endif // VK_USE_PLATFORM_XLIB_KHR
-
// Record whether the WSI instance extension was enabled for this
// VkInstance. No need to check if the extension was advertised by
// vkEnumerateInstanceExtensionProperties(), since the loader handles that.
@@ -207,116 +195,79 @@ static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreate
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_ANDROID_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].androidSurfaceExtensionEnabled = true;
+ }
#endif // VK_USE_PLATFORM_ANDROID_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_MIR_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].mirSurfaceExtensionEnabled = true;
+ }
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].waylandSurfaceExtensionEnabled = true;
+ }
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_WIN32_KHR
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WIN32_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].win32SurfaceExtensionEnabled = true;
+ }
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XCB_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].xcbSurfaceExtensionEnabled = true;
+ }
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XLIB_SURFACE_EXTENSION_NAME) == 0) {
my_data->instanceMap[instance].xlibSurfaceExtensionEnabled = true;
-#endif // VK_USE_PLATFORM_XLIB_KHR
}
+#endif // VK_USE_PLATFORM_XLIB_KHR
}
}
-
#include "vk_dispatch_table_helper.h"
-static void initSwapchain(layer_data *my_data, const VkAllocationCallbacks *pAllocator)
-{
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *option_str;
- VkDebugReportCallbackEXT callback;
-
- // Initialize Swapchain options:
- report_flags = getLayerOptionFlags("SwapchainReportFlags", 0);
- getLayerOptionEnum("SwapchainDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- // Turn on logging, since it was requested:
- option_str = getLayerOption("SwapchainLogFilename");
- log_output = getLayerLogOutput(option_str, "Swapchain");
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = log_callback;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data,
- &dbgInfo,
- pAllocator,
- &callback);
- my_data->logging_callback.push_back(callback);
- }
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgInfo;
- memset(&dbgInfo, 0, sizeof(dbgInfo));
- dbgInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgInfo.pfnCallback = win32_debug_output_msg;
- dbgInfo.pUserData = log_output;
- dbgInfo.flags = report_flags;
- layer_create_msg_callback(my_data->report_data, &dbgInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
- if (!globalLockInitialized)
- {
+static void init_swapchain(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
+
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_swapchain");
+
+ if (!globalLockInitialized) {
loader_platform_thread_create_mutex(&globalLock);
globalLockInitialized = 1;
}
}
-static const char *surfaceTransformStr(VkSurfaceTransformFlagBitsKHR value)
-{
+static const char *surfaceTransformStr(VkSurfaceTransformFlagBitsKHR value) {
// Return a string corresponding to the value:
return string_VkSurfaceTransformFlagBitsKHR(value);
}
-static const char *surfaceCompositeAlphaStr(VkCompositeAlphaFlagBitsKHR value)
-{
+static const char *surfaceCompositeAlphaStr(VkCompositeAlphaFlagBitsKHR value) {
// Return a string corresponding to the value:
return string_VkCompositeAlphaFlagBitsKHR(value);
}
-static const char *presentModeStr(VkPresentModeKHR value)
-{
+static const char *presentModeStr(VkPresentModeKHR value) {
// Return a string corresponding to the value:
return string_VkPresentModeKHR(value);
}
-static const char *sharingModeStr(VkSharingMode value)
-{
+static const char *sharingModeStr(VkSharingMode value) {
// Return a string corresponding to the value:
return string_VkSharingMode(value);
}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -333,21 +284,17 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstance
my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
+ my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
// Call the following function after my_data is initialized:
createInstanceRegisterExtensions(pCreateInfo, *pInstance);
- initSwapchain(my_data, pAllocator);
+ init_swapchain(my_data, pAllocator);
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(instance);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
SwpInstance *pInstance = &(my_data->instanceMap[instance]);
@@ -361,15 +308,13 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
if (pInstance) {
// Delete all of the SwpPhysicalDevice's, SwpSurface's, and the
// SwpInstance associated with this instance:
- for (auto it = pInstance->physicalDevices.begin() ;
- it != pInstance->physicalDevices.end() ; it++) {
+ for (auto it = pInstance->physicalDevices.begin(); it != pInstance->physicalDevices.end(); it++) {
// Free memory that was allocated for/by this SwpPhysicalDevice:
SwpPhysicalDevice *pPhysicalDevice = it->second;
if (pPhysicalDevice) {
if (pPhysicalDevice->pDevice) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance",
- SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
"%s() called before all of its associated "
"VkDevices were destroyed.",
__FUNCTION__);
@@ -382,14 +327,12 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
// are simply pointed to by the SwpInstance):
my_data->physicalDeviceMap.erase(it->second->physicalDevice);
}
- for (auto it = pInstance->surfaces.begin() ;
- it != pInstance->surfaces.end() ; it++) {
+ for (auto it = pInstance->surfaces.begin(); it != pInstance->surfaces.end(); it++) {
// Free memory that was allocated for/by this SwpPhysicalDevice:
SwpSurface *pSurface = it->second;
if (pSurface) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance",
- SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
"%s() called before all of its associated "
"VkSurfaceKHRs were destroyed.",
__FUNCTION__);
@@ -418,38 +361,29 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
}
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice physicalDevice,
- uint32_t* pQueueFamilyPropertyCount,
- VkQueueFamilyProperties* pQueueFamilyProperties)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount,
+ VkQueueFamilyProperties *pQueueFamilyProperties) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
// Call down the call chain:
- my_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(
- physicalDevice,
- pQueueFamilyPropertyCount,
- pQueueFamilyProperties);
+ my_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount,
+ pQueueFamilyProperties);
// Record the result of this query:
loader_platform_thread_lock_mutex(&globalLock);
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
- if (pPhysicalDevice &&
- pQueueFamilyPropertyCount && !pQueueFamilyProperties) {
+ if (pPhysicalDevice && pQueueFamilyPropertyCount && !pQueueFamilyProperties) {
pPhysicalDevice->gotQueueFamilyPropertyCount = true;
- pPhysicalDevice->numOfQueueFamilies =
- *pQueueFamilyPropertyCount;
+ pPhysicalDevice->numOfQueueFamilies = *pQueueFamilyPropertyCount;
}
loader_platform_thread_unlock_mutex(&globalLock);
}
#ifdef VK_USE_PLATFORM_ANDROID_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
- VkInstance instance,
- const VkAndroidSurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateAndroidSurfaceKHR(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -458,37 +392,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->androidSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_ANDROID_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_ANDROID_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateAndroidSurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateAndroidSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -497,8 +421,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -513,12 +436,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
#endif // VK_USE_PLATFORM_ANDROID_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(
- VkInstance instance,
- const VkMirSurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateMirSurfaceKHR(VkInstance instance, const VkMirSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -527,37 +447,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->mirSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_MIR_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_MIR_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateMirSurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateMirSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -566,8 +476,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -580,11 +489,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentationSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- MirConnection* connection)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentationSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ MirConnection *connection) {
VkBool32 result = VK_FALSE;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -592,41 +499,32 @@ VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentatio
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the platform extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->mirSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->mirSurfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_MIR_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_MIR_SURFACE_EXTENSION_NAME);
}
- if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
loader_platform_thread_unlock_mutex(&globalLock);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- result = my_data->instance_dispatch_table->GetPhysicalDeviceMirPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, connection);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceMirPresentationSupportKHR(physicalDevice, queueFamilyIndex,
+ connection);
}
return result;
}
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
- VkInstance instance,
- const VkWaylandSurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateWaylandSurfaceKHR(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -635,37 +533,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->waylandSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateWaylandSurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateWaylandSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -674,8 +562,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -688,11 +575,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- struct wl_display* display)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ struct wl_display *display) {
VkBool32 result = VK_FALSE;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -700,41 +585,32 @@ VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresent
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the platform extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->waylandSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->waylandSurfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
}
- if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
loader_platform_thread_unlock_mutex(&globalLock);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- result = my_data->instance_dispatch_table->GetPhysicalDeviceWaylandPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, display);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceWaylandPresentationSupportKHR(physicalDevice, queueFamilyIndex,
+ display);
}
return result;
}
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_WIN32_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
- VkInstance instance,
- const VkWin32SurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateWin32SurfaceKHR(VkInstance instance, const VkWin32SurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -743,37 +619,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->win32SurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateWin32SurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateWin32SurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -782,8 +648,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -796,10 +661,8 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
+vkGetPhysicalDeviceWin32PresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex) {
VkBool32 result = VK_FALSE;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -807,41 +670,31 @@ VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32Presentat
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the platform extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->win32SurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->win32SurfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
}
- if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
loader_platform_thread_unlock_mutex(&globalLock);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- result = my_data->instance_dispatch_table->GetPhysicalDeviceWin32PresentationSupportKHR(
- physicalDevice, queueFamilyIndex);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceWin32PresentationSupportKHR(physicalDevice, queueFamilyIndex);
}
return result;
}
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
- VkInstance instance,
- const VkXcbSurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateXcbSurfaceKHR(VkInstance instance, const VkXcbSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -850,37 +703,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->xcbSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_XCB_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_XCB_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateXcbSurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateXcbSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -889,8 +732,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -903,12 +745,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentationSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- xcb_connection_t* connection,
- xcb_visualid_t visual_id)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
+vkGetPhysicalDeviceXcbPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
+ xcb_connection_t *connection, xcb_visualid_t visual_id) {
VkBool32 result = VK_FALSE;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -916,41 +755,32 @@ VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentatio
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the platform extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->xcbSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->xcbSurfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_XCB_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_XCB_SURFACE_EXTENSION_NAME);
}
- if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
loader_platform_thread_unlock_mutex(&globalLock);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- result = my_data->instance_dispatch_table->GetPhysicalDeviceXcbPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, connection, visual_id);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceXcbPresentationSupportKHR(physicalDevice, queueFamilyIndex,
+ connection, visual_id);
}
return result;
}
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
- VkInstance instance,
- const VkXlibSurfaceCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSurfaceKHR* pSurface)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateXlibSurfaceKHR(VkInstance instance, const VkXlibSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
@@ -959,37 +789,27 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
// Validate that the platform extension was enabled:
if (pInstance && !pInstance->xlibSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pInstance,
- "VkInstance",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_XLIB_SURFACE_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_XLIB_SURFACE_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->CreateXlibSurfaceKHR(
- instance, pCreateInfo, pAllocator, pSurface);
+ result = my_data->instance_dispatch_table->CreateXlibSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -998,8 +818,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
// Record the VkSurfaceKHR returned by the ICD:
my_data->surfaceMap[*pSurface].surface = *pSurface;
my_data->surfaceMap[*pSurface].pInstance = pInstance;
- my_data->surfaceMap[*pSurface].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL);
my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0;
my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL;
// Point to the associated SwpInstance:
@@ -1012,12 +831,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- Display* dpy,
- VisualID visualID)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ Display *dpy, VisualID visualID) {
VkBool32 result = VK_FALSE;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -1025,36 +841,30 @@ VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentati
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the platform extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->xlibSurfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->xlibSurfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_XLIB_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_XLIB_SURFACE_EXTENSION_NAME);
}
- if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
loader_platform_thread_unlock_mutex(&globalLock);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- result = my_data->instance_dispatch_table->GetPhysicalDeviceXlibPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, dpy, visualID);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceXlibPresentationSupportKHR(physicalDevice, queueFamilyIndex,
+ dpy, visualID);
}
return result;
}
#endif // VK_USE_PLATFORM_XLIB_KHR
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks *pAllocator) {
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
loader_platform_thread_lock_mutex(&globalLock);
@@ -1067,14 +877,12 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance insta
pSurface->pInstance->surfaces.erase(surface);
}
if (!pSurface->swapchains.empty()) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance",
- SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
"%s() called before all of its associated "
"VkSwapchainKHRs were destroyed.",
__FUNCTION__);
// Empty and then delete all SwpSwapchain's
- for (auto it = pSurface->swapchains.begin() ;
- it != pSurface->swapchains.end() ; it++) {
+ for (auto it = pSurface->swapchains.begin(); it != pSurface->swapchains.end(); it++) {
// Delete all SwpImage's
it->second->images.clear();
// In case the swapchain's device hasn't been destroyed yet
@@ -1088,8 +896,7 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance insta
pSurface->swapchains.clear();
}
if ((pAllocator != NULL) != pSurface->usedAllocatorToCreate) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance",
- SWAPCHAIN_INCOMPATIBLE_ALLOCATOR,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_INCOMPATIBLE_ALLOCATOR,
"%s() called with incompatible pAllocator from when "
"the object was created.",
__FUNCTION__);
@@ -1100,28 +907,24 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance insta
if (VK_FALSE == skipCall) {
// Call down the call chain:
- my_data->instance_dispatch_table->DestroySurfaceKHR(
- instance, surface, pAllocator);
+ my_data->instance_dispatch_table->DestroySurfaceKHR(instance, surface, pAllocator);
}
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) {
VkResult result = VK_SUCCESS;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
// Call down the call chain:
- result = my_data->instance_dispatch_table->EnumeratePhysicalDevices(
- instance, pPhysicalDeviceCount, pPhysicalDevices);
+ result = my_data->instance_dispatch_table->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
loader_platform_thread_lock_mutex(&globalLock);
SwpInstance *pInstance = &(my_data->instanceMap[instance]);
- if ((result == VK_SUCCESS) && pInstance && pPhysicalDevices &&
- (*pPhysicalDeviceCount > 0)) {
+ if ((result == VK_SUCCESS) && pInstance && pPhysicalDevices && (*pPhysicalDeviceCount > 0)) {
// Record the VkPhysicalDevices returned by the ICD:
for (uint32_t i = 0; i < *pPhysicalDeviceCount; i++) {
- my_data->physicalDeviceMap[pPhysicalDevices[i]].physicalDevice =
- pPhysicalDevices[i];
+ my_data->physicalDeviceMap[pPhysicalDevices[i]].physicalDevice = pPhysicalDevices[i];
my_data->physicalDeviceMap[pPhysicalDevices[i]].pInstance = pInstance;
my_data->physicalDeviceMap[pPhysicalDevices[i]].pDevice = NULL;
my_data->physicalDeviceMap[pPhysicalDevices[i]].gotQueueFamilyPropertyCount = false;
@@ -1132,8 +935,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInst
my_data->physicalDeviceMap[pPhysicalDevices[i]].pPresentModes = NULL;
// Point to the associated SwpInstance:
if (pInstance) {
- pInstance->physicalDevices[pPhysicalDevices[i]] =
- &my_data->physicalDeviceMap[pPhysicalDevices[i]];
+ pInstance->physicalDevices[pPhysicalDevices[i]] = &my_data->physicalDeviceMap[pPhysicalDevices[i]];
}
}
}
@@ -1141,14 +943,15 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInst
return result;
}
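The hunk above shows the layer's core interception pattern: forward the call down the chain through the instance dispatch table, then record the handles the lower layers returned so later calls can be validated against them. A minimal sketch of that pattern, using simplified stand-in types (`Instance`, `PhysicalDevice`, `DispatchTable`, and the hook name are illustrative, not the real Vulkan declarations):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>

// Hypothetical stand-ins for the real Vulkan handles and dispatch table.
using Instance = std::uintptr_t;
using PhysicalDevice = std::uintptr_t;

struct DispatchTable {
    // Next layer's (or the ICD's) implementation of the call.
    int (*EnumeratePhysicalDevices)(Instance, std::uint32_t *, PhysicalDevice *);
};

struct LayerData {
    DispatchTable *next;                      // call chain to forward into
    std::map<PhysicalDevice, Instance> owner; // per-handle bookkeeping
};

// The layer's hook: call down the chain first, then record the handles
// the lower layers returned so later calls can be cross-checked.
int HookEnumeratePhysicalDevices(LayerData *data, Instance inst,
                                 std::uint32_t *count, PhysicalDevice *devs) {
    int result = data->next->EnumeratePhysicalDevices(inst, count, devs);
    if (result == 0 && devs != nullptr) {
        for (std::uint32_t i = 0; i < *count; i++) {
            data->owner[devs[i]] = inst; // remember which instance owns it
        }
    }
    return result;
}

// Tiny fake "down the chain" implementation used for the demo below.
static int FakeEnumerate(Instance, std::uint32_t *count, PhysicalDevice *devs) {
    *count = 2;
    if (devs) { devs[0] = 10; devs[1] = 20; }
    return 0;
}

// Runs the hook against the fake chain and reports how many handles
// the layer recorded.
std::size_t DemoRecordedCount() {
    DispatchTable table{&FakeEnumerate};
    LayerData data{&table, {}};
    std::uint32_t count = 2;
    PhysicalDevice devs[2];
    HookEnumeratePhysicalDevices(&data, 1, &count, devs);
    return data.owner.size();
}
```

The real layer additionally guards its maps with `globalLock` and stores richer per-device state, but the forward-then-record shape is the same.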
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -1176,8 +979,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice p
return result;
}
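`vkCreateDevice` above bootstraps itself by asking the next layer's `vkGetInstanceProcAddr` for the downstream `vkCreateDevice` and bailing out with `VK_ERROR_INITIALIZATION_FAILED` when the lookup returns `NULL`. A sketch of that name-based lookup-and-fail pattern, with hypothetical stand-ins (`FakeGetInstanceProcAddr` and the integer error codes are illustrative, not the loader's real API):

```cpp
#include <cstring>

// Generic function-pointer type standing in for PFN_vkVoidFunction.
using VoidFn = void (*)();

static void FakeCreateDevice() {}

// Hypothetical stand-in for the loader's GetInstanceProcAddr: resolves
// an entry point by name, returning nullptr when the name is unknown.
VoidFn FakeGetInstanceProcAddr(const char *name) {
    if (std::strcmp(name, "vkCreateDevice") == 0) {
        return reinterpret_cast<VoidFn>(&FakeCreateDevice);
    }
    return nullptr;
}

// A chain that cannot resolve anything, to exercise the failure path.
VoidFn NullGetInstanceProcAddr(const char *) { return nullptr; }

// Mirrors the pattern in the diff: look up the next vkCreateDevice and
// fail with an error code when the chain cannot supply it.
int BootstrapCreateDevice(VoidFn (*getProcAddr)(const char *)) {
    VoidFn fpCreateDevice = getProcAddr("vkCreateDevice");
    if (fpCreateDevice == nullptr) {
        return -3; // stands in for VK_ERROR_INITIALIZATION_FAILED
    }
    return 0; // stands in for VK_SUCCESS
}
```

In the real code the resolved pointer is then called to create the device and the layer's own device dispatch table is initialized from `fpGetDeviceProcAddr`.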
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(device);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
@@ -1193,14 +995,12 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, cons
pDevice->pPhysicalDevice->pDevice = NULL;
}
if (!pDevice->swapchains.empty()) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN,
"%s() called before all of its associated "
"VkSwapchainKHRs were destroyed.",
__FUNCTION__);
// Empty and then delete all SwpSwapchain's
- for (auto it = pDevice->swapchains.begin() ;
- it != pDevice->swapchains.end() ; it++) {
+ for (auto it = pDevice->swapchains.begin(); it != pDevice->swapchains.end(); it++) {
// Delete all SwpImage's
it->second->images.clear();
// In case the swapchain's surface hasn't been destroyed yet
@@ -1220,12 +1020,9 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, cons
loader_platform_thread_unlock_mutex(&globalLock);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupportKHR(
- VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- VkSurfaceKHR surface,
- VkBool32* pSupported)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex, VkSurfaceKHR surface,
+ VkBool32 *pSupported) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -1233,44 +1030,32 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupport
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the surface extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_SURFACE_EXTENSION_NAME);
}
if (!pPhysicalDevice->gotQueueFamilyPropertyCount) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES,
- "%s() called before calling the "
- "vkGetPhysicalDeviceQueueFamilyProperties "
- "function.",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice",
+ SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES, "%s() called before calling the "
+ "vkGetPhysicalDeviceQueueFamilyProperties "
+ "function.",
__FUNCTION__);
- } else if (pPhysicalDevice->gotQueueFamilyPropertyCount &&
- (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
- skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- pPhysicalDevice,
- "VkPhysicalDevice",
- queueFamilyIndex,
- pPhysicalDevice->numOfQueueFamilies);
+ } else if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) {
+ skipCall |=
+ LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies);
}
if (!pSupported) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pSupported");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSupported");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceSupportKHR(
- physicalDevice, queueFamilyIndex, surface,
- pSupported);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceSupportKHR(physicalDevice, queueFamilyIndex, surface,
+ pSupported);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -1278,24 +1063,20 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupport
if ((result == VK_SUCCESS) && pSupported && pPhysicalDevice) {
// Record the result of this query:
SwpInstance *pInstance = pPhysicalDevice->pInstance;
- SwpSurface *pSurface =
- (pInstance) ? pInstance->surfaces[surface] : NULL;
+ SwpSurface *pSurface = (pInstance) ? pInstance->surfaces[surface] : NULL;
if (pSurface) {
pPhysicalDevice->supportedSurfaces[surface] = pSurface;
if (!pSurface->numQueueFamilyIndexSupport) {
if (pPhysicalDevice->gotQueueFamilyPropertyCount) {
- pSurface->pQueueFamilyIndexSupport = (VkBool32 *)
- malloc(pPhysicalDevice->numOfQueueFamilies *
- sizeof(VkBool32));
+ pSurface->pQueueFamilyIndexSupport =
+ (VkBool32 *)malloc(pPhysicalDevice->numOfQueueFamilies * sizeof(VkBool32));
if (pSurface->pQueueFamilyIndexSupport != NULL) {
- pSurface->numQueueFamilyIndexSupport =
- pPhysicalDevice->numOfQueueFamilies;
+ pSurface->numQueueFamilyIndexSupport = pPhysicalDevice->numOfQueueFamilies;
}
}
}
if (pSurface->numQueueFamilyIndexSupport) {
- pSurface->pQueueFamilyIndexSupport[queueFamilyIndex] =
- *pSupported;
+ pSurface->pQueueFamilyIndexSupport[queueFamilyIndex] = *pSupported;
}
}
}
@@ -1306,11 +1087,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupport
return VK_ERROR_VALIDATION_FAILED_EXT;
}
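A detail worth noticing in the function above: the layer unlocks `globalLock` around the down-chain call and then re-fetches `pPhysicalDevice` from the map after relocking, since other threads may have mutated the map in the interim and invalidated the old pointer. A miniature of that discipline, under simplified assumptions (`DeviceState`, `deviceMap`, and the `downChain` callback are illustrative stand-ins):

```cpp
#include <map>
#include <mutex>

struct DeviceState { bool supported = false; };

std::mutex globalLock;
std::map<int, DeviceState> deviceMap;

static bool FakeDownChain(int) { return true; }

// Drop the global lock across the chain call, then look the map entry
// up again after relocking before recording the query result.
bool QuerySupport(int device, bool (*downChain)(int)) {
    std::unique_lock<std::mutex> lock(globalLock);
    lock.unlock();                 // never hold the lock across the chain call
    bool supported = downChain(device);
    lock.lock();
    DeviceState *state = &deviceMap[device]; // obtain this pointer again
    state->supported = supported;            // record the result
    return supported;
}
```

Holding a layer-global mutex across a driver call would serialize all threads on potentially slow ICD work, which is why the unlock/relock dance is worth the re-lookup cost.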
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabilitiesKHR(
- VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- VkSurfaceCapabilitiesKHR* pSurfaceCapabilities)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceSurfaceCapabilitiesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
+ VkSurfaceCapabilitiesKHR *pSurfaceCapabilities) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -1318,26 +1097,21 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabil
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the surface extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_SURFACE_EXTENSION_NAME);
}
if (!pSurfaceCapabilities) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pSurfaceCapabilities");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceCapabilities");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceCapabilitiesKHR(
- physicalDevice, surface, pSurfaceCapabilities);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface,
+ pSurfaceCapabilities);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
@@ -1345,7 +1119,7 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabil
if ((result == VK_SUCCESS) && pPhysicalDevice) {
// Record the result of this query:
pPhysicalDevice->gotSurfaceCapabilities = true;
-// FIXME: NEED TO COPY THIS DATA, BECAUSE pSurfaceCapabilities POINTS TO APP-ALLOCATED DATA
+ // FIXME: NEED TO COPY THIS DATA, BECAUSE pSurfaceCapabilities POINTS TO APP-ALLOCATED DATA
pPhysicalDevice->surfaceCapabilities = *pSurfaceCapabilities;
}
loader_platform_thread_unlock_mutex(&globalLock);
@@ -1355,12 +1129,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabil
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormatsKHR(
- VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- uint32_t* pSurfaceFormatCount,
- VkSurfaceFormatKHR* pSurfaceFormats)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t *pSurfaceFormatCount,
+ VkSurfaceFormatKHR *pSurfaceFormats) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -1368,54 +1139,40 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormats
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the surface extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_SURFACE_EXTENSION_NAME);
}
if (!pSurfaceFormatCount) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pSurfaceFormatCount");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceFormatCount");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceFormatsKHR(
- physicalDevice, surface, pSurfaceFormatCount, pSurfaceFormats);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, surface, pSurfaceFormatCount,
+ pSurfaceFormats);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
- if ((result == VK_SUCCESS) && pPhysicalDevice && !pSurfaceFormats &&
- pSurfaceFormatCount) {
+ if ((result == VK_SUCCESS) && pPhysicalDevice && !pSurfaceFormats && pSurfaceFormatCount) {
// Record the result of this preliminary query:
pPhysicalDevice->surfaceFormatCount = *pSurfaceFormatCount;
- }
- else if ((result == VK_SUCCESS) && pPhysicalDevice && pSurfaceFormats &&
- pSurfaceFormatCount) {
+ } else if ((result == VK_SUCCESS) && pPhysicalDevice && pSurfaceFormats && pSurfaceFormatCount) {
// Compare the preliminary value of *pSurfaceFormatCount with the
// value this time:
if (*pSurfaceFormatCount > pPhysicalDevice->surfaceFormatCount) {
- LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pSurfaceFormatCount",
- "pSurfaceFormats",
- *pSurfaceFormatCount,
- pPhysicalDevice->surfaceFormatCount);
- }
- else if (*pSurfaceFormatCount > 0) {
+ LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceFormatCount",
+ "pSurfaceFormats", *pSurfaceFormatCount, pPhysicalDevice->surfaceFormatCount);
+ } else if (*pSurfaceFormatCount > 0) {
// Record the result of this query:
pPhysicalDevice->surfaceFormatCount = *pSurfaceFormatCount;
- pPhysicalDevice->pSurfaceFormats = (VkSurfaceFormatKHR *)
- malloc(*pSurfaceFormatCount * sizeof(VkSurfaceFormatKHR));
+ pPhysicalDevice->pSurfaceFormats = (VkSurfaceFormatKHR *)malloc(*pSurfaceFormatCount * sizeof(VkSurfaceFormatKHR));
if (pPhysicalDevice->pSurfaceFormats) {
- for (uint32_t i = 0 ; i < *pSurfaceFormatCount ; i++) {
+ for (uint32_t i = 0; i < *pSurfaceFormatCount; i++) {
pPhysicalDevice->pSurfaceFormats[i] = pSurfaceFormats[i];
}
} else {
@@ -1430,12 +1187,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormats
return VK_ERROR_VALIDATION_FAILED_EXT;
}
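The function above validates Vulkan's two-call enumeration idiom: the app first calls with a null array to learn the count, then calls again with storage of that size, and the layer flags a second-call count larger than the first. A sketch of the idiom from the caller's side (the `GetFormats` producer and its values are hypothetical, standing in for `vkGetPhysicalDeviceSurfaceFormatsKHR`):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical producer standing in for the ICD: when `out` is null it
// reports the count; otherwise it writes up to *count entries.
void GetFormats(std::uint32_t *count, int *out) {
    static const int kFormats[] = {44, 50, 97}; // arbitrary demo values
    if (out == nullptr) {
        *count = 3;
        return;
    }
    if (*count > 3) *count = 3;
    for (std::uint32_t i = 0; i < *count; i++) out[i] = kFormats[i];
}

// The two-call idiom the layer checks: query the count first, then
// allocate and fetch the data; never pass a count larger than reported.
std::vector<int> QueryAllFormats() {
    std::uint32_t count = 0;
    GetFormats(&count, nullptr);        // preliminary call: count only
    std::vector<int> formats(count);
    GetFormats(&count, formats.data()); // second call: fill the array
    formats.resize(count);              // the count may legally shrink
    return formats;
}
```

The layer's `LOG_ERROR_INVALID_COUNT` fires precisely when an app skips the preliminary call or inflates the count between the two calls.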
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresentModesKHR(
- VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- uint32_t* pPresentModeCount,
- VkPresentModeKHR* pPresentModes)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t *pPresentModeCount,
+ VkPresentModeKHR *pPresentModes) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
@@ -1443,54 +1197,40 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresent
SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
// Validate that the surface extension was enabled:
- if (pPhysicalDevice && pPhysicalDevice->pInstance &&
- !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT,
- pPhysicalDevice->pInstance,
- "VkInstance",
+ if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkInstance.",
- __FUNCTION__, VK_KHR_SURFACE_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
+ VK_KHR_SURFACE_EXTENSION_NAME);
}
if (!pPresentModeCount) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pPresentModeCount");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pPresentModeCount");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfacePresentModesKHR(
- physicalDevice, surface, pPresentModeCount, pPresentModes);
+ result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface,
+ pPresentModeCount, pPresentModes);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
- if ((result == VK_SUCCESS) && pPhysicalDevice && !pPresentModes &&
- pPresentModeCount) {
+ if ((result == VK_SUCCESS) && pPhysicalDevice && !pPresentModes && pPresentModeCount) {
// Record the result of this preliminary query:
pPhysicalDevice->presentModeCount = *pPresentModeCount;
- }
- else if ((result == VK_SUCCESS) && pPhysicalDevice && pPresentModes &&
- pPresentModeCount) {
+ } else if ((result == VK_SUCCESS) && pPhysicalDevice && pPresentModes && pPresentModeCount) {
// Compare the preliminary value of *pPresentModeCount with the
// value this time:
if (*pPresentModeCount > pPhysicalDevice->presentModeCount) {
- LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
- physicalDevice,
- "pPresentModeCount",
- "pPresentModes",
- *pPresentModeCount,
- pPhysicalDevice->presentModeCount);
- }
- else if (*pPresentModeCount > 0) {
+ LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pPresentModeCount",
+ "pPresentModes", *pPresentModeCount, pPhysicalDevice->presentModeCount);
+ } else if (*pPresentModeCount > 0) {
// Record the result of this query:
pPhysicalDevice->presentModeCount = *pPresentModeCount;
- pPhysicalDevice->pPresentModes = (VkPresentModeKHR *)
- malloc(*pPresentModeCount * sizeof(VkPresentModeKHR));
+ pPhysicalDevice->pPresentModes = (VkPresentModeKHR *)malloc(*pPresentModeCount * sizeof(VkPresentModeKHR));
if (pPhysicalDevice->pPresentModes) {
- for (uint32_t i = 0 ; i < *pPresentModeCount ; i++) {
+ for (uint32_t i = 0; i < *pPresentModeCount; i++) {
pPhysicalDevice->pPresentModes[i] = pPresentModes[i];
}
} else {
@@ -1508,13 +1248,10 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresent
// This function does the up-front validation work for vkCreateSwapchainKHR(),
// and returns VK_TRUE if a logging callback indicates that the call down the
// chain should be skipped:
-static VkBool32 validateCreateSwapchainKHR(
- VkDevice device,
- const VkSwapchainCreateInfoKHR* pCreateInfo,
- VkSwapchainKHR* pSwapchain)
-{
-// TODO: Validate cases of re-creating a swapchain (the current code
-// assumes a new swapchain is being created).
+static VkBool32 validateCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo,
+ VkSwapchainKHR *pSwapchain) {
+ // TODO: Validate cases of re-creating a swapchain (the current code
+ // assumes a new swapchain is being created).
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
char fn[] = "vkCreateSwapchainKHR";
@@ -1522,42 +1259,44 @@ static VkBool32 validateCreateSwapchainKHR(
// Validate that the swapchain extension was enabled:
if (pDevice && !pDevice->swapchainExtensionEnabled) {
- return LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkDevice.",
- fn, VK_KHR_SWAPCHAIN_EXTENSION_NAME );
+ return LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkDevice.", fn,
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME);
}
if (!pCreateInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
} else {
if (pCreateInfo->sType != VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
"VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR");
}
if (pCreateInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pCreateInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
}
}
if (!pSwapchain) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pSwapchain");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchain");
}
// Keep around a useful pointer to pPhysicalDevice:
SwpPhysicalDevice *pPhysicalDevice = pDevice->pPhysicalDevice;
+ // Validate pCreateInfo values with result of
+ // vkGetPhysicalDeviceQueueFamilyProperties
+ if (pPhysicalDevice && pPhysicalDevice->gotQueueFamilyPropertyCount) {
+ for (auto i = 0; i < pCreateInfo->queueFamilyIndexCount; i++) {
+ if (pCreateInfo->pQueueFamilyIndices[i] >= pPhysicalDevice->numOfQueueFamilies) {
+ skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice,
+ "VkPhysicalDevice", pCreateInfo->pQueueFamilyIndices[i],
+ pPhysicalDevice->numOfQueueFamilies);
+ }
+ }
+ }
+
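The block added above is the LX450 fix from the commit message: every entry of `pCreateInfo->pQueueFamilyIndices` must be below the queue-family count previously returned by `vkGetPhysicalDeviceQueueFamilyProperties`, or the layer logs an error instead of letting the bad index crash downstream. The core check, reduced to a sketch (function and parameter names are simplified stand-ins for the layer's types):

```cpp
#include <cstdint>
#include <vector>

// Counts how many app-supplied queue family indices fall outside the
// range [0, numOfQueueFamilies); the layer would log one error each.
std::uint32_t CountOutOfRangeIndices(const std::vector<std::uint32_t> &indices,
                                     std::uint32_t numOfQueueFamilies) {
    std::uint32_t bad = 0;
    for (std::uint32_t idx : indices) {
        if (idx >= numOfQueueFamilies) {
            bad++;
        }
    }
    return bad;
}
```

Guarding the whole loop on `gotQueueFamilyPropertyCount`, as the diff does, matters too: without a prior properties query there is no trustworthy upper bound to validate against.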
// Validate pCreateInfo values with the results of
// vkGetPhysicalDeviceSurfaceCapabilitiesKHR():
if (!pPhysicalDevice || !pPhysicalDevice->gotSurfaceCapabilities) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
"%s() called before calling "
"vkGetPhysicalDeviceSurfaceCapabilitiesKHR().",
fn);
@@ -1565,12 +1304,9 @@ static VkBool32 validateCreateSwapchainKHR(
// Validate pCreateInfo->surface to make sure that
// vkGetPhysicalDeviceSurfaceSupportKHR() reported this as a supported
// surface:
- SwpSurface *pSurface =
- ((pPhysicalDevice) ?
- pPhysicalDevice->supportedSurfaces[pCreateInfo->surface] : NULL);
+ SwpSurface *pSurface = ((pPhysicalDevice) ? pPhysicalDevice->supportedSurfaces[pCreateInfo->surface] : NULL);
if (!pSurface) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE,
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE,
"%s() called with pCreateInfo->surface that "
"was not returned by "
"vkGetPhysicalDeviceSurfaceSupportKHR() "
@@ -1582,18 +1318,13 @@ static VkBool32 validateCreateSwapchainKHR(
// VkSurfaceCapabilitiesKHR::{min|max}ImageCount:
VkSurfaceCapabilitiesKHR *pCapabilities = &pPhysicalDevice->surfaceCapabilities;
if ((pCreateInfo->minImageCount < pCapabilities->minImageCount) ||
- ((pCapabilities->maxImageCount > 0) &&
- (pCreateInfo->minImageCount > pCapabilities->maxImageCount))) {
+ ((pCapabilities->maxImageCount > 0) && (pCreateInfo->minImageCount > pCapabilities->maxImageCount))) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT,
- "%s() called with pCreateInfo->minImageCount "
- "= %d, which is outside the bounds returned "
- "by vkGetPhysicalDeviceSurfaceCapabilitiesKHR() (i.e. "
- "minImageCount = %d, maxImageCount = %d).",
- fn,
- pCreateInfo->minImageCount,
- pCapabilities->minImageCount,
- pCapabilities->maxImageCount);
+ SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT, "%s() called with pCreateInfo->minImageCount "
+ "= %d, which is outside the bounds returned "
+ "by vkGetPhysicalDeviceSurfaceCapabilitiesKHR() (i.e. "
+ "minImageCount = %d, maxImageCount = %d).",
+ fn, pCreateInfo->minImageCount, pCapabilities->minImageCount, pCapabilities->maxImageCount);
}
// Validate pCreateInfo->imageExtent against
// VkSurfaceCapabilitiesKHR::{current|min|max}ImageExtent:
@@ -1602,43 +1333,32 @@ static VkBool32 validateCreateSwapchainKHR(
(pCreateInfo->imageExtent.width > pCapabilities->maxImageExtent.width) ||
(pCreateInfo->imageExtent.height < pCapabilities->minImageExtent.height) ||
(pCreateInfo->imageExtent.height > pCapabilities->maxImageExtent.height))) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS,
- "%s() called with pCreateInfo->imageExtent = "
- "(%d,%d), which is outside the bounds "
- "returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): "
- "currentExtent = (%d,%d), minImageExtent = "
- "(%d,%d), maxImageExtent = (%d,%d).",
- fn,
- pCreateInfo->imageExtent.width,
- pCreateInfo->imageExtent.height,
- pCapabilities->currentExtent.width,
- pCapabilities->currentExtent.height,
- pCapabilities->minImageExtent.width,
- pCapabilities->minImageExtent.height,
- pCapabilities->maxImageExtent.width,
- pCapabilities->maxImageExtent.height);
+ skipCall |= LOG_ERROR(
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS,
+ "%s() called with pCreateInfo->imageExtent = "
+ "(%d,%d), which is outside the bounds "
+ "returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): "
+ "currentExtent = (%d,%d), minImageExtent = "
+ "(%d,%d), maxImageExtent = (%d,%d).",
+ fn, pCreateInfo->imageExtent.width, pCreateInfo->imageExtent.height, pCapabilities->currentExtent.width,
+ pCapabilities->currentExtent.height, pCapabilities->minImageExtent.width, pCapabilities->minImageExtent.height,
+ pCapabilities->maxImageExtent.width, pCapabilities->maxImageExtent.height);
}
if ((pCapabilities->currentExtent.width != -1) &&
((pCreateInfo->imageExtent.width != pCapabilities->currentExtent.width) ||
(pCreateInfo->imageExtent.height != pCapabilities->currentExtent.height))) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_EXTENTS_NO_MATCH_WIN,
- "%s() called with pCreateInfo->imageExtent = "
- "(%d,%d), which is not equal to the "
- "currentExtent = (%d,%d) returned by "
- "vkGetPhysicalDeviceSurfaceCapabilitiesKHR().",
- fn,
- pCreateInfo->imageExtent.width,
- pCreateInfo->imageExtent.height,
- pCapabilities->currentExtent.width,
- pCapabilities->currentExtent.height);
+ SWAPCHAIN_CREATE_SWAP_EXTENTS_NO_MATCH_WIN, "%s() called with pCreateInfo->imageExtent = "
+ "(%d,%d), which is not equal to the "
+ "currentExtent = (%d,%d) returned by "
+ "vkGetPhysicalDeviceSurfaceCapabilitiesKHR().",
+ fn, pCreateInfo->imageExtent.width, pCreateInfo->imageExtent.height,
+ pCapabilities->currentExtent.width, pCapabilities->currentExtent.height);
}
// Validate pCreateInfo->preTransform has one bit set (1st two
// lines of if-statement), which bit is also set in
// VkSurfaceCapabilitiesKHR::supportedTransforms (3rd line of if-statement):
- if (!pCreateInfo->preTransform ||
- (pCreateInfo->preTransform & (pCreateInfo->preTransform - 1)) ||
+ if (!pCreateInfo->preTransform || (pCreateInfo->preTransform & (pCreateInfo->preTransform - 1)) ||
!(pCreateInfo->preTransform & pCapabilities->supportedTransforms)) {
// This is an error situation; one for which we'd like to give
// the developer a helpful, multi-line error message. Build it
@@ -1647,34 +1367,27 @@ static VkBool32 validateCreateSwapchainKHR(
char str[1024];
// Here's the first part of the message:
sprintf(str, "%s() called with a non-supported "
- "pCreateInfo->preTransform (i.e. %s). "
- "Supported values are:\n",
- fn,
- surfaceTransformStr(pCreateInfo->preTransform));
+ "pCreateInfo->preTransform (i.e. %s). "
+ "Supported values are:\n",
+ fn, surfaceTransformStr(pCreateInfo->preTransform));
errorString += str;
for (int i = 0; i < 32; i++) {
// Build up the rest of the message:
if ((1 << i) & pCapabilities->supportedTransforms) {
- const char *newStr =
- surfaceTransformStr((VkSurfaceTransformFlagBitsKHR) (1 << i));
+ const char *newStr = surfaceTransformStr((VkSurfaceTransformFlagBitsKHR)(1 << i));
sprintf(str, " %s\n", newStr);
errorString += str;
}
}
// Log the message that we've built up:
- skipCall |= debug_report_log_msg(my_data->report_data,
- VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- (uint64_t) device, __LINE__,
- SWAPCHAIN_CREATE_SWAP_BAD_PRE_TRANSFORM,
- LAYER_NAME,
- errorString.c_str());
+ skipCall |= debug_report_log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__,
+ SWAPCHAIN_CREATE_SWAP_BAD_PRE_TRANSFORM, LAYER_NAME, errorString.c_str());
}
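The `preTransform` test above (and the `compositeAlpha` test that follows it) relies on the classic bit trick: `x & (x - 1)` clears the lowest set bit, so it is zero exactly when `x` has at most one bit set. Combined with a non-zero check and a mask test, this verifies that the app chose exactly one flag bit and that the bit is among the supported ones. A self-contained sketch:

```cpp
#include <cstdint>

// True iff `flags` has exactly one bit set and that bit also appears
// in `supportedMask` -- the shape of the layer's three-clause check.
bool IsSingleSupportedBit(std::uint32_t flags, std::uint32_t supportedMask) {
    return flags != 0 &&                 // at least one bit set
           (flags & (flags - 1)) == 0 && // no more than one bit set
           (flags & supportedMask) != 0; // and that bit is supported
}
```

When the check fails, the layer goes on to build the multi-line "Supported values are:" message by walking each bit of the supported mask, as the diff shows.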
// Validate pCreateInfo->compositeAlpha has one bit set (1st two
// lines of if-statement), which bit is also set in
// VkSurfaceCapabilitiesKHR::supportedCompositeAlpha (3rd line of if-statement):
- if (!pCreateInfo->compositeAlpha ||
- (pCreateInfo->compositeAlpha & (pCreateInfo->compositeAlpha - 1)) ||
+ if (!pCreateInfo->compositeAlpha || (pCreateInfo->compositeAlpha & (pCreateInfo->compositeAlpha - 1)) ||
!((pCreateInfo->compositeAlpha) & pCapabilities->supportedCompositeAlpha)) {
// This is an error situation; one for which we'd like to give
// the developer a helpful, multi-line error message. Build it
@@ -1683,62 +1396,47 @@ static VkBool32 validateCreateSwapchainKHR(
char str[1024];
// Here's the first part of the message:
sprintf(str, "%s() called with a non-supported "
- "pCreateInfo->compositeAlpha (i.e. %s). "
- "Supported values are:\n",
- fn,
- surfaceCompositeAlphaStr(pCreateInfo->compositeAlpha));
+ "pCreateInfo->compositeAlpha (i.e. %s). "
+ "Supported values are:\n",
+ fn, surfaceCompositeAlphaStr(pCreateInfo->compositeAlpha));
errorString += str;
for (int i = 0; i < 32; i++) {
// Build up the rest of the message:
if ((1 << i) & pCapabilities->supportedCompositeAlpha) {
- const char *newStr =
- surfaceCompositeAlphaStr((VkCompositeAlphaFlagBitsKHR) (1 << i));
+ const char *newStr = surfaceCompositeAlphaStr((VkCompositeAlphaFlagBitsKHR)(1 << i));
sprintf(str, " %s\n", newStr);
errorString += str;
}
}
// Log the message that we've built up:
- skipCall |= debug_report_log_msg(my_data->report_data,
- VK_DEBUG_REPORT_ERROR_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- (uint64_t) device, 0,
- SWAPCHAIN_CREATE_SWAP_BAD_COMPOSITE_ALPHA,
- LAYER_NAME,
- errorString.c_str());
+ skipCall |= debug_report_log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, 0,
+ SWAPCHAIN_CREATE_SWAP_BAD_COMPOSITE_ALPHA, LAYER_NAME, errorString.c_str());
}
// Validate pCreateInfo->imageArraySize against
// VkSurfaceCapabilitiesKHR::maxImageArraySize:
- if ((pCreateInfo->imageArrayLayers < 1) ||
- (pCreateInfo->imageArrayLayers > pCapabilities->maxImageArrayLayers)) {
+ if ((pCreateInfo->imageArrayLayers < 1) || (pCreateInfo->imageArrayLayers > pCapabilities->maxImageArrayLayers)) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_IMG_ARRAY_SIZE,
- "%s() called with a non-supported "
- "pCreateInfo->imageArraySize (i.e. %d). "
- "Minimum value is 1, maximum value is %d.",
- fn,
- pCreateInfo->imageArrayLayers,
- pCapabilities->maxImageArrayLayers);
+ SWAPCHAIN_CREATE_SWAP_BAD_IMG_ARRAY_SIZE, "%s() called with a non-supported "
+                                                                        "pCreateInfo->imageArrayLayers (i.e. %d). "
+ "Minimum value is 1, maximum value is %d.",
+ fn, pCreateInfo->imageArrayLayers, pCapabilities->maxImageArrayLayers);
}
// Validate pCreateInfo->imageUsage against
// VkSurfaceCapabilitiesKHR::supportedUsageFlags:
- if (pCreateInfo->imageUsage !=
- (pCreateInfo->imageUsage & pCapabilities->supportedUsageFlags)) {
+ if (pCreateInfo->imageUsage != (pCreateInfo->imageUsage & pCapabilities->supportedUsageFlags)) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_IMG_USAGE_FLAGS,
- "%s() called with a non-supported "
- "pCreateInfo->imageUsage (i.e. 0x%08x)."
- " Supported flag bits are 0x%08x.",
- fn,
- pCreateInfo->imageUsage,
- pCapabilities->supportedUsageFlags);
+ SWAPCHAIN_CREATE_SWAP_BAD_IMG_USAGE_FLAGS, "%s() called with a non-supported "
+ "pCreateInfo->imageUsage (i.e. 0x%08x)."
+ " Supported flag bits are 0x%08x.",
+ fn, pCreateInfo->imageUsage, pCapabilities->supportedUsageFlags);
}
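The `imageUsage` hunk above uses the idiom `requested != (requested & supported)`, which is true exactly when some requested flag bit is missing from the supported mask; `(requested & ~supported) != 0` is the equivalent form. A small sketch of the check:

```cpp
#include <cstdint>

// True when at least one bit in 'requested' is absent from 'supported';
// this is the same test the layer applies to pCreateInfo->imageUsage
// against VkSurfaceCapabilitiesKHR::supportedUsageFlags.
static bool hasUnsupportedBits(uint32_t requested, uint32_t supported) {
    return requested != (requested & supported);
}
```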
}
// Validate pCreateInfo values with the results of
// vkGetPhysicalDeviceSurfaceFormatsKHR():
if (!pPhysicalDevice || !pPhysicalDevice->surfaceFormatCount) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
"%s() called before calling "
"vkGetPhysicalDeviceSurfaceFormatsKHR().",
fn);
@@ -1748,7 +1446,7 @@ static VkBool32 validateCreateSwapchainKHR(
bool foundFormat = false;
bool foundColorSpace = false;
bool foundMatch = false;
- for (uint32_t i = 0 ; i < pPhysicalDevice->surfaceFormatCount ; i++) {
+ for (uint32_t i = 0; i < pPhysicalDevice->surfaceFormatCount; i++) {
if (pCreateInfo->imageFormat == pPhysicalDevice->pSurfaceFormats[i].format) {
// Validate pCreateInfo->imageColorSpace against
// VkSurfaceFormatKHR::colorSpace:
@@ -1766,30 +1464,23 @@ static VkBool32 validateCreateSwapchainKHR(
if (!foundMatch) {
if (!foundFormat) {
if (!foundColorSpace) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device,
- "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_IMG_FMT_CLR_SP,
- "%s() called with neither a "
- "supported pCreateInfo->imageFormat "
- "(i.e. %d) nor a supported "
- "pCreateInfo->imageColorSpace "
- "(i.e. %d).",
- fn,
- pCreateInfo->imageFormat,
- pCreateInfo->imageColorSpace);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
+ SWAPCHAIN_CREATE_SWAP_BAD_IMG_FMT_CLR_SP, "%s() called with neither a "
+ "supported pCreateInfo->imageFormat "
+ "(i.e. %d) nor a supported "
+ "pCreateInfo->imageColorSpace "
+ "(i.e. %d).",
+ fn, pCreateInfo->imageFormat, pCreateInfo->imageColorSpace);
} else {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device,
- "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_IMG_FORMAT,
- "%s() called with a non-supported "
- "pCreateInfo->imageFormat (i.e. %d).",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
+ SWAPCHAIN_CREATE_SWAP_BAD_IMG_FORMAT, "%s() called with a non-supported "
+ "pCreateInfo->imageFormat (i.e. %d).",
fn, pCreateInfo->imageFormat);
}
} else if (!foundColorSpace) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_IMG_COLOR_SPACE,
- "%s() called with a non-supported "
- "pCreateInfo->imageColorSpace (i.e. %d).",
+ SWAPCHAIN_CREATE_SWAP_BAD_IMG_COLOR_SPACE, "%s() called with a non-supported "
+ "pCreateInfo->imageColorSpace (i.e. %d).",
fn, pCreateInfo->imageColorSpace);
}
}
@@ -1798,16 +1489,17 @@ static VkBool32 validateCreateSwapchainKHR(
// Validate pCreateInfo values with the results of
// vkGetPhysicalDeviceSurfacePresentModesKHR():
if (!pPhysicalDevice || !pPhysicalDevice->presentModeCount) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
- "%s() called before calling "
- "vkGetPhysicalDeviceSurfacePresentModesKHR().",
- fn);
+ if (!pCreateInfo || (pCreateInfo->presentMode != VK_PRESENT_MODE_FIFO_KHR)) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
+ "%s() called before calling "
+ "vkGetPhysicalDeviceSurfacePresentModesKHR().",
+ fn);
+ }
} else if (pCreateInfo) {
// Validate pCreateInfo->presentMode against
// vkGetPhysicalDeviceSurfacePresentModesKHR():
bool foundMatch = false;
- for (uint32_t i = 0 ; i < pPhysicalDevice->presentModeCount ; i++) {
+ for (uint32_t i = 0; i < pPhysicalDevice->presentModeCount; i++) {
if (pPhysicalDevice->pPresentModes[i] == pCreateInfo->presentMode) {
foundMatch = true;
break;
@@ -1815,49 +1507,37 @@ static VkBool32 validateCreateSwapchainKHR(
}
if (!foundMatch) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_PRESENT_MODE,
- "%s() called with a non-supported "
- "pCreateInfo->presentMode (i.e. %s).",
- fn,
- presentModeStr(pCreateInfo->presentMode));
+ SWAPCHAIN_CREATE_SWAP_BAD_PRESENT_MODE, "%s() called with a non-supported "
+ "pCreateInfo->presentMode (i.e. %s).",
+ fn, presentModeStr(pCreateInfo->presentMode));
}
}
// Validate pCreateInfo->imageSharingMode and related values:
if (pCreateInfo->imageSharingMode == VK_SHARING_MODE_CONCURRENT) {
- if ((pCreateInfo->queueFamilyIndexCount <= 1) ||
- !pCreateInfo->pQueueFamilyIndices) {
+ if ((pCreateInfo->queueFamilyIndexCount <= 1) || !pCreateInfo->pQueueFamilyIndices) {
skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES,
- "%s() called with a supported "
- "pCreateInfo->sharingMode of (i.e. %s),"
- "but with a bad value(s) for "
- "pCreateInfo->queueFamilyIndexCount or "
- "pCreateInfo->pQueueFamilyIndices).",
- fn,
- sharingModeStr(pCreateInfo->imageSharingMode));
+ SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES, "%s() called with a supported "
+                                                                            "pCreateInfo->imageSharingMode (i.e. %s), "
+                                                                            "but with bad values for "
+                                                                            "pCreateInfo->queueFamilyIndexCount or "
+                                                                            "pCreateInfo->pQueueFamilyIndices.",
+ fn, sharingModeStr(pCreateInfo->imageSharingMode));
}
} else if (pCreateInfo->imageSharingMode != VK_SHARING_MODE_EXCLUSIVE) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_BAD_SHARING_MODE,
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_BAD_SHARING_MODE,
"%s() called with a non-supported "
"pCreateInfo->imageSharingMode (i.e. %s).",
- fn,
- sharingModeStr(pCreateInfo->imageSharingMode));
+ fn, sharingModeStr(pCreateInfo->imageSharingMode));
}
// Validate pCreateInfo->clipped:
- if (pCreateInfo &&
- (pCreateInfo->clipped != VK_FALSE) &&
- (pCreateInfo->clipped != VK_TRUE)) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device, "VkDevice",
- SWAPCHAIN_BAD_BOOL,
+ if (pCreateInfo && (pCreateInfo->clipped != VK_FALSE) && (pCreateInfo->clipped != VK_TRUE)) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_BAD_BOOL,
"%s() called with a VkBool32 value that is "
"neither VK_TRUE nor VK_FALSE, but has the "
"numeric value of %d.",
- fn,
- pCreateInfo->clipped);
+ fn, pCreateInfo->clipped);
}
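The `clipped` check above exists because `VkBool32` is a plain 32-bit integer in the Vulkan API: any value other than 0 or 1 compiles fine but is invalid usage. A self-contained sketch of the test, with local stand-ins for the Vulkan typedef and constants:

```cpp
#include <cstdint>

// Local stand-ins for the Vulkan definitions (not the real vulkan.h).
typedef uint32_t VkBool32;
const VkBool32 VK_FALSE = 0;
const VkBool32 VK_TRUE = 1;

// Flags VkBool32 values that are neither VK_TRUE nor VK_FALSE, exactly as
// the SWAPCHAIN_BAD_BOOL check does for pCreateInfo->clipped.
static bool isInvalidBool32(VkBool32 v) {
    return (v != VK_FALSE) && (v != VK_TRUE);
}
```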
// Validate pCreateInfo->oldSwapchain:
@@ -1865,50 +1545,40 @@ static VkBool32 validateCreateSwapchainKHR(
SwpSwapchain *pOldSwapchain = &my_data->swapchainMap[pCreateInfo->oldSwapchain];
if (pOldSwapchain) {
if (device != pOldSwapchain->pDevice->device) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device, "VkDevice",
- SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE,
- "%s() called with a different VkDevice "
- "than the VkSwapchainKHR was created with.",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
+ SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE, "%s() called with a different VkDevice "
+ "than the VkSwapchainKHR was created with.",
__FUNCTION__);
}
if (pCreateInfo->surface != pOldSwapchain->pSurface->surface) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device, "VkDevice",
- SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE,
- "%s() called with pCreateInfo->oldSwapchain "
- "that has a different VkSurfaceKHR than "
- "pCreateInfo->surface.",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
+ SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE, "%s() called with pCreateInfo->oldSwapchain "
+ "that has a different VkSurfaceKHR than "
+ "pCreateInfo->surface.",
fn);
}
} else {
// TBD: Leave this in (not sure object_track will check this)?
- skipCall |= LOG_ERROR_NON_VALID_OBJ(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
- pCreateInfo->oldSwapchain,
- "VkSwapchainKHR");
+ skipCall |=
+ LOG_ERROR_NON_VALID_OBJ(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pCreateInfo->oldSwapchain, "VkSwapchainKHR");
}
}
return skipCall;
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
- VkDevice device,
- const VkSwapchainCreateInfoKHR* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkSwapchainKHR* pSwapchain)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSwapchainKHR *pSwapchain) {
VkResult result = VK_SUCCESS;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
loader_platform_thread_lock_mutex(&globalLock);
- VkBool32 skipCall = validateCreateSwapchainKHR(device, pCreateInfo,
- pSwapchain);
+ VkBool32 skipCall = validateCreateSwapchainKHR(device, pCreateInfo, pSwapchain);
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->device_dispatch_table->CreateSwapchainKHR(
- device, pCreateInfo, pAllocator, pSwapchain);
+ result = my_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
loader_platform_thread_lock_mutex(&globalLock);
if (result == VK_SUCCESS) {
@@ -1917,28 +1587,20 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
my_data->swapchainMap[*pSwapchain].swapchain = *pSwapchain;
if (pDevice) {
- pDevice->swapchains[*pSwapchain] =
- &my_data->swapchainMap[*pSwapchain];
+ pDevice->swapchains[*pSwapchain] = &my_data->swapchainMap[*pSwapchain];
}
my_data->swapchainMap[*pSwapchain].pDevice = pDevice;
my_data->swapchainMap[*pSwapchain].imageCount = 0;
- my_data->swapchainMap[*pSwapchain].usedAllocatorToCreate =
- (pAllocator != NULL);
+ my_data->swapchainMap[*pSwapchain].usedAllocatorToCreate = (pAllocator != NULL);
// Store a pointer to the surface
SwpPhysicalDevice *pPhysicalDevice = pDevice->pPhysicalDevice;
- SwpInstance *pInstance =
- (pPhysicalDevice) ? pPhysicalDevice->pInstance : NULL;
+ SwpInstance *pInstance = (pPhysicalDevice) ? pPhysicalDevice->pInstance : NULL;
layer_data *my_instance_data =
- ((pInstance) ?
- get_my_data_ptr(get_dispatch_key(pInstance->instance), layer_data_map) :
- NULL);
- SwpSurface *pSurface =
- ((my_data && pCreateInfo) ?
- &my_instance_data->surfaceMap[pCreateInfo->surface] : NULL);
+ ((pInstance) ? get_my_data_ptr(get_dispatch_key(pInstance->instance), layer_data_map) : NULL);
+ SwpSurface *pSurface = ((my_data && pCreateInfo) ? &my_instance_data->surfaceMap[pCreateInfo->surface] : NULL);
my_data->swapchainMap[*pSwapchain].pSurface = pSurface;
if (pSurface) {
- pSurface->swapchains[*pSwapchain] =
- &my_data->swapchainMap[*pSwapchain];
+ pSurface->swapchains[*pSwapchain] = &my_data->swapchainMap[*pSwapchain];
}
}
loader_platform_thread_unlock_mutex(&globalLock);
@@ -1948,16 +1610,13 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- const VkAllocationCallbacks* pAllocator)
-{
-// TODOs:
-//
-// - Implement a check for validity language that reads: All uses of
-// presentable images acquired from pname:swapchain and owned by the
-// application must: have completed execution
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks *pAllocator) {
+ // TODOs:
+ //
+ // - Implement a check for validity language that reads: All uses of
+ // presentable images acquired from pname:swapchain and owned by the
+ // application must: have completed execution
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
loader_platform_thread_lock_mutex(&globalLock);
@@ -1965,10 +1624,9 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
// Validate that the swapchain extension was enabled:
if (pDevice && !pDevice->swapchainExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkDevice.",
- __FUNCTION__, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME);
}
// Regardless of skipCall value, do some internal cleanup:
@@ -1978,8 +1636,7 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
if (pSwapchain->pDevice) {
pSwapchain->pDevice->swapchains.erase(swapchain);
if (device != pSwapchain->pDevice->device) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE,
"%s() called with a different VkDevice than the "
"VkSwapchainKHR was created with.",
__FUNCTION__);
@@ -1992,8 +1649,7 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
pSwapchain->images.clear();
}
if ((pAllocator != NULL) != pSwapchain->usedAllocatorToCreate) {
- LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance",
- SWAPCHAIN_INCOMPATIBLE_ALLOCATOR,
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_INCOMPATIBLE_ALLOCATOR,
                      "%s() called with a pAllocator incompatible with "
                      "the one used when the object was created.",
__FUNCTION__);
@@ -2008,12 +1664,8 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
}
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint32_t* pSwapchainImageCount,
- VkImage* pSwapchainImages)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pSwapchainImageCount, VkImage *pSwapchainImages) {
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
@@ -2022,48 +1674,36 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
// Validate that the swapchain extension was enabled:
if (pDevice && !pDevice->swapchainExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkDevice.",
- __FUNCTION__, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME);
}
SwpSwapchain *pSwapchain = &my_data->swapchainMap[swapchain];
if (!pSwapchainImageCount) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pSwapchainImageCount");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchainImageCount");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->device_dispatch_table->GetSwapchainImagesKHR(
- device, swapchain, pSwapchainImageCount, pSwapchainImages);
+ result = my_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
pSwapchain = &my_data->swapchainMap[swapchain];
- if ((result == VK_SUCCESS) && pSwapchain && !pSwapchainImages &&
- pSwapchainImageCount) {
+ if ((result == VK_SUCCESS) && pSwapchain && !pSwapchainImages && pSwapchainImageCount) {
// Record the result of this preliminary query:
pSwapchain->imageCount = *pSwapchainImageCount;
- }
- else if ((result == VK_SUCCESS) && pSwapchain && pSwapchainImages &&
- pSwapchainImageCount) {
+ } else if ((result == VK_SUCCESS) && pSwapchain && pSwapchainImages && pSwapchainImageCount) {
// Compare the preliminary value of *pSwapchainImageCount with the
// value this time:
if (*pSwapchainImageCount > pSwapchain->imageCount) {
- LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pSwapchainImageCount",
- "pSwapchainImages",
- *pSwapchainImageCount,
- pSwapchain->imageCount);
- }
- else if (*pSwapchainImageCount > 0) {
+ LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchainImageCount", "pSwapchainImages",
+ *pSwapchainImageCount, pSwapchain->imageCount);
+ } else if (*pSwapchainImageCount > 0) {
// Record the images and their state:
pSwapchain->imageCount = *pSwapchainImageCount;
- for (uint32_t i = 0 ; i < *pSwapchainImageCount ; i++) {
+ for (uint32_t i = 0; i < *pSwapchainImageCount; i++) {
pSwapchain->images[i].image = pSwapchainImages[i];
pSwapchain->images[i].pSwapchain = pSwapchain;
pSwapchain->images[i].ownedByApp = false;
@@ -2077,26 +1717,20 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
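`vkGetSwapchainImagesKHR` follows the Vulkan two-call idiom that this layer tracks: a preliminary call with a null array records the image count, and a later call with storage is checked against it. A toy model of that idiom (`getImagesDemo` and its fixed handles are illustrative, not the real dispatch entry point):

```cpp
#include <cstdint>

// Toy two-call query: pass a null array to learn the count, then call again
// with storage. Clamps to the caller's count so it never writes past the
// provided array, mirroring the count comparison the layer validates.
static void getImagesDemo(uint32_t *pCount, uint64_t *pImages) {
    static const uint64_t kHandles[3] = {10, 11, 12};
    if (pImages == nullptr) { // preliminary query: report the count only
        *pCount = 3;
        return;
    }
    uint32_t n = (*pCount < 3) ? *pCount : 3;
    for (uint32_t i = 0; i < n; i++) pImages[i] = kHandles[i];
    *pCount = n;
}
```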
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
- VkDevice device,
- VkSwapchainKHR swapchain,
- uint64_t timeout,
- VkSemaphore semaphore,
- VkFence fence,
- uint32_t* pImageIndex)
-{
-// TODOs:
-//
-// - Address the timeout. Possibilities include looking at the state of the
-// swapchain's images, depending on the timeout value.
-// - Implement a check for validity language that reads: If pname:semaphore is
-// not sname:VK_NULL_HANDLE it must: be unsignalled
-// - Implement a check for validity language that reads: If pname:fence is not
-// sname:VK_NULL_HANDLE it must: be unsignalled and mustnot: be associated
-// with any other queue command that has not yet completed execution on that
-// queue
-// - Record/update the state of the swapchain, in case an error occurs
-// (e.g. VK_ERROR_OUT_OF_DATE_KHR).
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout,
+ VkSemaphore semaphore, VkFence fence, uint32_t *pImageIndex) {
+ // TODOs:
+ //
+ // - Address the timeout. Possibilities include looking at the state of the
+ // swapchain's images, depending on the timeout value.
+ // - Implement a check for validity language that reads: If pname:semaphore is
+ // not sname:VK_NULL_HANDLE it must: be unsignalled
+ // - Implement a check for validity language that reads: If pname:fence is not
+ // sname:VK_NULL_HANDLE it must: be unsignalled and mustnot: be associated
+ // with any other queue command that has not yet completed execution on that
+ // queue
+ // - Record/update the state of the swapchain, in case an error occurs
+ // (e.g. VK_ERROR_OUT_OF_DATE_KHR).
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
@@ -2105,56 +1739,53 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
// Validate that the swapchain extension was enabled:
if (pDevice && !pDevice->swapchainExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkDevice.",
- __FUNCTION__, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
+ "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+ }
+ if ((semaphore == VK_NULL_HANDLE) && (fence == VK_NULL_HANDLE)) {
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_NO_SYNC_FOR_ACQUIRE,
+ "%s() called with both the semaphore and fence parameters set to "
+ "VK_NULL_HANDLE (at least one should be used).", __FUNCTION__);
}
SwpSwapchain *pSwapchain = &my_data->swapchainMap[swapchain];
if (pSwapchain) {
// Look to see if the application is trying to own too many images at
// the same time (i.e. not leave any to display):
uint32_t imagesOwnedByApp = 0;
- for (uint32_t i = 0 ; i < pSwapchain->imageCount ; i++) {
+ for (uint32_t i = 0; i < pSwapchain->imageCount; i++) {
if (pSwapchain->images[i].ownedByApp) {
imagesOwnedByApp++;
}
}
if (imagesOwnedByApp >= (pSwapchain->imageCount - 1)) {
- skipCall |= LOG_PERF_WARNING(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
- swapchain,
- "VkSwapchainKHR",
- SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES,
- "%s() called when the application "
- "already owns all presentable images "
- "in this swapchain except for the "
- "image currently being displayed. "
- "This call to %s() cannot succeed "
- "unless another thread calls the "
- "vkQueuePresentKHR() function in "
- "order to release ownership of one of "
- "the presentable images of this "
- "swapchain.",
+ skipCall |= LOG_PERF_WARNING(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, swapchain, "VkSwapchainKHR",
+ SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES, "%s() called when the application "
+ "already owns all presentable images "
+ "in this swapchain except for the "
+ "image currently being displayed. "
+ "This call to %s() cannot succeed "
+ "unless another thread calls the "
+ "vkQueuePresentKHR() function in "
+ "order to release ownership of one of "
+ "the presentable images of this "
+ "swapchain.",
__FUNCTION__, __FUNCTION__);
}
}
if (!pImageIndex) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pImageIndex");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pImageIndex");
}
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->device_dispatch_table->AcquireNextImageKHR(
- device, swapchain, timeout, semaphore, fence, pImageIndex);
+ result = my_data->device_dispatch_table->AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex);
loader_platform_thread_lock_mutex(&globalLock);
// Obtain this pointer again after locking:
pSwapchain = &my_data->swapchainMap[swapchain];
- if (((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR)) &&
- pSwapchain) {
+ if (((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR)) && pSwapchain) {
// Change the state of the image (now owned by the application):
pSwapchain->images[*pImageIndex].ownedByApp = true;
}
@@ -2165,90 +1796,64 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
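The perf warning in this hunk fires on a starvation condition: if the application already owns all presentable images except the one on screen, another acquire cannot succeed until a present releases one. A sketch of that heuristic (the `ownedByApp` field mirrors the layer's per-image bookkeeping; the struct shape is assumed for illustration):

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for the layer's per-image state.
struct DemoImage { bool ownedByApp; };

// True when the app owns imageCount - 1 or more images, i.e. when the next
// acquire would block until another thread presents one back.
static bool acquireWouldStarve(const std::vector<DemoImage> &images) {
    std::size_t owned = 0;
    for (const DemoImage &img : images)
        if (img.ownedByApp) owned++;
    return !images.empty() && owned >= images.size() - 1;
}
```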
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
- VkQueue queue,
- const VkPresentInfoKHR* pPresentInfo)
-{
-// TODOs:
-//
-// - Implement a check for validity language that reads: Any given element of
-// sname:VkSemaphore in pname:pWaitSemaphores must: refer to a prior signal
-// of that sname:VkSemaphore that won't be consumed by any other wait on that
-// semaphore
-// - Record/update the state of the swapchain, in case an error occurs
-// (e.g. VK_ERROR_OUT_OF_DATE_KHR).
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR *pPresentInfo) {
+ // TODOs:
+ //
+ // - Implement a check for validity language that reads: Any given element of
+ // sname:VkSemaphore in pname:pWaitSemaphores must: refer to a prior signal
+ // of that sname:VkSemaphore that won't be consumed by any other wait on that
+ // semaphore
+ // - Record/update the state of the swapchain, in case an error occurs
+ // (e.g. VK_ERROR_OUT_OF_DATE_KHR).
VkResult result = VK_SUCCESS;
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
if (!pPresentInfo) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo");
} else {
if (pPresentInfo->sType != VK_STRUCTURE_TYPE_PRESENT_INFO_KHR) {
- skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo",
+ skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo",
"VK_STRUCTURE_TYPE_PRESENT_INFO_KHR");
}
if (pPresentInfo->pNext != NULL) {
- skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo");
+ skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo");
}
if (!pPresentInfo->swapchainCount) {
- skipCall |= LOG_ERROR_ZERO_VALUE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo->swapchainCount");
+ skipCall |= LOG_ERROR_ZERO_VALUE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo->swapchainCount");
}
if (!pPresentInfo->pSwapchains) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo->pSwapchains");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo->pSwapchains");
}
if (!pPresentInfo->pImageIndices) {
- skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- device,
- "pPresentInfo->pImageIndices");
+ skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pPresentInfo->pImageIndices");
}
// Note: pPresentInfo->pResults is allowed to be NULL
}
loader_platform_thread_lock_mutex(&globalLock);
- for (uint32_t i = 0;
- pPresentInfo && (i < pPresentInfo->swapchainCount);
- i++) {
+ for (uint32_t i = 0; pPresentInfo && (i < pPresentInfo->swapchainCount); i++) {
uint32_t index = pPresentInfo->pImageIndices[i];
- SwpSwapchain *pSwapchain =
- &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
+ SwpSwapchain *pSwapchain = &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
if (pSwapchain) {
if (!pSwapchain->pDevice->swapchainExtensionEnabled) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
- pSwapchain->pDevice, "VkDevice",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, pSwapchain->pDevice, "VkDevice",
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
- "%s() called even though the %s extension was not enabled for this VkDevice.",
- __FUNCTION__, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
+ "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME);
}
if (index >= pSwapchain->imageCount) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
- pPresentInfo->pSwapchains[i],
- "VkSwapchainKHR",
- SWAPCHAIN_INDEX_TOO_LARGE,
- "%s() called for an index that is too "
- "large (i.e. %d). There are only %d "
- "images in this VkSwapchainKHR.\n",
- __FUNCTION__, index,
- pSwapchain->imageCount);
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i], "VkSwapchainKHR",
+ SWAPCHAIN_INDEX_TOO_LARGE, "%s() called for an index that is too "
+ "large (i.e. %d). There are only %d "
+ "images in this VkSwapchainKHR.\n",
+ __FUNCTION__, index, pSwapchain->imageCount);
} else {
if (!pSwapchain->images[index].ownedByApp) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
- pPresentInfo->pSwapchains[i],
- "VkSwapchainKHR",
- SWAPCHAIN_INDEX_NOT_IN_USE,
- "%s() returned an index (i.e. %d) "
- "for an image that is not owned by "
- "the application.",
+ skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i],
+                                          "VkSwapchainKHR", SWAPCHAIN_INDEX_NOT_IN_USE, "%s() called with an index (i.e. %d) "
+ "for an image that is not owned by "
+ "the application.",
__FUNCTION__, index);
}
}
@@ -2260,16 +1865,14 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
// and the 2nd test is the validation check:
if ((pSurface->numQueueFamilyIndexSupport > queueFamilyIndex) &&
(!pSurface->pQueueFamilyIndexSupport[queueFamilyIndex])) {
- skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT,
- pPresentInfo->pSwapchains[i],
- "VkSwapchainKHR",
- SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE,
- "%s() called with a swapchain whose "
- "surface is not supported for "
- "presention on this device with the "
- "queueFamilyIndex (i.e. %d) of the "
- "given queue.",
- __FUNCTION__, queueFamilyIndex);
+ skipCall |=
+ LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i], "VkSwapchainKHR",
+ SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE, "%s() called with a swapchain whose "
+ "surface is not supported for "
+                                                                             "presentation on this device with the "
+ "queueFamilyIndex (i.e. %d) of the "
+ "given queue.",
+ __FUNCTION__, queueFamilyIndex);
}
}
}
@@ -2278,16 +1881,13 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
if (VK_FALSE == skipCall) {
// Call down the call chain:
loader_platform_thread_unlock_mutex(&globalLock);
- result = my_data->device_dispatch_table->QueuePresentKHR(queue,
- pPresentInfo);
+ result = my_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo);
loader_platform_thread_lock_mutex(&globalLock);
- if (pPresentInfo &&
- ((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR))) {
- for (uint32_t i = 0; i < pPresentInfo->swapchainCount ; i++) {
+ if (pPresentInfo && ((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR))) {
+ for (uint32_t i = 0; i < pPresentInfo->swapchainCount; i++) {
int index = pPresentInfo->pImageIndices[i];
- SwpSwapchain *pSwapchain =
- &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
+ SwpSwapchain *pSwapchain = &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
if (pSwapchain) {
// Change the state of the image (no longer owned by the
// application):
@@ -2302,19 +1902,14 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
return VK_ERROR_VALIDATION_FAILED_EXT;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(
- VkDevice device,
- uint32_t queueFamilyIndex,
- uint32_t queueIndex,
- VkQueue* pQueue)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) {
VkBool32 skipCall = VK_FALSE;
layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
if (VK_FALSE == skipCall) {
// Call down the call chain:
- my_data->device_dispatch_table->GetDeviceQueue(
- device, queueFamilyIndex, queueIndex, pQueue);
+ my_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
// Remember the queue's handle, and link it to the device:
loader_platform_thread_lock_mutex(&globalLock);
@@ -2329,15 +1924,12 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(
}
}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkResult result = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
+ VkResult result =
+ my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
result = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
@@ -2346,8 +1938,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT msgCallback, const VkAllocationCallbacks *pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
+ VkDebugReportCallbackEXT msgCallback,
+ const VkAllocationCallbacks *pAllocator) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
loader_platform_thread_lock_mutex(&globalLock);
@@ -2355,26 +1948,19 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkIns
loader_platform_thread_unlock_mutex(&globalLock);
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
- VkInstance instance,
- VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t object,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
+ size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
+ my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
+ pMsg);
}
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
if (!strcmp("vkGetDeviceProcAddr", funcName))
- return (PFN_vkVoidFunction) vkGetDeviceProcAddr;
+ return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
if (!strcmp(funcName, "vkDestroyDevice"))
- return (PFN_vkVoidFunction) vkDestroyDevice;
+ return (PFN_vkVoidFunction)vkDestroyDevice;
if (device == VK_NULL_HANDLE) {
return NULL;
@@ -2384,20 +1970,16 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkD
my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
VkLayerDispatchTable *pDisp = my_data->device_dispatch_table;
- if (my_data->deviceMap.size() != 0 &&
- my_data->deviceMap[device].swapchainExtensionEnabled)
- {
- if (!strcmp("vkCreateSwapchainKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateSwapchainKHR);
- if (!strcmp("vkDestroySwapchainKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkDestroySwapchainKHR);
- if (!strcmp("vkGetSwapchainImagesKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetSwapchainImagesKHR);
- if (!strcmp("vkAcquireNextImageKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkAcquireNextImageKHR);
- if (!strcmp("vkQueuePresentKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkQueuePresentKHR);
- }
+ if (!strcmp("vkCreateSwapchainKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateSwapchainKHR);
+ if (!strcmp("vkDestroySwapchainKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkDestroySwapchainKHR);
+ if (!strcmp("vkGetSwapchainImagesKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetSwapchainImagesKHR);
+ if (!strcmp("vkAcquireNextImageKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkAcquireNextImageKHR);
+ if (!strcmp("vkQueuePresentKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkQueuePresentKHR);
if (!strcmp("vkGetDeviceQueue", funcName))
return reinterpret_cast<PFN_vkVoidFunction>(vkGetDeviceQueue);
@@ -2406,28 +1988,27 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkD
return pDisp->GetDeviceProcAddr(device, funcName);
}
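The `vkGetDeviceProcAddr` reshuffle above keeps the standard layer dispatch pattern: return the layer's own wrapper for intercepted entry points, otherwise forward the name down the chain via `pDisp->GetDeviceProcAddr`. A minimal self-contained sketch of that lookup shape (all names here are illustrative stand-ins, not the real layer symbols):

```cpp
#include <cassert>
#include <cstring>

using PFN_vkVoidFunction = void (*)();

static void layerQueuePresent() {}   // stand-in for the layer's vkQueuePresentKHR wrapper
static void driverOnlyEntry() {}     // an entry point only the next link implements

// Next link in the chain, analogous to pDisp->GetDeviceProcAddr.
static PFN_vkVoidFunction nextGetDeviceProcAddr(const char *funcName) {
    if (!strcmp(funcName, "vkDriverOnlyEntry"))
        return driverOnlyEntry;
    return nullptr;
}

// The layer's lookup: intercepted names map to the layer wrapper,
// everything else passes straight through to the next link.
static PFN_vkVoidFunction layerGetDeviceProcAddr(const char *funcName) {
    if (!strcmp(funcName, "vkQueuePresentKHR"))
        return layerQueuePresent;             // intercepted by this layer
    return nextGetDeviceProcAddr(funcName);   // forwarded down the chain
}
```

The ordering matters: the layer's own `strcmp` checks must come before the fall-through so intercepted calls never bypass validation.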
-VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
+VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
if (!strcmp("vkGetInstanceProcAddr", funcName))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
if (!strcmp(funcName, "vkCreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
+ return (PFN_vkVoidFunction)vkCreateInstance;
if (!strcmp(funcName, "vkDestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
+ return (PFN_vkVoidFunction)vkDestroyInstance;
if (!strcmp(funcName, "vkCreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
+ return (PFN_vkVoidFunction)vkCreateDevice;
if (!strcmp(funcName, "vkEnumeratePhysicalDevices"))
- return (PFN_vkVoidFunction) vkEnumeratePhysicalDevices;
+ return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices;
if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
if (!strcmp(funcName, "vkGetPhysicalDeviceQueueFamilyProperties"))
- return (PFN_vkVoidFunction) vkGetPhysicalDeviceQueueFamilyProperties;
+ return (PFN_vkVoidFunction)vkGetPhysicalDeviceQueueFamilyProperties;
if (instance == VK_NULL_HANDLE) {
return NULL;
@@ -2437,87 +2018,58 @@ VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(V
layer_data *my_data;
my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
addr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
if (addr) {
return addr;
}
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].androidSurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateAndroidSurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateAndroidSurfaceKHR);
- }
+ if (!strcmp("vkCreateAndroidSurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateAndroidSurfaceKHR);
#endif // VK_USE_PLATFORM_ANDROID_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].mirSurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateMirSurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateMirSurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceMirPresentationSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceMirPresentationSupportKHR);
- }
+ if (!strcmp("vkCreateMirSurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateMirSurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceMirPresentationSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceMirPresentationSupportKHR);
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].waylandSurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateWaylandSurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWaylandSurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceWaylandPresentationSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWaylandPresentationSupportKHR);
- }
+ if (!strcmp("vkCreateWaylandSurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWaylandSurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceWaylandPresentationSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWaylandPresentationSupportKHR);
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_WIN32_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].win32SurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateWin32SurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWin32SurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceWin32PresentationSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWin32PresentationSupportKHR);
- }
+ if (!strcmp("vkCreateWin32SurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWin32SurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceWin32PresentationSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWin32PresentationSupportKHR);
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].xcbSurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateXcbSurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXcbSurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceXcbPresentationSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXcbPresentationSupportKHR);
- }
+ if (!strcmp("vkCreateXcbSurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXcbSurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceXcbPresentationSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXcbPresentationSupportKHR);
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].xlibSurfaceExtensionEnabled)
- {
- if (!strcmp("vkCreateXlibSurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXlibSurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceXlibPresentationSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXlibPresentationSupportKHR);
- }
+ if (!strcmp("vkCreateXlibSurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXlibSurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceXlibPresentationSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXlibPresentationSupportKHR);
#endif // VK_USE_PLATFORM_XLIB_KHR
- if (my_data->instanceMap.size() != 0 &&
- my_data->instanceMap[instance].surfaceExtensionEnabled)
- {
- if (!strcmp("vkDestroySurfaceKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkDestroySurfaceKHR);
- if (!strcmp("vkGetPhysicalDeviceSurfaceSupportKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceSupportKHR);
- if (!strcmp("vkGetPhysicalDeviceSurfaceCapabilitiesKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceCapabilitiesKHR);
- if (!strcmp("vkGetPhysicalDeviceSurfaceFormatsKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceFormatsKHR);
- if (!strcmp("vkGetPhysicalDeviceSurfacePresentModesKHR", funcName))
- return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfacePresentModesKHR);
- }
+ if (!strcmp("vkDestroySurfaceKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkDestroySurfaceKHR);
+ if (!strcmp("vkGetPhysicalDeviceSurfaceSupportKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceSupportKHR);
+ if (!strcmp("vkGetPhysicalDeviceSurfaceCapabilitiesKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceCapabilitiesKHR);
+ if (!strcmp("vkGetPhysicalDeviceSurfaceFormatsKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceFormatsKHR);
+ if (!strcmp("vkGetPhysicalDeviceSurfacePresentModesKHR", funcName))
+ return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfacePresentModesKHR);
if (pTable->GetInstanceProcAddr == NULL)
return NULL;
return pTable->GetInstanceProcAddr(instance, funcName);
}
-
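The hunks above carry a behavioral change worth noting: the old `vkGetInstanceProcAddr` only resolved a WSI entry point when the matching extension had been enabled on the instance (`surfaceExtensionEnabled`, `xcbSurfaceExtensionEnabled`, etc.), while the new code resolves them unconditionally and leaves extension-usage validation to the wrapper functions themselves. A hedged sketch of the before/after difference, with illustrative names:

```cpp
#include <cassert>
#include <cstring>

static const char *kWrapper = "layer-wrapper";  // stand-in for a function pointer

// Old behavior: resolution gated on the extension having been enabled.
static const char *resolveOld(bool surfaceExtensionEnabled, const char *name) {
    if (surfaceExtensionEnabled && !strcmp(name, "vkDestroySurfaceKHR"))
        return kWrapper;
    return nullptr;
}

// New behavior: always hand back the wrapper; the wrapper itself reports
// SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED when the extension was never enabled.
static const char *resolveNew(const char *name) {
    if (!strcmp(name, "vkDestroySurfaceKHR"))
        return kWrapper;
    return nullptr;
}
```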
diff --git a/layers/swapchain.h b/layers/swapchain.h
index 756daaeaf..e6c1f4a13 100644
--- a/layers/swapchain.h
+++ b/layers/swapchain.h
@@ -37,18 +37,18 @@
using namespace std;
-
// Swapchain ERROR codes
-typedef enum _SWAPCHAIN_ERROR
-{
- SWAPCHAIN_INVALID_HANDLE, // Handle used that isn't currently valid
- SWAPCHAIN_NULL_POINTER, // Pointer set to NULL, instead of being a valid pointer
- SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, // Did not enable WSI extension, but called WSI function
- SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, // Called vkDestroyDevice() before vkDestroySwapchainKHR()
- SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE, // Called vkCreateSwapchainKHR() with a pCreateInfo->surface that wasn't seen as supported by vkGetPhysicalDeviceSurfaceSupportKHR for the device
- SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY, // Called vkCreateSwapchainKHR() without calling a query (e.g. vkGetPhysicalDeviceSurfaceCapabilitiesKHR())
- SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT, // Called vkCreateSwapchainKHR() with out-of-bounds minImageCount
- SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS,// Called vkCreateSwapchainKHR() with out-of-bounds imageExtent
+typedef enum _SWAPCHAIN_ERROR {
+ SWAPCHAIN_INVALID_HANDLE, // Handle used that isn't currently valid
+ SWAPCHAIN_NULL_POINTER, // Pointer set to NULL, instead of being a valid pointer
+ SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, // Did not enable WSI extension, but called WSI function
+ SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, // Called vkDestroyDevice() before vkDestroySwapchainKHR()
+ SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE, // Called vkCreateSwapchainKHR() with a pCreateInfo->surface that wasn't seen as supported
+ // by vkGetPhysicalDeviceSurfaceSupportKHR for the device
+ SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY, // Called vkCreateSwapchainKHR() without calling a query (e.g.
+ // vkGetPhysicalDeviceSurfaceCapabilitiesKHR())
+ SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT, // Called vkCreateSwapchainKHR() with out-of-bounds minImageCount
+ SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS, // Called vkCreateSwapchainKHR() with out-of-bounds imageExtent
SWAPCHAIN_CREATE_SWAP_EXTENTS_NO_MATCH_WIN, // Called vkCreateSwapchainKHR() with imageExtent that doesn't match window's extent
SWAPCHAIN_CREATE_SWAP_BAD_PRE_TRANSFORM, // Called vkCreateSwapchainKHR() with a non-supported preTransform
SWAPCHAIN_CREATE_SWAP_BAD_COMPOSITE_ALPHA, // Called vkCreateSwapchainKHR() with a non-supported compositeAlpha
@@ -59,90 +59,80 @@ typedef enum _SWAPCHAIN_ERROR
SWAPCHAIN_CREATE_SWAP_BAD_IMG_FMT_CLR_SP, // Called vkCreateSwapchainKHR() with a non-supported imageColorSpace
SWAPCHAIN_CREATE_SWAP_BAD_PRESENT_MODE, // Called vkCreateSwapchainKHR() with a non-supported presentMode
SWAPCHAIN_CREATE_SWAP_BAD_SHARING_MODE, // Called vkCreateSwapchainKHR() with a non-supported imageSharingMode
- SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES, // Called vkCreateSwapchainKHR() with bad values when imageSharingMode is VK_SHARING_MODE_CONCURRENT
- SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE, // Called vkCreateSwapchainKHR() with pCreateInfo->oldSwapchain that has a different surface than pCreateInfo->surface
- SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE, // Called vkDestroySwapchainKHR() with a different VkDevice than vkCreateSwapchainKHR()
- SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES, // vkAcquireNextImageKHR() asked for more images than are available
- SWAPCHAIN_INDEX_TOO_LARGE, // Index is too large for swapchain
- SWAPCHAIN_INDEX_NOT_IN_USE, // vkQueuePresentKHR() given index that is not owned by app
- SWAPCHAIN_BAD_BOOL, // VkBool32 that doesn't have value of VK_TRUE or VK_FALSE (e.g. is a non-zero form of true)
- SWAPCHAIN_INVALID_COUNT, // Second time a query called, the pCount value didn't match first time
- SWAPCHAIN_WRONG_STYPE, // The sType for a struct has the wrong value
- SWAPCHAIN_WRONG_NEXT, // The pNext for a struct is not NULL
- SWAPCHAIN_ZERO_VALUE, // A value should be non-zero
- SWAPCHAIN_INCOMPATIBLE_ALLOCATOR, // pAllocator must be compatible (i.e. NULL or not) when object is created and destroyed
- SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES, // A function using a queueFamilyIndex was called before vkGetPhysicalDeviceQueueFamilyProperties() was called
- SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, // A queueFamilyIndex value is not less than pQueueFamilyPropertyCount returned by vkGetPhysicalDeviceQueueFamilyProperties()
- SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE, // A surface is not supported by a given queueFamilyIndex, as seen by vkGetPhysicalDeviceSurfaceSupportKHR()
+ SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES, // Called vkCreateSwapchainKHR() with bad values when imageSharingMode is
+ // VK_SHARING_MODE_CONCURRENT
+ SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE, // Called vkCreateSwapchainKHR() with pCreateInfo->oldSwapchain that has a different surface
+ // than pCreateInfo->surface
+ SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE, // Called vkDestroySwapchainKHR() with a different VkDevice than vkCreateSwapchainKHR()
+ SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES, // vkAcquireNextImageKHR() asked for more images than are available
+ SWAPCHAIN_INDEX_TOO_LARGE, // Index is too large for swapchain
+ SWAPCHAIN_INDEX_NOT_IN_USE, // vkQueuePresentKHR() given index that is not owned by app
+ SWAPCHAIN_BAD_BOOL, // VkBool32 that doesn't have value of VK_TRUE or VK_FALSE (e.g. is a non-zero form of true)
+    SWAPCHAIN_INVALID_COUNT,              // The second time a query was called, the pCount value didn't match the first time
+ SWAPCHAIN_WRONG_STYPE, // The sType for a struct has the wrong value
+ SWAPCHAIN_WRONG_NEXT, // The pNext for a struct is not NULL
+ SWAPCHAIN_ZERO_VALUE, // A value should be non-zero
+ SWAPCHAIN_INCOMPATIBLE_ALLOCATOR, // pAllocator must be compatible (i.e. NULL or not) when object is created and destroyed
+ SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES, // A function using a queueFamilyIndex was called before
+ // vkGetPhysicalDeviceQueueFamilyProperties() was called
+ SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, // A queueFamilyIndex value is not less than pQueueFamilyPropertyCount returned by
+ // vkGetPhysicalDeviceQueueFamilyProperties()
+ SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE, // A surface is not supported by a given queueFamilyIndex, as seen by
+ // vkGetPhysicalDeviceSurfaceSupportKHR()
+ SWAPCHAIN_NO_SYNC_FOR_ACQUIRE, // vkAcquireNextImageKHR should be called with a valid semaphore and/or fence
} SWAPCHAIN_ERROR;
-
// The following is for logging error messages:
#define LAYER_NAME (char *) "Swapchain"
-#define LOG_ERROR_NON_VALID_OBJ(objType, type, obj) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), __LINE__, SWAPCHAIN_INVALID_HANDLE, LAYER_NAME, \
- "%s() called with a non-valid %s.", __FUNCTION__, (obj)) \
- : VK_FALSE
-#define LOG_ERROR_NULL_POINTER(objType, type, obj) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_NULL_POINTER, LAYER_NAME, \
- "%s() called with NULL pointer %s.", __FUNCTION__, (obj)) \
- : VK_FALSE
-#define LOG_ERROR_INVALID_COUNT(objType, type, obj, obj2, val, val2) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_INVALID_COUNT, LAYER_NAME, \
- "%s() called with non-NULL %s, and with %s set to a " \
- "value (%d) that is greater than the value (%d) that " \
- "was returned when %s was NULL.", \
- __FUNCTION__, (obj2), (obj), (val), (val2), (obj2)) \
- : VK_FALSE
-#define LOG_ERROR_WRONG_STYPE(objType, type, obj, val) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_WRONG_STYPE, LAYER_NAME, \
- "%s() called with the wrong value for %s->sType " \
- "(expected %s).", \
- __FUNCTION__, (obj), (val)) \
- : VK_FALSE
-#define LOG_ERROR_ZERO_VALUE(objType, type, obj) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_ZERO_VALUE, LAYER_NAME, \
- "%s() called with a zero value for %s.", \
- __FUNCTION__, (obj)) \
- : VK_FALSE
-#define LOG_ERROR(objType, type, obj, enm, fmt, ...) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), __LINE__, (enm), LAYER_NAME, (fmt), __VA_ARGS__) \
- : VK_FALSE
-#define LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(objType, type, obj, val1, val2) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, LAYER_NAME, \
- "%s() called with a queueFamilyIndex that is too " \
- "large (i.e. %d). The maximum value (returned " \
- "by vkGetPhysicalDeviceQueueFamilyProperties) is " \
- "only %d.\n", \
- __FUNCTION__, (val1), (val2)) \
- : VK_FALSE
-#define LOG_PERF_WARNING(objType, type, obj, enm, fmt, ...) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (objType), \
- (uint64_t) (obj), __LINE__, (enm), LAYER_NAME, (fmt), __VA_ARGS__) \
- : VK_FALSE
-#define LOG_INFO_WRONG_NEXT(objType, type, obj) \
- (my_data) ? \
- log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (objType), \
- (uint64_t) (obj), 0, SWAPCHAIN_WRONG_NEXT, LAYER_NAME, \
- "%s() called with non-NULL value for %s->pNext.", \
- __FUNCTION__, (obj)) \
- : VK_FALSE
-
+#define LOG_ERROR_NON_VALID_OBJ(objType, type, obj) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, \
+ SWAPCHAIN_INVALID_HANDLE, LAYER_NAME, "%s() called with a non-valid %s.", __FUNCTION__, (obj)) \
+ : VK_FALSE
+#define LOG_ERROR_NULL_POINTER(objType, type, obj) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \
+ SWAPCHAIN_NULL_POINTER, LAYER_NAME, "%s() called with NULL pointer %s.", __FUNCTION__, (obj)) \
+ : VK_FALSE
+#define LOG_ERROR_INVALID_COUNT(objType, type, obj, obj2, val, val2) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \
+ SWAPCHAIN_INVALID_COUNT, LAYER_NAME, "%s() called with non-NULL %s, and with %s set to a " \
+ "value (%d) that is greater than the value (%d) that " \
+ "was returned when %s was NULL.", \
+ __FUNCTION__, (obj2), (obj), (val), (val2), (obj2)) \
+ : VK_FALSE
+#define LOG_ERROR_WRONG_STYPE(objType, type, obj, val) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, SWAPCHAIN_WRONG_STYPE, \
+ LAYER_NAME, "%s() called with the wrong value for %s->sType " \
+ "(expected %s).", \
+ __FUNCTION__, (obj), (val)) \
+ : VK_FALSE
+#define LOG_ERROR_ZERO_VALUE(objType, type, obj) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, SWAPCHAIN_ZERO_VALUE, \
+ LAYER_NAME, "%s() called with a zero value for %s.", __FUNCTION__, (obj)) \
+ : VK_FALSE
+#define LOG_ERROR(objType, type, obj, enm, fmt, ...) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, (enm), \
+ LAYER_NAME, (fmt), __VA_ARGS__) \
+ : VK_FALSE
+#define LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(objType, type, obj, val1, val2) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \
+ SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, LAYER_NAME, "%s() called with a queueFamilyIndex that is too " \
+ "large (i.e. %d). The maximum value (returned " \
+ "by vkGetPhysicalDeviceQueueFamilyProperties) is " \
+ "only %d.\n", \
+ __FUNCTION__, (val1), (val2)) \
+ : VK_FALSE
+#define LOG_PERF_WARNING(objType, type, obj, enm, fmt, ...) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, \
+ (enm), LAYER_NAME, (fmt), __VA_ARGS__) \
+ : VK_FALSE
+#define LOG_WARNING(objType, type, obj, enm, fmt, ...) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, (enm), \
+ LAYER_NAME, (fmt), __VA_ARGS__) \
+ : VK_FALSE
+#define LOG_INFO_WRONG_NEXT(objType, type, obj) \
+ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (objType), (uint64_t)(obj), 0, \
+ SWAPCHAIN_WRONG_NEXT, LAYER_NAME, "%s() called with non-NULL value for %s->pNext.", __FUNCTION__, (obj)) \
+ : VK_FALSE
// NOTE: The following struct's/typedef's are for keeping track of
// info that is used for validating the WSI extensions.
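Each reflowed `LOG_*` macro above keeps the same shape: a single ternary expression so a null `my_data` short-circuits to `VK_FALSE` instead of dereferencing a dangling pointer, and so the macro can be used wherever an expression is expected. A simplified, runnable sketch of that guard (types and names are stand-ins for the real layer ones):

```cpp
#include <cassert>

typedef int VkBool32;
#define VK_FALSE 0
#define VK_TRUE 1

struct layer_data { int messages = 0; };

// Stand-in for the real log_msg(); returns whether the call should abort.
static VkBool32 log_msg(layer_data *data, const char * /*fmt*/) {
    ++data->messages;
    return VK_TRUE;
}

// Same null-guard shape as LOG_ERROR_NON_VALID_OBJ et al. above.
#define LOG_ERROR_SKETCH(my_data, fmt) \
    ((my_data) ? log_msg((my_data), (fmt)) : VK_FALSE)
```

The outer parentheses and the `: VK_FALSE` arm are what let callers write `skipCall |= LOG_ERROR_SKETCH(...)` safely.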
@@ -157,7 +147,8 @@ struct _SwpImage;
struct _SwpQueue;
typedef _SwpInstance SwpInstance;
-typedef _SwpSurface SwpSurface;;
+typedef _SwpSurface SwpSurface;
typedef _SwpPhysicalDevice SwpPhysicalDevice;
typedef _SwpDevice SwpDevice;
typedef _SwpSwapchain SwpSwapchain;
@@ -170,16 +161,16 @@ struct _SwpInstance {
VkInstance instance;
// Remember the VkSurfaceKHR's that are created for this VkInstance:
- unordered_map<VkSurfaceKHR, SwpSurface*> surfaces;
+ unordered_map<VkSurfaceKHR, SwpSurface *> surfaces;
// When vkEnumeratePhysicalDevices is called, the VkPhysicalDevice's are
// remembered:
- unordered_map<const void*, SwpPhysicalDevice*> physicalDevices;
+ unordered_map<const void *, SwpPhysicalDevice *> physicalDevices;
// Set to true if VK_KHR_SURFACE_EXTENSION_NAME was enabled for this VkInstance:
bool surfaceExtensionEnabled;
- // TODO: Add additional booleans for platform-specific extensions:
+// TODO: Add additional booleans for platform-specific extensions:
#ifdef VK_USE_PLATFORM_ANDROID_KHR
// Set to true if VK_KHR_ANDROID_SURFACE_EXTENSION_NAME was enabled for this VkInstance:
bool androidSurfaceExtensionEnabled;
@@ -205,7 +196,7 @@ struct _SwpInstance {
bool xlibSurfaceExtensionEnabled;
#endif // VK_USE_PLATFORM_XLIB_KHR
};
-
+
// Create one of these for each VkSurfaceKHR:
struct _SwpSurface {
// The actual handle for this VkSurfaceKHR:
@@ -216,7 +207,7 @@ struct _SwpSurface {
// When vkCreateSwapchainKHR is called, the VkSwapchainKHR's are
// remembered:
- unordered_map<VkSwapchainKHR, SwpSwapchain*> swapchains;
+ unordered_map<VkSwapchainKHR, SwpSwapchain *> swapchains;
// 'true' if pAllocator was non-NULL when vkCreate*SurfaceKHR was called:
bool usedAllocatorToCreate;
@@ -250,25 +241,25 @@ struct _SwpPhysicalDevice {
// Record all surfaces that vkGetPhysicalDeviceSurfaceSupportKHR() was
// called for:
- unordered_map<VkSurfaceKHR, SwpSurface*> supportedSurfaces;
+ unordered_map<VkSurfaceKHR, SwpSurface *> supportedSurfaces;
-// TODO: Record/use this info per-surface, not per-device, once a
-// non-dispatchable surface object is added to WSI:
+ // TODO: Record/use this info per-surface, not per-device, once a
+ // non-dispatchable surface object is added to WSI:
// Results of vkGetPhysicalDeviceSurfaceCapabilitiesKHR():
bool gotSurfaceCapabilities;
VkSurfaceCapabilitiesKHR surfaceCapabilities;
-// TODO: Record/use this info per-surface, not per-device, once a
-// non-dispatchable surface object is added to WSI:
+ // TODO: Record/use this info per-surface, not per-device, once a
+ // non-dispatchable surface object is added to WSI:
// Count and VkSurfaceFormatKHR's returned by vkGetPhysicalDeviceSurfaceFormatsKHR():
uint32_t surfaceFormatCount;
- VkSurfaceFormatKHR* pSurfaceFormats;
+ VkSurfaceFormatKHR *pSurfaceFormats;
-// TODO: Record/use this info per-surface, not per-device, once a
-// non-dispatchable surface object is added to WSI:
+ // TODO: Record/use this info per-surface, not per-device, once a
+ // non-dispatchable surface object is added to WSI:
// Count and VkPresentModeKHR's returned by vkGetPhysicalDeviceSurfacePresentModesKHR():
uint32_t presentModeCount;
- VkPresentModeKHR* pPresentModes;
+ VkPresentModeKHR *pPresentModes;
};
// Create one of these for each VkDevice within a VkInstance:
@@ -284,10 +275,10 @@ struct _SwpDevice {
// When vkCreateSwapchainKHR is called, the VkSwapchainKHR's are
// remembered:
- unordered_map<VkSwapchainKHR, SwpSwapchain*> swapchains;
+ unordered_map<VkSwapchainKHR, SwpSwapchain *> swapchains;
// When vkGetDeviceQueue is called, the VkQueue's are remembered:
- unordered_map<VkQueue, SwpQueue*> queues;
+ unordered_map<VkQueue, SwpQueue *> queues;
};
// Create one of these for each VkImage within a VkSwapchainKHR:
@@ -338,22 +329,18 @@ struct _SwpQueue {
struct layer_data {
debug_report_data *report_data;
std::vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable* device_dispatch_table;
- VkLayerInstanceDispatchTable* instance_dispatch_table;
+ VkLayerDispatchTable *device_dispatch_table;
+ VkLayerInstanceDispatchTable *instance_dispatch_table;
// NOTE: The following are for keeping track of info that is used for
// validating the WSI extensions.
- std::unordered_map<void *, SwpInstance> instanceMap;
- std::unordered_map<VkSurfaceKHR, SwpSurface> surfaceMap;
+ std::unordered_map<void *, SwpInstance> instanceMap;
+ std::unordered_map<VkSurfaceKHR, SwpSurface> surfaceMap;
std::unordered_map<void *, SwpPhysicalDevice> physicalDeviceMap;
- std::unordered_map<void *, SwpDevice> deviceMap;
- std::unordered_map<VkSwapchainKHR, SwpSwapchain> swapchainMap;
- std::unordered_map<void *, SwpQueue> queueMap;
-
- layer_data() :
- report_data(nullptr),
- device_dispatch_table(nullptr),
- instance_dispatch_table(nullptr)
- {};
+ std::unordered_map<void *, SwpDevice> deviceMap;
+ std::unordered_map<VkSwapchainKHR, SwpSwapchain> swapchainMap;
+ std::unordered_map<void *, SwpQueue> queueMap;
+
+    layer_data() : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr) {}
};
#endif // SWAPCHAIN_H
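The `layer_data` struct above is retrieved throughout the layer with `get_my_data_ptr(get_dispatch_key(handle), layer_data_map)`. A simplified sketch of that lookup, assuming only that state is keyed per dispatch key and created lazily (the real helper lives in `vk_layer_data.h`; this mini version is illustrative):

```cpp
#include <cassert>
#include <unordered_map>

// Simplified per-dispatch-key state; stands in for the maps and
// dispatch-table pointers in the real layer_data struct.
struct layer_data {
    int swapchainCount = 0;
};

static std::unordered_map<void *, layer_data> layer_data_map;

// operator[] default-constructs the entry on first access, so every
// instance/device gets its own layer_data without explicit registration.
static layer_data *get_my_data_ptr(void *dispatch_key) {
    return &layer_data_map[dispatch_key];
}
```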
diff --git a/layers/threading.cpp b/layers/threading.cpp
index b110bbf07..a75bcdc5b 100644
--- a/layers/threading.cpp
+++ b/layers/threading.cpp
@@ -38,66 +38,31 @@
#include "vk_layer_table.h"
#include "vk_layer_logging.h"
#include "threading.h"
-
#include "vk_dispatch_table_helper.h"
#include "vk_struct_string_helper_cpp.h"
#include "vk_layer_data.h"
+#include "vk_layer_utils.h"
#include "thread_check.h"
-static void initThreading(layer_data *my_data, const VkAllocationCallbacks *pAllocator)
-{
-
- uint32_t report_flags = 0;
- uint32_t debug_action = 0;
- FILE *log_output = NULL;
- const char *strOpt;
- VkDebugReportCallbackEXT callback;
- // initialize Threading options
- report_flags = getLayerOptionFlags("ThreadingReportFlags", 0);
- getLayerOptionEnum("ThreadingDebugAction", (uint32_t *) &debug_action);
-
- if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)
- {
- strOpt = getLayerOption("ThreadingLogFilename");
- log_output = getLayerLogOutput(strOpt, "Threading");
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = log_callback;
- dbgCreateInfo.pUserData = (void *) log_output;
- layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
+static void initThreading(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
- if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
- VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
- memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
- dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
- dbgCreateInfo.flags = report_flags;
- dbgCreateInfo.pfnCallback = win32_debug_output_msg;
- dbgCreateInfo.pUserData = NULL;
- layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator, &callback);
- my_data->logging_callback.push_back(callback);
- }
+ layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "google_threading");
- if (!threadingLockInitialized)
- {
+ if (!threadingLockInitialized) {
loader_platform_thread_create_mutex(&threadingLock);
loader_platform_thread_init_cond(&threadingCond);
threadingLockInitialized = 1;
}
}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -113,18 +78,13 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstance
my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
- my_data->report_data = debug_report_create_instance(
- my_data->instance_dispatch_table,
- *pInstance,
- pCreateInfo->enabledExtensionCount,
- pCreateInfo->ppEnabledExtensionNames);
+ my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
initThreading(my_data, pAllocator);
return result;
}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(instance);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
@@ -150,15 +110,14 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance
}
}
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -182,11 +141,9 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice g
return result;
}
-
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
dispatch_key key = get_dispatch_key(device);
- layer_data* dev_data = get_my_data_ptr(key, layer_data_map);
+ layer_data *dev_data = get_my_data_ptr(key, layer_data_map);
startWriteObject(dev_data, device);
dev_data->device_dispatch_table->DestroyDevice(device, pAllocator);
finishWriteObject(dev_data, device);
@@ -194,105 +151,81 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, cons
}
static const VkExtensionProperties threading_extensions[] = {
- {
- VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
- VK_EXT_DEBUG_REPORT_SPEC_VERSION
- }
-};
+ {VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
-VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties* pProperties)
-{
+VK_LAYER_EXPORT VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
return util_GetExtensionProperties(ARRAY_SIZE(threading_extensions), threading_extensions, pCount, pProperties);
}
-static const VkLayerProperties globalLayerProps[] = {
- {
- "VK_LAYER_GOOGLE_threading",
- VK_API_VERSION, // specVersion
- 1,
- "Google Validation Layer",
- }
-};
-
+static const VkLayerProperties globalLayerProps[] = {{
+ "VK_LAYER_GOOGLE_threading",
+ VK_LAYER_API_VERSION, // specVersion
+ 1, "Google Validation Layer",
+}};
-VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties* pProperties)
-{
+VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
return util_GetLayerProperties(ARRAY_SIZE(globalLayerProps), globalLayerProps, pCount, pProperties);
}
-static const VkLayerProperties deviceLayerProps[] = {
- {
- "VK_LAYER_GOOGLE_threading",
- VK_API_VERSION, // specVersion
- 1,
- "Google Validation Layer",
- }
-};
-
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
- VkPhysicalDevice physicalDevice,
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties* pProperties)
-{
+static const VkLayerProperties deviceLayerProps[] = {{
+ "VK_LAYER_GOOGLE_threading",
+ VK_LAYER_API_VERSION, // specVersion
+ 1, "Google Validation Layer",
+}};
+
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
+ const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
if (pLayerName == NULL) {
dispatch_key key = get_dispatch_key(physicalDevice);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
- return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(
- physicalDevice,
- NULL,
- pCount,
- pProperties);
+ return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
} else {
// Threading layer does not have any device extensions
- return util_GetExtensionProperties(0,
- nullptr,
- pCount, pProperties);
+ return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
}
}
-VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties* pProperties)
-{
+VK_LAYER_EXPORT VkResult VKAPI_CALL
+vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
return util_GetLayerProperties(ARRAY_SIZE(deviceLayerProps), deviceLayerProps, pCount, pProperties);
}
-static inline PFN_vkVoidFunction layer_intercept_proc(const char *name)
-{
- for (int i=0; i<sizeof(procmap)/sizeof(procmap[0]); i++) {
- if (!strcmp(name, procmap[i].name)) return procmap[i].pFunc;
+static inline PFN_vkVoidFunction layer_intercept_proc(const char *name) {
+ for (int i = 0; i < sizeof(procmap) / sizeof(procmap[0]); i++) {
+ if (!strcmp(name, procmap[i].name))
+ return procmap[i].pFunc;
}
return NULL;
}
-
-static inline PFN_vkVoidFunction layer_intercept_instance_proc(const char *name)
-{
+static inline PFN_vkVoidFunction layer_intercept_instance_proc(const char *name) {
if (!name || name[0] != 'v' || name[1] != 'k')
return NULL;
name += 2;
if (!strcmp(name, "CreateInstance"))
- return (PFN_vkVoidFunction) vkCreateInstance;
+ return (PFN_vkVoidFunction)vkCreateInstance;
if (!strcmp(name, "DestroyInstance"))
- return (PFN_vkVoidFunction) vkDestroyInstance;
+ return (PFN_vkVoidFunction)vkDestroyInstance;
if (!strcmp(name, "EnumerateInstanceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
if (!strcmp(name, "EnumerateInstanceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateInstanceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
if (!strcmp(name, "EnumerateDeviceExtensionProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceExtensionProperties;
+ return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
if (!strcmp(name, "EnumerateDeviceLayerProperties"))
- return (PFN_vkVoidFunction) vkEnumerateDeviceLayerProperties;
+ return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
if (!strcmp(name, "CreateDevice"))
- return (PFN_vkVoidFunction) vkCreateDevice;
+ return (PFN_vkVoidFunction)vkCreateDevice;
if (!strcmp(name, "GetInstanceProcAddr"))
- return (PFN_vkVoidFunction) vkGetInstanceProcAddr;
+ return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
return NULL;
}
-VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char* funcName)
-{
+VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
PFN_vkVoidFunction addr;
layer_data *dev_data;
if (device == VK_NULL_HANDLE) {
@@ -304,17 +237,16 @@ VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice devic
return addr;
dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
- VkLayerDispatchTable* pTable = dev_data->device_dispatch_table;
+ VkLayerDispatchTable *pTable = dev_data->device_dispatch_table;
if (pTable->GetDeviceProcAddr == NULL)
return NULL;
return pTable->GetDeviceProcAddr(device, funcName);
}
-VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char* funcName)
-{
+VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
PFN_vkVoidFunction addr;
- layer_data* my_data;
+ layer_data *my_data;
addr = layer_intercept_instance_proc(funcName);
if (addr) {
@@ -331,22 +263,20 @@ VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance i
return addr;
}
- VkLayerInstanceDispatchTable* pTable = my_data->instance_dispatch_table;
+ VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
if (pTable->GetInstanceProcAddr == NULL) {
return NULL;
}
return pTable->GetInstanceProcAddr(instance, funcName);
}
-VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
- VkInstance instance,
- const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
- const VkAllocationCallbacks* pAllocator,
- VkDebugReportCallbackEXT* pMsgCallback)
-{
+VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
startReadObject(my_data, instance);
- VkResult result = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
+ VkResult result =
+ my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
if (VK_SUCCESS == result) {
result = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
}
@@ -354,11 +284,8 @@ VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
return result;
}
-VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
- VkInstance instance,
- VkDebugReportCallbackEXT callback,
- const VkAllocationCallbacks* pAllocator)
-{
+VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
+vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks *pAllocator) {
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
startReadObject(my_data, instance);
startWriteObject(my_data, callback);
@@ -368,11 +295,8 @@ VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
finishWriteObject(my_data, callback);
}
-VkResult VKAPI_CALL vkAllocateCommandBuffers(
- VkDevice device,
- const VkCommandBufferAllocateInfo* pAllocateInfo,
- VkCommandBuffer* pCommandBuffers)
-{
+VkResult VKAPI_CALL
+vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo, VkCommandBuffer *pCommandBuffers) {
dispatch_key key = get_dispatch_key(device);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
@@ -386,7 +310,7 @@ VkResult VKAPI_CALL vkAllocateCommandBuffers(
// Record mapping from command buffer to command pool
if (VK_SUCCESS == result) {
- for (int index=0;index<pAllocateInfo->commandBufferCount;index++) {
+ for (int index = 0; index < pAllocateInfo->commandBufferCount; index++) {
loader_platform_thread_lock_mutex(&threadingLock);
command_pool_map[pCommandBuffers[index]] = pAllocateInfo->commandPool;
loader_platform_thread_unlock_mutex(&threadingLock);
@@ -396,30 +320,25 @@ VkResult VKAPI_CALL vkAllocateCommandBuffers(
return result;
}
-void VKAPI_CALL vkFreeCommandBuffers(
- VkDevice device,
- VkCommandPool commandPool,
- uint32_t commandBufferCount,
- const VkCommandBuffer* pCommandBuffers)
-{
+void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount,
+ const VkCommandBuffer *pCommandBuffers) {
dispatch_key key = get_dispatch_key(device);
layer_data *my_data = get_my_data_ptr(key, layer_data_map);
VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
const bool lockCommandPool = false; // pool is already directly locked
startReadObject(my_data, device);
startWriteObject(my_data, commandPool);
- for (int index=0;index<commandBufferCount;index++) {
+ for (int index = 0; index < commandBufferCount; index++) {
startWriteObject(my_data, pCommandBuffers[index], lockCommandPool);
}
- pTable->FreeCommandBuffers(device,commandPool,commandBufferCount,pCommandBuffers);
+ pTable->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
finishReadObject(my_data, device);
finishWriteObject(my_data, commandPool);
- for (int index=0;index<commandBufferCount;index++) {
+ for (int index = 0; index < commandBufferCount; index++) {
finishWriteObject(my_data, pCommandBuffers[index], lockCommandPool);
loader_platform_thread_lock_mutex(&threadingLock);
command_pool_map.erase(pCommandBuffers[index]);
loader_platform_thread_unlock_mutex(&threadingLock);
}
}
-
diff --git a/layers/threading.h b/layers/threading.h
index 1a04a62fa..0e2336392 100644
--- a/layers/threading.h
+++ b/layers/threading.h
@@ -31,18 +31,18 @@
#include "vk_layer_config.h"
#include "vk_layer_logging.h"
-#if defined(__LP64__) || defined(_WIN64) || defined(__x86_64__) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
+#if defined(__LP64__) || defined(_WIN64) || defined(__x86_64__) || defined(_M_X64) || defined(__ia64) || defined(_M_IA64) || \
+ defined(__aarch64__) || defined(__powerpc64__)
// If pointers are 64-bit, then there can be separate counters for each
// NONDISPATCHABLE_HANDLE type. Otherwise they are all typedef uint64_t.
#define DISTINCT_NONDISPATCHABLE_HANDLES
#endif
// Draw State ERROR codes
-typedef enum _THREADING_CHECKER_ERROR
-{
- THREADING_CHECKER_NONE, // Used for INFO & other non-error messages
- THREADING_CHECKER_MULTIPLE_THREADS, // Object used simultaneously by multiple threads
- THREADING_CHECKER_SINGLE_THREAD_REUSE, // Object used simultaneously by recursion in single thread
+typedef enum _THREADING_CHECKER_ERROR {
+ THREADING_CHECKER_NONE, // Used for INFO & other non-error messages
+ THREADING_CHECKER_MULTIPLE_THREADS, // Object used simultaneously by multiple threads
+ THREADING_CHECKER_SINGLE_THREAD_REUSE, // Object used simultaneously by recursion in single thread
} THREADING_CHECKER_ERROR;
struct object_use_data {
@@ -58,12 +58,11 @@ static loader_platform_thread_mutex threadingLock;
static loader_platform_thread_cond threadingCond;
template <typename T> class counter {
- public:
+ public:
const char *typeName;
VkDebugReportObjectTypeEXT objectType;
std::unordered_map<T, object_use_data> uses;
- void startWrite(debug_report_data *report_data, T object)
- {
+ void startWrite(debug_report_data *report_data, T object) {
VkBool32 skipCall = VK_FALSE;
loader_platform_thread_id tid = loader_platform_get_thread_id();
loader_platform_thread_lock_mutex(&threadingLock);
@@ -79,9 +78,9 @@ template <typename T> class counter {
// There are no readers. Two writers just collided.
if (use_data->thread != tid) {
skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object),
- /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
- "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
- typeName, use_data->thread, tid);
+ /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
+ "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
+ typeName, use_data->thread, tid);
if (skipCall) {
// Wait for thread-safe access to object instead of skipping call.
while (uses.find(object) != uses.end()) {
@@ -89,12 +88,12 @@ template <typename T> class counter {
}
// There is now no current use of the object. Record writer thread.
struct object_use_data *use_data = &uses[object];
- use_data->thread = tid ;
+ use_data->thread = tid;
use_data->reader_count = 0;
use_data->writer_count = 1;
} else {
// Continue with an unsafe use of the object.
- use_data->thread = tid ;
+ use_data->thread = tid;
use_data->writer_count += 1;
}
} else {
@@ -106,9 +105,9 @@ template <typename T> class counter {
// There are readers. This writer collided with them.
if (use_data->thread != tid) {
skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object),
- /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
- "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
- typeName, use_data->thread, tid);
+ /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
+ "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
+ typeName, use_data->thread, tid);
if (skipCall) {
// Wait for thread-safe access to object instead of skipping call.
while (uses.find(object) != uses.end()) {
@@ -116,12 +115,12 @@ template <typename T> class counter {
}
// There is now no current use of the object. Record writer thread.
struct object_use_data *use_data = &uses[object];
- use_data->thread = tid ;
+ use_data->thread = tid;
use_data->reader_count = 0;
use_data->writer_count = 1;
} else {
// Continue with an unsafe use of the object.
- use_data->thread = tid ;
+ use_data->thread = tid;
use_data->writer_count += 1;
}
} else {
@@ -134,8 +133,7 @@ template <typename T> class counter {
loader_platform_thread_unlock_mutex(&threadingLock);
}
- void finishWrite(T object)
- {
+ void finishWrite(T object) {
// Object is no longer in use
loader_platform_thread_lock_mutex(&threadingLock);
uses[object].writer_count -= 1;
@@ -160,9 +158,9 @@ template <typename T> class counter {
} else if (uses[object].writer_count > 0 && uses[object].thread != tid) {
// There is a writer of the object.
skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object),
- /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
- "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
- typeName, uses[object].thread, tid);
+ /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
+ "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld", typeName,
+ uses[object].thread, tid);
if (skipCall) {
// Wait for thread-safe access to object instead of skipping call.
while (uses.find(object) != uses.end()) {
@@ -192,18 +190,17 @@ template <typename T> class counter {
loader_platform_thread_cond_broadcast(&threadingCond);
loader_platform_thread_unlock_mutex(&threadingLock);
}
- counter(const char *name = "",
- VkDebugReportObjectTypeEXT type=VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT) {
+ counter(const char *name = "", VkDebugReportObjectTypeEXT type = VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT) {
typeName = name;
- objectType=type;
+ objectType = type;
}
};
struct layer_data {
debug_report_data *report_data;
std::vector<VkDebugReportCallbackEXT> logging_callback;
- VkLayerDispatchTable* device_dispatch_table;
- VkLayerInstanceDispatchTable* instance_dispatch_table;
+ VkLayerDispatchTable *device_dispatch_table;
+ VkLayerInstanceDispatchTable *instance_dispatch_table;
counter<VkCommandBuffer> c_VkCommandBuffer;
counter<VkDevice> c_VkDevice;
counter<VkInstance> c_VkInstance;
@@ -230,48 +227,50 @@ struct layer_data {
counter<VkSemaphore> c_VkSemaphore;
counter<VkShaderModule> c_VkShaderModule;
counter<VkDebugReportCallbackEXT> c_VkDebugReportCallbackEXT;
-#else // DISTINCT_NONDISPATCHABLE_HANDLES
+#else // DISTINCT_NONDISPATCHABLE_HANDLES
counter<uint64_t> c_uint64_t;
#endif // DISTINCT_NONDISPATCHABLE_HANDLES
- layer_data():
- report_data(nullptr),
- c_VkCommandBuffer("VkCommandBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT),
- c_VkDevice("VkDevice", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT),
- c_VkInstance("VkInstance", VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT),
- c_VkQueue("VkQueue", VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT),
+ layer_data()
+ : report_data(nullptr), c_VkCommandBuffer("VkCommandBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT),
+ c_VkDevice("VkDevice", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT),
+ c_VkInstance("VkInstance", VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT),
+ c_VkQueue("VkQueue", VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT),
#ifdef DISTINCT_NONDISPATCHABLE_HANDLES
- c_VkBuffer("VkBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT),
- c_VkBufferView("VkBufferView", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT),
- c_VkCommandPool("VkCommandPool", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT),
- c_VkDescriptorPool("VkDescriptorPool", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT),
- c_VkDescriptorSet("VkDescriptorSet", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT),
- c_VkDescriptorSetLayout("VkDescriptorSetLayout", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT),
- c_VkDeviceMemory("VkDeviceMemory", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT),
- c_VkEvent("VkEvent", VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT),
- c_VkFence("VkFence", VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT),
- c_VkFramebuffer("VkFramebuffer", VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT),
- c_VkImage("VkImage", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT),
- c_VkImageView("VkImageView", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT),
- c_VkPipeline("VkPipeline", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT),
- c_VkPipelineCache("VkPipelineCache", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT),
- c_VkPipelineLayout("VkPipelineLayout", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT),
- c_VkQueryPool("VkQueryPool", VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT),
- c_VkRenderPass("VkRenderPass", VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT),
- c_VkSampler("VkSampler", VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT),
- c_VkSemaphore("VkSemaphore", VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT),
- c_VkShaderModule("VkShaderModule", VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT),
- c_VkDebugReportCallbackEXT("VkDebugReportCallbackEXT", VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT)
-#else // DISTINCT_NONDISPATCHABLE_HANDLES
- c_uint64_t("NON_DISPATCHABLE_HANDLE", VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT)
+ c_VkBuffer("VkBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT),
+ c_VkBufferView("VkBufferView", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT),
+ c_VkCommandPool("VkCommandPool", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT),
+ c_VkDescriptorPool("VkDescriptorPool", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT),
+ c_VkDescriptorSet("VkDescriptorSet", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT),
+ c_VkDescriptorSetLayout("VkDescriptorSetLayout", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT),
+ c_VkDeviceMemory("VkDeviceMemory", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT),
+ c_VkEvent("VkEvent", VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT), c_VkFence("VkFence", VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT),
+ c_VkFramebuffer("VkFramebuffer", VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT),
+ c_VkImage("VkImage", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT),
+ c_VkImageView("VkImageView", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT),
+ c_VkPipeline("VkPipeline", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT),
+ c_VkPipelineCache("VkPipelineCache", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT),
+ c_VkPipelineLayout("VkPipelineLayout", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT),
+ c_VkQueryPool("VkQueryPool", VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT),
+ c_VkRenderPass("VkRenderPass", VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT),
+ c_VkSampler("VkSampler", VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT),
+ c_VkSemaphore("VkSemaphore", VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT),
+ c_VkShaderModule("VkShaderModule", VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT),
+ c_VkDebugReportCallbackEXT("VkDebugReportCallbackEXT", VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT)
+#else // DISTINCT_NONDISPATCHABLE_HANDLES
+ c_uint64_t("NON_DISPATCHABLE_HANDLE", VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT)
#endif // DISTINCT_NONDISPATCHABLE_HANDLES
- {};
+ {};
};
-#define WRAPPER(type) \
-static void startWriteObject(struct layer_data *my_data, type object){my_data->c_##type.startWrite(my_data->report_data, object);}\
-static void finishWriteObject(struct layer_data *my_data, type object){my_data->c_##type.finishWrite(object);}\
-static void startReadObject(struct layer_data *my_data, type object){my_data->c_##type.startRead(my_data->report_data, object);}\
-static void finishReadObject(struct layer_data *my_data, type object){my_data->c_##type.finishRead(object);}
+#define WRAPPER(type) \
+ static void startWriteObject(struct layer_data *my_data, type object) { \
+ my_data->c_##type.startWrite(my_data->report_data, object); \
+ } \
+ static void finishWriteObject(struct layer_data *my_data, type object) { my_data->c_##type.finishWrite(object); } \
+ static void startReadObject(struct layer_data *my_data, type object) { \
+ my_data->c_##type.startRead(my_data->report_data, object); \
+ } \
+ static void finishReadObject(struct layer_data *my_data, type object) { my_data->c_##type.finishRead(object); }
WRAPPER(VkDevice)
WRAPPER(VkInstance)
@@ -298,16 +297,15 @@ WRAPPER(VkSampler)
WRAPPER(VkSemaphore)
WRAPPER(VkShaderModule)
WRAPPER(VkDebugReportCallbackEXT)
-#else // DISTINCT_NONDISPATCHABLE_HANDLES
+#else // DISTINCT_NONDISPATCHABLE_HANDLES
WRAPPER(uint64_t)
#endif // DISTINCT_NONDISPATCHABLE_HANDLES
-static std::unordered_map<void*, layer_data *> layer_data_map;
+static std::unordered_map<void *, layer_data *> layer_data_map;
static std::unordered_map<VkCommandBuffer, VkCommandPool> command_pool_map;
// VkCommandBuffer needs check for implicit use of command pool
-static void startWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool=true)
-{
+static void startWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool = true) {
if (lockPool) {
loader_platform_thread_lock_mutex(&threadingLock);
VkCommandPool pool = command_pool_map[object];
@@ -316,8 +314,7 @@ static void startWriteObject(struct layer_data *my_data, VkCommandBuffer object,
}
my_data->c_VkCommandBuffer.startWrite(my_data->report_data, object);
}
-static void finishWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool=true)
-{
+static void finishWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool = true) {
my_data->c_VkCommandBuffer.finishWrite(object);
if (lockPool) {
loader_platform_thread_lock_mutex(&threadingLock);
@@ -326,16 +323,14 @@ static void finishWriteObject(struct layer_data *my_data, VkCommandBuffer object
finishWriteObject(my_data, pool);
}
}
-static void startReadObject(struct layer_data *my_data, VkCommandBuffer object)
-{
+static void startReadObject(struct layer_data *my_data, VkCommandBuffer object) {
loader_platform_thread_lock_mutex(&threadingLock);
VkCommandPool pool = command_pool_map[object];
loader_platform_thread_unlock_mutex(&threadingLock);
startReadObject(my_data, pool);
my_data->c_VkCommandBuffer.startRead(my_data->report_data, object);
}
-static void finishReadObject(struct layer_data *my_data, VkCommandBuffer object)
-{
+static void finishReadObject(struct layer_data *my_data, VkCommandBuffer object) {
my_data->c_VkCommandBuffer.finishRead(object);
loader_platform_thread_lock_mutex(&threadingLock);
VkCommandPool pool = command_pool_map[object];
diff --git a/layers/unique_objects.h b/layers/unique_objects.h
index 3ce6ddae3..b8effcf5a 100644
--- a/layers/unique_objects.h
+++ b/layers/unique_objects.h
@@ -43,13 +43,12 @@
#include "vk_layer_logging.h"
#include "vk_layer_extension_utils.h"
#include "vk_safe_struct.h"
+#include "vk_layer_utils.h"
struct layer_data {
bool wsi_enabled;
- layer_data() :
- wsi_enabled(false)
- {};
+ layer_data() : wsi_enabled(false){};
};
struct instExts {
@@ -62,49 +61,58 @@ struct instExts {
bool win32_enabled;
};
-static std::unordered_map<void*, struct instExts> instanceExtMap;
-static std::unordered_map<void*, layer_data *> layer_data_map;
-static device_table_map unique_objects_device_table_map;
-static instance_table_map unique_objects_instance_table_map;
+static std::unordered_map<void *, struct instExts> instanceExtMap;
+static std::unordered_map<void *, layer_data *> layer_data_map;
+static device_table_map unique_objects_device_table_map;
+static instance_table_map unique_objects_instance_table_map;
// Structure to wrap returned non-dispatchable objects to guarantee they have unique handles
// address of struct will be used as the unique handle
-struct VkUniqueObject
-{
+struct VkUniqueObject {
uint64_t actualObject;
};
// Handle CreateInstance
-static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreateInfo, VkInstance instance)
-{
+static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) {
uint32_t i;
VkLayerInstanceDispatchTable *pDisp = get_dispatch_table(unique_objects_instance_table_map, instance);
PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr;
- pDisp->GetPhysicalDeviceSurfaceSupportKHR = (PFN_vkGetPhysicalDeviceSurfaceSupportKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
- pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR = (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
- pDisp->GetPhysicalDeviceSurfaceFormatsKHR = (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR) gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
- pDisp->GetPhysicalDeviceSurfacePresentModesKHR = (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR) gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
+
+ pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR");
+ pDisp->GetPhysicalDeviceSurfaceSupportKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
+ pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
+ pDisp->GetPhysicalDeviceSurfaceFormatsKHR =
+ (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
+ pDisp->GetPhysicalDeviceSurfacePresentModesKHR =
+ (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
#ifdef VK_USE_PLATFORM_WIN32_KHR
- pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR) gpa(instance, "vkCreateWin32SurfaceKHR");
- pDisp->GetPhysicalDeviceWin32PresentationSupportKHR = (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
+ pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR");
+ pDisp->GetPhysicalDeviceWin32PresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
- pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR) gpa(instance, "vkCreateXcbSurfaceKHR");
- pDisp->GetPhysicalDeviceXcbPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
+ pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR");
+ pDisp->GetPhysicalDeviceXcbPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
- pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR) gpa(instance, "vkCreateXlibSurfaceKHR");
- pDisp->GetPhysicalDeviceXlibPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
+ pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR");
+ pDisp->GetPhysicalDeviceXlibPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XLIB_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
- pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR) gpa(instance, "vkCreateMirSurfaceKHR");
- pDisp->GetPhysicalDeviceMirPresentationSupportKHR = (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
+ pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR");
+ pDisp->GetPhysicalDeviceMirPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR) gpa(instance, "vkCreateWaylandSurfaceKHR");
- pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR = (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR) gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
+ pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR");
+ pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR =
+ (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR) gpa(instance, "vkCreateAndroidSurfaceKHR");
+ pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR");
#endif // VK_USE_PLATFORM_ANDROID_KHR
instanceExtMap[pDisp] = {};
@@ -138,17 +146,13 @@ static void createInstanceRegisterExtensions(const VkInstanceCreateInfo* pCreate
}
}
-VkResult
-explicit_CreateInstance(
- const VkInstanceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkInstance *pInstance)
-{
+VkResult explicit_CreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkInstance *pInstance) {
VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
- PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
if (fpCreateInstance == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -169,16 +173,15 @@ explicit_CreateInstance(
}
// Handle CreateDevice
-static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo, VkDevice device)
-{
+static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
VkLayerDispatchTable *pDisp = get_dispatch_table(unique_objects_device_table_map, device);
PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
- pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR) gpa(device, "vkCreateSwapchainKHR");
- pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR) gpa(device, "vkDestroySwapchainKHR");
- pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR) gpa(device, "vkGetSwapchainImagesKHR");
- pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR) gpa(device, "vkAcquireNextImageKHR");
- pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR) gpa(device, "vkQueuePresentKHR");
+ pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
+ pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
+ pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
+ pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
+ pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");
my_device_data->wsi_enabled = false;
for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
@@ -186,19 +189,14 @@ static void createDeviceRegisterExtensions(const VkDeviceCreateInfo* pCreateInfo
}
}
-VkResult
-explicit_CreateDevice(
- VkPhysicalDevice gpu,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice)
-{
+VkResult explicit_CreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
+ VkDevice *pDevice) {
VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
assert(chain_info->u.pLayerInfo);
PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
- PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
if (fpCreateDevice == NULL) {
return VK_ERROR_INITIALIZATION_FAILED;
}
@@ -219,47 +217,48 @@ explicit_CreateDevice(
return result;
}
-VkResult explicit_QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence)
-{
-// UNWRAP USES:
-// 0 : fence,VkFence
+VkResult explicit_QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
+ // UNWRAP USES:
+ // 0 : fence,VkFence
if (VK_NULL_HANDLE != fence) {
- fence = (VkFence)((VkUniqueObject*)fence)->actualObject;
+ fence = (VkFence)((VkUniqueObject *)fence)->actualObject;
}
-// waitSemaphoreCount : pSubmits[submitCount]->pWaitSemaphores,VkSemaphore
+ // waitSemaphoreCount : pSubmits[submitCount]->pWaitSemaphores,VkSemaphore
std::vector<VkSemaphore> original_pWaitSemaphores = {};
-// signalSemaphoreCount : pSubmits[submitCount]->pSignalSemaphores,VkSemaphore
+ // signalSemaphoreCount : pSubmits[submitCount]->pSignalSemaphores,VkSemaphore
std::vector<VkSemaphore> original_pSignalSemaphores = {};
if (pSubmits) {
- for (uint32_t index0=0; index0<submitCount; ++index0) {
+ for (uint32_t index0 = 0; index0 < submitCount; ++index0) {
if (pSubmits[index0].pWaitSemaphores) {
- for (uint32_t index1=0; index1<pSubmits[index0].waitSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pSubmits[index0].pWaitSemaphores);
+ for (uint32_t index1 = 0; index1 < pSubmits[index0].waitSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pWaitSemaphores);
original_pWaitSemaphores.push_back(pSubmits[index0].pWaitSemaphores[index1]);
- *(ppSemaphore[index1]) = (VkSemaphore)((VkUniqueObject*)pSubmits[index0].pWaitSemaphores[index1])->actualObject;
+ *(ppSemaphore[index1]) =
+ (VkSemaphore)((VkUniqueObject *)pSubmits[index0].pWaitSemaphores[index1])->actualObject;
}
}
if (pSubmits[index0].pSignalSemaphores) {
- for (uint32_t index1=0; index1<pSubmits[index0].signalSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pSubmits[index0].pSignalSemaphores);
+ for (uint32_t index1 = 0; index1 < pSubmits[index0].signalSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pSignalSemaphores);
original_pSignalSemaphores.push_back(pSubmits[index0].pSignalSemaphores[index1]);
- *(ppSemaphore[index1]) = (VkSemaphore)((VkUniqueObject*)pSubmits[index0].pSignalSemaphores[index1])->actualObject;
+ *(ppSemaphore[index1]) =
+ (VkSemaphore)((VkUniqueObject *)pSubmits[index0].pSignalSemaphores[index1])->actualObject;
}
}
}
}
VkResult result = get_dispatch_table(unique_objects_device_table_map, queue)->QueueSubmit(queue, submitCount, pSubmits, fence);
if (pSubmits) {
- for (uint32_t index0=0; index0<submitCount; ++index0) {
+ for (uint32_t index0 = 0; index0 < submitCount; ++index0) {
if (pSubmits[index0].pWaitSemaphores) {
- for (uint32_t index1=0; index1<pSubmits[index0].waitSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pSubmits[index0].pWaitSemaphores);
+ for (uint32_t index1 = 0; index1 < pSubmits[index0].waitSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pWaitSemaphores);
*(ppSemaphore[index1]) = original_pWaitSemaphores[index1];
}
}
if (pSubmits[index0].pSignalSemaphores) {
- for (uint32_t index1=0; index1<pSubmits[index0].signalSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pSubmits[index0].pSignalSemaphores);
+ for (uint32_t index1 = 0; index1 < pSubmits[index0].signalSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pSignalSemaphores);
*(ppSemaphore[index1]) = original_pSignalSemaphores[index1];
}
}
@@ -268,10 +267,14 @@ VkResult explicit_QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmi
return result;
}
-VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence)
-{
-// UNWRAP USES:
-// 0 : pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->buffer,VkBuffer, pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->pBinds[bindCount]->memory,VkDeviceMemory, pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->image,VkImage, pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->pBinds[bindCount]->memory,VkDeviceMemory, pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->image,VkImage, pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->pBinds[bindCount]->memory,VkDeviceMemory
+VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
+ // UNWRAP USES:
+ // 0 : pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->buffer,VkBuffer,
+ // pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->pBinds[bindCount]->memory,VkDeviceMemory,
+ // pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->image,VkImage,
+ // pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->pBinds[bindCount]->memory,VkDeviceMemory,
+ // pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->image,VkImage,
+ // pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->pBinds[bindCount]->memory,VkDeviceMemory
std::vector<VkBuffer> original_buffer = {};
std::vector<VkDeviceMemory> original_memory1 = {};
std::vector<VkImage> original_image1 = {};
@@ -281,93 +284,107 @@ VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const V
std::vector<VkSemaphore> original_pWaitSemaphores = {};
std::vector<VkSemaphore> original_pSignalSemaphores = {};
if (pBindInfo) {
- for (uint32_t index0=0; index0<bindInfoCount; ++index0) {
+ for (uint32_t index0 = 0; index0 < bindInfoCount; ++index0) {
if (pBindInfo[index0].pBufferBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].bufferBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].bufferBindCount; ++index1) {
if (pBindInfo[index0].pBufferBinds[index1].buffer) {
- VkBuffer* pBuffer = (VkBuffer*)&(pBindInfo[index0].pBufferBinds[index1].buffer);
+ VkBuffer *pBuffer = (VkBuffer *)&(pBindInfo[index0].pBufferBinds[index1].buffer);
original_buffer.push_back(pBindInfo[index0].pBufferBinds[index1].buffer);
- *(pBuffer) = (VkBuffer)((VkUniqueObject*)pBindInfo[index0].pBufferBinds[index1].buffer)->actualObject;
+ *(pBuffer) = (VkBuffer)((VkUniqueObject *)pBindInfo[index0].pBufferBinds[index1].buffer)->actualObject;
}
if (pBindInfo[index0].pBufferBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
original_memory1.push_back(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
- *(pDeviceMemory) = (VkDeviceMemory)((VkUniqueObject*)pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory)->actualObject;
+ *(pDeviceMemory) =
+ (VkDeviceMemory)((VkUniqueObject *)pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory)
+ ->actualObject;
}
}
}
}
}
if (pBindInfo[index0].pImageOpaqueBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].imageOpaqueBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageOpaqueBindCount; ++index1) {
if (pBindInfo[index0].pImageOpaqueBinds[index1].image) {
- VkImage* pImage = (VkImage*)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
+ VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
original_image1.push_back(pBindInfo[index0].pImageOpaqueBinds[index1].image);
- *(pImage) = (VkImage)((VkUniqueObject*)pBindInfo[index0].pImageOpaqueBinds[index1].image)->actualObject;
+ *(pImage) = (VkImage)((VkUniqueObject *)pBindInfo[index0].pImageOpaqueBinds[index1].image)->actualObject;
}
if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
original_memory2.push_back(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
- *(pDeviceMemory) = (VkDeviceMemory)((VkUniqueObject*)pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory)->actualObject;
+ *(pDeviceMemory) =
+ (VkDeviceMemory)(
+ (VkUniqueObject *)pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory)
+ ->actualObject;
}
}
}
}
}
if (pBindInfo[index0].pImageBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].imageBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageBindCount; ++index1) {
if (pBindInfo[index0].pImageBinds[index1].image) {
- VkImage* pImage = (VkImage*)&(pBindInfo[index0].pImageBinds[index1].image);
+ VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageBinds[index1].image);
original_image2.push_back(pBindInfo[index0].pImageBinds[index1].image);
- *(pImage) = (VkImage)((VkUniqueObject*)pBindInfo[index0].pImageBinds[index1].image)->actualObject;
+ *(pImage) = (VkImage)((VkUniqueObject *)pBindInfo[index0].pImageBinds[index1].image)->actualObject;
}
if (pBindInfo[index0].pImageBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
original_memory3.push_back(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
- *(pDeviceMemory) = (VkDeviceMemory)((VkUniqueObject*)pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory)->actualObject;
+ *(pDeviceMemory) =
+ (VkDeviceMemory)((VkUniqueObject *)pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory)
+ ->actualObject;
}
}
}
}
}
if (pBindInfo[index0].pWaitSemaphores) {
- for (uint32_t index1=0; index1<pBindInfo[index0].waitSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pBindInfo[index0].pWaitSemaphores);
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].waitSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pWaitSemaphores);
original_pWaitSemaphores.push_back(pBindInfo[index0].pWaitSemaphores[index1]);
- *(ppSemaphore[index1]) = (VkSemaphore)((VkUniqueObject*)pBindInfo[index0].pWaitSemaphores[index1])->actualObject;
+ *(ppSemaphore[index1]) =
+ (VkSemaphore)((VkUniqueObject *)pBindInfo[index0].pWaitSemaphores[index1])->actualObject;
}
}
if (pBindInfo[index0].pSignalSemaphores) {
- for (uint32_t index1=0; index1<pBindInfo[index0].signalSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pBindInfo[index0].pSignalSemaphores);
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].signalSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pSignalSemaphores);
original_pSignalSemaphores.push_back(pBindInfo[index0].pSignalSemaphores[index1]);
- *(ppSemaphore[index1]) = (VkSemaphore)((VkUniqueObject*)pBindInfo[index0].pSignalSemaphores[index1])->actualObject;
+ *(ppSemaphore[index1]) =
+ (VkSemaphore)((VkUniqueObject *)pBindInfo[index0].pSignalSemaphores[index1])->actualObject;
}
}
}
}
if (VK_NULL_HANDLE != fence) {
- fence = (VkFence)((VkUniqueObject*)fence)->actualObject;
+ fence = (VkFence)((VkUniqueObject *)fence)->actualObject;
}
- VkResult result = get_dispatch_table(unique_objects_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
+ VkResult result =
+ get_dispatch_table(unique_objects_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
if (pBindInfo) {
- for (uint32_t index0=0; index0<bindInfoCount; ++index0) {
+ for (uint32_t index0 = 0; index0 < bindInfoCount; ++index0) {
if (pBindInfo[index0].pBufferBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].bufferBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].bufferBindCount; ++index1) {
if (pBindInfo[index0].pBufferBinds[index1].buffer) {
- VkBuffer* pBuffer = (VkBuffer*)&(pBindInfo[index0].pBufferBinds[index1].buffer);
+ VkBuffer *pBuffer = (VkBuffer *)&(pBindInfo[index0].pBufferBinds[index1].buffer);
*(pBuffer) = original_buffer[index1];
}
if (pBindInfo[index0].pBufferBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
*(pDeviceMemory) = original_memory1[index2];
}
}
@@ -375,15 +392,16 @@ VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const V
}
}
if (pBindInfo[index0].pImageOpaqueBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].imageOpaqueBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageOpaqueBindCount; ++index1) {
if (pBindInfo[index0].pImageOpaqueBinds[index1].image) {
- VkImage* pImage = (VkImage*)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
+ VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
*(pImage) = original_image1[index1];
}
if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
*(pDeviceMemory) = original_memory2[index2];
}
}
@@ -391,15 +409,16 @@ VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const V
}
}
if (pBindInfo[index0].pImageBinds) {
- for (uint32_t index1=0; index1<pBindInfo[index0].imageBindCount; ++index1) {
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageBindCount; ++index1) {
if (pBindInfo[index0].pImageBinds[index1].image) {
- VkImage* pImage = (VkImage*)&(pBindInfo[index0].pImageBinds[index1].image);
+ VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageBinds[index1].image);
*(pImage) = original_image2[index1];
}
if (pBindInfo[index0].pImageBinds[index1].pBinds) {
- for (uint32_t index2=0; index2<pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
+ for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
if (pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory) {
- VkDeviceMemory* pDeviceMemory = (VkDeviceMemory*)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
+ VkDeviceMemory *pDeviceMemory =
+ (VkDeviceMemory *)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
*(pDeviceMemory) = original_memory3[index2];
}
}
@@ -407,14 +426,14 @@ VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const V
}
}
if (pBindInfo[index0].pWaitSemaphores) {
- for (uint32_t index1=0; index1<pBindInfo[index0].waitSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pBindInfo[index0].pWaitSemaphores);
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].waitSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pWaitSemaphores);
*(ppSemaphore[index1]) = original_pWaitSemaphores[index1];
}
}
if (pBindInfo[index0].pSignalSemaphores) {
- for (uint32_t index1=0; index1<pBindInfo[index0].signalSemaphoreCount; ++index1) {
- VkSemaphore** ppSemaphore = (VkSemaphore**)&(pBindInfo[index0].pSignalSemaphores);
+ for (uint32_t index1 = 0; index1 < pBindInfo[index0].signalSemaphoreCount; ++index1) {
+ VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pSignalSemaphores);
*(ppSemaphore[index1]) = original_pSignalSemaphores[index1];
}
}
@@ -423,36 +442,41 @@ VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const V
return result;
}
-VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines)
-{
-// STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'stage': {'module': 'VkShaderModule'}, 'layout': 'VkPipelineLayout', 'basePipelineHandle': 'VkPipeline'}}
-//LOCAL DECLS:{'pCreateInfos': 'VkComputePipelineCreateInfo*'}
- safe_VkComputePipelineCreateInfo* local_pCreateInfos = NULL;
+VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ // STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'stage': {'module': 'VkShaderModule'},
+ // 'layout': 'VkPipelineLayout', 'basePipelineHandle': 'VkPipeline'}}
+ // LOCAL DECLS:{'pCreateInfos': 'VkComputePipelineCreateInfo*'}
+ safe_VkComputePipelineCreateInfo *local_pCreateInfos = NULL;
if (pCreateInfos) {
local_pCreateInfos = new safe_VkComputePipelineCreateInfo[createInfoCount];
- for (uint32_t idx0=0; idx0<createInfoCount; ++idx0) {
+ for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) {
local_pCreateInfos[idx0].initialize(&pCreateInfos[idx0]);
if (pCreateInfos[idx0].basePipelineHandle) {
- local_pCreateInfos[idx0].basePipelineHandle = (VkPipeline)((VkUniqueObject*)pCreateInfos[idx0].basePipelineHandle)->actualObject;
+ local_pCreateInfos[idx0].basePipelineHandle =
+ (VkPipeline)((VkUniqueObject *)pCreateInfos[idx0].basePipelineHandle)->actualObject;
}
if (pCreateInfos[idx0].layout) {
- local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject*)pCreateInfos[idx0].layout)->actualObject;
+ local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject *)pCreateInfos[idx0].layout)->actualObject;
}
if (pCreateInfos[idx0].stage.module) {
- local_pCreateInfos[idx0].stage.module = (VkShaderModule)((VkUniqueObject*)pCreateInfos[idx0].stage.module)->actualObject;
+ local_pCreateInfos[idx0].stage.module =
+ (VkShaderModule)((VkUniqueObject *)pCreateInfos[idx0].stage.module)->actualObject;
}
}
}
if (pipelineCache) {
- pipelineCache = (VkPipelineCache)((VkUniqueObject*)pipelineCache)->actualObject;
+ pipelineCache = (VkPipelineCache)((VkUniqueObject *)pipelineCache)->actualObject;
}
-// CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671
- VkResult result = get_dispatch_table(unique_objects_device_table_map, device)->CreateComputePipelines(device, pipelineCache, createInfoCount, (const VkComputePipelineCreateInfo*)local_pCreateInfos, pAllocator, pPipelines);
- if (local_pCreateInfos)
- delete[] local_pCreateInfos;
+ // CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671
+ VkResult result = get_dispatch_table(unique_objects_device_table_map, device)
+ ->CreateComputePipelines(device, pipelineCache, createInfoCount,
+ (const VkComputePipelineCreateInfo *)local_pCreateInfos, pAllocator, pPipelines);
+ delete[] local_pCreateInfos;
if (VK_SUCCESS == result) {
- VkUniqueObject* pUO = NULL;
- for (uint32_t i=0; i<createInfoCount; ++i) {
+ VkUniqueObject *pUO = NULL;
+ for (uint32_t i = 0; i < createInfoCount; ++i) {
pUO = new VkUniqueObject();
pUO->actualObject = (uint64_t)pPipelines[i];
pPipelines[i] = (VkPipeline)pUO;
@@ -461,43 +485,49 @@ VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipeli
return result;
}
-VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines)
-{
-// STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'layout': 'VkPipelineLayout', 'pStages[stageCount]': {'module': 'VkShaderModule'}, 'renderPass': 'VkRenderPass', 'basePipelineHandle': 'VkPipeline'}}
-//LOCAL DECLS:{'pCreateInfos': 'VkGraphicsPipelineCreateInfo*'}
- safe_VkGraphicsPipelineCreateInfo* local_pCreateInfos = NULL;
+VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
+ const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
+ VkPipeline *pPipelines) {
+ // STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'layout': 'VkPipelineLayout',
+ // 'pStages[stageCount]': {'module': 'VkShaderModule'}, 'renderPass': 'VkRenderPass', 'basePipelineHandle': 'VkPipeline'}}
+ // LOCAL DECLS:{'pCreateInfos': 'VkGraphicsPipelineCreateInfo*'}
+ safe_VkGraphicsPipelineCreateInfo *local_pCreateInfos = NULL;
if (pCreateInfos) {
local_pCreateInfos = new safe_VkGraphicsPipelineCreateInfo[createInfoCount];
- for (uint32_t idx0=0; idx0<createInfoCount; ++idx0) {
+ for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) {
local_pCreateInfos[idx0].initialize(&pCreateInfos[idx0]);
if (pCreateInfos[idx0].basePipelineHandle) {
- local_pCreateInfos[idx0].basePipelineHandle = (VkPipeline)((VkUniqueObject*)pCreateInfos[idx0].basePipelineHandle)->actualObject;
+ local_pCreateInfos[idx0].basePipelineHandle =
+ (VkPipeline)((VkUniqueObject *)pCreateInfos[idx0].basePipelineHandle)->actualObject;
}
if (pCreateInfos[idx0].layout) {
- local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject*)pCreateInfos[idx0].layout)->actualObject;
+ local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject *)pCreateInfos[idx0].layout)->actualObject;
}
if (pCreateInfos[idx0].pStages) {
- for (uint32_t idx1=0; idx1<pCreateInfos[idx0].stageCount; ++idx1) {
+ for (uint32_t idx1 = 0; idx1 < pCreateInfos[idx0].stageCount; ++idx1) {
if (pCreateInfos[idx0].pStages[idx1].module) {
- local_pCreateInfos[idx0].pStages[idx1].module = (VkShaderModule)((VkUniqueObject*)pCreateInfos[idx0].pStages[idx1].module)->actualObject;
+ local_pCreateInfos[idx0].pStages[idx1].module =
+ (VkShaderModule)((VkUniqueObject *)pCreateInfos[idx0].pStages[idx1].module)->actualObject;
}
}
}
if (pCreateInfos[idx0].renderPass) {
- local_pCreateInfos[idx0].renderPass = (VkRenderPass)((VkUniqueObject*)pCreateInfos[idx0].renderPass)->actualObject;
+ local_pCreateInfos[idx0].renderPass = (VkRenderPass)((VkUniqueObject *)pCreateInfos[idx0].renderPass)->actualObject;
}
}
}
if (pipelineCache) {
- pipelineCache = (VkPipelineCache)((VkUniqueObject*)pipelineCache)->actualObject;
+ pipelineCache = (VkPipelineCache)((VkUniqueObject *)pipelineCache)->actualObject;
}
-// CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671
- VkResult result = get_dispatch_table(unique_objects_device_table_map, device)->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, (const VkGraphicsPipelineCreateInfo*)local_pCreateInfos, pAllocator, pPipelines);
- if (local_pCreateInfos)
- delete[] local_pCreateInfos;
+ // CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671
+ VkResult result =
+ get_dispatch_table(unique_objects_device_table_map, device)
+ ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount,
+ (const VkGraphicsPipelineCreateInfo *)local_pCreateInfos, pAllocator, pPipelines);
+ delete[] local_pCreateInfos;
if (VK_SUCCESS == result) {
- VkUniqueObject* pUO = NULL;
- for (uint32_t i=0; i<createInfoCount; ++i) {
+ VkUniqueObject *pUO = NULL;
+ for (uint32_t i = 0; i < createInfoCount; ++i) {
pUO = new VkUniqueObject();
pUO->actualObject = (uint64_t)pPipelines[i];
pPipelines[i] = (VkPipeline)pUO;
@@ -506,19 +536,20 @@ VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipel
return result;
}
-VkResult explicit_GetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t* pSwapchainImageCount, VkImage* pSwapchainImages)
-{
-// UNWRAP USES:
-// 0 : swapchain,VkSwapchainKHR, pSwapchainImages,VkImage
+VkResult explicit_GetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pSwapchainImageCount,
+ VkImage *pSwapchainImages) {
+ // UNWRAP USES:
+ // 0 : swapchain,VkSwapchainKHR, pSwapchainImages,VkImage
if (VK_NULL_HANDLE != swapchain) {
- swapchain = (VkSwapchainKHR)((VkUniqueObject*)swapchain)->actualObject;
+ swapchain = (VkSwapchainKHR)((VkUniqueObject *)swapchain)->actualObject;
}
- VkResult result = get_dispatch_table(unique_objects_device_table_map, device)->GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages);
+ VkResult result = get_dispatch_table(unique_objects_device_table_map, device)
+ ->GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages);
// TODO : Need to add corresponding code to delete these images
if (VK_SUCCESS == result) {
if ((*pSwapchainImageCount > 0) && pSwapchainImages) {
- std::vector<VkUniqueObject*> uniqueImages = {};
- for (uint32_t i=0; i<*pSwapchainImageCount; ++i) {
+ std::vector<VkUniqueObject *> uniqueImages = {};
+ for (uint32_t i = 0; i < *pSwapchainImageCount; ++i) {
uniqueImages.push_back(new VkUniqueObject());
uniqueImages[i]->actualObject = (uint64_t)pSwapchainImages[i];
pSwapchainImages[i] = (VkImage)uniqueImages[i];
diff --git a/layers/vk_layer_config.cpp b/layers/vk_layer_config.cpp
index ba66cbd60..e916cfe6d 100755..100644
--- a/layers/vk_layer_config.cpp
+++ b/layers/vk_layer_config.cpp
@@ -37,16 +37,15 @@
#define MAX_CHARS_PER_LINE 4096
-class ConfigFile
-{
-public:
+class ConfigFile {
+ public:
ConfigFile();
~ConfigFile();
const char *getOption(const std::string &_option);
void setOption(const std::string &_option, const std::string &_val);
-private:
+ private:
bool m_fileIsParsed;
std::map<std::string, std::string> m_valueMap;
@@ -55,8 +54,7 @@ private:
static ConfigFile g_configFileObj;
-static VkLayerDbgAction stringToDbgAction(const char *_enum)
-{
+static VkLayerDbgAction stringToDbgAction(const char *_enum) {
// only handles single enum values
if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_IGNORE"))
return VK_DBG_LAYER_ACTION_IGNORE;
@@ -68,11 +66,10 @@ static VkLayerDbgAction stringToDbgAction(const char *_enum)
#endif
else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_BREAK"))
return VK_DBG_LAYER_ACTION_BREAK;
- return (VkLayerDbgAction) 0;
+ return (VkLayerDbgAction)0;
}
-static VkFlags stringToDbgReportFlags(const char *_enum)
-{
+static VkFlags stringToDbgReportFlags(const char *_enum) {
// only handles single enum values
if (!strcmp(_enum, "VK_DEBUG_REPORT_INFO"))
return VK_DEBUG_REPORT_INFORMATION_BIT_EXT;
@@ -84,11 +81,10 @@ static VkFlags stringToDbgReportFlags(const char *_enum)
return VK_DEBUG_REPORT_ERROR_BIT_EXT;
else if (!strcmp(_enum, "VK_DEBUG_REPORT_DEBUG"))
return VK_DEBUG_REPORT_DEBUG_BIT_EXT;
- return (VkFlags) 0;
+ return (VkFlags)0;
}
-static unsigned int convertStringEnumVal(const char *_enum)
-{
+static unsigned int convertStringEnumVal(const char *_enum) {
unsigned int ret;
ret = stringToDbgAction(_enum);
@@ -98,31 +94,29 @@ static unsigned int convertStringEnumVal(const char *_enum)
return stringToDbgReportFlags(_enum);
}
-const char *getLayerOption(const char *_option)
-{
- return g_configFileObj.getOption(_option);
-}
+const char *getLayerOption(const char *_option) { return g_configFileObj.getOption(_option); }
// If option is NULL or stdout, return stdout, otherwise try to open option
// as a filename. If successful, return file handle, otherwise stdout
-FILE* getLayerLogOutput(const char *_option, const char *layerName)
-{
- FILE* log_output = NULL;
+FILE *getLayerLogOutput(const char *_option, const char *layerName) {
+ FILE *log_output = NULL;
if (!_option || !strcmp("stdout", _option))
log_output = stdout;
else {
log_output = fopen(_option, "w");
if (log_output == NULL) {
if (_option)
- std::cout << std::endl << layerName << " ERROR: Bad output filename specified: " << _option << ". Writing to STDOUT instead" << std::endl << std::endl;
+ std::cout << std::endl
+ << layerName << " ERROR: Bad output filename specified: " << _option << ". Writing to STDOUT instead"
+ << std::endl
+ << std::endl;
log_output = stdout;
}
}
return log_output;
}
-VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDefault)
-{
+VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDefault) {
VkDebugReportFlagsEXT flags = optionDefault;
const char *option = (g_configFileObj.getOption(_option));
@@ -158,8 +152,7 @@ VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDe
return flags;
}
-bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault)
-{
+bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault) {
bool res;
const char *option = (g_configFileObj.getOption(_option));
if (option != NULL) {
@@ -171,32 +164,22 @@ bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault)
return res;
}
-void setLayerOptionEnum(const char *_option, const char *_valEnum)
-{
+void setLayerOptionEnum(const char *_option, const char *_valEnum) {
unsigned int val = convertStringEnumVal(_valEnum);
char strVal[24];
snprintf(strVal, 24, "%u", val);
g_configFileObj.setOption(_option, strVal);
}
-void setLayerOption(const char *_option, const char *_val)
-{
- g_configFileObj.setOption(_option, _val);
-}
+void setLayerOption(const char *_option, const char *_val) { g_configFileObj.setOption(_option, _val); }
-ConfigFile::ConfigFile() : m_fileIsParsed(false)
-{
-}
+ConfigFile::ConfigFile() : m_fileIsParsed(false) {}
-ConfigFile::~ConfigFile()
-{
-}
+ConfigFile::~ConfigFile() {}
-const char *ConfigFile::getOption(const std::string &_option)
-{
+const char *ConfigFile::getOption(const std::string &_option) {
std::map<std::string, std::string>::const_iterator it;
- if (!m_fileIsParsed)
- {
+ if (!m_fileIsParsed) {
parseFile("vk_layer_settings.txt");
}
@@ -206,18 +189,15 @@ const char *ConfigFile::getOption(const std::string &_option)
return it->second.c_str();
}
-void ConfigFile::setOption(const std::string &_option, const std::string &_val)
-{
- if (!m_fileIsParsed)
- {
+void ConfigFile::setOption(const std::string &_option, const std::string &_val) {
+ if (!m_fileIsParsed) {
parseFile("vk_layer_settings.txt");
}
m_valueMap[_option] = _val;
}
-void ConfigFile::parseFile(const char *filename)
-{
+void ConfigFile::parseFile(const char *filename) {
std::ifstream file;
char buf[MAX_CHARS_PER_LINE];
@@ -230,20 +210,18 @@ void ConfigFile::parseFile(const char *filename)
// read tokens from the file and form option, value pairs
file.getline(buf, MAX_CHARS_PER_LINE);
- while (!file.eof())
- {
+ while (!file.eof()) {
char option[512];
char value[512];
char *pComment;
- //discard any comments delimited by '#' in the line
+ // discard any comments delimited by '#' in the line
pComment = strchr(buf, '#');
if (pComment)
*pComment = '\0';
- if (sscanf(buf, " %511[^\n\t =] = %511[^\n \t]", option, value) == 2)
- {
+ if (sscanf(buf, " %511[^\n\t =] = %511[^\n \t]", option, value) == 2) {
std::string optStr(option);
std::string valStr(value);
m_valueMap[optStr] = valStr;
@@ -252,8 +230,7 @@ void ConfigFile::parseFile(const char *filename)
}
}
-void print_msg_flags(VkFlags msgFlags, char *msg_flags)
-{
+void print_msg_flags(VkFlags msgFlags, char *msg_flags) {
bool separator = false;
msg_flags[0] = 0;
@@ -262,23 +239,26 @@ void print_msg_flags(VkFlags msgFlags, char *msg_flags)
separator = true;
}
if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) {
- if (separator) strcat(msg_flags, ",");
+ if (separator)
+ strcat(msg_flags, ",");
strcat(msg_flags, "INFO");
separator = true;
}
if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) {
- if (separator) strcat(msg_flags, ",");
+ if (separator)
+ strcat(msg_flags, ",");
strcat(msg_flags, "WARN");
separator = true;
}
if (msgFlags & VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT) {
- if (separator) strcat(msg_flags, ",");
+ if (separator)
+ strcat(msg_flags, ",");
strcat(msg_flags, "PERF");
separator = true;
}
if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
- if (separator) strcat(msg_flags, ",");
+ if (separator)
+ strcat(msg_flags, ",");
strcat(msg_flags, "ERROR");
}
}
-
diff --git a/layers/vk_layer_config.h b/layers/vk_layer_config.h
index 7d62041ce..0111e0522 100644
--- a/layers/vk_layer_config.h
+++ b/layers/vk_layer_config.h
@@ -31,7 +31,7 @@ extern "C" {
#endif
const char *getLayerOption(const char *_option);
-FILE* getLayerLogOutput(const char *_option, const char *layerName);
+FILE *getLayerLogOutput(const char *_option, const char *layerName);
VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDefault);
bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault);
diff --git a/layers/vk_layer_data.h b/layers/vk_layer_data.h
index b51736c7a..36eb1b500 100644
--- a/layers/vk_layer_data.h
+++ b/layers/vk_layer_data.h
@@ -30,19 +30,16 @@
#include <unordered_map>
#include "vk_layer_table.h"
-template<typename DATA_T>
-DATA_T *get_my_data_ptr(void *data_key,
- std::unordered_map<void *, DATA_T*> &layer_data_map)
-{
+template <typename DATA_T> DATA_T *get_my_data_ptr(void *data_key, std::unordered_map<void *, DATA_T *> &layer_data_map) {
DATA_T *debug_data;
typename std::unordered_map<void *, DATA_T *>::const_iterator got;
/* TODO: We probably should lock here, or have caller lock */
got = layer_data_map.find(data_key);
- if ( got == layer_data_map.end() ) {
+ if (got == layer_data_map.end()) {
debug_data = new DATA_T;
- layer_data_map[(void *) data_key] = debug_data;
+ layer_data_map[(void *)data_key] = debug_data;
} else {
debug_data = got->second;
}
@@ -51,4 +48,3 @@ DATA_T *get_my_data_ptr(void *data_key,
}
#endif // LAYER_DATA_H
-
diff --git a/layers/vk_layer_debug_marker_table.cpp b/layers/vk_layer_debug_marker_table.cpp
deleted file mode 100644
index 08ed5d8ca..000000000
--- a/layers/vk_layer_debug_marker_table.cpp
+++ /dev/null
@@ -1,62 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Tobin Ehlis <tobin@lunarg.com>
- */
-
-#include <assert.h>
-#include <unordered_map>
-#include "vulkan/vk_debug_marker_layer.h"
-std::unordered_map<void *, VkLayerDebugMarkerDispatchTable *> tableDebugMarkerMap;
-
-/* Various dispatchable objects will use the same underlying dispatch table if they
- * are created from that "parent" object. Thus use pointer to dispatch table
- * as the key to these table maps.
- * Instance -> PhysicalDevice
- * Device -> CommandBuffer or Queue
- * If use the object themselves as key to map then implies Create entrypoints have to be intercepted
- * and a new key inserted into map */
-VkLayerDebugMarkerDispatchTable * initDebugMarkerTable(VkDevice device)
-{
- VkLayerDebugMarkerDispatchTable *pDebugMarkerTable;
-
- assert(device);
- VkLayerDispatchTable *pDisp = *(VkLayerDispatchTable **) device;
-
- std::unordered_map<void *, VkLayerDebugMarkerDispatchTable *>::const_iterator it = tableDebugMarkerMap.find((void *) pDisp);
- if (it == tableDebugMarkerMap.end())
- {
- pDebugMarkerTable = new VkLayerDebugMarkerDispatchTable;
- tableDebugMarkerMap[(void *) pDisp] = pDebugMarkerTable;
- } else
- {
- return it->second;
- }
-
- pDebugMarkerTable->CmdDbgMarkerBegin = (PFN_vkCmdDbgMarkerBegin) pDisp->GetDeviceProcAddr(device, "vkCmdDbgMarkerBegin");
- pDebugMarkerTable->CmdDbgMarkerEnd = (PFN_vkCmdDbgMarkerEnd) pDisp->GetDeviceProcAddr(device, "vkCmdDbgMarkerEnd");
- pDebugMarkerTable->DbgSetObjectTag = (PFN_vkDbgSetObjectTag) pDisp->GetDeviceProcAddr(device, "vkDbgSetObjectTag");
- pDebugMarkerTable->DbgSetObjectName = (PFN_vkDbgSetObjectName) pDisp->GetDeviceProcAddr(device, "vkDbgSetObjectName");
-
- return pDebugMarkerTable;
-}
diff --git a/layers/vk_layer_debug_marker_table.h b/layers/vk_layer_debug_marker_table.h
deleted file mode 100644
index b417b82aa..000000000
--- a/layers/vk_layer_debug_marker_table.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* Copyright (c) 2015-2016 The Khronos Group Inc.
- * Copyright (c) 2015-2016 Valve Corporation
- * Copyright (c) 2015-2016 LunarG, Inc.
- * Copyright (c) 2015-2016 Google Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and/or associated documentation files (the "Materials"), to
- * deal in the Materials without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Materials, and to permit persons to whom the Materials
- * are furnished to do so, subject to the following conditions:
- *
- * The above copyright notice(s) and this permission notice shall be included
- * in all copies or substantial portions of the Materials.
- *
- * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- *
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
- * USE OR OTHER DEALINGS IN THE MATERIALS
- *
- * Author: Michael Lentine <mlentine@google.com>
- * Author: Tobin Ehlis <tobin@lunarg.com>
- */
-
-#pragma once
-
-#include <cassert>
-#include <unordered_map>
-
-extern std::unordered_map<void *, VkLayerDebugMarkerDispatchTable *> tableDebugMarkerMap;
-VkLayerDebugMarkerDispatchTable * initDebugMarkerTable(VkDevice dev);
-
-// Map lookup must be thread safe
-static inline VkLayerDebugMarkerDispatchTable *debug_marker_dispatch_table(void* object)
-{
- VkLayerDebugMarkerDispatchTable *pDisp = *(VkLayerDebugMarkerDispatchTable **) object;
- std::unordered_map<void *, VkLayerDebugMarkerDispatchTable *>::const_iterator it = tableDebugMarkerMap.find((void *) pDisp);
- assert(it != tableDebugMarkerMap.end() && "Not able to find debug marker dispatch entry");
- return it->second;
-}
-
diff --git a/layers/vk_layer_extension_utils.cpp b/layers/vk_layer_extension_utils.cpp
index 06ca6c008..e566f50e8 100644
--- a/layers/vk_layer_extension_utils.cpp
+++ b/layers/vk_layer_extension_utils.cpp
@@ -34,12 +34,8 @@
* This file contains utility functions for layers
*/
-VkResult util_GetExtensionProperties(
- const uint32_t count,
- const VkExtensionProperties *layer_extensions,
- uint32_t* pCount,
- VkExtensionProperties* pProperties)
-{
+VkResult util_GetExtensionProperties(const uint32_t count, const VkExtensionProperties *layer_extensions, uint32_t *pCount,
+ VkExtensionProperties *pProperties) {
uint32_t copy_size;
if (pProperties == NULL || layer_extensions == NULL) {
@@ -57,12 +53,8 @@ VkResult util_GetExtensionProperties(
return VK_SUCCESS;
}
-VkResult util_GetLayerProperties(
- const uint32_t count,
- const VkLayerProperties *layer_properties,
- uint32_t* pCount,
- VkLayerProperties* pProperties)
-{
+VkResult util_GetLayerProperties(const uint32_t count, const VkLayerProperties *layer_properties, uint32_t *pCount,
+ VkLayerProperties *pProperties) {
uint32_t copy_size;
if (pProperties == NULL || layer_properties == NULL) {
diff --git a/layers/vk_layer_extension_utils.h b/layers/vk_layer_extension_utils.h
index 05e15f88f..0a07a78e9 100644
--- a/layers/vk_layer_extension_utils.h
+++ b/layers/vk_layer_extension_utils.h
@@ -37,18 +37,11 @@
*/
extern "C" {
-VkResult util_GetExtensionProperties(
- const uint32_t count,
- const VkExtensionProperties *layer_extensions,
- uint32_t* pCount,
- VkExtensionProperties* pProperties);
+VkResult util_GetExtensionProperties(const uint32_t count, const VkExtensionProperties *layer_extensions, uint32_t *pCount,
+ VkExtensionProperties *pProperties);
-VkResult util_GetLayerProperties(
- const uint32_t count,
- const VkLayerProperties *layer_properties,
- uint32_t* pCount,
- VkLayerProperties* pProperties);
+VkResult util_GetLayerProperties(const uint32_t count, const VkLayerProperties *layer_properties, uint32_t *pCount,
+ VkLayerProperties *pProperties);
} // extern "C"
#endif // LAYER_EXTENSION_UTILS_H
-
diff --git a/layers/vk_layer_logging.h b/layers/vk_layer_logging.h
index d2f731da5..096259328 100644
--- a/layers/vk_layer_logging.h
+++ b/layers/vk_layer_logging.h
@@ -45,32 +45,18 @@ typedef struct _debug_report_data {
bool g_DEBUG_REPORT;
} debug_report_data;
-template debug_report_data *get_my_data_ptr<debug_report_data>(
- void *data_key,
- std::unordered_map<void *, debug_report_data *> &data_map);
+template debug_report_data *get_my_data_ptr<debug_report_data>(void *data_key,
+ std::unordered_map<void *, debug_report_data *> &data_map);
// Utility function to handle reporting
-static inline VkBool32 debug_report_log_msg(
- debug_report_data *debug_data,
- VkFlags msgFlags,
- VkDebugReportObjectTypeEXT objectType,
- uint64_t srcObject,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg)
-{
+static inline VkBool32 debug_report_log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType,
+ uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix,
+ const char *pMsg) {
VkBool32 bail = false;
VkLayerDbgFunctionNode *pTrav = debug_data->g_pDbgFunctionHead;
while (pTrav) {
if (pTrav->msgFlags & msgFlags) {
- if (pTrav->pfnMsgCallback(msgFlags,
- objectType, srcObject,
- location,
- msgCode,
- pLayerPrefix,
- pMsg,
- pTrav->pUserData)) {
+ if (pTrav->pfnMsgCallback(msgFlags, objectType, srcObject, location, msgCode, pLayerPrefix, pMsg, pTrav->pUserData)) {
bail = true;
}
}
@@ -80,21 +66,20 @@ static inline VkBool32 debug_report_log_msg(
return bail;
}
-static inline debug_report_data *debug_report_create_instance(
- VkLayerInstanceDispatchTable *table,
- VkInstance inst,
- uint32_t extension_count,
- const char*const* ppEnabledExtensions) // layer or extension name to be enabled
+static inline debug_report_data *
+debug_report_create_instance(VkLayerInstanceDispatchTable *table, VkInstance inst, uint32_t extension_count,
+ const char *const *ppEnabledExtensions) // layer or extension name to be enabled
{
- debug_report_data *debug_data;
+ debug_report_data *debug_data;
PFN_vkGetInstanceProcAddr gpa = table->GetInstanceProcAddr;
- table->CreateDebugReportCallbackEXT = (PFN_vkCreateDebugReportCallbackEXT) gpa(inst, "vkCreateDebugReportCallbackEXT");
- table->DestroyDebugReportCallbackEXT = (PFN_vkDestroyDebugReportCallbackEXT) gpa(inst, "vkDestroyDebugReportCallbackEXT");
- table->DebugReportMessageEXT = (PFN_vkDebugReportMessageEXT) gpa(inst, "vkDebugReportMessageEXT");
+ table->CreateDebugReportCallbackEXT = (PFN_vkCreateDebugReportCallbackEXT)gpa(inst, "vkCreateDebugReportCallbackEXT");
+ table->DestroyDebugReportCallbackEXT = (PFN_vkDestroyDebugReportCallbackEXT)gpa(inst, "vkDestroyDebugReportCallbackEXT");
+ table->DebugReportMessageEXT = (PFN_vkDebugReportMessageEXT)gpa(inst, "vkDebugReportMessageEXT");
- debug_data = (debug_report_data *) malloc(sizeof(debug_report_data));
- if (!debug_data) return NULL;
+ debug_data = (debug_report_data *)malloc(sizeof(debug_report_data));
+ if (!debug_data)
+ return NULL;
memset(debug_data, 0, sizeof(debug_report_data));
for (uint32_t i = 0; i < extension_count; i++) {
@@ -106,8 +91,7 @@ static inline debug_report_data *debug_report_create_instance(
return debug_data;
}
-static inline void layer_debug_report_destroy_instance(debug_report_data *debug_data)
-{
+static inline void layer_debug_report_destroy_instance(debug_report_data *debug_data) {
VkLayerDbgFunctionNode *pTrav;
VkLayerDbgFunctionNode *pTravNext;
@@ -120,12 +104,9 @@ static inline void layer_debug_report_destroy_instance(debug_report_data *debug_
while (pTrav) {
pTravNext = pTrav->pNext;
- debug_report_log_msg(
- debug_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT, (uint64_t) pTrav->msgCallback,
- 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT,
- "DebugReport",
- "Debug Report callbacks not removed before DestroyInstance");
+ debug_report_log_msg(debug_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
+ (uint64_t)pTrav->msgCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport",
+ "Debug Report callbacks not removed before DestroyInstance");
free(pTrav);
pTrav = pTravNext;
@@ -135,34 +116,25 @@ static inline void layer_debug_report_destroy_instance(debug_report_data *debug_
free(debug_data);
}
-static inline debug_report_data *layer_debug_report_create_device(
- debug_report_data *instance_debug_data,
- VkDevice device)
-{
+static inline debug_report_data *layer_debug_report_create_device(debug_report_data *instance_debug_data, VkDevice device) {
/* DEBUG_REPORT shares data between Instance and Device,
* so just return instance's data pointer */
return instance_debug_data;
}
-static inline void layer_debug_report_destroy_device(VkDevice device)
-{
- /* Nothing to do since we're using instance data record */
-}
+static inline void layer_debug_report_destroy_device(VkDevice device) { /* Nothing to do since we're using instance data record */ }
-static inline VkResult layer_create_msg_callback(
- debug_report_data *debug_data,
- const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDebugReportCallbackEXT *pCallback)
-{
+static inline VkResult layer_create_msg_callback(debug_report_data *debug_data,
+ const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pCallback) {
/* TODO: Use app allocator */
- VkLayerDbgFunctionNode *pNewDbgFuncNode = (VkLayerDbgFunctionNode*)malloc(sizeof(VkLayerDbgFunctionNode));
+ VkLayerDbgFunctionNode *pNewDbgFuncNode = (VkLayerDbgFunctionNode *)malloc(sizeof(VkLayerDbgFunctionNode));
if (!pNewDbgFuncNode)
return VK_ERROR_OUT_OF_HOST_MEMORY;
// Handle of 0 is logging_callback so use allocated Node address as unique handle
if (!(*pCallback))
- *pCallback = (VkDebugReportCallbackEXT) pNewDbgFuncNode;
+ *pCallback = (VkDebugReportCallbackEXT)pNewDbgFuncNode;
pNewDbgFuncNode->msgCallback = *pCallback;
pNewDbgFuncNode->pfnMsgCallback = pCreateInfo->pfnCallback;
pNewDbgFuncNode->msgFlags = pCreateInfo->flags;
@@ -172,20 +144,13 @@ static inline VkResult layer_create_msg_callback(
debug_data->g_pDbgFunctionHead = pNewDbgFuncNode;
debug_data->active_flags |= pCreateInfo->flags;
- debug_report_log_msg(
- debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT, (uint64_t) *pCallback,
- 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT,
- "DebugReport",
- "Added callback");
+ debug_report_log_msg(debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
+ (uint64_t)*pCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport", "Added callback");
return VK_SUCCESS;
}
-static inline void layer_destroy_msg_callback(
- debug_report_data *debug_data,
- VkDebugReportCallbackEXT callback,
- const VkAllocationCallbacks *pAllocator)
-{
+static inline void layer_destroy_msg_callback(debug_report_data *debug_data, VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks *pAllocator) {
VkLayerDbgFunctionNode *pTrav = debug_data->g_pDbgFunctionHead;
VkLayerDbgFunctionNode *pPrev = pTrav;
bool matched;
@@ -198,12 +163,9 @@ static inline void layer_destroy_msg_callback(
if (debug_data->g_pDbgFunctionHead == pTrav) {
debug_data->g_pDbgFunctionHead = pTrav->pNext;
}
- debug_report_log_msg(
- debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT,
- VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT, (uint64_t) pTrav->msgCallback,
- 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT,
- "DebugReport",
- "Destroyed callback");
+ debug_report_log_msg(debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
+ (uint64_t)pTrav->msgCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport",
+ "Destroyed callback");
} else {
matched = false;
debug_data->active_flags |= pTrav->msgFlags;
@@ -217,23 +179,20 @@ static inline void layer_destroy_msg_callback(
}
}
-static inline PFN_vkVoidFunction debug_report_get_instance_proc_addr(
- debug_report_data *debug_data,
- const char *funcName)
-{
+static inline PFN_vkVoidFunction debug_report_get_instance_proc_addr(debug_report_data *debug_data, const char *funcName) {
if (!debug_data || !debug_data->g_DEBUG_REPORT) {
return NULL;
}
if (!strcmp(funcName, "vkCreateDebugReportCallbackEXT")) {
- return (PFN_vkVoidFunction) vkCreateDebugReportCallbackEXT;
+ return (PFN_vkVoidFunction)vkCreateDebugReportCallbackEXT;
}
if (!strcmp(funcName, "vkDestroyDebugReportCallbackEXT")) {
- return (PFN_vkVoidFunction) vkDestroyDebugReportCallbackEXT;
+ return (PFN_vkVoidFunction)vkDestroyDebugReportCallbackEXT;
}
if (!strcmp(funcName, "vkDebugReportMessageEXT")) {
- return (PFN_vkVoidFunction) vkDebugReportMessageEXT;
+ return (PFN_vkVoidFunction)vkDebugReportMessageEXT;
}
return NULL;
@@ -244,10 +203,7 @@ static inline PFN_vkVoidFunction debug_report_get_instance_proc_addr(
* Allows layer to defer collecting & formating data if the
* message will be discarded.
*/
-static inline VkBool32 will_log_msg(
- debug_report_data *debug_data,
- VkFlags msgFlags)
-{
+static inline VkBool32 will_log_msg(debug_report_data *debug_data, VkFlags msgFlags) {
if (!debug_data || !(debug_data->active_flags & msgFlags)) {
/* message is not wanted */
return false;
@@ -262,28 +218,13 @@ static inline VkBool32 will_log_msg(
* is only computed if a message needs to be logged
*/
#ifndef WIN32
-static inline VkBool32 log_msg(
- debug_report_data *debug_data,
- VkFlags msgFlags,
- VkDebugReportObjectTypeEXT objectType,
- uint64_t srcObject,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* format,
- ...) __attribute__ ((format (printf, 8, 9)));
+static inline VkBool32 log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType,
+ uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *format,
+ ...) __attribute__((format(printf, 8, 9)));
#endif
-static inline VkBool32 log_msg(
- debug_report_data *debug_data,
- VkFlags msgFlags,
- VkDebugReportObjectTypeEXT objectType,
- uint64_t srcObject,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* format,
- ...)
-{
+static inline VkBool32 log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType,
+ uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *format,
+ ...) {
if (!debug_data || !(debug_data->active_flags & msgFlags)) {
/* message is not wanted */
return false;
@@ -294,50 +235,34 @@ static inline VkBool32 log_msg(
va_start(argptr, format);
vsnprintf(str, 1024, format, argptr);
va_end(argptr);
- return debug_report_log_msg(
- debug_data, msgFlags, objectType,
- srcObject, location, msgCode,
- pLayerPrefix, str);
+ return debug_report_log_msg(debug_data, msgFlags, objectType, srcObject, location, msgCode, pLayerPrefix, str);
}
-static inline VKAPI_ATTR VkBool32 VKAPI_CALL log_callback(
- VkFlags msgFlags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t srcObject,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg,
- void* pUserData)
-{
+static inline VKAPI_ATTR VkBool32 VKAPI_CALL log_callback(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject,
+ size_t location, int32_t msgCode, const char *pLayerPrefix,
+ const char *pMsg, void *pUserData) {
char msg_flags[30];
print_msg_flags(msgFlags, msg_flags);
- fprintf((FILE *) pUserData, "%s(%s): object: %#" PRIx64 " type: %d location: %lu msgCode: %d: %s\n",
- pLayerPrefix, msg_flags, srcObject, objType, (unsigned long)location, msgCode, pMsg);
- fflush((FILE *) pUserData);
+ fprintf((FILE *)pUserData, "%s(%s): object: %#" PRIx64 " type: %d location: %lu msgCode: %d: %s\n", pLayerPrefix, msg_flags,
+ srcObject, objType, (unsigned long)location, msgCode, pMsg);
+ fflush((FILE *)pUserData);
return false;
}
-static inline VKAPI_ATTR VkBool32 VKAPI_CALL win32_debug_output_msg(
- VkFlags msgFlags,
- VkDebugReportObjectTypeEXT objType,
- uint64_t srcObject,
- size_t location,
- int32_t msgCode,
- const char* pLayerPrefix,
- const char* pMsg,
- void* pUserData)
-{
+static inline VKAPI_ATTR VkBool32 VKAPI_CALL win32_debug_output_msg(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
+ uint64_t srcObject, size_t location, int32_t msgCode,
+ const char *pLayerPrefix, const char *pMsg, void *pUserData) {
#ifdef WIN32
char msg_flags[30];
char buf[2048];
print_msg_flags(msgFlags, msg_flags);
- _snprintf(buf, sizeof(buf) - 1, "%s (%s): object: 0x%" PRIxPTR " type: %d location: " PRINTF_SIZE_T_SPECIFIER " msgCode: %d: %s\n",
- pLayerPrefix, msg_flags, (size_t)srcObject, objType, location, msgCode, pMsg);
+ _snprintf(buf, sizeof(buf) - 1,
+ "%s (%s): object: 0x%" PRIxPTR " type: %d location: " PRINTF_SIZE_T_SPECIFIER " msgCode: %d: %s\n", pLayerPrefix,
+ msg_flags, (size_t)srcObject, objType, location, msgCode, pMsg);
OutputDebugString(buf);
#endif
diff --git a/layers/vk_layer_settings.txt b/layers/vk_layer_settings.txt
index 48dcff2b0..d83b8326d 100644
--- a/layers/vk_layer_settings.txt
+++ b/layers/vk_layer_settings.txt
@@ -1,75 +1,87 @@
# This is an example vk_layer_settings.txt file.
# This file allows for per-layer settings which can dynamically affect layer
# behavior. Comments in this file are denoted with the "#" char.
-# Settings lines are of the form "<LayerName><SettingName> = <SettingValue>"
+# Settings lines are of the form "<LayerIdentifier>.<SettingName> = <SettingValue>"
+#
+# <LayerIdentifier> is typically the official layer name, minus the VK_LAYER prefix,
+# converted to lower-case snake case -- i.e., for VK_LAYER_LUNARG_core_validation the layer
+# identifier is 'lunarg_core_validation', and for VK_LAYER_GOOGLE_threading the layer
+# identifier is 'google_threading'.
+#
# There are some common settings that are used by each layer.
# Below is a general description of three common settings, followed by
# actual template settings for each layer in the SDK.
#
-# Common settings description:
-# <LayerName>DebugAction : This is an enum value indicating what action is to
-# be taken when a layer wants to report information. Possible settings values
-# are defined in the vk_layer.h header file. These settings are:
-# VK_DBG_LAYER_ACTION_IGNORE - Take no action
-# VK_DBG_LAYER_ACTION_LOG_MSG - Log a txt message to stdout or to a log file
-# specified via the <LayerName>LogFilename setting (see below)
-# VK_DBG_LAYER_ACTION_CALLBACK - Call user defined callback function(s) that
-# have been registered via the VK_EXT_LUNARG_debug_report extension. Since
-# app must register callback, this is a NOOP for the settings file.
-# VK_DBG_LAYER_ACTION_BREAK - Trigger a breakpoint.
+# Common settings descriptions:
+# =============================
#
-# <LayerName>ReportFlags : This is a comma-delineated list of options telling
-# the layer what types of messages it should report back. Options are:
-# info - Report informational messages
-# warn - Report warnings of using the API in an unrecommended manner which may
-# also lead to undefined behavior
-# perf - Report using the API in a way that may cause suboptimal performance
-# error - Report errors in API usage
-# debug - For layer development. Report messages for debugging layer behavior
+# DEBUG_ACTION:
+# =============
+# <LayerIdentifier>.debug_action : This is an enum value indicating what action is to
+# be taken when a layer wants to report information. Possible settings values
+# are defined in the vk_layer.h header file. These settings are:
+# VK_DBG_LAYER_ACTION_IGNORE - Take no action
+# VK_DBG_LAYER_ACTION_LOG_MSG - Log a text message to stdout or to a log file
+# specified via the <LayerIdentifier>.log_filename setting (see below)
+# VK_DBG_LAYER_ACTION_CALLBACK - Call user-defined callback function(s) that
+# have been registered via the VK_EXT_debug_report extension. Since the
+# app must register the callback, this is a NOOP for the settings file.
+# VK_DBG_LAYER_ACTION_BREAK - Trigger a breakpoint.
#
-# <LayerName>LogFilename : output filename. Can be relative to location of
-# vk_layer_settings.txt file, or an absolute path. If no filename is
-# specified or if filename has invalid path, then stdout is used by default.
+# REPORT_FLAGS:
+# =============
+# <LayerIdentifier>.report_flags : This is a comma-delimited list of options telling
+# the layer what types of messages it should report back. Options are:
+# info - Report informational messages
+# warn - Report warnings about API usage that is not recommended and may
+# lead to undefined behavior
+# perf - Report using the API in a way that may cause suboptimal performance
+# error - Report errors in API usage
+# debug - For layer development. Report messages for debugging layer behavior
#
-# Example of actual settings for each layer
+# LOG_FILENAME:
+# =============
+# <LayerIdentifier>.log_filename : output filename. Can be relative to the location
+# of the vk_layer_settings.txt file, or an absolute path. If no filename is
+# specified, or if the path is invalid, stdout is used by default.
#
-# VK_LUNARG_LAYER_device_limits Settings
-DeviceLimitsDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-DeviceLimitsReportFlags = error,warn,perf
-DeviceLimitsLogFilename = stdout
-
-# VK_LUNARG_LAYER_draw_state Settings
-DrawStateDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-DrawStateReportFlags = error,warn,perf
-DrawStateLogFilename = stdout
+#
+#
+# Example of actual settings for each layer:
+# ==========================================
+#
+# VK_LAYER_LUNARG_device_limits Settings
+lunarg_device_limits.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_device_limits.report_flags = error,warn,perf
+lunarg_device_limits.log_filename = stdout
-# VK_LUNARG_LAYER_image Settings
-ImageDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ImageReportFlags = error,warn,perf
-ImageLogFilename = stdout
+# VK_LAYER_LUNARG_core_validation Settings
+lunarg_core_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_core_validation.report_flags = error,warn,perf
+lunarg_core_validation.log_filename = stdout
-# VK_LUNARG_LAYER_mem_tracker Settings
-MemTrackerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-MemTrackerReportFlags = error,warn,perf
-MemTrackerLogFilename = stdout
+# VK_LAYER_LUNARG_image Settings
+lunarg_image.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_image.report_flags = error,warn,perf
+lunarg_image.log_filename = stdout
-# VK_LUNARG_LAYER_object_tracker Settings
-ObjectTrackerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ObjectTrackerReportFlags = error,warn,perf
-ObjectTrackerLogFilename = stdout
+# VK_LAYER_LUNARG_object_tracker Settings
+lunarg_object_tracker.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_object_tracker.report_flags = error,warn,perf
+lunarg_object_tracker.log_filename = stdout
-# VK_LUNARG_LAYER_param_checker Settings
-ParamCheckerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ParamCheckerReportFlags = error,warn,perf
-ParamCheckerLogFilename = stdout
+# VK_LAYER_LUNARG_parameter_validation Settings
+lunarg_parameter_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_parameter_validation.report_flags = error,warn,perf
+lunarg_parameter_validation.log_filename = stdout
-# VK_LUNARG_LAYER_swapchain Settings
-SwapchainDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-SwapchainReportFlags = error,warn,perf
-SwapchainLogFilename = stdout
+# VK_LAYER_LUNARG_swapchain Settings
+lunarg_swapchain.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_swapchain.report_flags = error,warn,perf
+lunarg_swapchain.log_filename = stdout
-# VK_LUNARG_LAYER_threading Settings
-ThreadingDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ThreadingReportFlags = error,warn,perf
-ThreadingLogFilename = stdout
+# VK_LAYER_GOOGLE_threading Settings
+google_threading.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+google_threading.report_flags = error,warn,perf
+google_threading.log_filename = stdout
diff --git a/layers/vk_layer_table.cpp b/layers/vk_layer_table.cpp
index 5e364e5fa..153f565c3 100644
--- a/layers/vk_layer_table.cpp
+++ b/layers/vk_layer_table.cpp
@@ -35,21 +35,20 @@ static instance_table_map tableInstanceMap;
#define DISPATCH_MAP_DEBUG 0
// Map lookup must be thread safe
-VkLayerDispatchTable *device_dispatch_table(void* object)
-{
+VkLayerDispatchTable *device_dispatch_table(void *object) {
dispatch_key key = get_dispatch_key(object);
- device_table_map::const_iterator it = tableMap.find((void *) key);
+ device_table_map::const_iterator it = tableMap.find((void *)key);
assert(it != tableMap.end() && "Not able to find device dispatch entry");
return it->second;
}
-VkLayerInstanceDispatchTable *instance_dispatch_table(void* object)
-{
+VkLayerInstanceDispatchTable *instance_dispatch_table(void *object) {
dispatch_key key = get_dispatch_key(object);
- instance_table_map::const_iterator it = tableInstanceMap.find((void *) key);
+ instance_table_map::const_iterator it = tableInstanceMap.find((void *)key);
#if DISPATCH_MAP_DEBUG
if (it != tableInstanceMap.end()) {
- fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key, it->second);
+ fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
+ it->second);
} else {
fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
}
@@ -58,8 +57,7 @@ VkLayerInstanceDispatchTable *instance_dispatch_table(void* object)
return it->second;
}
-void destroy_dispatch_table(device_table_map &map, dispatch_key key)
-{
+void destroy_dispatch_table(device_table_map &map, dispatch_key key) {
#if DISPATCH_MAP_DEBUG
device_table_map::const_iterator it = map.find((void *)key);
if (it != map.end()) {
@@ -72,8 +70,7 @@ void destroy_dispatch_table(device_table_map &map, dispatch_key key)
map.erase(key);
}
-void destroy_dispatch_table(instance_table_map &map, dispatch_key key)
-{
+void destroy_dispatch_table(instance_table_map &map, dispatch_key key) {
#if DISPATCH_MAP_DEBUG
instance_table_map::const_iterator it = map.find((void *)key);
if (it != map.end()) {
@@ -86,23 +83,17 @@ void destroy_dispatch_table(instance_table_map &map, dispatch_key key)
map.erase(key);
}
-void destroy_device_dispatch_table(dispatch_key key)
-{
- destroy_dispatch_table(tableMap, key);
-}
+void destroy_device_dispatch_table(dispatch_key key) { destroy_dispatch_table(tableMap, key); }
-void destroy_instance_dispatch_table(dispatch_key key)
-{
- destroy_dispatch_table(tableInstanceMap, key);
-}
+void destroy_instance_dispatch_table(dispatch_key key) { destroy_dispatch_table(tableInstanceMap, key); }
-VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void* object)
-{
+VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void *object) {
dispatch_key key = get_dispatch_key(object);
- device_table_map::const_iterator it = map.find((void *) key);
+ device_table_map::const_iterator it = map.find((void *)key);
#if DISPATCH_MAP_DEBUG
if (it != map.end()) {
- fprintf(stderr, "device_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key, it->second);
+ fprintf(stderr, "device_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
+ it->second);
} else {
fprintf(stderr, "device_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
}
@@ -111,14 +102,14 @@ VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void* object)
return it->second;
}
-VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void* object)
-{
-// VkLayerInstanceDispatchTable *pDisp = *(VkLayerInstanceDispatchTable **) object;
+VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void *object) {
+ // VkLayerInstanceDispatchTable *pDisp = *(VkLayerInstanceDispatchTable **) object;
dispatch_key key = get_dispatch_key(object);
- instance_table_map::const_iterator it = map.find((void *) key);
+ instance_table_map::const_iterator it = map.find((void *)key);
#if DISPATCH_MAP_DEBUG
if (it != map.end()) {
- fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key, it->second);
+ fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
+ it->second);
} else {
fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
}
@@ -127,23 +118,19 @@ VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void*
return it->second;
}
-VkLayerInstanceCreateInfo *get_chain_info(const VkInstanceCreateInfo *pCreateInfo, VkLayerFunction func)
-{
- VkLayerInstanceCreateInfo *chain_info = (VkLayerInstanceCreateInfo *) pCreateInfo->pNext;
- while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO
- && chain_info->function == func)) {
- chain_info = (VkLayerInstanceCreateInfo *) chain_info->pNext;
+VkLayerInstanceCreateInfo *get_chain_info(const VkInstanceCreateInfo *pCreateInfo, VkLayerFunction func) {
+ VkLayerInstanceCreateInfo *chain_info = (VkLayerInstanceCreateInfo *)pCreateInfo->pNext;
+ while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO && chain_info->function == func)) {
+ chain_info = (VkLayerInstanceCreateInfo *)chain_info->pNext;
}
assert(chain_info != NULL);
return chain_info;
}
-VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, VkLayerFunction func)
-{
- VkLayerDeviceCreateInfo *chain_info = (VkLayerDeviceCreateInfo *) pCreateInfo->pNext;
- while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO
- && chain_info->function == func)) {
- chain_info = (VkLayerDeviceCreateInfo *) chain_info->pNext;
+VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, VkLayerFunction func) {
+ VkLayerDeviceCreateInfo *chain_info = (VkLayerDeviceCreateInfo *)pCreateInfo->pNext;
+ while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO && chain_info->function == func)) {
+ chain_info = (VkLayerDeviceCreateInfo *)chain_info->pNext;
}
assert(chain_info != NULL);
return chain_info;
@@ -156,21 +143,18 @@ VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, V
* Device -> CommandBuffer or Queue
* If use the object themselves as key to map then implies Create entrypoints have to be intercepted
* and a new key inserted into map */
-VkLayerInstanceDispatchTable * initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa, instance_table_map &map)
-{
+VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa, instance_table_map &map) {
VkLayerInstanceDispatchTable *pTable;
dispatch_key key = get_dispatch_key(instance);
- instance_table_map::const_iterator it = map.find((void *) key);
+ instance_table_map::const_iterator it = map.find((void *)key);
- if (it == map.end())
- {
- pTable = new VkLayerInstanceDispatchTable;
- map[(void *) key] = pTable;
+ if (it == map.end()) {
+ pTable = new VkLayerInstanceDispatchTable;
+ map[(void *)key] = pTable;
#if DISPATCH_MAP_DEBUG
fprintf(stderr, "New, Instance: map: %p, key: %p, table: %p\n", &map, key, pTable);
#endif
- } else
- {
+ } else {
#if DISPATCH_MAP_DEBUG
fprintf(stderr, "Instance: map: %p, key: %p, table: %p\n", &map, key, it->second);
#endif
@@ -182,26 +166,22 @@ VkLayerInstanceDispatchTable * initInstanceTable(VkInstance instance, const PFN_
return pTable;
}
-VkLayerInstanceDispatchTable * initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa)
-{
+VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa) {
return initInstanceTable(instance, gpa, tableInstanceMap);
}
-VkLayerDispatchTable * initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map)
-{
+VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map) {
VkLayerDispatchTable *pTable;
dispatch_key key = get_dispatch_key(device);
- device_table_map::const_iterator it = map.find((void *) key);
+ device_table_map::const_iterator it = map.find((void *)key);
- if (it == map.end())
- {
- pTable = new VkLayerDispatchTable;
- map[(void *) key] = pTable;
+ if (it == map.end()) {
+ pTable = new VkLayerDispatchTable;
+ map[(void *)key] = pTable;
#if DISPATCH_MAP_DEBUG
fprintf(stderr, "New, Device: map: %p, key: %p, table: %p\n", &map, key, pTable);
#endif
- } else
- {
+ } else {
#if DISPATCH_MAP_DEBUG
fprintf(stderr, "Device: map: %p, key: %p, table: %p\n", &map, key, it->second);
#endif
@@ -213,7 +193,6 @@ VkLayerDispatchTable * initDeviceTable(VkDevice device, const PFN_vkGetDevicePro
return pTable;
}
-VkLayerDispatchTable * initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa)
-{
+VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa) {
return initDeviceTable(device, gpa, tableMap);
}
diff --git a/layers/vk_layer_table.h b/layers/vk_layer_table.h
index 33a4cf728..eb7efd37a 100644
--- a/layers/vk_layer_table.h
+++ b/layers/vk_layer_table.h
@@ -31,26 +31,22 @@
typedef std::unordered_map<void *, VkLayerDispatchTable *> device_table_map;
typedef std::unordered_map<void *, VkLayerInstanceDispatchTable *> instance_table_map;
-VkLayerDispatchTable * initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map);
-VkLayerDispatchTable * initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa);
-VkLayerInstanceDispatchTable * initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa, instance_table_map &map);
-VkLayerInstanceDispatchTable * initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa);
-
+VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map);
+VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa);
+VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa, instance_table_map &map);
+VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa);
typedef void *dispatch_key;
-static inline dispatch_key get_dispatch_key(const void* object)
-{
- return (dispatch_key) *(VkLayerDispatchTable **) object;
-}
+static inline dispatch_key get_dispatch_key(const void *object) { return (dispatch_key) * (VkLayerDispatchTable **)object; }
-VkLayerDispatchTable *device_dispatch_table(void* object);
+VkLayerDispatchTable *device_dispatch_table(void *object);
-VkLayerInstanceDispatchTable *instance_dispatch_table(void* object);
+VkLayerInstanceDispatchTable *instance_dispatch_table(void *object);
-VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void* object);
+VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void *object);
-VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void* object);
+VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void *object);
VkLayerInstanceCreateInfo *get_chain_info(const VkInstanceCreateInfo *pCreateInfo, VkLayerFunction func);
VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, VkLayerFunction func);
diff --git a/layers/vk_layer_utils.cpp b/layers/vk_layer_utils.cpp
index 22fb52fef..a1ddd2222 100644
--- a/layers/vk_layer_utils.cpp
+++ b/layers/vk_layer_utils.cpp
@@ -27,218 +27,214 @@
#include <string.h>
#include <string>
+#include <vector>
#include "vulkan/vulkan.h"
+#include "vk_layer_config.h"
#include "vk_layer_utils.h"
-
typedef struct _VULKAN_FORMAT_INFO {
- size_t size;
- uint32_t channel_count;
- VkFormatCompatibilityClass format_class;
+ size_t size;
+ uint32_t channel_count;
+ VkFormatCompatibilityClass format_class;
} VULKAN_FORMAT_INFO;
-
// Set up data structure with number of bytes and number of channels
// for each Vulkan format.
static const VULKAN_FORMAT_INFO vk_format_table[VK_FORMAT_RANGE_SIZE] = {
- { 0, 0, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_UNDEFINED]
- { 1, 2, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R4G4_UNORM_PACK8]
- { 2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R4G4B4A4_UNORM_PACK16]
- { 2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_B4G4R4A4_UNORM_PACK16]
- { 2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R5G6B5_UNORM_PACK16]
- { 2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_B5G6R5_UNORM_PACK16]
- { 2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R5G5B5A1_UNORM_PACK16]
- { 2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_B5G5R5A1_UNORM_PACK16]
- { 2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_A1R5G5B5_UNORM_PACK16]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_UNORM]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_SNORM]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_USCALED]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_SSCALED]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_UINT]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_SINT]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT }, // [VK_FORMAT_R8_SRGB]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_UNORM]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_SNORM]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_USCALED]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_SSCALED]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_UINT]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_SINT]
- { 2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R8G8_SRGB]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_UNORM]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_SNORM]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_USCALED]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_SSCALED]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_UINT]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_SINT]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_R8G8B8_SRGB]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_UNORM]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_SNORM]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_USCALED]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_SSCALED]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_UINT]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_SINT]
- { 3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT }, // [VK_FORMAT_B8G8R8_SRGB]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_UNORM]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_SNORM]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_USCALED]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_SSCALED]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_UINT]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_SINT]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R8G8B8A8_SRGB]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_UNORM]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_SNORM]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_USCALED]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_SSCALED]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_UINT]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_SINT]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_SRGB]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_UNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_SNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_USCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_SSCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_UINT_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A8B8G8R8_SINT_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B8G8R8A8_SRGB_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_UNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_SNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_USCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_SSCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_UINT_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2R10G10B10_SINT_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_UNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_SNORM_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_USCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_SSCALED_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_UINT_PACK32]
- { 4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_A2B10G10R10_SINT_PACK32]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_UNORM]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_SNORM]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_USCALED]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_SSCALED]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_UINT]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_SINT]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT }, // [VK_FORMAT_R16_SFLOAT]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_UNORM]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_SNORM]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_USCALED]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_SSCALED]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_UINT]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_SINT]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R16G16_SFLOAT]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_UNORM]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_SNORM]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_USCALED]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_SSCALED]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_UINT]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_SINT]
- { 6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT }, // [VK_FORMAT_R16G16B16_SFLOAT]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_UNORM]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_SNORM]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_USCALED]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_SSCALED]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_UINT]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_SINT]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R16G16B16A16_SFLOAT]
- { 4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R32_UINT]
- { 4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R32_SINT]
- { 4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_R32_SFLOAT]
- { 8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R32G32_UINT]
- { 8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R32G32_SINT]
- { 8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R32G32_SFLOAT]
- { 12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT }, // [VK_FORMAT_R32G32B32_UINT]
- { 12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT }, // [VK_FORMAT_R32G32B32_SINT]
- { 12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT }, // [VK_FORMAT_R32G32B32_SFLOAT]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R32G32B32A32_UINT]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R32G32B32A32_SINT]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R32G32B32A32_SFLOAT]
- { 8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R64_UINT]
- { 8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R64_SINT]
- { 8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT }, // [VK_FORMAT_R64_SFLOAT]
- { 16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R64G64_UINT]
- { 16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R64G64_SINT]
- { 16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT }, // [VK_FORMAT_R64G64_SFLOAT]
- { 24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT }, // [VK_FORMAT_R64G64B64_UINT]
- { 24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT }, // [VK_FORMAT_R64G64B64_SINT]
- { 24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT }, // [VK_FORMAT_R64G64B64_SFLOAT]
- { 32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT }, // [VK_FORMAT_R64G64B64A64_UINT]
- { 32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT }, // [VK_FORMAT_R64G64B64A64_SINT]
- { 32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT }, // [VK_FORMAT_R64G64B64A64_SFLOAT]
- { 4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_B10G11R11_UFLOAT_PACK32]
- { 4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT }, // [VK_FORMAT_E5B9G9R9_UFLOAT_PACK32]
- { 2, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_D16_UNORM]
- { 3, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_X8_D24_UNORM_PACK32]
- { 4, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_D32_SFLOAT]
- { 1, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_S8_UINT]
- { 3, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_D16_UNORM_S8_UINT]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_D24_UNORM_S8_UINT]
- { 4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT }, // [VK_FORMAT_D32_SFLOAT_S8_UINT]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT }, // [VK_FORMAT_BC1_RGB_UNORM_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT }, // [VK_FORMAT_BC1_RGB_SRGB_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT }, // [VK_FORMAT_BC1_RGBA_UNORM_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT }, // [VK_FORMAT_BC1_RGBA_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT }, // [VK_FORMAT_BC2_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT }, // [VK_FORMAT_BC2_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT }, // [VK_FORMAT_BC3_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT }, // [VK_FORMAT_BC3_SRGB_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT }, // [VK_FORMAT_BC4_UNORM_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT }, // [VK_FORMAT_BC4_SNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT }, // [VK_FORMAT_BC5_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT }, // [VK_FORMAT_BC5_SNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT }, // [VK_FORMAT_BC6H_UFLOAT_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT }, // [VK_FORMAT_BC6H_SFLOAT_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT }, // [VK_FORMAT_BC7_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT }, // [VK_FORMAT_BC7_SRGB_BLOCK]
- { 8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT }, // [VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK]
- { 8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT }, // [VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT }, // [VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT }, // [VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT }, // [VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK]
- { 8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT }, // [VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK]
- { 8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT }, // [VK_FORMAT_EAC_R11_UNORM_BLOCK]
- { 8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT }, // [VK_FORMAT_EAC_R11_SNORM_BLOCK]
- { 16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT }, // [VK_FORMAT_EAC_R11G11_UNORM_BLOCK]
- { 16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT }, // [VK_FORMAT_EAC_R11G11_SNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT }, // [VK_FORMAT_ASTC_4x4_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT }, // [VK_FORMAT_ASTC_4x4_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT }, // [VK_FORMAT_ASTC_5x4_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT }, // [VK_FORMAT_ASTC_5x4_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT }, // [VK_FORMAT_ASTC_5x5_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT }, // [VK_FORMAT_ASTC_5x5_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT }, // [VK_FORMAT_ASTC_6x5_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT }, // [VK_FORMAT_ASTC_6x5_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT }, // [VK_FORMAT_ASTC_6x6_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT }, // [VK_FORMAT_ASTC_6x6_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT }, // [VK_FORMAT_ASTC_8x5_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT }, // [VK_FORMAT_ASTC_8x5_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT }, // [VK_FORMAT_ASTC_8x6_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT }, // [VK_FORMAT_ASTC_8x6_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT }, // [VK_FORMAT_ASTC_8x8_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT }, // [VK_FORMAT_ASTC_8x8_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT }, // [VK_FORMAT_ASTC_10x5_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT }, // [VK_FORMAT_ASTC_10x5_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT }, // [VK_FORMAT_ASTC_10x6_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT }, // [VK_FORMAT_ASTC_10x6_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT }, // [VK_FORMAT_ASTC_10x8_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT }, // [VK_FORMAT_ASTC_10x8_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT }, // [VK_FORMAT_ASTC_10x10_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT }, // [VK_FORMAT_ASTC_10x10_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT }, // [VK_FORMAT_ASTC_12x10_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT }, // [VK_FORMAT_ASTC_12x10_SRGB_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT }, // [VK_FORMAT_ASTC_12x12_UNORM_BLOCK]
- { 16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT }, // [VK_FORMAT_ASTC_12x12_SRGB_BLOCK]
+ {0, 0, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_UNDEFINED]
+ {1, 2, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R4G4_UNORM_PACK8]
+ {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R4G4B4A4_UNORM_PACK16]
+ {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_B4G4R4A4_UNORM_PACK16]
+ {2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R5G6B5_UNORM_PACK16]
+ {2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_B5G6R5_UNORM_PACK16]
+ {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R5G5B5A1_UNORM_PACK16]
+ {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_B5G5R5A1_UNORM_PACK16]
+ {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_A1R5G5B5_UNORM_PACK16]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_UNORM]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_SNORM]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_USCALED]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_SSCALED]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_UINT]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_SINT]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT}, // [VK_FORMAT_R8_SRGB]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_UNORM]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_SNORM]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_USCALED]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_SSCALED]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_UINT]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_SINT]
+ {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R8G8_SRGB]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_UNORM]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_SNORM]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_USCALED]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_SSCALED]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_UINT]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_SINT]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_R8G8B8_SRGB]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_UNORM]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_SNORM]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_USCALED]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_SSCALED]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_UINT]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_SINT]
+ {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT}, // [VK_FORMAT_B8G8R8_SRGB]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_UNORM]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_SNORM]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_USCALED]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_SSCALED]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_UINT]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_SINT]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R8G8B8A8_SRGB]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_UNORM]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_SNORM]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_USCALED]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_SSCALED]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_UINT]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_SINT]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_SRGB]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_UNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_SNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_USCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_SSCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_UINT_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A8B8G8R8_SINT_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B8G8R8A8_SRGB_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_UNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_SNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_USCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_SSCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_UINT_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2R10G10B10_SINT_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_UNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_SNORM_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_USCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_SSCALED_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_UINT_PACK32]
+ {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_A2B10G10R10_SINT_PACK32]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_UNORM]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_SNORM]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_USCALED]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_SSCALED]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_UINT]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_SINT]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT}, // [VK_FORMAT_R16_SFLOAT]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_UNORM]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_SNORM]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_USCALED]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_SSCALED]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_UINT]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_SINT]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R16G16_SFLOAT]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_UNORM]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_SNORM]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_USCALED]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_SSCALED]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_UINT]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_SINT]
+ {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT}, // [VK_FORMAT_R16G16B16_SFLOAT]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_UNORM]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_SNORM]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_USCALED]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_SSCALED]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_UINT]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_SINT]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R16G16B16A16_SFLOAT]
+ {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R32_UINT]
+ {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R32_SINT]
+ {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_R32_SFLOAT]
+ {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R32G32_UINT]
+ {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R32G32_SINT]
+ {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R32G32_SFLOAT]
+ {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT}, // [VK_FORMAT_R32G32B32_UINT]
+ {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT}, // [VK_FORMAT_R32G32B32_SINT]
+ {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT}, // [VK_FORMAT_R32G32B32_SFLOAT]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R32G32B32A32_UINT]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R32G32B32A32_SINT]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R32G32B32A32_SFLOAT]
+ {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R64_UINT]
+ {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R64_SINT]
+ {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT}, // [VK_FORMAT_R64_SFLOAT]
+ {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R64G64_UINT]
+ {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R64G64_SINT]
+ {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT}, // [VK_FORMAT_R64G64_SFLOAT]
+ {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT}, // [VK_FORMAT_R64G64B64_UINT]
+ {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT}, // [VK_FORMAT_R64G64B64_SINT]
+ {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT}, // [VK_FORMAT_R64G64B64_SFLOAT]
+ {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT}, // [VK_FORMAT_R64G64B64A64_UINT]
+ {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT}, // [VK_FORMAT_R64G64B64A64_SINT]
+ {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT}, // [VK_FORMAT_R64G64B64A64_SFLOAT]
+ {4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_B10G11R11_UFLOAT_PACK32]
+ {4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT}, // [VK_FORMAT_E5B9G9R9_UFLOAT_PACK32]
+ {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_D16_UNORM]
+ {3, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_X8_D24_UNORM_PACK32]
+ {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_D32_SFLOAT]
+ {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_S8_UINT]
+ {3, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_D16_UNORM_S8_UINT]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_D24_UNORM_S8_UINT]
+ {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT}, // [VK_FORMAT_D32_SFLOAT_S8_UINT]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT}, // [VK_FORMAT_BC1_RGB_UNORM_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT}, // [VK_FORMAT_BC1_RGB_SRGB_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT}, // [VK_FORMAT_BC1_RGBA_UNORM_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT}, // [VK_FORMAT_BC1_RGBA_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT}, // [VK_FORMAT_BC2_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT}, // [VK_FORMAT_BC2_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT}, // [VK_FORMAT_BC3_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT}, // [VK_FORMAT_BC3_SRGB_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT}, // [VK_FORMAT_BC4_UNORM_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT}, // [VK_FORMAT_BC4_SNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT}, // [VK_FORMAT_BC5_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT}, // [VK_FORMAT_BC5_SNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT}, // [VK_FORMAT_BC6H_UFLOAT_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT}, // [VK_FORMAT_BC6H_SFLOAT_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT}, // [VK_FORMAT_BC7_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT}, // [VK_FORMAT_BC7_SRGB_BLOCK]
+ {8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT}, // [VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK]
+ {8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT}, // [VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT}, // [VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT}, // [VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT}, // [VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK]
+ {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT}, // [VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK]
+ {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT}, // [VK_FORMAT_EAC_R11_UNORM_BLOCK]
+ {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT}, // [VK_FORMAT_EAC_R11_SNORM_BLOCK]
+ {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT}, // [VK_FORMAT_EAC_R11G11_UNORM_BLOCK]
+ {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT}, // [VK_FORMAT_EAC_R11G11_SNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT}, // [VK_FORMAT_ASTC_4x4_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT}, // [VK_FORMAT_ASTC_4x4_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT}, // [VK_FORMAT_ASTC_5x4_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT}, // [VK_FORMAT_ASTC_5x4_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT}, // [VK_FORMAT_ASTC_5x5_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT}, // [VK_FORMAT_ASTC_5x5_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT}, // [VK_FORMAT_ASTC_6x5_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT}, // [VK_FORMAT_ASTC_6x5_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT}, // [VK_FORMAT_ASTC_6x6_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT}, // [VK_FORMAT_ASTC_6x6_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT}, // [VK_FORMAT_ASTC_8x5_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT}, // [VK_FORMAT_ASTC_8x5_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT}, // [VK_FORMAT_ASTC_8x6_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT}, // [VK_FORMAT_ASTC_8x6_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT}, // [VK_FORMAT_ASTC_8x8_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT}, // [VK_FORMAT_ASTC_8x8_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT}, // [VK_FORMAT_ASTC_10x5_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT}, // [VK_FORMAT_ASTC_10x5_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT}, // [VK_FORMAT_ASTC_10x6_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT}, // [VK_FORMAT_ASTC_10x6_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT}, // [VK_FORMAT_ASTC_10x8_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT}, // [VK_FORMAT_ASTC_10x8_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT}, // [VK_FORMAT_ASTC_10x10_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT}, // [VK_FORMAT_ASTC_10x10_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT}, // [VK_FORMAT_ASTC_12x10_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT}, // [VK_FORMAT_ASTC_12x10_SRGB_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT}, // [VK_FORMAT_ASTC_12x12_UNORM_BLOCK]
+ {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT}, // [VK_FORMAT_ASTC_12x12_SRGB_BLOCK]
};
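The table above is indexed directly by the `VkFormat` enum value, so its rows must stay in exact enum order; each row stores bytes-per-texel (or per compressed block), channel count, and compatibility class. A minimal standalone sketch of that lookup scheme (the enum and table names here are illustrative, not Vulkan's):

```cpp
#include <cassert>
#include <cstddef>

// Sketch: the format enum value doubles as the table index, so rows
// must appear in exactly the same order as the enumerators.
enum SketchFormat { FMT_UNDEFINED = 0, FMT_R8_UNORM = 1, FMT_R8G8B8A8_UNORM = 2 };

struct FormatProperties {
    size_t size;                // bytes per texel (or per compressed block)
    unsigned int channel_count; // number of channels
};

static const FormatProperties sketch_table[] = {
    {0, 0}, // [FMT_UNDEFINED]
    {1, 1}, // [FMT_R8_UNORM]
    {4, 4}, // [FMT_R8G8B8A8_UNORM]
};

size_t sketch_get_size(SketchFormat f) { return sketch_table[f].size; }
```

This is why the real `vk_format_get_size` and friends below are one-line array lookups.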
// Return true if format is a depth or stencil format
-bool vk_format_is_depth_or_stencil(VkFormat format)
-{
- return (vk_format_is_depth_and_stencil(format) ||
- vk_format_is_depth_only(format) ||
- vk_format_is_stencil_only(format));
+bool vk_format_is_depth_or_stencil(VkFormat format) {
+ return (vk_format_is_depth_and_stencil(format) || vk_format_is_depth_only(format) || vk_format_is_stencil_only(format));
}
// Return true if format contains depth and stencil information
-bool vk_format_is_depth_and_stencil(VkFormat format)
-{
+bool vk_format_is_depth_and_stencil(VkFormat format) {
bool is_ds = false;
switch (format) {
@@ -254,14 +250,10 @@ bool vk_format_is_depth_and_stencil(VkFormat format)
}
// Return true if format is a stencil-only format
-bool vk_format_is_stencil_only(VkFormat format)
-{
- return (format == VK_FORMAT_S8_UINT);
-}
+bool vk_format_is_stencil_only(VkFormat format) { return (format == VK_FORMAT_S8_UINT); }
// Return true if format is a depth-only format
-bool vk_format_is_depth_only(VkFormat format)
-{
+bool vk_format_is_depth_only(VkFormat format) {
bool is_depth = false;
switch (format) {
@@ -278,8 +270,7 @@ bool vk_format_is_depth_only(VkFormat format)
}
// Return true if format is of type UNORM
-bool vk_format_is_norm(VkFormat format)
-{
+bool vk_format_is_norm(VkFormat format) {
bool is_norm = false;
switch (format) {
@@ -353,16 +344,11 @@ bool vk_format_is_norm(VkFormat format)
return is_norm;
};
-
// Return true if format is an integer format
-bool vk_format_is_int(VkFormat format)
-{
- return (vk_format_is_sint(format) || vk_format_is_uint(format));
-}
+bool vk_format_is_int(VkFormat format) { return (vk_format_is_sint(format) || vk_format_is_uint(format)); }
// Return true if format is an unsigned integer format
-bool vk_format_is_uint(VkFormat format)
-{
+bool vk_format_is_uint(VkFormat format) {
bool is_uint = false;
switch (format) {
@@ -397,8 +383,7 @@ bool vk_format_is_uint(VkFormat format)
}
// Return true if format is a signed integer format
-bool vk_format_is_sint(VkFormat format)
-{
+bool vk_format_is_sint(VkFormat format) {
bool is_sint = false;
switch (format) {
@@ -433,8 +418,7 @@ bool vk_format_is_sint(VkFormat format)
}
// Return true if format is a floating-point format
-bool vk_format_is_float(VkFormat format)
-{
+bool vk_format_is_float(VkFormat format) {
bool is_float = false;
switch (format) {
@@ -464,8 +448,7 @@ bool vk_format_is_float(VkFormat format)
}
// Return true if format is in the SRGB colorspace
-bool vk_format_is_srgb(VkFormat format)
-{
+bool vk_format_is_srgb(VkFormat format) {
bool is_srgb = false;
switch (format) {
@@ -507,8 +490,7 @@ bool vk_format_is_srgb(VkFormat format)
}
// Return true if format is compressed
-bool vk_format_is_compressed(VkFormat format)
-{
+bool vk_format_is_compressed(VkFormat format) {
switch (format) {
case VK_FORMAT_BC1_RGB_UNORM_BLOCK:
case VK_FORMAT_BC1_RGB_SRGB_BLOCK:
@@ -569,26 +551,16 @@ bool vk_format_is_compressed(VkFormat format)
}
// Return format class of the specified format
-VkFormatCompatibilityClass vk_format_get_compatibility_class(VkFormat format)
-{
- return vk_format_table[format].format_class;
-}
+VkFormatCompatibilityClass vk_format_get_compatibility_class(VkFormat format) { return vk_format_table[format].format_class; }
// Return size, in bytes, of a pixel of the specified format
-size_t vk_format_get_size(VkFormat format)
-{
- return vk_format_table[format].size;
-}
+size_t vk_format_get_size(VkFormat format) { return vk_format_table[format].size; }
// Return the number of channels for a given format
-unsigned int vk_format_get_channel_count(VkFormat format)
-{
- return vk_format_table[format].channel_count;
-}
+unsigned int vk_format_get_channel_count(VkFormat format) { return vk_format_table[format].channel_count; }
// Perform a zero-tolerant modulo operation
-VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor)
-{
+VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor) {
VkDeviceSize result = 0;
if (divisor != 0) {
result = dividend % divisor;
@@ -596,31 +568,28 @@ VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor)
return result;
}
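The "zero-tolerant" modulo above returns 0 when the divisor is 0 instead of triggering a division-by-zero fault, which lets callers validate alignment requirements without first checking the divisor. A standalone sketch of the same contract (using `uint64_t` in place of `VkDeviceSize`):

```cpp
#include <cstdint>

// Sketch of vk_safe_modulo's contract: a zero divisor yields 0 rather
// than undefined behavior; otherwise a plain modulo.
uint64_t safe_modulo(uint64_t dividend, uint64_t divisor) {
    return (divisor != 0) ? (dividend % divisor) : 0;
}
```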
-
-static const char UTF8_ONE_BYTE_CODE = 0xC0;
-static const char UTF8_ONE_BYTE_MASK = 0xE0;
-static const char UTF8_TWO_BYTE_CODE = 0xE0;
-static const char UTF8_TWO_BYTE_MASK = 0xF0;
+static const char UTF8_ONE_BYTE_CODE = 0xC0;
+static const char UTF8_ONE_BYTE_MASK = 0xE0;
+static const char UTF8_TWO_BYTE_CODE = 0xE0;
+static const char UTF8_TWO_BYTE_MASK = 0xF0;
static const char UTF8_THREE_BYTE_CODE = 0xF0;
static const char UTF8_THREE_BYTE_MASK = 0xF8;
-static const char UTF8_DATA_BYTE_CODE = 0x80;
-static const char UTF8_DATA_BYTE_MASK = 0xC0;
+static const char UTF8_DATA_BYTE_CODE = 0x80;
+static const char UTF8_DATA_BYTE_MASK = 0xC0;
-VkStringErrorFlags vk_string_validate(const int max_length, const char *utf8)
-{
+VkStringErrorFlags vk_string_validate(const int max_length, const char *utf8) {
VkStringErrorFlags result = VK_STRING_ERROR_NONE;
- int num_char_bytes;
- int i,j;
+ int num_char_bytes = 0;
+ int i, j;
- for (i = 0; i < max_length; i++)
- {
+ for (i = 0; i < max_length; i++) {
if (utf8[i] == 0) {
break;
} else if ((utf8[i] >= 0x20) && (utf8[i] < 0x7f)) {
num_char_bytes = 0;
- } else if ((utf8[i] & UTF8_ONE_BYTE_MASK) == UTF8_ONE_BYTE_CODE) {
+ } else if ((utf8[i] & UTF8_ONE_BYTE_MASK) == UTF8_ONE_BYTE_CODE) {
num_char_bytes = 1;
- } else if ((utf8[i] & UTF8_TWO_BYTE_MASK) == UTF8_TWO_BYTE_CODE) {
+ } else if ((utf8[i] & UTF8_TWO_BYTE_MASK) == UTF8_TWO_BYTE_CODE) {
num_char_bytes = 2;
} else if ((utf8[i] & UTF8_THREE_BYTE_MASK) == UTF8_THREE_BYTE_CODE) {
num_char_bytes = 3;
@@ -641,3 +610,46 @@ VkStringErrorFlags vk_string_validate(const int max_length, const char *utf8)
}
return result;
}
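The masks used by `vk_string_validate` classify a UTF-8 lead byte by its high bits: `110xxxxx` starts a 2-byte sequence (one continuation byte follows), `1110xxxx` a 3-byte sequence, `11110xxx` a 4-byte sequence, and `10xxxxxx` is only valid as a continuation byte. A minimal sketch of that lead-byte classification (function name is illustrative):

```cpp
// Sketch of the lead-byte classification in vk_string_validate: returns
// the number of continuation (data) bytes expected after the lead byte,
// or -1 for a byte that cannot start a sequence.
int utf8_continuation_bytes(unsigned char lead) {
    if (lead < 0x80) return 0;           // 0xxxxxxx: single-byte ASCII
    if ((lead & 0xE0) == 0xC0) return 1; // 110xxxxx: 2-byte sequence
    if ((lead & 0xF0) == 0xE0) return 2; // 1110xxxx: 3-byte sequence
    if ((lead & 0xF8) == 0xF0) return 3; // 11110xxx: 4-byte sequence
    return -1;                           // lone continuation / invalid byte
}
```

Each subsequent byte is then checked against `UTF8_DATA_BYTE_MASK`/`UTF8_DATA_BYTE_CODE` (`10xxxxxx`), and any mismatch sets `VK_STRING_ERROR_BAD_DATA`.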
+
+void layer_debug_actions(debug_report_data *report_data, std::vector<VkDebugReportCallbackEXT> &logging_callback,
+ const VkAllocationCallbacks *pAllocator, const char *layer_identifier) {
+
+ uint32_t report_flags = 0;
+ uint32_t debug_action = 0;
+ VkDebugReportCallbackEXT callback = VK_NULL_HANDLE;
+
+ std::string report_flags_key = layer_identifier;
+ std::string debug_action_key = layer_identifier;
+ std::string log_filename_key = layer_identifier;
+ report_flags_key.append(".report_flags");
+ debug_action_key.append(".debug_action");
+ log_filename_key.append(".log_filename");
+
+ // initialize layer options
+ report_flags = getLayerOptionFlags(report_flags_key.c_str(), 0);
+ getLayerOptionEnum(debug_action_key.c_str(), (uint32_t *)&debug_action);
+
+ if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG) {
+ const char *log_filename = getLayerOption(log_filename_key.c_str());
+ FILE *log_output = getLayerLogOutput(log_filename, layer_identifier);
+ VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
+ memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
+ dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
+ dbgCreateInfo.flags = report_flags;
+ dbgCreateInfo.pfnCallback = log_callback;
+ dbgCreateInfo.pUserData = (void *)log_output;
+ layer_create_msg_callback(report_data, &dbgCreateInfo, pAllocator, &callback);
+ logging_callback.push_back(callback);
+ }
+
+ if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) {
+ VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
+ memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));
+ dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
+ dbgCreateInfo.flags = report_flags;
+ dbgCreateInfo.pfnCallback = win32_debug_output_msg;
+ dbgCreateInfo.pUserData = NULL;
+ layer_create_msg_callback(report_data, &dbgCreateInfo, pAllocator, &callback);
+ logging_callback.push_back(callback);
+ }
+}
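The new `layer_debug_actions` helper reads its per-layer options by appending fixed suffixes (`.report_flags`, `.debug_action`, `.log_filename`) to the layer identifier, so each layer gets its own settings keys. A sketch of that key derivation (the example identifier is hypothetical):

```cpp
#include <string>

// Sketch of how layer_debug_actions builds its settings keys: the layer
// identifier plus a fixed, dot-separated suffix.
std::string option_key(const std::string &layer_identifier, const std::string &suffix) {
    return layer_identifier + "." + suffix;
}
```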
diff --git a/layers/vk_layer_utils.h b/layers/vk_layer_utils.h
index e94798348..d349ffe77 100644
--- a/layers/vk_layer_utils.h
+++ b/layers/vk_layer_utils.h
@@ -27,6 +27,9 @@
#pragma once
#include <stdbool.h>
+#include <vector>
+#include "vk_layer_logging.h"
+
#ifndef WIN32
#include <strings.h> /* for ffs() */
#else
@@ -37,92 +40,91 @@
extern "C" {
#endif
+#define VK_LAYER_API_VERSION (VK_VERSION_MAJOR(1) | VK_VERSION_MINOR(0) | VK_VERSION_PATCH(VK_HEADER_VERSION))
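Vulkan packs an API version into a single `uint32_t`: major in bits 22 and up, minor in bits 12-21, patch in bits 0-11 (the `VK_MAKE_VERSION` layout; note that `VK_VERSION_MAJOR`/`MINOR`/`PATCH` in `vulkan.h` are the matching *extraction* macros). A sketch of that packing, with helper names of my own:

```cpp
#include <cstdint>

// Sketch of vulkan.h's version packing/extraction macros.
constexpr uint32_t make_version(uint32_t major, uint32_t minor, uint32_t patch) {
    return (major << 22) | (minor << 12) | patch; // VK_MAKE_VERSION layout
}
constexpr uint32_t version_major(uint32_t v) { return v >> 22; }
constexpr uint32_t version_minor(uint32_t v) { return (v >> 12) & 0x3ff; }
constexpr uint32_t version_patch(uint32_t v) { return v & 0xfff; }
```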
typedef enum VkFormatCompatibilityClass {
- VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT = 0,
- VK_FORMAT_COMPATIBILITY_CLASS_8_BIT = 1,
- VK_FORMAT_COMPATIBILITY_CLASS_16_BIT = 2,
- VK_FORMAT_COMPATIBILITY_CLASS_24_BIT = 3,
- VK_FORMAT_COMPATIBILITY_CLASS_32_BIT = 4,
- VK_FORMAT_COMPATIBILITY_CLASS_48_BIT = 5,
- VK_FORMAT_COMPATIBILITY_CLASS_64_BIT = 6,
- VK_FORMAT_COMPATIBILITY_CLASS_96_BIT = 7,
- VK_FORMAT_COMPATIBILITY_CLASS_128_BIT = 8,
- VK_FORMAT_COMPATIBILITY_CLASS_192_BIT = 9,
- VK_FORMAT_COMPATIBILITY_CLASS_256_BIT = 10,
- VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT = 11,
- VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT = 12,
- VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT = 13,
- VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT = 14,
- VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT = 15,
- VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT = 16,
- VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT = 17,
- VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT = 18,
- VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT = 19,
- VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT = 20,
+ VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT = 0,
+ VK_FORMAT_COMPATIBILITY_CLASS_8_BIT = 1,
+ VK_FORMAT_COMPATIBILITY_CLASS_16_BIT = 2,
+ VK_FORMAT_COMPATIBILITY_CLASS_24_BIT = 3,
+ VK_FORMAT_COMPATIBILITY_CLASS_32_BIT = 4,
+ VK_FORMAT_COMPATIBILITY_CLASS_48_BIT = 5,
+ VK_FORMAT_COMPATIBILITY_CLASS_64_BIT = 6,
+ VK_FORMAT_COMPATIBILITY_CLASS_96_BIT = 7,
+ VK_FORMAT_COMPATIBILITY_CLASS_128_BIT = 8,
+ VK_FORMAT_COMPATIBILITY_CLASS_192_BIT = 9,
+ VK_FORMAT_COMPATIBILITY_CLASS_256_BIT = 10,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT = 11,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT = 12,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT = 13,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT = 14,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT = 15,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT = 16,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT = 17,
+ VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT = 18,
+ VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT = 19,
+ VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT = 20,
VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT = 21,
- VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT = 22,
- VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT = 23,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT = 24,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT = 25,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT = 26,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT = 27,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT = 28,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT = 29,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT = 30,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT = 31,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT = 32,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT = 33,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT = 34,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT = 35,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT = 36,
- VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT = 37,
- VK_FORMAT_COMPATIBILITY_CLASS_D16_BIT = 38,
- VK_FORMAT_COMPATIBILITY_CLASS_D24_BIT = 39,
- VK_FORMAT_COMPATIBILITY_CLASS_D32_BIT = 40,
- VK_FORMAT_COMPATIBILITY_CLASS_S8_BIT = 41,
- VK_FORMAT_COMPATIBILITY_CLASS_D16S8_BIT = 42,
- VK_FORMAT_COMPATIBILITY_CLASS_D24S8_BIT = 43,
- VK_FORMAT_COMPATIBILITY_CLASS_D32S8_BIT = 44,
- VK_FORMAT_COMPATIBILITY_CLASS_MAX_ENUM = 45
+ VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT = 22,
+ VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT = 23,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT = 24,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT = 25,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT = 26,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT = 27,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT = 28,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT = 29,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT = 30,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT = 31,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT = 32,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT = 33,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT = 34,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT = 35,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT = 36,
+ VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT = 37,
+ VK_FORMAT_COMPATIBILITY_CLASS_D16_BIT = 38,
+ VK_FORMAT_COMPATIBILITY_CLASS_D24_BIT = 39,
+ VK_FORMAT_COMPATIBILITY_CLASS_D32_BIT = 40,
+ VK_FORMAT_COMPATIBILITY_CLASS_S8_BIT = 41,
+ VK_FORMAT_COMPATIBILITY_CLASS_D16S8_BIT = 42,
+ VK_FORMAT_COMPATIBILITY_CLASS_D24S8_BIT = 43,
+ VK_FORMAT_COMPATIBILITY_CLASS_D32S8_BIT = 44,
+ VK_FORMAT_COMPATIBILITY_CLASS_MAX_ENUM = 45
} VkFormatCompatibilityClass;
typedef enum VkStringErrorFlagBits {
- VK_STRING_ERROR_NONE = 0x00000000,
- VK_STRING_ERROR_LENGTH = 0x00000001,
- VK_STRING_ERROR_BAD_DATA = 0x00000002,
+ VK_STRING_ERROR_NONE = 0x00000000,
+ VK_STRING_ERROR_LENGTH = 0x00000001,
+ VK_STRING_ERROR_BAD_DATA = 0x00000002,
} VkStringErrorFlagBits;
typedef VkFlags VkStringErrorFlags;
-static inline bool vk_format_is_undef(VkFormat format)
-{
- return (format == VK_FORMAT_UNDEFINED);
-}
+void layer_debug_actions(debug_report_data* report_data, std::vector<VkDebugReportCallbackEXT> &logging_callback,
+ const VkAllocationCallbacks *pAllocator, const char* layer_identifier);
+
+static inline bool vk_format_is_undef(VkFormat format) { return (format == VK_FORMAT_UNDEFINED); }
bool vk_format_is_depth_or_stencil(VkFormat format);
bool vk_format_is_depth_and_stencil(VkFormat format);
bool vk_format_is_depth_only(VkFormat format);
bool vk_format_is_stencil_only(VkFormat format);
-static inline bool vk_format_is_color(VkFormat format)
-{
+static inline bool vk_format_is_color(VkFormat format) {
return !(vk_format_is_undef(format) || vk_format_is_depth_or_stencil(format));
}
-bool vk_format_is_norm(VkFormat format);
-bool vk_format_is_int(VkFormat format);
-bool vk_format_is_sint(VkFormat format);
-bool vk_format_is_uint(VkFormat format);
-bool vk_format_is_float(VkFormat format);
-bool vk_format_is_srgb(VkFormat format);
-bool vk_format_is_compressed(VkFormat format);
-size_t vk_format_get_size(VkFormat format);
-unsigned int vk_format_get_channel_count(VkFormat format);
+bool vk_format_is_norm(VkFormat format);
+bool vk_format_is_int(VkFormat format);
+bool vk_format_is_sint(VkFormat format);
+bool vk_format_is_uint(VkFormat format);
+bool vk_format_is_float(VkFormat format);
+bool vk_format_is_srgb(VkFormat format);
+bool vk_format_is_compressed(VkFormat format);
+size_t vk_format_get_size(VkFormat format);
+unsigned int vk_format_get_channel_count(VkFormat format);
VkFormatCompatibilityClass vk_format_get_compatibility_class(VkFormat format);
-VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor);
-VkStringErrorFlags vk_string_validate(const int max_length, const char *char_array);
+VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor);
+VkStringErrorFlags vk_string_validate(const int max_length, const char *char_array);
-static inline int u_ffs(int val)
-{
+static inline int u_ffs(int val) {
#ifdef WIN32
unsigned long bit_pos = 0;
if (_BitScanForward(&bit_pos, val) != 0) {
@@ -137,5 +139,3 @@ static inline int u_ffs(int val)
#ifdef __cplusplus
}
#endif
-
-
diff --git a/layers/vk_validation_layer_details.md b/layers/vk_validation_layer_details.md
index 2e56f2967..7042cf165 100644
--- a/layers/vk_validation_layer_details.md
+++ b/layers/vk_validation_layer_details.md
@@ -2,29 +2,46 @@
# Validation Layer Details
-## VK_LAYER_LUNARG_draw_state
+## VK_LAYER_LUNARG_standard_validation
-### VK_LAYER_LUNARG_draw_state Overview
+### VK_LAYER_LUNARG_standard_validation Overview
-The VK_LAYER_LUNARG_draw_state layer tracks state leading into Draw cmds. This includes the Pipeline state, dynamic state, shaders, and descriptor set state. VK_LAYER_LUNARG_draw_state validates the consistency and correctness between and within these states. VK_LAYER_LUNARG_draw_state also includes SPIR-V validation which functionality is recorded under the VK_LAYER_LUNARG_ShaderChecker section below.
+This is a meta-layer managed by the loader. Specifying this layer name will cause the loader to load all of the standard validation layers in the following optimal order:
-### VK_LAYER_LUNARG_draw_state Details Table
+ - VK_LAYER_GOOGLE_threading
+ - VK_LAYER_LUNARG_parameter_validation
+ - VK_LAYER_LUNARG_device_limits
+ - VK_LAYER_LUNARG_object_tracker
+ - VK_LAYER_LUNARG_image
+ - VK_LAYER_LUNARG_core_validation
+ - VK_LAYER_LUNARG_swapchain
+ - VK_LAYER_GOOGLE_unique_objects
+
+Other layers can be specified and the loader will remove duplicates. See the following individual layer descriptions for layer details.
+
+## VK_LAYER_LUNARG_core_validation
+
+### VK_LAYER_LUNARG_core_validation Overview
+
+The VK_LAYER_LUNARG_core_validation layer is the main layer performing state tracking, object and state lifetime validation, and validation of the consistency and coherency between these states and the device's requirements, limits, and capabilities. Currently, it is divided into three main areas of validation: Draw State, Memory Tracking, and Shader Checking.
+
+### VK_LAYER_LUNARG_core_validation Draw State Details Table
+The Draw State portion of the core validation layer tracks state leading into Draw commands. This includes the Pipeline state, dynamic state, shaders, and descriptor set state. This functionality validates the consistency and correctness between and within these states.
| Check | Overview | ENUM DRAWSTATE_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
| Valid Pipeline Layouts | Verify that sets being bound are compatible with their PipelineLayout and that the last-bound PSO PipelineLayout at Draw time is compatible with all bound sets used by that PSO | PIPELINE_LAYOUTS_INCOMPATIBLE | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TBD | None |
-| Validate DbgMarker exensions | Validates that DbgMarker extensions have been enabled before use | INVALID_EXTENSION | vkCmdDbgMarkerBegin vkCmdDbgMarkerEnd | TBD | None |
| Valid BeginCommandBuffer state | Must not call Begin on command buffers that are being recorded, and primary command buffers must specify VK_NULL_HANDLE for RenderPass or Framebuffer parameters, while secondary command buffers must provide non-null parameters. | BEGIN_CB_INVALID_STATE | vkBeginCommandBuffer | PrimaryCommandBufferFramebufferAndRenderpass SecondaryCommandBufferFramebufferAndRenderpass | None |
| Command Buffer Simultaneous Use | Violation of VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT rules. Most likely attempting to simultaneously use a CmdBuffer w/o having that bit set. This also warns if you add secondary command buffer w/o that bit set to a primary command buffer that does have that bit set. | INVALID_CB_SIMULTANEOUS_USE | vkQueueSubmit vkCmdExecuteCommands | TODO | Write test |
| Valid Command Buffer Reset | Can only reset individual command buffer that was allocated from a pool with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT set | INVALID_COMMAND_BUFFER_RESET | vkBeginCommandBuffer vkResetCommandBuffer | CommandBufferResetErrors | None |
-| PSO Bound | Verify that a properly created and valid pipeline object is bound to the CommandBuffer specified in these calls | NO_PIPELINE_BOUND | vkCmdBindDescriptorSets vkCmdBindVertexBuffers | PipelineNotBound | This check is currently more related to VK_LAYER_LUNARG_draw_state data structures and less about verifying that PSO is bound at all appropriate points in API. For API purposes, need to make sure this is checked at Draw time and any other relevant calls. |
-| Valid DescriptorPool | Verifies that the descriptor set pool object was properly created and is valid | INVALID_POOL | vkResetDescriptorPool vkAllocateDescriptorSets | None | This is just an internal layer data structure check. VK_LAYER_LUNARG_param_checker or VK_LAYER_LUNARG_object_tracker should really catch bad DSPool |
+| PSO Bound | Verify that a properly created and valid pipeline object is bound to the CommandBuffer specified in these calls | NO_PIPELINE_BOUND | vkCmdBindDescriptorSets vkCmdBindVertexBuffers | PipelineNotBound | This check is currently more related to VK_LAYER_LUNARG_core_validation internal data structures and less about verifying that PSO is bound at all appropriate points in API. For API purposes, need to make sure this is checked at Draw time and any other relevant calls. |
+| Valid DescriptorPool | Verifies that the descriptor set pool object was properly created and is valid | INVALID_POOL | vkResetDescriptorPool vkAllocateDescriptorSets | None | This is just an internal layer data structure check. VK_LAYER_LUNARG_parameter_validation or VK_LAYER_LUNARG_object_tracker should really catch bad DSPool |
| Valid DescriptorSet | Validate that descriptor set was properly created and is currently valid | INVALID_SET | vkCmdBindDescriptorSets | None | Is this needed other places (like Update/Clear descriptors) |
| Valid DescriptorSetLayout | Flag DescriptorSetLayout object that was not properly created | INVALID_LAYOUT | vkAllocateDescriptorSets | None | Anywhere else to check this? |
| Valid Pipeline | Flag VkPipeline object that was not properly created | INVALID_PIPELINE | vkCmdBindPipeline | InvalidPipeline | NA |
| Valid PipelineLayout | Flag VkPipelineLayout object that was not properly created | INVALID_PIPELINE_LAYOUT | vkCmdBindPipeline | TODO | Write test for this case |
| Valid Pipeline Create Info | Tests for the following: That compute shaders are not specified for the graphics pipeline, tess evaluation and tess control shaders are included or excluded as a pair, that VK_PRIMITIVE_TOPOLOGY_PATCH_LIST is set as IA topology for tessellation pipelines, that VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only set for tessellation pipelines, and that a Vtx Shader is specified | INVALID_PIPELINE_CREATE_STATE | vkCreateGraphicsPipelines | InvalidPipelineCreateState | NA |
-| Valid CommandBuffer | Validates that the command buffer object was properly created and is currently valid | INVALID_COMMAND_BUFFER | vkQueueSubmit vkBeginCommandBuffer vkEndCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearAttachments vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdBeginRenderPass vkCmdNextSubpass vkCmdEndRenderPass vkCmdExecuteCommands vkCmdDbgMarkerBegin vkCmdDbgMarkerEnd vkAllocateCommandBuffers | None | NA |
+| Valid CommandBuffer | Validates that the command buffer object was properly created and is currently valid | INVALID_COMMAND_BUFFER | vkQueueSubmit vkBeginCommandBuffer vkEndCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearAttachments vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdBeginRenderPass vkCmdNextSubpass vkCmdEndRenderPass vkCmdExecuteCommands vkAllocateCommandBuffers | None | NA |
| Vtx Buffer Bounds | Check if VBO index too large for PSO Vtx binding count, and that at least one vertex buffer is attached to pipeline object | VTX_INDEX_OUT_OF_BOUNDS | vkCmdBindDescriptorSets vkCmdBindVertexBuffers | VtxBufferBadIndex | NA |
| Idx Buffer Alignment | Verify that offset of Index buffer falls on an alignment boundary as defined by IdxBufferAlignmentError param | VTX_INDEX_ALIGNMENT_ERROR | vkCmdBindIndexBuffer | IdxBufferAlignmentError | NA |
| Cmd Buffer End | Verifies that EndCommandBuffer was called for this commandBuffer at QueueSubmit time | NO_END_COMMAND_BUFFER | vkQueueSubmit | NoEndCommandBuffer | NA |
@@ -59,7 +76,7 @@ The VK_LAYER_LUNARG_draw_state layer tracks state leading into Draw cmds. This i
| Correct Clear Use | Warn user if CmdClear for Color or DepthStencil issued to Cmd Buffer prior to a Draw Cmd. RenderPass LOAD_OP_CLEAR is preferred in this case. | CLEAR_CMD_BEFORE_DRAW | vkCmdClearColorImage vkCmdClearDepthStencilImage | ClearCmdNoDraw | NA |
| Index Buffer Binding | Verify that an index buffer is bound at the point when an indexed draw is attempted. | INDEX_BUFFER_NOT_BOUND | vkCmdDrawIndexed vkCmdDrawIndexedIndirect | TODO | Implement validation test |
| Viewport and Scissors match | In a PSO, viewportCount and scissorCount must match. Also, for each count that is non-zero, the corresponding data array ptr should be non-NULL. | VIEWPORT_SCISSOR_MISMATCH | vkCreateGraphicsPipelines vkCmdSetViewport vkCmdSetScissor | TODO | Implement validation test |
-| Valid Image Aspects for descriptor Updates | When updating ImageView for Descriptor Sets with layout of DEPTH_STENCIL type, the Image Aspect must not have both the DEPTH and STENCIL aspects set, but must have one of the two set. For COLOR_ATTACHMENT, aspect must have COLOR_BIT set. | INVALID_IMAGE_ASPECT | vkUpdateDescriptorSets | DepthStencilImageViewWithColorAspectBitError | This test hits Image layer error, but tough to create case that that skips that error and gets to VK_LAYER_LUNARG_draw_state error. |
+| Valid Image Aspects for descriptor Updates | When updating ImageView for Descriptor Sets with layout of DEPTH_STENCIL type, the Image Aspect must not have both the DEPTH and STENCIL aspects set, but must have one of the two set. For COLOR_ATTACHMENT, aspect must have COLOR_BIT set. | INVALID_IMAGE_ASPECT | vkUpdateDescriptorSets | DepthStencilImageViewWithColorAspectBitError | This test hits Image layer error, but tough to create a case that skips that error and gets to the VK_LAYER_LUNARG_core_validation draw state error. |
| Valid sampler descriptor Updates | An invalid sampler is used when updating SAMPLER descriptor. | SAMPLER_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | SampleDescriptorUpdateError | Currently only making sure sampler handle is known, can add further validation for sampler parameters |
| Immutable sampler update consistency | Within a single write update, all sampler updates must use either immutable samplers or non-immutable samplers, but not a combination of both. | INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE | vkUpdateDescriptorSets | None | Write a test for this case |
| Valid imageView descriptor Updates | An invalid imageView is used when updating *_IMAGE or *_ATTACHMENT descriptor. | IMAGEVIEW_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | ImageViewDescriptorUpdateError | Currently only making sure imageView handle is known, can add further validation for imageView and underlying image parameters |
@@ -78,44 +95,30 @@ The VK_LAYER_LUNARG_draw_state layer tracks state leading into Draw cmds. This i
| Live Semaphore | When waiting on a semaphore, need to make sure that the semaphore is live and therefore can be signalled, otherwise queue is stalled and cannot make forward progress. | QUEUE_FORWARD_PROGRESS | vkQueueSubmit vkQueueBindSparse vkQueuePresentKHR vkAcquireNextImageKHR | TODO | Create test |
| Storage Buffer Alignment | Storage Buffer offsets in BindDescriptorSets must agree with offset alignment device limit | INVALID_STORAGE_BUFFER_OFFSET | vkCmdBindDescriptorSets | TODO | Create test |
| Uniform Buffer Alignment | Uniform Buffer offsets in BindDescriptorSets must agree with offset alignment device limit | INVALID_UNIFORM_BUFFER_OFFSET | vkCmdBindDescriptorSets | TODO | Create test |
+| Independent Blending | If independent blending is not enabled, all elements of pAttachments must be identical | INDEPENDENT_BLEND | vkCreateGraphicsPipelines | TODO | Create test |
+| Enabled Logic Operations | If logic operations is not enabled, logicOpEnable must be VK_FALSE | DISABLED_LOGIC_OP | vkCreateGraphicsPipelines | TODO | Create test |
+| Valid Logic Operations | If logicOpEnable is VK_TRUE, logicOp must be a valid VkLogicOp value | INVALID_LOGIC_OP | vkCreateGraphicsPipelines | TODO | Create test |
+| QueueFamilyIndex is Valid | Validates that QueueFamilyIndices are less than the number of QueueFamilies | INVALID_QUEUE_INDEX | vkCmdWaitEvents vkCmdPipelineBarrier vkCreateBuffer vkCreateImage | TODO | Create test |
+| Push Constants | Validate that the size of push constant ranges and updates does not exceed maxPushConstantsSize | PUSH_CONSTANTS_ERROR | vkCreatePipelineLayout vkCmdPushConstants | TODO | Create test |
| NA | Enum used for informational messages | NONE | | NA | None |
| NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None |
-| NA | Enum used when VK_LAYER_LUNARG_draw_state attempts to allocate memory for its own internal use and is unable to. | OUT_OF_MEMORY | | NA | None |
-| NA | Enum used when VK_LAYER_LUNARG_draw_state attempts to allocate memory for its own internal use and is unable to. | OUT_OF_MEMORY | | NA | None |
-
-### VK_LAYER_LUNARG_draw_state Pending Work
-Additional checks to be added to VK_LAYER_LUNARG_draw_state
-
- 7. Lifetime validation (See [bug 13383](https://cvs.khronos.org/bugzilla/show_bug.cgi?id=13383))
- 8. XGL_DESCRIPTOR_SET
- 9. Cannot be deleted until no longer in use on GPU, or referenced in any pending command.
- 10. Sets in XGL_DESCRIPTOR_REGION_USAGE_NON_FREE regions can never be deleted. Instead the xglClearDescriptorRegion() deletes all sets.
- 11. Sets in XGL_DESCRIPTOR_REGION_USAGE_DYNAMIC regions can be deleted.
- 12. XGL_DESCRIPTOR_SET_LAYOUT
- 13. What do IHVs want here?
- 14. Option 1 (assuming this one): Must not be deleted until all sets and layout chains referencing the set layout are deleted.
- 15. Option 2: Can be freely deleted after usage.
- 19. XGL_DESCRIPTOR_REGION
- 20. Cannot be deleted until no longer in use on the GPU, or referenced in any pending command.
- 21. XGL_BUFFER_VIEW, XGL_IMAGE_VIEW, etc
- 22. Cannot be deleted until the descriptors referencing the objects are deleted.
- 23. For ClearAttachments function, verify that the index of referenced attachment actually exists
- 24. GetRenderAreaGranularity - The pname:renderPass parameter must be the same as the one given in the sname:VkRenderPassBeginInfo structure for which the render area is relevant.
- 28. Verify that all relevent dynamic state objects are bound (See https://cvs.khronos.org/bugzilla/show_bug.cgi?id=14323)
- 30. At PSO creation time, there is no case when NOT including a FS should flag an error since there exist dynamic state configurations that can be set to cause a FS to not be required. Instead, in the case when no FS is in the PSO, validation should detect at runtime if dynamic state will require a FS, and in those case issue a runtime warning about undefined behavior. (see bug https://cvs.khronos.org/bugzilla/show_bug.cgi?id=14429)
- 31. Error if a cmdbuffer is submitted on a queue whose family doesn't match the family of the pool from which it was created.
- 32. Update Gfx Pipe Create Info shadowing to remove new/delete and instead use unique_ptrs for auto clean-up
- 33. Add validation for Pipeline Derivatives (see Pipeline Derivatives) section of the spec
-
-## VK_LAYER_LUNARG_ShaderChecker
-
-### VK_LAYER_LUNARG_ShaderChecker Overview
-
-The VK_LAYER_LUNARG_ShaderChecker functionality is part of VK_LAYER_LUNARG_draw_state layer and it inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time.
-It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checks
-depends on the pair of pipeline stages involved.
-
-### VK_LAYER_LUNARG_ShaderChecker Details Table
+| NA | Enum used when VK_LAYER_LUNARG_core_validation attempts to allocate memory for its own internal use and is unable to. | OUT_OF_MEMORY | | NA | None |
+
+### VK_LAYER_LUNARG_core_validation Draw State Pending Work
+Additional Draw State-related checks to be added:
+
+ 1. Lifetime validation (See [bug 13383](https://cvs.khronos.org/bugzilla/show_bug.cgi?id=13383))
+ 2. GetRenderAreaGranularity - The pname:renderPass parameter must be the same as the one given in the sname:VkRenderPassBeginInfo structure for which the render area is relevant.
+ 3. Update Gfx Pipe Create Info shadowing to remove new/delete and instead use unique_ptrs for auto clean-up
+ 4. Add validation for Pipeline Derivatives (see Pipeline Derivatives) section of the spec
+
+ See the Khronos GitHub repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests.
+
+
+### VK_LAYER_LUNARG_core_validation Shader Checker Details Table
+The Shader Checker portion of the VK_LAYER_LUNARG_core_validation layer inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time.
It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checks depends on the pair of pipeline stages involved.
| Check | Overview | ENUM SHADER_CHECKER_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
@@ -124,25 +127,64 @@ depends on the pair of pipeline stages involved.
| Type mismatch | Flag error if a location has inconsistent types | INTERFACE_TYPE_MISMATCH | vkCreateGraphicsPipelines | CreatePipeline*TypeMismatch | Between shader stages, an exact structural type match is required. Between VI and VS, or between FS and CB, only the basic component type must match (float for UNORM/SNORM/FLOAT, int for SINT, uint for UINT) as the VI and CB stages perform conversions to the exact format. |
| Inconsistent shader | Flag error if an inconsistent SPIR-V image is detected. Possible cases include broken type definitions which the layer fails to walk. | INCONSISTENT_SPIRV | vkCreateGraphicsPipelines | TODO | All current tests use the reference compiler to produce valid SPIRV images from GLSL. |
| Non-SPIRV shader | Flag warning if a non-SPIR-V shader image is detected. This can occur if early drivers are ingesting GLSL. VK_LAYER_LUNARG_ShaderChecker cannot analyze non-SPIRV shaders, so this suppresses most other checks. | NON_SPIRV_SHADER | vkCreateGraphicsPipelines | TODO | NA |
-| FS mixed broadcast | Flag error if the fragment shader writes both the legacy gl_FragCoord (which broadcasts to all CBs) and custom FS outputs. | FS_MIXED_BROADCAST | vkCreateGraphicsPipelines | TODO | Reference compiler refuses to compile shaders which do this |
| VI Binding Descriptions | Validate that there is a single vertex input binding description for each binding | INCONSISTENT_VI | vkCreateGraphicsPipelines | CreatePipelineAttribBindingConflict | NA |
| Shader Stage Check | Warns if shader stage is unsupported | UNKNOWN_STAGE | vkCreateGraphicsPipelines | TBD | NA |
| Shader Specialization | Error if specialization entry data is not fully contained within the specialization data block. | BAD_SPECIALIZATION | vkCreateGraphicsPipelines vkCreateComputePipelines | TBD | NA |
| Missing Descriptor | Flags error if shader attempts to use a descriptor binding not declared in the layout | MISSING_DESCRIPTOR | vkCreateGraphicsPipelines | CreatePipelineUniformBlockNotProvided | NA |
| Missing Entrypoint | Flags error if specified entrypoint is not present in the shader module | MISSING_ENTRYPOINT | vkCreateGraphicsPipelines | TBD | NA |
+| Push constant out of range | Flags error if a member of a push constant block is not contained within a push constant range specified in the pipeline layout | PUSH_CONSTANT_OUT_OF_RANGE | vkCreateGraphicsPipelines | CreatePipelinePushContantsNotInLayout | NA |
+| Push constant not accessible from stage | Flags error if the push constant range containing a push constant block member is not accessible from the current shader stage. | PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE | vkCreateGraphicsPipelines | TBD | NA |
+| Descriptor not accessible from stage | Flags error if a descriptor used by a shader stage does not include that stage in its stageFlags | DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE | vkCreateGraphicsPipelines | TBD | NA |
+| Descriptor type mismatch | Flags error if a descriptor type does not match the shader resource type. | DESCRIPTOR_TYPE_MISMATCH | vkCreateGraphicsPipelines | TBD | NA |
+| Feature not enabled | Flags error if a capability declared by the shader requires a feature not enabled on the device | FEATURE_NOT_ENABLED | vkCreateGraphicsPipelines | TBD | NA |
+| Bad capability | Flags error if a capability declared by the shader is not supported by Vulkan shaders | BAD_CAPABILITY | vkCreateGraphicsPipelines | TBD | NA |
| NA | Enum used for informational messages | NONE | | NA | None |
-### VK_LAYER_LUNARG_ShaderChecker Pending Work
+### VK_LAYER_LUNARG_core_validation Shader Checker Pending Work
- Additional test cases for variously broken SPIRV images
- Validation of a single SPIRV image in isolation (the spec describes many constraints)
+
+ See the Khronos GitHub repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests.
+
+### VK_LAYER_LUNARG_core_validation Memory Tracker Details Table
+The Mem Tracker portion of the VK_LAYER_LUNARG_core_validation layer tracks memory objects and references and validates that they are managed correctly by the application. This includes tracking object bindings, memory hazards, and memory object lifetimes. Several other hazard-related issues involving command buffers, fences, and memory mapping are also validated in this layer segment.
+
+| Check | Overview | ENUM MEMTRACK_* | Relevant API | Testname | Notes/TODO |
+| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
+| Valid Command Buffer | Verifies that the command buffer was properly created and is currently valid | INVALID_CB | vkCmdBindPipeline vkCmdSetViewport vkCmdSetLineWidth vkCmdSetDepthBias vkCmdSetBlendConstants vkCmdSetDepthBounds vkCmdSetStencilCompareMask vkCmdSetStencilWriteMask vkCmdSetStencilReference vkBeginCommandBuffer vkResetCommandBuffer vkDestroyDevice vkFreeMemory | NA | NA |
+| Valid Memory Object | Verifies that the memory object was properly created and is currently valid | INVALID_MEM_OBJ | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkFreeMemory vkBindBufferMemory vkBindImageMemory vkQueueBindSparse | NA | NA |
+| Memory Aliasing | Flag error if image and/or buffer memory binding ranges overlap | INVALID_ALIASING | vkBindBufferMemory vkBindImageMemory | TODO | Implement test |
+| Memory Layout | Flag error if attachment is cleared with invalid first layout | INVALID_LAYOUT | vkCmdBeginRenderPass | TODO | Implement test |
+| Free Referenced Memory | Checks to see if memory being freed still has current references | FREED_MEM_REF | vkFreeMemory | FreeBoundMemory | NA |
+| Memory Properly Bound | Validate that the memory object referenced in the call was properly created, is currently valid, and is properly bound to the object | MISSING_MEM_BINDINGS | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyQueryPoolResults vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
+| Valid Object | Verifies that the specified Vulkan object was created properly and is currently valid | INVALID_OBJECT | vkCmdBindPipeline vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
+| Bind Invalid Memory | Validate that memory object was correctly created, that the command buffer object was correctly created, and that both are currently valid objects. | MEMORY_BINDING_ERROR | vkQueueBindSparse vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | The valid Object checks are primarily the responsibility of the VK_LAYER_LUNARG_object_tracker layer, so these checks are more of a backup in case VK_LAYER_LUNARG_object_tracker is not enabled |
+| Objects Not Destroyed | Verify all objects destroyed at DestroyDevice time | MEMORY_LEAK | vkDestroyDevice | NA | NA |
+| Memory Mapping State | Verifies that mapped memory is CPU-visible | INVALID_STATE | vkMapMemory | MapMemWithoutHostVisibleBit | NA |
+| Command Buffer Synchronization | Command Buffer must be complete before BeginCommandBuffer or ResetCommandBuffer can be called | RESET_CB_WHILE_IN_FLIGHT | vkBeginCommandBuffer vkResetCommandBuffer | CallBeginCommandBufferBeforeCompletion | NA |
+| Submitted Fence Status | Verifies that: The fence is not submitted in an already signaled state, that ResetFences is not called with a fence in an unsignaled state, and that fences being checked have been submitted | INVALID_FENCE_STATE | vkResetFences vkWaitForFences vkQueueSubmit vkGetFenceStatus | SubmitSignaledFence ResetUnsignaledFence | Create test(s) for case where an unsubmitted fence is having its status checked |
+| Immutable Memory Binding | Validates that non-sparse memory bindings are immutable, so objects are not re-bound | REBIND_OBJECT | vkBindBufferMemory, vkBindImageMemory | RebindMemory | NA |
+| Image/Buffer Usage bits | Verify correct USAGE bits set based on how Images and Buffers are used | INVALID_USAGE_FLAG | vkCreateImage, vkCreateBuffer, vkCreateBufferView, vkCmdCopyBuffer, vkCmdCopyQueryPoolResults, vkCmdCopyImage, vkCmdBlitImage, vkCmdCopyBufferToImage, vkCmdCopyImageToBuffer, vkCmdUpdateBuffer, vkCmdFillBuffer | InvalidUsageBits | NA |
+| Objects Not Destroyed Warning | Warns if any memory objects have not been freed before their objects are destroyed | MEM_OBJ_CLEAR_EMPTY_BINDINGS | vkDestroyDevice | TBD | NA |
+| Memory Map Range Checks | Validates that Memory Mapping Requests are valid for the Memory Object (in-range, not currently mapped on Map, currently mapped on UnMap, size is non-zero) | INVALID_MAP | vkMapMemory | TBD | NA |
+| NA | Enum used for informational messages | NONE | | NA | None |
+| NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None |
+
+### VK_LAYER_LUNARG_core_validation Memory Tracker Pending Work and Enhancements
+ See the Khronos GitHub repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests.
-## VK_LAYER_LUNARG_param_checker
+1. Consolidate error messages and make them consistent
+2. Add validation for maximum memory references, maximum object counts, and object leaks
+3. Warn on image/buffer deletion if USAGE bits were set that were not needed
4. Modify INVALID_FENCE_STATE to be a WARNING instead of an ERROR
-### VK_LAYER_LUNARG_param_checker Overview
+## VK_LAYER_LUNARG_parameter_validation
-The VK_LAYER_LUNARG_param_checker layer validates parameter values and flags errors for any values that are outside of acceptable values for the given parameter.
+### VK_LAYER_LUNARG_parameter_validation Overview
-### VK_LAYER_LUNARG_param_checker Details Table
+The VK_LAYER_LUNARG_parameter_validation layer validates parameter values and flags errors for any values that are outside of acceptable values for the given parameter.
+
+### VK_LAYER_LUNARG_parameter_validation Details Table
| Check | Overview | ENUM | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
@@ -150,11 +192,11 @@ The VK_LAYER_LUNARG_param_checker layer validates parameter values and flags err
| Call results, Output Parameters | Return values are checked for VK_SUCCESS, returned pointers are checked to be NON-NULL, enumerated types of return values are checked to be within the defined range. | NA | vkEnumeratePhysicalDevices vkGetPhysicalDeviceFeatures vkGetPhysicalDeviceFormatProperties vkGetPhysicalDeviceImageFormatProperties vkGetPhysicalDeviceLimits vkGetPhysicalDeviceProperties vkGetPhysicalDeviceQueueFamilyProperties vkGetPhysicalDeviceMemoryProperties vkGetDeviceQueue vkQueueSubmit vkQueueWaitIdle vkDeviceWaitIdle vkAllocateMemory vkFreeMemory vkMapMemory vkUnmapMemory vkFlushMappedMemoryRanges vkInvalidateMappedMemoryRanges vkGetDeviceMemoryCommitment vkBindBufferMemory vkBindImageMemory vkGetBufferMemoryRequirements vkGetImageMemoryRequirements vkGetImageSparseMemoryRequirements vkGetPhysicalDeviceSparseImageFormatProperties vkQueueBindSparse vkCreateFence vkDestroyFence vkResetFences vkGetFenceStatus vkWaitForFences vkCreateSemaphore vkDestroySemaphore vkCreateEvent vkDestroyEvent vkGetEventStatus vkSetEvent vkResetEvent vkCreateQueryPool vkDestroyQueryPool vkGetQueryPoolResults vkCreateBuffer vkDestroyBuffer vkCreateBufferView vkDestroyBufferView vkCreateImage vkDestroyImage vkGetImageSubresourceLayout vkCreateImageView vkDestroyImageView vkDestroyShaderModule vkCreatePipelineCache vkDestroyPipelineCache vkGetPipelineCacheData vkMergePipelineCaches vkCreateGraphicsPipelines vkCreateComputePipelines vkDestroyPipeline vkCreatePipelineLayout vkDestroyPipelineLayout vkCreateSampler vkDestroySampler vkCreateDescriptorSetLayout vkDestroyDescriptorSetLayout vkCreateDescriptorPool vkDestroyDescriptorPool vkResetDescriptorPool vkAllocateDescriptorSets vkFreeDescriptorSets vkUpdateDescriptorSets vkCreateFramebuffer vkDestroyFramebuffer vkCreateRenderPass vkDestroyRenderPass vkGetRenderAreaGranularity vkCreateCommandPool vkDestroyCommandPool vkResetCommandPool vkAllocateCommandBuffers vkFreeCommandBuffers vkBeginCommandBuffer vkEndCommandBuffer 
vkResetCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdCopyQueryPoolResults vkCmdPushConstants vkCmdBeginRenderPass vkCmdNextSubpass vkCmdEndRenderPass vkCmdExecuteCommands | TBD | NA |
| NA | Enum used for informational messages | NONE | | NA | None |
-### VK_LAYER_LUNARG_param_checker Pending Work
+### VK_LAYER_LUNARG_parameter_validation Pending Work
Additional work to be done
1. Source2 was creating a VK_FORMAT_R8_SRGB texture (and image view) which was not supported by the underlying implementation (rendersystemtest imageformat test). Checking that formats are supported by the implementation is something the validation layer could do using the VK_FORMAT_INFO_TYPE_PROPERTIES query. There are probably a bunch of checks here you could be doing around vkCreateImage formats along with whether image/color/depth attachment views are valid. I’m not sure how much of this is already there.
- 2. From AMD: we were using an image view with a swizzle of VK_COLOR_COMPONENT_FORMAT_A with a BC1_RGB texture, which is not valid because the texture does not have an alpha channel. In general, should validate that the swizzles do not reference components not in the texture format.
+ 2. From AMD: we were using an image view with a swizzle of VK_COLOR_COMPONENT_FORMAT_A with a BC1_RGB texture, which is not valid because the texture does not have an alpha channel. In general, should validate that the swizzles do not reference components not in the texture format.
3. When querying VK_PHYSICAL_DEVICE_INFO_TYPE_QUEUE_PROPERTIES must provide enough memory for all the queues on the device (not just 1 when device has multiple queues).
4. INT & FLOAT bordercolors. Border color int/float selection must match associated texture format.
5. Flag error on VkBufferCreateInfo if buffer size is 0
@@ -163,6 +205,8 @@ Additional work to be done
8. Check for valid VkIndexType in vkCmdBindIndexBuffer() should be in PreCmdBindIndexBuffer() call
9. Check for valid VkPipelineBindPoint in vkCmdBindPipeline() & vkCmdBindDescriptorSets() should be in PreCmdBindPipeline() & PreCmdBindDescriptorSets() calls respectively.
+ See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
+
## VK_LAYER_LUNARG_image
### VK_LAYER_LUNARG_image Layer Overview
@@ -188,55 +232,11 @@ DETAILS TABLE PENDING
| Verify Correct Image Filter | Verifies that specified filter is valid | INVALID_FILTER | vkCmdBlitImage | TBD | NA |
| Verify Correct Image Settings | Verifies that values are valid for a given resource or subresource | INVALID_IMAGE_RESOURCE | vkCmdPipelineBarrier | TBD | NA |
| Verify Image Format Limits | Verifies that image creation parameters are within the device format limits | INVALID_FORMAT_LIMITS_VIOLATION | vkCreateImage | TBD | NA |
+| Verify Layout | Verifies the layouts are valid for this image operation | INVALID_LAYOUT | vkCreateImage vkCmdClearColorImage | TBD | NA |
| NA | Enum used for informational messages | NONE | | NA | None |
### VK_LAYER_LUNARG_image Pending Work
-Additional work to be done
-
-## VK_LAYER_LUNARG_mem_tracker
-
-### VK_LAYER_LUNARG_mem_tracker Overview
-
-The VK_LAYER_LUNARG_mem_tracker layer tracks memory objects and references and validates that they are managed correctly by the application. This includes tracking object bindings, memory hazards, and memory object lifetimes. VK_LAYER_LUNARG_mem_tracker validates several other hazard-related issues related to command buffers, fences, and memory mapping.
-
-### VK_LAYER_LUNARG_mem_tracker Details Table
-
-| Check | Overview | ENUM MEMTRACK_* | Relevant API | Testname | Notes/TODO |
-| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
-| Valid Command Buffer | Verifies that the command buffer was properly created and is currently valid | INVALID_CB | vkCmdBindPipeline vkCmdSetViewport vkCmdSetLineWidth vkCmdSetDepthBias vkCmdSetBlendConstants vkCmdSetDepthBounds vkCmdSetStencilCompareMask vkCmdSetStencilWriteMask vkCmdSetStencilReference vkBeginCommandBuffer vkResetCommandBuffer vkDestroyDevice vkFreeMemory | NA | NA |
-| Valid Memory Object | Verifies that the memory object was properly created and is currently valid | INVALID_MEM_OBJ | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkFreeMemory vkBindBufferMemory vkBindImageMemory vkQueueBindSparse | NA | NA |
-| Memory Aliasing | Flag error if image and/or buffer memory binding ranges overlap | INVALID_ALIASING | vkBindBufferMemory vkBindImageMemory | TODO | Implement test |
-| Memory Layout | Flag error if attachment is cleared with invalid first layout | INVALID_LAYOUT | vkCmdBeginRenderPass | TODO | Implement test |
-| Free Referenced Memory | Checks to see if memory being freed still has current references | FREED_MEM_REF | vmFreeMemory | FreeBoundMemory | NA |
-| Memory Properly Bound | Validate that the memory object referenced in the call was properly created, is currently valid, and is properly bound to the object | MISSING_MEM_BINDINGS | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyQueryPoolResults vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
-| Valid Object | Verifies that the specified Vulkan object was created properly and is currently valid | INVALID_OBJECT | vkCmdBindPipeline vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
-| Bind Invalid Memory | Validate that memory object was correctly created, that the command buffer object was correctly created, and that both are currently valid objects. | MEMORY_BINDING_ERROR | vkQueueBindSparse vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | The valid Object checks are primarily the responsibilty of VK_LAYER_LUNARG_object_tracker layer, so these checks are more of a backup in case VK_LAYER_LUNARG_object_tracker is not enabled |
-| Objects Not Destroyed | Verify all objects destroyed at DestroyDevice time | MEMORY_LEAK | vkDestroyDevice | NA | NA |
-| Memory Mapping State | Verifies that mapped memory is CPU-visible | INVALID_STATE | vkMapMemory | MapMemWithoutHostVisibleBit | NA |
-| Command Buffer Synchronization | Command Buffer must be complete before BeginCommandBuffer or ResetCommandBuffer can be called | RESET_CB_WHILE_IN_FLIGHT | vkBeginCommandBuffer vkResetCommandBuffer | CallBeginCommandBufferBeforeCompletion CallBeginCommandBufferBeforeCompletion | NA |
-| Submitted Fence Status | Verifies that: The fence is not submitted in an already signaled state, that ResetFences is not called with a fence in an unsignaled state, and that fences being checked have been submitted | INVALID_FENCE_STATE | vkResetFences vkWaitForFences vkQueueSubmit vkGetFenceStatus | SubmitSignaledFence ResetUnsignaledFence | Create test(s) for case where an unsubmitted fence is having its status checked |
-| Immutable Memory Binding | Validates that non-sparse memory bindings are immutable, so objects are not re-boundt | REBIND_OBJECT | vkBindBufferMemory, vkBindImageMemory | RebindMemory | NA |
-| Image/Buffer Usage bits | Verify correct USAGE bits set based on how Images and Buffers are used | INVALID_USAGE_FLAG | vkCreateImage, vkCreateBuffer, vkCreateBufferView, vkCmdCopyBuffer, vkCmdCopyQueryPoolResults, vkCmdCopyImage, vkCmdBlitImage, vkCmdCopyBufferToImage, vkCmdCopyImageToBuffer, vkCmdUpdateBuffer, vkCmdFillBuffer | InvalidUsageBits | NA |
-| Objects Not Destroyed Warning | Warns if any memory objects have not been freed before their objects are destroyed | MEM_OBJ_CLEAR_EMPTY_BINDINGS | vkDestroyDevice | TBD | NA |
-| Memory Map Range Checks | Validates that Memory Mapping Requests are valid for the Memory Object (in-range, not currently mapped on Map, currently mapped on UnMap, size is non-zero) | INVALID_MAP | vkMapMemory | TBD | NA |
-| NA | Enum used for informational messages | NONE | | NA | None |
-| NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None |
-
-### VK_LAYER_LUNARG_mem_tracker Pending Work
-
-#### VK_LAYER_LUNARG_mem_tracker Enhancements
-
-1. Flag any memory hazards: Validate that the pipeline barriers for buffers are sufficient to avoid hazards
-2. Make sure that the XGL_IMAGE_VIEW_ATTACH_INFO.layout matches the layout of the image as determined by the last IMAGE_MEMORY_BARRIER
-3. Verify that the XGL_IMAGE_MEMORY_BARRIER.oldLayout matches the actual previous layout (this one was super important for previous work in dealing with out-of-order command buffer generation). Note that these need to be tracked for each subresource.
-4. Update for new Memory Binding Model
-5. Consolidate error messages and make them consistent
-7. Add validation for having mapped objects in a command list - GPU writing to mapped object is warning
-8. Add validation for maximum memory references, maximum object counts, and object leaks
-9. When performing clears on surfaces that have both Depth and Stencil, WARN user if subresource range for depth and stencil are not both set (see blit_tests.cpp VkCmdClearDepthStencilTest test).
-11. Warn on image/buffer deletion if USAGE bits were set that were not needed
-12. Modify INVALID_FENCE_STATE to be WARNINGs instead of ERROR
-13. Report destroy or modify of resources in use on queues and not cleared by fence or WaitIdle. Could be fence, semaphore, or objects used by submitted CommandBuffers.
+ See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
## VK_LAYER_LUNARG_object_tracker
@@ -261,12 +261,12 @@ The VK_LAYER_LUNARG_object_tracker layer maintains a record of all Vulkan object
### VK_LAYER_LUNARG_object_tracker Pending Work
- 4. Verify images have CmdPipelineBarrier layouts matching new layout parameters to Cmd*Image* functions
- 6. For specific object instances that are allowed to be NULL, update object validation to verify that such objects are either NULL or valid
- 7. Verify cube array VkImageView objects use subresourceRange.arraySize (or effective arraySize when VK_REMAINING_ARRAY_SLICES is specified) that is a multiple of 6.
- 8. Make object maps specific to instance and device. Objects may only be used with matching instance or device.
- 9. Use reference counting for non-dispatchable objects. Multiple object creation calls may return identical handles.
- 10. Update codegen for destroy_obj & validate_obj to generate all of the correct signatures and use the generated code
+ 1. Verify images have CmdPipelineBarrier layouts matching new layout parameters to Cmd*Image* functions
+ 2. For specific object instances that are allowed to be NULL, update object validation to verify that such objects are either NULL or valid
+ 3. Verify cube array VkImageView objects use subresourceRange.arraySize (or effective arraySize when VK_REMAINING_ARRAY_SLICES is specified) that is a multiple of 6.
+ 4. Make object maps specific to instance and device. Objects may only be used with matching instance or device.
+
+ See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
## VK_LAYER_GOOGLE_threading
@@ -302,7 +302,7 @@ It cannot ensure that there is no latent race condition.
| NA | Enum used for informational messages | NONE | | NA | None |
### VK_LAYER_GOOGLE_threading Pending Work
-Additional work to be done
+ See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
## VK_LAYER_LUNARG_device_limits
@@ -340,6 +340,8 @@ For the second category of errors, VK_LAYER_LUNARG_device_limits stores its own
1. For all Formats, call vkGetPhysicalDeviceFormatProperties to pull their properties for the underlying device. After that point, if the app attempts to use any formats in violation of those properties, flag errors (this is done for Images).
 See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
+
## VK_LAYER_LUNARG_swapchain
### Swapchain Overview
@@ -383,6 +385,7 @@ This layer is a work in progress. VK_LAYER_LUNARG_swapchain layer is intended to
| Valid use of queueFamilyIndex | Validates that a queueFamilyIndex is not used before vkGetPhysicalDeviceQueueFamilyProperties() was called | DID_NOT_QUERY_QUEUE_FAMILIES | vkGetPhysicalDeviceSurfaceSupportKHR | NA | None |
| Valid queueFamilyIndex value | Validates that a queueFamilyIndex value is less-than pQueueFamilyPropertyCount returned by vkGetPhysicalDeviceQueueFamilyProperties | QUEUE_FAMILY_INDEX_TOO_LARGE | vkGetPhysicalDeviceSurfaceSupportKHR | NA | None |
| Supported combination of queue and surface | Validates that the surface associated with a swapchain was seen to support the queueFamilyIndex of a given queue | SURFACE_NOT_SUPPORTED_WITH_QUEUE | vkQueuePresentKHR | NA | None |
+| Proper synchronization of acquired images | vkAcquireNextImageKHR should be called with a valid semaphore and/or fence | NO_SYNC_FOR_ACQUIRE | vkAcquireNextImageKHR | NA | None |
Note: The following platform-specific functions are not mentioned above, because they are protected by ifdefs, which cause test failures:
@@ -395,7 +398,7 @@ Note: The following platform-specific functions are not mentioned above, because
- vkGetPhysicalDeviceWin32PresentationSupportKHR
- vkCreateXcbSurfaceKHR
- vkGetPhysicalDeviceXcbPresentationSupportKHR
-- vkCreateXlibSurfaceKHR
+- vkCreateXlibSurfaceKHR
- vkGetPhysicalDeviceXlibPresentationSupportKHR
### VK_LAYER_LUNARG_Swapchain Pending Work
@@ -405,6 +408,15 @@ Additional checks to be added to VK_LAYER_LUNARG_swapchain
2. One issue that has already come up is correct UsageFlags for WSI SwapChains and SurfaceProperties.
3. Tons of other stuff including semaphore and synchronization validation.
 See the Khronos Vulkan-LoaderAndValidationLayers GitHub repository for additional pending issues, or to submit new validation requests.
+
+## VK_LAYER_GOOGLE_unique_objects
+
+### VK_LAYER_GOOGLE_unique_objects Overview
+
+The unique_objects layer is not a validation layer but a helper layer that assists validation. The Vulkan specification allows objects to have non-unique handles, which makes tracking object lifetimes difficult because it is unclear which object is being referenced upon deletion. The unique_objects layer addresses this by wrapping each object in a unique representation, allowing proper object lifetime tracking. This layer performs no validation on its own and may not be required for the proper operation of all layers or all platforms. One sign that it is needed is the appearance of many errors from the object_tracker layer indicating the use of previously destroyed objects. For optimal effectiveness this layer should be loaded last, so that it resides in the layer chain closest to the display driver and farthest from the application.
+
+
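The wrapping idea described above can be sketched as follows. This is a minimal illustration in Python rather than the layer's actual C++ implementation; the class and method names are hypothetical:

```python
import itertools

class UniqueObjectMap:
    """Illustrative sketch of unique_objects-style handle wrapping:
    every created object gets a fresh unique id, even when the driver
    returns the same underlying handle value twice."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._wrapped = {}  # unique id -> underlying driver handle

    def wrap(self, driver_handle):
        uid = next(self._next_id)
        self._wrapped[uid] = driver_handle
        return uid

    def unwrap(self, uid):
        # A KeyError here models a use-after-destroy that the wrapped
        # scheme can now detect, which raw non-unique handles cannot.
        return self._wrapped[uid]

    def destroy(self, uid):
        del self._wrapped[uid]

m = UniqueObjectMap()
a = m.wrap(0xDEAD)   # driver hands back the same handle value twice
b = m.wrap(0xDEAD)
assert a != b        # but the app sees two distinct wrapped handles
m.destroy(a)
assert m.unwrap(b) == 0xDEAD  # b remains trackable after a is destroyed
```

With raw handles, destroying `a` would leave it ambiguous whether `b` was still alive; the unique wrappers remove that ambiguity.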
## General Pending Work
A place to capture general validation work to be done. This includes new checks that don't clearly fit into the above layers.
diff --git a/layers/windows/VkLayer_mem_tracker.json b/layers/windows/VkLayer_core_validation.json
index 34e7a9ad3..4fbae55cd 100644
--- a/layers/windows/VkLayer_mem_tracker.json
+++ b/layers/windows/VkLayer_core_validation.json
@@ -1,16 +1,16 @@
{
"file_format_version" : "1.0.0",
"layer" : {
- "name": "VK_LAYER_LUNARG_mem_tracker",
+ "name": "VK_LAYER_LUNARG_core_validation",
"type": "GLOBAL",
- "library_path": ".\\VkLayer_mem_tracker.dll",
- "api_version": "1.0.3",
+ "library_path": ".\\VkLayer_core_validation.dll",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_device_limits.json b/layers/windows/VkLayer_device_limits.json
index 2f12df880..79c744ae6 100644
--- a/layers/windows/VkLayer_device_limits.json
+++ b/layers/windows/VkLayer_device_limits.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_device_limits",
"type": "GLOBAL",
"library_path": ".\\VkLayer_device_limits.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_draw_state.json b/layers/windows/VkLayer_draw_state.json
deleted file mode 100644
index ebaa874cf..000000000
--- a/layers/windows/VkLayer_draw_state.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
- "file_format_version" : "1.0.0",
- "layer" : {
- "name": "VK_LAYER_LUNARG_draw_state",
- "type": "GLOBAL",
- "library_path": ".\\VkLayer_draw_state.dll",
- "api_version": "1.0.3",
- "implementation_version": "1",
- "description": "LunarG Validation Layer",
- "instance_extensions": [
- {
- "name": "VK_EXT_debug_report",
- "spec_version": "1"
- }
- ],
- "device_extensions": [
- {
- "name": "VK_LUNARG_DEBUG_MARKER",
- "spec_version": "0",
- "entrypoints": ["vkCmdDbgMarkerBegin","vkCmdDbgMarkerEnd"]
- }
- ]
- }
-}
diff --git a/layers/windows/VkLayer_image.json b/layers/windows/VkLayer_image.json
index 94db9a86e..dbcbfb22a 100644
--- a/layers/windows/VkLayer_image.json
+++ b/layers/windows/VkLayer_image.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_image",
"type": "GLOBAL",
"library_path": ".\\VkLayer_image.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_object_tracker.json b/layers/windows/VkLayer_object_tracker.json
index 98d977d34..04c280954 100644
--- a/layers/windows/VkLayer_object_tracker.json
+++ b/layers/windows/VkLayer_object_tracker.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_object_tracker",
"type": "GLOBAL",
"library_path": ".\\VkLayer_object_tracker.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/linux/VkLayer_param_checker.json b/layers/windows/VkLayer_parameter_validation.json
index f568139c0..8d75f27f0 100644
--- a/layers/linux/VkLayer_param_checker.json
+++ b/layers/windows/VkLayer_parameter_validation.json
@@ -1,16 +1,16 @@
{
"file_format_version" : "1.0.0",
"layer" : {
- "name": "VK_LAYER_LUNARG_param_checker",
+ "name": "VK_LAYER_LUNARG_parameter_validation",
"type": "GLOBAL",
- "library_path": "./libVkLayer_param_checker.so",
- "api_version": "1.0.3",
+ "library_path": ".\\VkLayer_parameter_validation.dll",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_swapchain.json b/layers/windows/VkLayer_swapchain.json
index 557c3b6d9..52feca201 100644
--- a/layers/windows/VkLayer_swapchain.json
+++ b/layers/windows/VkLayer_swapchain.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_LUNARG_swapchain",
"type": "GLOBAL",
"library_path": ".\\VkLayer_swapchain.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "LunarG Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_threading.json b/layers/windows/VkLayer_threading.json
index 24fb65e90..228a148ec 100644
--- a/layers/windows/VkLayer_threading.json
+++ b/layers/windows/VkLayer_threading.json
@@ -4,13 +4,13 @@
"name": "VK_LAYER_GOOGLE_threading",
"type": "GLOBAL",
"library_path": ".\\VkLayer_threading.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "Google Validation Layer",
"instance_extensions": [
{
"name": "VK_EXT_debug_report",
- "spec_version": "1"
+ "spec_version": "2"
}
]
}
diff --git a/layers/windows/VkLayer_unique_objects.json b/layers/windows/VkLayer_unique_objects.json
index 59edda3aa..11f53433c 100644
--- a/layers/windows/VkLayer_unique_objects.json
+++ b/layers/windows/VkLayer_unique_objects.json
@@ -4,7 +4,7 @@
"name": "VK_LAYER_GOOGLE_unique_objects",
"type": "GLOBAL",
"library_path": ".\\VkLayer_unique_objects.dll",
- "api_version": "1.0.3",
+ "api_version": "1.0.6",
"implementation_version": "1",
"description": "Google Validation Layer"
}
diff --git a/libs/vkjson/vkjson_info.cc b/libs/vkjson/vkjson_info.cc
index 67b973f82..670eabbf8 100644
--- a/libs/vkjson/vkjson_info.cc
+++ b/libs/vkjson/vkjson_info.cc
@@ -124,7 +124,7 @@ int main(int argc, char* argv[]) {
1,
"",
0,
- VK_API_VERSION};
+ VK_API_VERSION_1_0};
VkInstanceCreateInfo instance_info = {VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
nullptr,
0,
diff --git a/loader/CMakeLists.txt b/loader/CMakeLists.txt
index 16300729d..654c2a2ff 100644
--- a/loader/CMakeLists.txt
+++ b/loader/CMakeLists.txt
@@ -5,7 +5,7 @@ include_directories(
if (WIN32)
add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/loader/vk-loader-generate.py win-def-file vulkan-${MAJOR}.dll all > ${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/loader/vk-loader-generate.py ${DisplayServer} win-def-file vulkan-${MAJOR}.dll all > ${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def
DEPENDS ${PROJECT_SOURCE_DIR}/loader/vk-loader-generate.py ${PROJECT_SOURCE_DIR}/vulkan.py)
endif()
@@ -13,7 +13,7 @@ endif()
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -DDEBUG")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DDEBUG")
-set(LOADER_SRCS
+set(NORMAL_LOADER_SRCS
loader.c
loader.h
vk_loader_platform.h
@@ -26,29 +26,38 @@ set(LOADER_SRCS
gpa_helper.h
cJSON.c
cJSON.h
- dev_ext_trampoline.c
murmurhash.c
murmurhash.h
)
+set (OPT_LOADER_SRCS
+ dev_ext_trampoline.c
+)
+
+set (LOADER_SRCS ${NORMAL_LOADER_SRCS} ${OPT_LOADER_SRCS})
if (WIN32)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_CRT_SECURE_NO_WARNINGS")
- # build dev_ext_trampoline with release flags to allow tail-call optimization
- # cmake and MSVC doesn't make this easy to do
- set_source_files_properties(${LOADER_SRCS} PROPERTIES COMPILE_FLAGS ${CMAKE_C_FLAGS_DEBUG})
- set(CMAKE_C_FLAGS_DEBUG "")
- set_source_files_properties(dev_ext_trampoline.c PROPERTIES COMPILE_FLAGS ${CMAKE_C_FLAGS_RELEASE})
+ # Build dev_ext_trampoline.c with -O2 to allow tail-call optimization.
+ # Build other C files with normal options
+ # setup two Cmake targets (loader-norm and loader-opt) for the different compilation flags
+ separate_arguments(LOCAL_C_FLAGS_DBG WINDOWS_COMMAND ${CMAKE_C_FLAGS_DEBUG})
+ set(CMAKE_C_FLAGS_DEBUG " ")
+ separate_arguments(LOCAL_C_FLAGS_REL WINDOWS_COMMAND ${CMAKE_C_FLAGS_RELEASE})
- add_library(vulkan-${MAJOR} SHARED ${LOADER_SRCS} dirent_on_windows.c ${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def)
+ add_library(loader-norm OBJECT ${NORMAL_LOADER_SRCS} dirent_on_windows.c)
+ target_compile_options(loader-norm PUBLIC "$<$<CONFIG:DEBUG>:${LOCAL_C_FLAGS_DBG}>")
+ add_library(loader-opt OBJECT ${OPT_LOADER_SRCS})
+ target_compile_options(loader-opt PUBLIC "$<$<CONFIG:DEBUG>:${LOCAL_C_FLAGS_REL}>")
+ add_library(vulkan-${MAJOR} SHARED $<TARGET_OBJECTS:loader-opt> $<TARGET_OBJECTS:loader-norm> ${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def)
set_target_properties(vulkan-${MAJOR} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/vulkan-${MAJOR}.def")
- add_library(VKstatic.${MAJOR} STATIC ${LOADER_SRCS} dirent_on_windows.c)
+ add_library(VKstatic.${MAJOR} STATIC $<TARGET_OBJECTS:loader-opt> $<TARGET_OBJECTS:loader-norm>)
set_target_properties(VKstatic.${MAJOR} PROPERTIES OUTPUT_NAME VKstatic.${MAJOR})
target_link_libraries(vulkan-${MAJOR} shlwapi)
else()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wpointer-arith")
add_library(vulkan SHARED ${LOADER_SRCS})
- set_target_properties(vulkan PROPERTIES SOVERSION "1" VERSION "1.0.3")
+ set_target_properties(vulkan PROPERTIES SOVERSION "1" VERSION "1.0.5")
target_link_libraries(vulkan -ldl -lpthread -lm)
endif()
diff --git a/loader/LoaderAndLayerInterface.md b/loader/LoaderAndLayerInterface.md
index d7965d5a0..d695f472b 100644
--- a/loader/LoaderAndLayerInterface.md
+++ b/loader/LoaderAndLayerInterface.md
@@ -117,12 +117,12 @@ object they are given.
Applications are not required to link directly to the loader library, instead
they can use the appropriate platform specific dynamic symbol lookup on the
loader library to initialize the application’s own dispatch table. This allows
-an application to fail gracefully if the loader cannot be found and provide the
-fastest mechanism for the application to call Vulkan functions. An application
-will only need to query (via system calls such as dlsym()) the address of
-vkGetInstanceProcAddr from the loader library. Using vkGetInstanceProcAddr the
-application can then discover the address of all instance and global functions
-and extensions, such as vkCreateInstance,
+an application to fail gracefully if the loader cannot be found, and it
+provides the fastest mechanism for the application to call Vulkan functions. An
+application will only need to query (via system calls such as dlsym()) the
+address of vkGetInstanceProcAddr from the loader library. Using
+vkGetInstanceProcAddr the application can then discover the address of all
+instance and global functions and extensions, such as vkCreateInstance,
vkEnumerateInstanceExtensionProperties and vkEnumerateInstanceLayerProperties
in a platform independent way.
@@ -131,11 +131,11 @@ SDKs, OS package distributions and IHV driver packages. These details are
beyond the scope of this document. However, the name and versioning of the
Vulkan loader library is specified so an app can link to the correct Vulkan ABI
library version. Vulkan versioning is such that ABI backwards compatibility is
-guaranteed for all versions with the same major number (eg 1.0 and 1.1). On
+guaranteed for all versions with the same major number (e.g. 1.0 and 1.1). On
Windows, the loader library encodes the ABI version in its name such that
multiple ABI incompatible versions of the loader can peacefully coexist on a
-given system. The vulkan loader library key name is “vulkan-&lt;ABI
-version&gt;”. For example, for Vulkan version 1.X on Windows the library
+given system. The Vulkan loader library file name is “vulkan-&lt;ABI
+version&gt;.dll”. For example, for Vulkan version 1.X on Windows the library
filename is vulkan-1.dll. And this library file can typically be found in the
windows/system32 directory.
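
The naming rules above can be summarized in a small sketch (illustrative Python, assuming only the two platforms discussed here):

```python
def loader_library_name(platform, abi_major=1):
    """Compute the platform-specific Vulkan loader library name for a
    given ABI major version, per the naming rules described above."""
    if platform == "windows":
        # Windows encodes the ABI version in the file name itself.
        return "vulkan-%d.dll" % abi_major
    if platform == "linux":
        # Linux encodes the ABI version in the shared-library suffix.
        return "libvulkan.so.%d" % abi_major
    raise ValueError("unsupported platform: " + platform)

assert loader_library_name("windows") == "vulkan-1.dll"
assert loader_library_name("linux") == "libvulkan.so.1"
```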
@@ -143,14 +143,14 @@ For Linux, shared libraries are versioned based on a suffix. Thus, the ABI
number is not encoded in the base of the library filename as on Windows. On
Linux an application wanting to link to the latest Vulkan ABI version would
just link to the name vulkan (libvulkan.so). A specific Vulkan ABI version can
-also be linked to by applications (eg libvulkan.so.1).
+also be linked to by applications (e.g. libvulkan.so.1).
Applications desiring Vulkan functionality beyond what the core API offers may
use various layers or extensions. A layer cannot add new or modify existing
Vulkan commands, but may offer extensions that do. A common use of layers is
for API validation. A developer can use validation layers during application
development, but during production the layers can be disabled by the
-application. Thus, eliminating the overhead of validating the applications
+application. Thus, eliminating the overhead of validating the application's
usage of the API. Layers discovered by the loader can be reported to the
application via vkEnumerateInstanceLayerProperties and
vkEnumerateDeviceLayerProperties, for instance and device layers respectively.
@@ -183,9 +183,9 @@ An example of using these environment variables to activate the validation
layer VK\_LAYER\_LUNARG\_param\_checker on Windows or Linux is as follows:
```
-> $ export VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_param_checker
+> $ export VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_parameter_validation
-> $ export VK_DEVICE_LAYERS=VK_LAYER_LUNARG_param_checker
+> $ export VK_DEVICE_LAYERS=VK_LAYER_LUNARG_parameter_validation
```
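
A value supplied this way is a separated list of layer names; a sketch of splitting such a value (assumption: the loader's separator is platform specific, ':' on Linux and ';' on Windows):

```python
def parse_layer_list(value, sep=":"):
    """Split a VK_INSTANCE_LAYERS-style value into individual layer
    names, dropping empty entries. Pass sep=";" for Windows-style
    values (hedged: verify the separator against your loader docs)."""
    return [name for name in value.split(sep) if name]

layers = parse_layer_list(
    "VK_LAYER_LUNARG_parameter_validation:VK_LAYER_LUNARG_core_validation")
assert layers == ["VK_LAYER_LUNARG_parameter_validation",
                  "VK_LAYER_LUNARG_core_validation"]
```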
**Note**: Many layers, including all LunarG validation layers are “global”
@@ -214,13 +214,14 @@ vkEnumerateInstanceExtensionProperties. Device extensions can be discovered via
vkEnumerateDeviceExtensionProperties. The loader discovers and aggregates all
extensions from layers (both explicit and implicit), ICDs and the loader before
reporting them to the application in vkEnumerate\*ExtensionProperties. The
-pLayerName parameter in these functions are used to select either a single
-layer or the Vulkan platform implementation. If pLayerName is NULL, extensions
-from Vulkan implementation components (including loader, implicit layers, and
-ICDs) are enumerated. If pLayerName is equal to a discovered layer module name
-then any extensions from that layer (which may be implicit or explicit) are
-enumerated. Duplicate extensions (eg an implicit layer and ICD might report
-support for the same extension) are eliminated by the loader. Extensions must
+pLayerName parameter in these functions is used to select either a single layer
+or the Vulkan platform implementation. If pLayerName is NULL, extensions from
+Vulkan implementation components (including loader, implicit layers, and ICDs)
+are enumerated. If pLayerName is equal to a discovered layer module name then
+any extensions from that layer (which may be implicit or explicit) are
+enumerated. Duplicate extensions (e.g. an implicit layer and ICD might report
+support for the same extension) are eliminated by the loader. For duplicates, the
+ICD version is reported and the layer version is culled. Extensions must
be enabled (in vkCreateInstance or vkCreateDevice) before they can be used.
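
The de-duplication behavior described above can be sketched as follows (hypothetical Python, not the loader's actual code; the data shapes are illustrative):

```python
def merge_extensions(icd_exts, layer_exts):
    """Aggregate extension properties from ICDs and layers, eliminating
    duplicates. When the same extension name appears in both lists, the
    ICD's entry (including its spec version) wins, per the text above."""
    merged = {name: version for name, version in layer_exts}
    merged.update(dict(icd_exts))  # ICD entries overwrite layer duplicates
    return merged

icd = [("VK_KHR_surface", 25)]
layer = [("VK_KHR_surface", 24), ("VK_EXT_debug_report", 2)]
exts = merge_extensions(icd, layer)
assert exts["VK_KHR_surface"] == 25      # ICD version kept, layer's culled
assert exts["VK_EXT_debug_report"] == 2  # layer-only extension survives
```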
Extension command entry points should be queried via vkGetInstanceProcAddr or
@@ -237,7 +238,7 @@ what could happen if the application were to use vkGetDeviceProcAddr for the
function “vkGetDeviceQueue” and “vkDestroyDevice” but not “vkAllocateMemory”.
The resulting function pointer (fpGetDeviceQueue) would be the ICD’s entry
point if the loader and any enabled layers do not need to see that call. Even
-if an enabled layer intercepts the call (eg vkDestroyDevice) the loader
+if an enabled layer intercepts the call (e.g. vkDestroyDevice) the loader
trampoline code is skipped for function pointers obtained via
vkGetDeviceProcAddr. This also means that function pointers obtained via
vkGetDeviceProcAddr will only work with the specific VkDevice it was created
@@ -281,7 +282,7 @@ of an ICD shared library (".dll") file. For example:
"file_format_version": "1.0.0",
"ICD": {
"library_path": "path to ICD library",
- "api_version": "1.0.3"
+ "api_version": "1.0.5"
}
}
```
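
Such a manifest can be consumed with ordinary JSON parsing; a minimal sketch (illustrative only, not the loader's implementation):

```python
import json

def read_icd_manifest(text):
    """Extract the ICD library path and api_version from a manifest
    in the JSON format shown above."""
    data = json.loads(text)
    icd = data["ICD"]
    return icd["library_path"], icd["api_version"]

manifest = '''{
  "file_format_version": "1.0.0",
  "ICD": { "library_path": "path to ICD library", "api_version": "1.0.5" }
}'''
path, version = read_icd_manifest(manifest)
assert path == "path to ICD library"
assert version == "1.0.5"
```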
@@ -325,9 +326,9 @@ specified contents:
| Text File Name | Text File Contents |
|----------------|--------------------|
-|vk\_vendora.json | "ICD": { "library\_path": "C:\\\\VENDORA\\\\vk\_vendora.dll", "api_version": "1.0.3" } |
-| vendorb\_vk.json | "ICD": { "library\_path": "vendorb\_vk.dll", "api_version": "1.0.3" } |
-|vendorc\_icd.json | "ICD": { "library\_path": "vedorc\_icd.dll", "api_version": "1.0.3" }|
+|vk\_vendora.json | "ICD": { "library\_path": "C:\\\\VENDORA\\\\vk\_vendora.dll", "api_version": "1.0.5" } |
+| vendorb\_vk.json | "ICD": { "library\_path": "vendorb\_vk.dll", "api_version": "1.0.5" } |
+|vendorc\_icd.json | "ICD": { "library\_path": "vendorc\_icd.dll", "api_version": "1.0.5" }|
Then the loader will open the three files mentioned in the "Text File Contents"
column, and then try to load and use the three shared libraries indicated by
@@ -376,6 +377,10 @@ in the following Linux directories:
/usr/share/vulkan/icd.d
/etc/vulkan/icd.d
+$HOME/.local/share/vulkan/icd.d
+
+Where $HOME is the current home directory of the application's user id; this
+path will be ignored for suid programs.
These directories will contain text information files (a.k.a. "manifest
files") that use a JSON format.
@@ -388,7 +393,7 @@ pathname of an ICD shared library (".so") file. For example:
"file_format_version": "1.0.0",
"ICD": {
"library_path": "path to ICD library",
- "api_version": "1.0.3"
+ "api_version": "1.0.5"
}
}
```
@@ -425,9 +430,9 @@ the specified contents:
| Text File Name | Text File Contents |
|-------------------|------------------------|
-| vk\_vendora.json | "ICD": { "library\_path": "vendora.so", "api_version": "1.0.3" } |
-| vendorb\_vk.json | "ICD": { "library\_path": "vendorb\_vulkan\_icd.so", "api_version": "1.0.3" } |
-| vendorc\_icd.json | "ICD": { "library\_path": "/usr/lib/VENDORC/icd.so", "api_version": "1.0.3" } |
+| vk\_vendora.json | "ICD": { "library\_path": "vendora.so", "api_version": "1.0.5" } |
+| vendorb\_vk.json | "ICD": { "library\_path": "vendorb\_vulkan\_icd.so", "api_version": "1.0.5" } |
+| vendorc\_icd.json | "ICD": { "library\_path": "/usr/lib/VENDORC/icd.so", "api_version": "1.0.5" } |
then the loader will open the three files mentioned in the "Text File Contents"
column, and then try to load and use the three shared libraries indicated by
@@ -449,7 +454,7 @@ other words, only the ICDs listed in "VK\_ICD\_FILENAMES" will be used.
The "VK\_ICD\_FILENAMES" environment variable is a colon-separated list of ICD
manifest files, containing the following:
-- A filename (e.g. "libvkicd.json") in the "/usr/share/vulkan/icd.d" or "/etc/vulkan/icd.d" system directories
+- A filename (e.g. "libvkicd.json") in the "/usr/share/vulkan/icd.d", "/etc/vulkan/icd.d", or "$HOME/.local/share/vulkan/icd.d" directories
- A full pathname (e.g. "/my\_build/my\_icd.json")
@@ -499,19 +504,21 @@ Linux and Windows:
1) Recommended
-- vk\_icdGetInstanceProcAddr exported in the ICD library and it returns valid
- function pointers for all the global level and instance level Vulkan commands,
- and also vkGetDeviceProcAddr. Global level commands are those which contain no
- dispatchable object as the first parameter, such as vkCreateInstance and
- vkEnumerateInstanceExtensionProperties. The ICD must support querying global
- level entry points by calling vk\_icdGetInstanceProcAddr with a NULL VkInstance
- parameter. Instance level commands are those that have either VkInstance, or
- VkPhysicalDevice as the first parameter dispatchable object. Both core entry
- points and any instance extension entry points the ICD supports should be
- available via vk\_icdGetInstanceProcAddr. Future Vulkan instance extensions may
- define and use new instance level dispatchable objects other than VkInstance
- and VkPhysicalDevice, in which case, extensions entry points using these newly
- defined dispatchable oibjects must be queryable via vk\_icdGetInstanceProcAddr.
+- vk\_icdGetInstanceProcAddr is exported by the ICD library and it returns
+ valid function pointers for all the global level and instance level Vulkan
+ commands, and also for vkGetDeviceProcAddr. Global level commands are those
+ which contain no dispatchable object as the first parameter, such as
+ vkCreateInstance and vkEnumerateInstanceExtensionProperties. The ICD must
+ support querying global level entry points by calling
+ vk\_icdGetInstanceProcAddr with a NULL VkInstance parameter. Instance level
+  commands are those that have either VkInstance or VkPhysicalDevice as the
+ first parameter dispatchable object. Both core entry points and any instance
+ extension entry points the ICD supports should be available via
+ vk\_icdGetInstanceProcAddr. Future Vulkan instance extensions may define and
+ use new instance level dispatchable objects other than VkInstance and
+ VkPhysicalDevice, in which case extension entry points using these newly
+ defined dispatchable objects must be queryable via
+ vk\_icdGetInstanceProcAddr.
- All other Vulkan entry points must either NOT be exported from the ICD
library or else NOT use the official Vulkan function names if they are
@@ -521,12 +528,14 @@ Linux and Windows:
application. In other words, the ICD library exported Vulkan symbols must not
clash with the loader's exported Vulkan symbols.
-- Beware of interposing by dynamic OS library loaders if the offical Vulkan names
-are used. On Linux, if offical names are used, the ICD library must be linked with -Bsymbolic.
+- Beware of interposing by dynamic OS library loaders if the official Vulkan
+ names are used. On Linux, if official names are used, the ICD library must be
+ linked with -Bsymbolic.
+
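The recommended interface above can be sketched as a name-to-pointer lookup. This is a minimal illustration, not a real driver: the `VkInstance` and `PFN_vkVoidFunction` types are reduced to stand-ins, and the `icd_*` internal functions are hypothetical; only `vk_icdGetInstanceProcAddr` and the Vulkan command names are from the interface description.

```c
#include <stddef.h>
#include <string.h>

typedef void *VkInstance;                 /* stand-in for the real handle */
typedef void (*PFN_vkVoidFunction)(void); /* stand-in for the real typedef */

/* Hypothetical ICD-internal implementations. */
static void icd_CreateInstance(void) {}
static void icd_EnumerateInstanceExtensionProperties(void) {}
static void icd_GetDeviceProcAddr(void) {}

/* The ICD's single well-known export: map command names to entry
 * points. Global level names must resolve even when instance is NULL. */
PFN_vkVoidFunction vk_icdGetInstanceProcAddr(VkInstance instance,
                                             const char *pName) {
    static const struct {
        const char *name;
        PFN_vkVoidFunction fn;
    } table[] = {
        {"vkCreateInstance", icd_CreateInstance},
        {"vkEnumerateInstanceExtensionProperties",
         icd_EnumerateInstanceExtensionProperties},
        {"vkGetDeviceProcAddr", icd_GetDeviceProcAddr},
    };
    (void)instance;
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i) {
        if (strcmp(pName, table[i].name) == 0)
            return table[i].fn;
    }
    return NULL; /* unknown or unsupported command */
}
```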
2) Deprecated
- vkGetInstanceProcAddr exported in the ICD library and returns valid function
- pointers for all the Vulkan API entrypoints.
+ pointers for all the Vulkan API entry points.
- vkCreateInstance exported in the ICD library;
@@ -551,8 +560,8 @@ destruction is handled by the loader as follows:
VkIcdSurface\* structure.
4. VkIcdSurface\* structures include VkIcdSurfaceWin32, VkIcdSurfaceXcb,
- VkIcdSurfaceXlib, VkIcdSurfaceMir, and VkIcdSurfaceWayland. The first field in
- the structure is a VkIcdSurfaceBase enumerant that indicates westher the
+ VkIcdSurfaceXlib, VkIcdSurfaceMir, and VkIcdSurfaceWayland. The first field
+ in the structure is a VkIcdSurfaceBase enumerant that indicates whether the
surface object is Win32, Xcb, Xlib, Mir, or Wayland.
As previously covered, the loader requires dispatch tables to be accessible
@@ -570,7 +579,8 @@ dispatchable objects created by ICDs are as follows:
2. This pointer points to a regular C structure with the first entry being a
pointer. Note: for any C\++ ICDs that implement VK objects directly as C\++
classes, the C\++ compiler may put a vtable at offset zero if your class is
- virtual. In this case use a regular C structure (see below).
+ non-POD due to the use of a virtual function. In this case use a regular C
+ structure (see below).
3. The loader checks for a magic value (ICD\_LOADER\_MAGIC) in all the created
dispatchable objects, as follows (see include/vulkan/vk\_icd.h):
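The magic-value check can be sketched as below. This paraphrases the helpers in include/vulkan/vk\_icd.h from memory; consult that header for the authoritative definitions.

```c
#include <stdbool.h>
#include <stdint.h>

#define ICD_LOADER_MAGIC 0x01CDC0DE

/* First field of every dispatchable object: either the loader's magic
 * value (checked at creation time) or the dispatch table pointer. */
typedef union VK_LOADER_DATA_ {
    uintptr_t loaderMagic;
    void *loaderData;
} VK_LOADER_DATA;

/* The ICD stamps each newly created dispatchable object. */
static inline void set_loader_magic_value(void *pNewObject) {
    VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
    loader_info->loaderMagic = ICD_LOADER_MAGIC;
}

/* The loader verifies the stamp before overwriting the first field
 * with its dispatch table pointer. */
static inline bool valid_loader_magic_value(void *pNewObject) {
    const VK_LOADER_DATA *loader_info = (const VK_LOADER_DATA *)pNewObject;
    return (loader_info->loaderMagic & 0xffffffff) == ICD_LOADER_MAGIC;
}
```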
@@ -600,14 +610,13 @@ Additional Notes:
- The loader will filter out extensions requested in vkCreateInstance and
vkCreateDevice before calling into the ICD; Filtering will be of extensions
-advertised by entities (eg layers) different from the ICD in question.
-- The loader will not call ICD for vkEnumerate\*LayerProperties() as layer
+advertised by entities (e.g. layers) different from the ICD in question.
+- The loader will not call the ICD for vkEnumerate\*LayerProperties() as layer
properties are obtained from the layer libraries and layer JSON files.
- If an ICD library wants to implement a layer it can do so by having the
appropriate layer JSON manifest file refer to the ICD library file.
- The loader will not call the ICD for
vkEnumerate\*ExtensionProperties(pLayerName != NULL).
-- The ICD may or may not implement a dispatch table.
#### Android
@@ -694,13 +703,13 @@ extension information as follows
single number, increasing with backward compatible changes.
- (required for device\_extensions with entry points) extension
-"entrypoints" - array of device extension entrypoints; not used for instance
+"entrypoints" - array of device extension entry points; not used for instance
extensions
- (sometimes required) "functions" - mapping list of function entry points. If
multiple layers exist within the same shared library (or if a layer is in the
same shared library as an ICD), this must be specified to allow each layer to
-have its own vkGet\*ProcAddr entrypoints that can be found by the loader. At
+have its own vkGet\*ProcAddr entry points that can be found by the loader. At
this time, only the following two functions are required:
- "vkGetInstanceProcAddr" name
@@ -734,7 +743,7 @@ For example:
"name": "VK_LAYER_LUNARG_OverlayLayer",
"type": "DEVICE",
"library_path": "vkOverlayLayer.dll",
- "api_version" : "1.0.3",
+ "api_version" : "1.0.5",
"implementation_version" : "2",
"description" : "LunarG HUD layer",
"functions": {
@@ -751,9 +760,9 @@ For example:
"spec_version": "3"
}
],
- device_extensions": [
+ "device_extensions": [
{
- "name": "VK_LUNARG_DEBUG_MARKER",
+ "name": "VK_DEBUG_MARKER_EXT",
"spec_version": "1",
"entrypoints": ["vkCmdDbgMarkerBegin", "vkCmdDbgMarkerEnd"]
}
@@ -810,6 +819,11 @@ The Vulkan loader will scan the files in the following Linux directories:
/usr/share/vulkan/implicit\_layer.d
/etc/vulkan/explicit\_layer.d
/etc/vulkan/implicit\_layer.d
+$HOME/.local/share/vulkan/explicit\_layer.d
+$HOME/.local/share/vulkan/implicit\_layer.d
+
+Where $HOME is the current home directory of the application's user id; this
+path will be ignored for suid programs.
Explicit layers are those which are enabled by an application (e.g. with the
vkCreateInstance function), or by an environment variable (as mentioned
@@ -847,7 +861,7 @@ file
- (required) "implementation\_version" – layer version, a single number
increasing with backward compatible changes.
-- (required) "description" – informative decription of the layer.
+- (required) "description" – informative description of the layer.
- (optional) "device\_extensions" or "instance\_extensions" - array of
extension information as follows
@@ -858,13 +872,13 @@ extension information as follows
single number, increasing with backward compatible changes.
- (required for device extensions with entry points) extension
-"entrypoints" - array of device extension entrypoints; not used for instance
+"entrypoints" - array of device extension entry points; not used for instance
extensions
- (sometimes required) "functions" - mapping list of function entry points. If
multiple layers exist within the same shared library (or if a layer is in the
same shared library as an ICD), this must be specified to allow each layer to
-have its own vkGet\*ProcAddr entrypoints that can be found by the loader. At
+have its own vkGet\*ProcAddr entry points that can be found by the loader. At
this time, only the following two functions are required:
- "vkGetInstanceProcAddr" name
- "vkGetDeviceProcAddr" name
@@ -895,7 +909,7 @@ For example:
"name": "VK_LAYER_LUNARG_OverlayLayer",
"type": "DEVICE",
"library_path": "vkOverlayLayer.dll",
- "api_version" : "1.0.3",
+ "api_version" : "1.0.5",
"implementation_version" : "2",
"description" : "LunarG HUD layer",
"functions": {
@@ -912,9 +926,9 @@ For example:
"spec_version": "3"
}
],
- device_extensions": [
+ "device_extensions": [
{
- "name": "VK_LUNARG_DEBUG_MARKER",
+ "name": "VK_DEBUG_MARKER_EXT",
"spec_version": "1",
"entrypoints": ["vkCmdDbgMarkerBegin", "vkCmdDbgMarkerEnd"]
}
@@ -986,32 +1000,34 @@ CreateDevice. A layer can intercept Vulkan instance commands, device commands
or both. For a layer to intercept instance commands, it must participate in the
instance call chain. For a layer to intercept device commands, it must
participate in the device chain. Layers which participate in intercepting calls
-in both the intance and device chains are called global layers.
+in both the instance and device chains are called global layers.
Normally, when a layer intercepts a given Vulkan command, it will call down the
instance or device chain as needed. The loader and all layer libraries that
participate in a call chain cooperate to ensure the correct sequencing of calls
from one entity to the next. This group effort for call chain sequencing is
-hereinafter referred to as disitributed dispatch. In distributed dispatch,
-since each layer is responsible for properly calling the next entity in the
-device or instance chain, a dispatch mechanism is required for all Vulkan
-commands a layer intercepts. For Vulkan commands that are not intercepted by a
-layer, or if the layer chooses to terminate a given Vulkman command by not
-calling down the chain, then no dispatch mechanism is needed for that
-particular Vulkan command. Only for those Vulkan commands, which may be a
-subset of all Vulkan commands, that a layer intercepts is a dispatching
-mechanism by the layer needed. The loader is responsible for dispatching all
-core and instance extension Vulkan commands to the first entity in the chain.
-
-Instance level Vulkan commands are those that have the disapatchable objects
-VkInstance, or VkPhysicalDevice as the first parameter and alos includes
+hereinafter referred to as distributed dispatch. In distributed dispatch, since
+each layer is responsible for properly calling the next entity in the device or
+instance chain, a dispatch mechanism is required for all Vulkan commands a
+layer intercepts. For Vulkan commands that are not intercepted by a layer, or
+if the layer chooses to terminate a given Vulkan command by not calling down
+the chain, then no dispatch mechanism is needed for that particular Vulkan
+command. A layer needs a dispatching mechanism only for those Vulkan commands,
+which may be a subset of all Vulkan commands, that it intercepts. The loader
+is responsible for dispatching all core and instance
+extension Vulkan commands to the first entity in the chain.
+
+Instance level Vulkan commands are those that have the dispatchable objects
+VkInstance or VkPhysicalDevice as the first parameter, and also include
vkCreateInstance.
+
Device level Vulkan commands are those that use VkDevice, VkQueue or
VkCommandBuffer as the first parameter and also include vkCreateDevice. Future
extensions may introduce new instance or device level dispatchable objects, so
the above lists may be extended in the future.
-#### Discovery of layer entrypoints
+#### Discovery of layer entry points
+
For the layer libraries that have been discovered by the loader, their
intercepting entry points that will participate in a device or instance call
chain need to be available to the loader or whatever layer is before them in
@@ -1030,13 +1046,12 @@ The name of this function is specified in various ways: 1) the layer manifest
JSON file in the "functions", "vkGetDeviceProcAddr" node (Linux/Windows); 2) it
is named "vkGetDeviceProcAddr"; 3) it is "<layerName>GetDeviceProcAddr"
(Android).
-- A layer's vkGetInstanceProcAddr function (irregardless of it's name) must
-return the local entry points for all instance level Vulkan commands it
-intercepts. At a minimum, this includes vkGetInstanceProcAddr and
-vkCreateInstance.
-- A layer's vkGetDeviceProcAddr function (irregardless of it's name) must
-return the entry points for all device level Vulkan commands it intercepts. At
-a minimum, this includes vkGetDeviceProcAddr and vkCreateDevice.
+- A layer's vkGetInstanceProcAddr function (regardless of its name) must return
+the local entry points for all instance level Vulkan commands it intercepts. At
+a minimum, this includes vkGetInstanceProcAddr and vkCreateInstance.
+- A layer's vkGetDeviceProcAddr function (regardless of its name) must return
+the entry points for all device level Vulkan commands it intercepts. At a
+minimum, this includes vkGetDeviceProcAddr and vkCreateDevice.
- There are no requirements on the names of the intercepting functions a layer
implements except those listed above for vkGetInstanceProcAddr and
vkGetDeviceProcAddr.
@@ -1047,6 +1062,7 @@ instance level commands it intercepts including vkCreateDevice.
parameter equal to NULL for device level commands it intercepts.
#### Layer intercept requirements
+
- Layers intercept a Vulkan command by defining a C/C++ function with signature
identical to the Vulkan API for that command.
- Other than the two vkGet*ProcAddr, all other functions intercepted by a layer
@@ -1065,6 +1081,7 @@ vkQueueSubmit. Any additional calls inserted by a layer must be on the same
chain. They should call down the chain.
#### Distributed dispatching requirements
+
- For each entry point a layer intercepts, it must keep track of the entry
point residing in the next entity in the chain it will call down into. In other
words, the layer must have a list of pointers to functions of the appropriate
@@ -1075,16 +1092,17 @@ for clarity will be referred to as a dispatch table.
- A layer can use the VkLayerInstanceDispatchTable structure as an instance
dispatch table (see include/vulkan/vk_layer.h).
- A layer's vkGetInstanceProcAddr function uses the next entity's
-vkGetInstanceProcAddr to call down the chain for unknown (ie non-intercepted)
+vkGetInstanceProcAddr to call down the chain for unknown (i.e. non-intercepted)
functions.
- A layer's vkGetDeviceProcAddr function uses the next entity's
-vkGetDeviceProcAddr to call down the chain for unknown (ie non-intercepted)
+vkGetDeviceProcAddr to call down the chain for unknown (i.e. non-intercepted)
functions.
#### Layer dispatch initialization
-- A layer intializes it's instance dispatch table within it's vkCreateInstance
+
+- A layer initializes its instance dispatch table within its vkCreateInstance
function.
-- A layer intializes it's device dispatch table within it's vkCreateDevice
+- A layer initializes its device dispatch table within its vkCreateDevice
function.
- The loader passes a linked list of initialization structures to layers via
the "pNext" field in the VkInstanceCreateInfo and VkDeviceCreateInfo structures
@@ -1122,34 +1140,164 @@ the VkInstanceCreateInfo/VkDeviceCreateInfo structure.
Get*ProcAddr function once for each Vulkan command needed in your dispatch
table
-TODO: Example code for CreateInstance.
+#### Example code for CreateInstance
+
+```cpp
+VkResult vkCreateInstance(
+ const VkInstanceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkInstance *pInstance)
+{
+ VkLayerInstanceCreateInfo *chain_info =
+ get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+
+ assert(chain_info->u.pLayerInfo);
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr =
+ chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkCreateInstance fpCreateInstance =
+ (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
+ if (fpCreateInstance == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Advance the link info for the next element of the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ // Continue call down the chain
+ VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
+ if (result != VK_SUCCESS)
+ return result;
+
+    // Allocate new structure to store persistent data
+ layer_data *my_data = new layer_data;
+
+ // Associate this instance with the newly allocated data
+ // layer will store any persistent state it needs for
+ // this instance in the my_data structure
+ layer_data_map[get_dispatch_key(*pInstance)] = my_data;
+
+ // Create layer's dispatch table using GetInstanceProcAddr of
+ // next layer in the chain.
+ my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
+ layer_init_instance_dispatch_table(
+ *pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
-TODO: Example code for CreateDevice.
+ // Keep track of any extensions that were enabled for this
+ // instance. In this case check for VK_EXT_debug_report
+ my_data->report_data = debug_report_create_instance(
+ my_data->instance_dispatch_table, *pInstance,
+ pCreateInfo->enabledExtensionCount,
+ pCreateInfo->ppEnabledExtensionNames);
+
+ // Other layer initialization
+ ...
+
+ return VK_SUCCESS;
+}
+```
+
+#### Example code for CreateDevice
+
+```cpp
+VkResult
+vkCreateDevice(
+ VkPhysicalDevice gpu,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkDevice *pDevice)
+{
+ VkLayerDeviceCreateInfo *chain_info =
+ get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
+
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr =
+ chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
+ PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr =
+ chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
+ PFN_vkCreateDevice fpCreateDevice =
+ (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
+ if (fpCreateDevice == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Advance the link info for the next element on the chain
+ chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
+
+ VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
+ if (result != VK_SUCCESS) {
+ return result;
+ }
+
+    // Allocate new structure to store persistent data
+ layer_data *my_data = new layer_data;
+
+    // Associate this device with the newly allocated data
+    // layer will store any persistent state it needs for
+    // this device in the my_data structure
+ layer_data_map[get_dispatch_key(*pDevice)] = my_data;
+
+    my_data->device_dispatch_table = new VkLayerDispatchTable;
+    layer_init_device_dispatch_table(
+        *pDevice, my_data->device_dispatch_table, fpGetDeviceProcAddr);
+
+    // Keep track of any extensions that were enabled for this
+    // device. In this case check for VK_EXT_debug_report
+ my_data->report_data = debug_report_create_instance(
+ my_instance_data->report_data, *pDevice);
+
+ // Other layer initialization
+ ...
+
+ return VK_SUCCESS;
+}
+```
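Both examples above call a get_chain_info helper that is not defined in this document. The sketch below shows its core idea, walking the pNext chain for the loader's link node, but with simplified stand-in types: the struct layout, enum values, and the `LOADER_INSTANCE_CREATE_INFO_STYPE` constant are illustrative, not the real vk_layer.h definitions.

```c
#include <stddef.h>

/* Simplified stand-ins for the vk_layer.h chain structures. */
typedef enum { VK_LAYER_LINK_INFO, VK_LOADER_DATA_CALLBACK } VkLayerFunction;

typedef struct LayerInstanceCreateInfo_ {
    const struct LayerInstanceCreateInfo_ *pNext;
    int sType;                /* structure-type tag of this chain node */
    VkLayerFunction function; /* which loader payload this node carries */
} VkLayerInstanceCreateInfo;

#define LOADER_INSTANCE_CREATE_INFO_STYPE 47 /* hypothetical value */

/* Walk the pNext chain until the loader's node with the requested
 * payload is found; NULL if the loader did not provide one. */
static VkLayerInstanceCreateInfo *get_chain_info(const void *pNextChain,
                                                 VkLayerFunction func) {
    const VkLayerInstanceCreateInfo *node = pNextChain;
    while (node != NULL &&
           !(node->sType == LOADER_INSTANCE_CREATE_INFO_STYPE &&
             node->function == func)) {
        node = node->pNext;
    }
    return (VkLayerInstanceCreateInfo *)node;
}
```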
#### Special Considerations
-A layer may want to associate it's own private data with one or more Vulkan objects.
+A layer may want to associate its own private data with one or more Vulkan
+objects.
Two common methods to do this are hash maps and object wrapping. The loader
supports layers wrapping any Vulkan object including dispatchable objects.
Layers which wrap objects should ensure they always unwrap objects before
passing them down the chain. This implies the layer must intercept every Vulkan
command which uses the object in question. Layers above the object wrapping
-layer will see the wrapped object.
+layer will see the wrapped object. Layers which wrap dispatchable objects must
+ensure that the first field in the wrapping structure is a pointer to a
+dispatch table as defined in vk_layer.h. Specifically, an instance wrapped
+dispatchable object could be as follows:
+```
+struct my_wrapped_instance_obj_ {
+ VkLayerInstanceDispatchTable *disp;
+ // whatever data layer wants to add to this object
+};
+```
+A device wrapped dispatchable object could be as follows:
+```
+struct my_wrapped_device_obj_ {
+ VkLayerDispatchTable *disp;
+ // whatever data layer wants to add to this object
+};
+```
-Alternatively, a layer may want to use a hash map to associate data with a given object.
-The key to the map could be the object. Alternatively, for dispatchable objects
-at a given level (eg device or instance) the may layer may want data associated with the all command for the VkDevice or VkInstance. Since there are multiple
-dispatchable objects for a given VkInstance or VkDevice, the VkDevice or
-VkInstance object is not a great map key. Instead the layer should use the
-dispatch table pointer withbin the VkDevice or VkInstance since that will be
-unique for a given VkInstance or VkDevice.
+Alternatively, a layer may want to use a hash map to associate data with a
+given object. The key to the map could be the object. Alternatively, for
+dispatchable objects at a given level (e.g. device or instance) the layer may
+want data associated with the VkDevice or VkInstance objects. Since
+there are multiple dispatchable objects for a given VkInstance or VkDevice, the
+VkDevice or VkInstance object is not a great map key. Instead the layer should
+use the dispatch table pointer within the VkDevice or VkInstance since that
+will be unique for a given VkInstance or VkDevice.
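The dispatch-table-pointer-as-key idiom above is commonly wrapped in a small helper, often named get_dispatch_key in the layer utilities; the sketch below shows the idea under simplified types (the real helper operates on actual Vulkan handles). It works because every dispatchable object stores its dispatch table pointer in its first word.

```c
#include <stdint.h>

/* The dispatch table pointer in a dispatchable object's first word
 * identifies the owning VkInstance or VkDevice, so it makes a stable
 * hash-map key shared by all children of that instance/device. */
typedef uintptr_t dispatch_key;

static inline dispatch_key get_dispatch_key(const void *object) {
    return (dispatch_key)(*(void *const *)object);
}
```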
Layers which create dispatchable objects must take special care. Remember that
loader trampoline code normally fills in the dispatch table pointer in the newly
created object. Thus, the layer must fill in the dispatch table pointer if the
-loader trampoline will not do so. Common cases a layer may create a dispatchable
-object without loader trampoline code is as follows:
+loader trampoline will not do so. Common cases where a layer (or ICD) may
+create a dispatchable object without loader trampoline code are as follows:
- object wrapping layers that wrap dispatchable objects
- layers which add extensions that create dispatchable objects
- layers which insert extra Vulkan commands in the stream of commands they
intercept from the application
+- ICDs which add extensions that create dispatchable objects
+
+To fill in the dispatch table pointer in a newly created dispatchable object,
+the layer should copy the dispatch pointer, which is always the first entry in
+the structure, from an existing parent object of the same level (instance
+versus device). For example, if there is a newly created VkCommandBuffer
+object, then the dispatch pointer from the VkDevice object, which is the
+parent of the VkCommandBuffer object, should be copied into the newly created
+object.
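The copy described above amounts to one pointer assignment; the sketch below reduces the object layouts to their first field only, and the struct and function names are illustrative, not from any Vulkan header.

```c
/* Minimal stand-in: a dispatchable object's first field is its
 * dispatch table pointer. */
typedef struct {
    void *disp;
} DispatchableObject;

/* Give a layer-created child (e.g. a VkCommandBuffer) the same
 * dispatch table pointer as its parent (e.g. the VkDevice), since the
 * loader trampoline will not fill it in for layer-created objects. */
static void copy_dispatch_ptr(DispatchableObject *child,
                              const DispatchableObject *parent) {
    child->disp = parent->disp;
}
```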
diff --git a/loader/cJSON.c b/loader/cJSON.c
index 097866032..f28eadb7e 100644
--- a/loader/cJSON.c
+++ b/loader/cJSON.c
@@ -343,7 +343,7 @@ static const char *parse_string(cJSON *item, const char *str) {
*--ptr2 = ((uc | 0x80) & 0xBF);
uc >>= 6;
case 1:
- *--ptr2 = (uc | firstByteMark[len]);
+ *--ptr2 = ((unsigned char)uc | firstByteMark[len]);
}
ptr2 += len;
break;
@@ -521,7 +521,6 @@ char *cJSON_PrintBuffered(cJSON *item, int prebuffer, int fmt) {
p.length = prebuffer;
p.offset = 0;
return print_value(item, 0, fmt, &p);
- return p.buffer;
}
/* Parser core - when encountering text, process appropriately. */
diff --git a/loader/debug_report.c b/loader/debug_report.c
index 232fa6d6b..7da370ce2 100644
--- a/loader/debug_report.c
+++ b/loader/debug_report.c
@@ -185,14 +185,14 @@ static VKAPI_ATTR void VKAPI_CALL debug_report_DebugReportMessage(
* for CreateDebugReportCallback
*/
-VKAPI_ATTR VkResult VKAPI_CALL loader_CreateDebugReportCallback(
+VKAPI_ATTR VkResult VKAPI_CALL terminator_CreateDebugReportCallback(
VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
const VkAllocationCallbacks *pAllocator,
VkDebugReportCallbackEXT *pCallback) {
VkDebugReportCallbackEXT *icd_info;
const struct loader_icd *icd;
struct loader_instance *inst = (struct loader_instance *)instance;
- VkResult res;
+ VkResult res = VK_SUCCESS;
uint32_t storage_idx;
icd_info = calloc(sizeof(VkDebugReportCallbackEXT), inst->total_icd_count);
@@ -239,9 +239,9 @@ VKAPI_ATTR VkResult VKAPI_CALL loader_CreateDebugReportCallback(
* for DestroyDebugReportCallback
*/
VKAPI_ATTR void VKAPI_CALL
-loader_DestroyDebugReportCallback(VkInstance instance,
- VkDebugReportCallbackEXT callback,
- const VkAllocationCallbacks *pAllocator) {
+terminator_DestroyDebugReportCallback(VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks *pAllocator) {
uint32_t storage_idx;
VkDebugReportCallbackEXT *icd_info;
const struct loader_icd *icd;
@@ -263,10 +263,10 @@ loader_DestroyDebugReportCallback(VkInstance instance,
* for DebugReportMessage
*/
VKAPI_ATTR void VKAPI_CALL
-loader_DebugReportMessage(VkInstance instance, VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType, uint64_t object,
- size_t location, int32_t msgCode,
- const char *pLayerPrefix, const char *pMsg) {
+terminator_DebugReportMessage(VkInstance instance, VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objType,
+ uint64_t object, size_t location, int32_t msgCode,
+ const char *pLayerPrefix, const char *pMsg) {
const struct loader_icd *icd;
struct loader_instance *inst = (struct loader_instance *)instance;
diff --git a/loader/debug_report.h b/loader/debug_report.h
index 7b665a5f3..baac021e9 100644
--- a/loader/debug_report.h
+++ b/loader/debug_report.h
@@ -116,21 +116,21 @@ void debug_report_create_instance(struct loader_instance *ptr_instance,
bool debug_report_instance_gpa(struct loader_instance *ptr_instance,
const char *name, void **addr);
-VKAPI_ATTR VkResult VKAPI_CALL loader_CreateDebugReportCallback(
+VKAPI_ATTR VkResult VKAPI_CALL terminator_CreateDebugReportCallback(
VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
const VkAllocationCallbacks *pAllocator,
VkDebugReportCallbackEXT *pCallback);
VKAPI_ATTR void VKAPI_CALL
-loader_DestroyDebugReportCallback(VkInstance instance,
- VkDebugReportCallbackEXT callback,
- const VkAllocationCallbacks *pAllocator);
+terminator_DestroyDebugReportCallback(VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks *pAllocator);
VKAPI_ATTR void VKAPI_CALL
-loader_DebugReportMessage(VkInstance instance, VkDebugReportFlagsEXT flags,
- VkDebugReportObjectTypeEXT objType, uint64_t object,
- size_t location, int32_t msgCode,
- const char *pLayerPrefix, const char *pMsg);
+terminator_DebugReportMessage(VkInstance instance, VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objType,
+ uint64_t object, size_t location, int32_t msgCode,
+ const char *pLayerPrefix, const char *pMsg);
VkResult
util_CreateDebugReportCallback(struct loader_instance *inst,
diff --git a/loader/dirent_on_windows.c b/loader/dirent_on_windows.c
index 985fb6a1a..e408224f3 100644
--- a/loader/dirent_on_windows.c
+++ b/loader/dirent_on_windows.c
@@ -7,7 +7,7 @@
Rights: See end of file.
*/
-#include <dirent_on_windows.h>
+#include "dirent_on_windows.h"
#include <errno.h>
#include <io.h> /* _findfirst and _findnext set errno iff they return -1 */
#include <stdlib.h>
diff --git a/loader/loader.c b/loader/loader.c
index 789fe2994..426b1a102 100644
--- a/loader/loader.c
+++ b/loader/loader.c
@@ -63,15 +63,12 @@ struct loader_struct loader = {0};
// TLS for instance for alloc/free callbacks
THREAD_LOCAL_DECL struct loader_instance *tls_instance;
-static bool loader_init_generic_list(const struct loader_instance *inst,
- struct loader_generic_list *list_info,
- size_t element_size);
-
static size_t loader_platform_combine_path(char *dest, size_t len, ...);
struct loader_phys_dev_per_icd {
uint32_t count;
VkPhysicalDevice *phys_devs;
+ struct loader_icd *this_icd;
};
enum loader_debug {
@@ -98,63 +95,77 @@ const char *std_validation_str = "VK_LAYER_LUNARG_standard_validation";
// pointers to "terminator functions".
const VkLayerInstanceDispatchTable instance_disp = {
.GetInstanceProcAddr = vkGetInstanceProcAddr,
- .DestroyInstance = loader_DestroyInstance,
- .EnumeratePhysicalDevices = loader_EnumeratePhysicalDevices,
- .GetPhysicalDeviceFeatures = loader_GetPhysicalDeviceFeatures,
+ .DestroyInstance = terminator_DestroyInstance,
+ .EnumeratePhysicalDevices = terminator_EnumeratePhysicalDevices,
+ .GetPhysicalDeviceFeatures = terminator_GetPhysicalDeviceFeatures,
.GetPhysicalDeviceFormatProperties =
- loader_GetPhysicalDeviceFormatProperties,
+ terminator_GetPhysicalDeviceFormatProperties,
.GetPhysicalDeviceImageFormatProperties =
- loader_GetPhysicalDeviceImageFormatProperties,
- .GetPhysicalDeviceProperties = loader_GetPhysicalDeviceProperties,
+ terminator_GetPhysicalDeviceImageFormatProperties,
+ .GetPhysicalDeviceProperties = terminator_GetPhysicalDeviceProperties,
.GetPhysicalDeviceQueueFamilyProperties =
- loader_GetPhysicalDeviceQueueFamilyProperties,
+ terminator_GetPhysicalDeviceQueueFamilyProperties,
.GetPhysicalDeviceMemoryProperties =
- loader_GetPhysicalDeviceMemoryProperties,
+ terminator_GetPhysicalDeviceMemoryProperties,
.EnumerateDeviceExtensionProperties =
- loader_EnumerateDeviceExtensionProperties,
- .EnumerateDeviceLayerProperties = loader_EnumerateDeviceLayerProperties,
+ terminator_EnumerateDeviceExtensionProperties,
+ .EnumerateDeviceLayerProperties = terminator_EnumerateDeviceLayerProperties,
.GetPhysicalDeviceSparseImageFormatProperties =
- loader_GetPhysicalDeviceSparseImageFormatProperties,
- .DestroySurfaceKHR = loader_DestroySurfaceKHR,
+ terminator_GetPhysicalDeviceSparseImageFormatProperties,
+ .DestroySurfaceKHR = terminator_DestroySurfaceKHR,
.GetPhysicalDeviceSurfaceSupportKHR =
- loader_GetPhysicalDeviceSurfaceSupportKHR,
+ terminator_GetPhysicalDeviceSurfaceSupportKHR,
.GetPhysicalDeviceSurfaceCapabilitiesKHR =
- loader_GetPhysicalDeviceSurfaceCapabilitiesKHR,
+ terminator_GetPhysicalDeviceSurfaceCapabilitiesKHR,
.GetPhysicalDeviceSurfaceFormatsKHR =
- loader_GetPhysicalDeviceSurfaceFormatsKHR,
+ terminator_GetPhysicalDeviceSurfaceFormatsKHR,
.GetPhysicalDeviceSurfacePresentModesKHR =
- loader_GetPhysicalDeviceSurfacePresentModesKHR,
- .CreateDebugReportCallbackEXT = loader_CreateDebugReportCallback,
- .DestroyDebugReportCallbackEXT = loader_DestroyDebugReportCallback,
- .DebugReportMessageEXT = loader_DebugReportMessage,
+ terminator_GetPhysicalDeviceSurfacePresentModesKHR,
+ .CreateDebugReportCallbackEXT = terminator_CreateDebugReportCallback,
+ .DestroyDebugReportCallbackEXT = terminator_DestroyDebugReportCallback,
+ .DebugReportMessageEXT = terminator_DebugReportMessage,
#ifdef VK_USE_PLATFORM_MIR_KHR
- .CreateMirSurfaceKHR = loader_CreateMirSurfaceKHR,
+ .CreateMirSurfaceKHR = terminator_CreateMirSurfaceKHR,
.GetPhysicalDeviceMirPresentationSupportKHR =
- loader_GetPhysicalDeviceMirPresentationSupportKHR,
+ terminator_GetPhysicalDeviceMirPresentationSupportKHR,
#endif
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- .CreateWaylandSurfaceKHR = loader_CreateWaylandSurfaceKHR,
+ .CreateWaylandSurfaceKHR = terminator_CreateWaylandSurfaceKHR,
.GetPhysicalDeviceWaylandPresentationSupportKHR =
- loader_GetPhysicalDeviceWaylandPresentationSupportKHR,
+ terminator_GetPhysicalDeviceWaylandPresentationSupportKHR,
#endif
#ifdef VK_USE_PLATFORM_WIN32_KHR
- .CreateWin32SurfaceKHR = loader_CreateWin32SurfaceKHR,
+ .CreateWin32SurfaceKHR = terminator_CreateWin32SurfaceKHR,
.GetPhysicalDeviceWin32PresentationSupportKHR =
- loader_GetPhysicalDeviceWin32PresentationSupportKHR,
+ terminator_GetPhysicalDeviceWin32PresentationSupportKHR,
#endif
#ifdef VK_USE_PLATFORM_XCB_KHR
- .CreateXcbSurfaceKHR = loader_CreateXcbSurfaceKHR,
+ .CreateXcbSurfaceKHR = terminator_CreateXcbSurfaceKHR,
.GetPhysicalDeviceXcbPresentationSupportKHR =
- loader_GetPhysicalDeviceXcbPresentationSupportKHR,
+ terminator_GetPhysicalDeviceXcbPresentationSupportKHR,
#endif
#ifdef VK_USE_PLATFORM_XLIB_KHR
- .CreateXlibSurfaceKHR = loader_CreateXlibSurfaceKHR,
+ .CreateXlibSurfaceKHR = terminator_CreateXlibSurfaceKHR,
.GetPhysicalDeviceXlibPresentationSupportKHR =
- loader_GetPhysicalDeviceXlibPresentationSupportKHR,
+ terminator_GetPhysicalDeviceXlibPresentationSupportKHR,
#endif
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- .CreateAndroidSurfaceKHR = loader_CreateAndroidSurfaceKHR,
+ .CreateAndroidSurfaceKHR = terminator_CreateAndroidSurfaceKHR,
#endif
+ .GetPhysicalDeviceDisplayPropertiesKHR =
+ terminator_GetPhysicalDeviceDisplayPropertiesKHR,
+ .GetPhysicalDeviceDisplayPlanePropertiesKHR =
+ terminator_GetPhysicalDeviceDisplayPlanePropertiesKHR,
+ .GetDisplayPlaneSupportedDisplaysKHR =
+ terminator_GetDisplayPlaneSupportedDisplaysKHR,
+ .GetDisplayModePropertiesKHR =
+ terminator_GetDisplayModePropertiesKHR,
+ .CreateDisplayModeKHR =
+ terminator_CreateDisplayModeKHR,
+ .GetDisplayPlaneCapabilitiesKHR =
+ terminator_GetDisplayPlaneCapabilitiesKHR,
+ .CreateDisplayPlaneSurfaceKHR =
+ terminator_CreateDisplayPlaneSurfaceKHR,
};
LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_init);
@@ -220,7 +231,7 @@ void loader_tls_heap_free(void *pMemory) {
}
void loader_log(const struct loader_instance *inst, VkFlags msg_type,
- int32_t msg_code, const char *format, ...) {
+ int32_t msg_code, const char *format, ...) {
char msg[512];
va_list ap;
int ret;
@@ -621,10 +632,11 @@ loader_init_device_extensions(const struct loader_instance *inst,
return VK_SUCCESS;
}
-static VkResult loader_add_device_extensions(
- const struct loader_instance *inst, struct loader_icd *icd,
- VkPhysicalDevice physical_device, const char *lib_name,
- struct loader_extension_list *ext_list) {
+VkResult loader_add_device_extensions(const struct loader_instance *inst,
+ struct loader_icd *icd,
+ VkPhysicalDevice physical_device,
+ const char *lib_name,
+ struct loader_extension_list *ext_list) {
uint32_t i, count;
VkResult res;
VkExtensionProperties *ext_props;
@@ -664,9 +676,9 @@ static VkResult loader_add_device_extensions(
return VK_SUCCESS;
}
-static bool loader_init_generic_list(const struct loader_instance *inst,
- struct loader_generic_list *list_info,
- size_t element_size) {
+bool loader_init_generic_list(const struct loader_instance *inst,
+ struct loader_generic_list *list_info,
+ size_t element_size) {
list_info->capacity = 32 * element_size;
list_info->list = loader_heap_alloc(inst, list_info->capacity,
VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
@@ -1091,23 +1103,6 @@ void loader_get_icd_loader_instance_extensions(
debug_report_add_instance_extensions(inst, inst_exts);
}
-struct loader_physical_device *
-loader_get_physical_device(const VkPhysicalDevice physdev) {
- uint32_t i;
- for (struct loader_instance *inst = loader.instances; inst;
- inst = inst->next) {
- for (i = 0; i < inst->total_gpu_count; i++) {
- // TODO this aliases physDevices within instances, need for this
- // function to go away
- if (inst->phys_devs[i].disp ==
- loader_get_instance_dispatch(physdev)) {
- return &inst->phys_devs[i];
- }
- }
- }
- return NULL;
-}
-
struct loader_icd *loader_get_icd_and_device(const VkDevice device,
struct loader_device **found_dev) {
*found_dev = NULL;
@@ -1135,7 +1130,7 @@ static void loader_destroy_logical_device(const struct loader_instance *inst,
loader_heap_free(inst, dev);
}
-static struct loader_device *
+struct loader_device *
loader_add_logical_device(const struct loader_instance *inst,
struct loader_device **device_list) {
struct loader_device *new_dev;
@@ -1186,6 +1181,8 @@ static void loader_icd_destroy(struct loader_instance *ptr_inst,
dev = next_dev;
}
+ if (icd->phys_devs != NULL)
+ loader_heap_free(ptr_inst, icd->phys_devs);
loader_heap_free(ptr_inst, icd);
}
@@ -1356,7 +1353,7 @@ static bool loader_icd_init_entrys(struct loader_icd *icd, VkInstance inst,
icd->func = (PFN_vk##func)fp_gipa(inst, "vk" #func); \
if (!icd->func && required) { \
loader_log((struct loader_instance *)inst, \
- VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
+ VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
loader_platform_get_proc_address_error("vk" #func)); \
return false; \
} \
@@ -1386,6 +1383,12 @@ static bool loader_icd_init_entrys(struct loader_icd *icd, VkInstance inst,
#ifdef VK_USE_PLATFORM_XCB_KHR
LOOKUP_GIPA(GetPhysicalDeviceXcbPresentationSupportKHR, false);
#endif
+#ifdef VK_USE_PLATFORM_XLIB_KHR
+ LOOKUP_GIPA(GetPhysicalDeviceXlibPresentationSupportKHR, false);
+#endif
+#ifdef VK_USE_PLATFORM_WAYLAND_KHR
+ LOOKUP_GIPA(GetPhysicalDeviceWaylandPresentationSupportKHR, false);
+#endif
#undef LOOKUP_GIPA
@@ -1423,7 +1426,8 @@ static void loader_debug_init(void) {
g_loader_log_msgs |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT;
} else if (strncmp(env, "perf", len) == 0) {
g_loader_debug |= LOADER_PERF_BIT;
- g_loader_log_msgs |= VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT;
+ g_loader_log_msgs |=
+ VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT;
} else if (strncmp(env, "error", len) == 0) {
g_loader_debug |= LOADER_ERROR_BIT;
g_loader_log_msgs |= VK_DEBUG_REPORT_ERROR_BIT_EXT;
@@ -1783,11 +1787,10 @@ static void loader_add_layer_property_meta(
bool found;
struct loader_layer_list *layer_list;
- if (0 == layer_count ||
- NULL == layer_instance_list ||
- NULL == layer_device_list ||
- (layer_count > layer_instance_list->count &&
- layer_count > layer_device_list->count))
+ if (0 == layer_count || (!layer_instance_list && !layer_device_list))
+ return;
+ if ((layer_instance_list && (layer_count > layer_instance_list->count)) &&
+ (layer_device_list && (layer_count > layer_device_list->count)))
return;
for (j = 0; j < 2; j++) {
@@ -1796,6 +1799,8 @@ static void loader_add_layer_property_meta(
else
layer_list = layer_device_list;
found = true;
+ if (layer_list == NULL)
+ continue;
for (i = 0; i < layer_count; i++) {
if (loader_find_layer_name_list(layer_names[i], layer_list))
continue;
@@ -1814,7 +1819,7 @@ static void loader_add_layer_property_meta(
sizeof(props->info.layerName));
// TODO what about specVersion? for now insert loader's built
// version
- props->info.specVersion = VK_API_VERSION;
+ props->info.specVersion = VK_API_VERSION_1_0;
}
}
}
@@ -1853,7 +1858,7 @@ loader_add_layer_properties(const struct loader_instance *inst,
char *temp;
char *name, *type, *library_path, *api_version;
char *implementation_version, *description;
- cJSON *disable_environment;
+ cJSON *disable_environment = NULL;
int i, j;
VkExtensionProperties ext_prop;
item = cJSON_GetObjectItem(json, "file_format_version");
@@ -1884,7 +1889,7 @@ loader_add_layer_properties(const struct loader_instance *inst,
var = cJSON_GetObjectItem(node, #var); \
if (var == NULL) { \
layer_node = layer_node->next; \
- loader_log(inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
+ loader_log(inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
"Didn't find required layer object %s in manifest " \
"JSON file, skipping this layer", \
#var); \
@@ -1896,7 +1901,7 @@ loader_add_layer_properties(const struct loader_instance *inst,
item = cJSON_GetObjectItem(node, #var); \
if (item == NULL) { \
layer_node = layer_node->next; \
- loader_log(inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
+ loader_log(inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0, \
"Didn't find required layer value %s in manifest JSON " \
"file, skipping this layer", \
#var); \
@@ -1983,9 +1988,10 @@ loader_add_layer_properties(const struct loader_instance *inst,
props->info.description[sizeof(props->info.description) - 1] = '\0';
if (is_implicit) {
if (!disable_environment || !disable_environment->child) {
- loader_log(inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0,
- "Didn't find required layer child value disable_environment"
- "in manifest JSON file, skipping this layer");
+ loader_log(
+ inst, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0,
+                "Didn't find required layer child value disable_environment "
+ "in manifest JSON file, skipping this layer");
layer_node = layer_node->next;
continue;
}
@@ -2356,8 +2362,8 @@ static void loader_get_manifest_files(const struct loader_instance *inst,
closedir(sysdir);
file = next_file;
#if !defined(_WIN32)
- if (home_location != NULL && (next_file == NULL || *next_file == '\0')
- && override == NULL) {
+ if (home_location != NULL &&
+ (next_file == NULL || *next_file == '\0') && override == NULL) {
char *home = secure_getenv("HOME");
if (home != NULL) {
size_t len;
@@ -2365,7 +2371,7 @@ static void loader_get_manifest_files(const struct loader_instance *inst,
strlen(home_location));
if (home_loc == NULL) {
loader_log(inst, VK_DEBUG_REPORT_ERROR_BIT_EXT, 0,
- "Out of memory can't get manifest files");
+ "Out of memory can't get manifest files");
return;
}
strcpy(home_loc, home);
@@ -2373,16 +2379,17 @@ static void loader_get_manifest_files(const struct loader_instance *inst,
if (home_location[0] != DIRECTORY_SYMBOL) {
len = strlen(home_loc);
home_loc[len] = DIRECTORY_SYMBOL;
- home_loc[len+1] = '\0';
+ home_loc[len + 1] = '\0';
}
strcat(home_loc, home_location);
file = home_loc;
next_file = loader_get_next_path(file);
home_location = NULL;
- loader_log(inst, VK_DEBUG_REPORT_DEBUG_BIT_EXT, 0,
- "Searching the following paths for manifest files: %s\n",
- home_loc);
+ loader_log(
+ inst, VK_DEBUG_REPORT_DEBUG_BIT_EXT, 0,
+ "Searching the following paths for manifest files: %s\n",
+ home_loc);
list_is_dirs = true;
}
}
@@ -2596,9 +2603,9 @@ loader_gpa_instance_internal(VkInstance inst, const char *pName) {
if (!strcmp(pName, "vkGetInstanceProcAddr"))
return (void *)loader_gpa_instance_internal;
if (!strcmp(pName, "vkCreateInstance"))
- return (void *)loader_CreateInstance;
+ return (void *)terminator_CreateInstance;
if (!strcmp(pName, "vkCreateDevice"))
- return (void *)loader_create_device_terminator;
+ return (void *)terminator_CreateDevice;
// inst is not wrapped
if (inst == VK_NULL_HANDLE) {
@@ -2611,15 +2618,16 @@ loader_gpa_instance_internal(VkInstance inst, const char *pName) {
if (disp_table == NULL)
return NULL;
- addr = loader_lookup_instance_dispatch_table(disp_table, pName);
- if (addr) {
+ bool found_name;
+ addr = loader_lookup_instance_dispatch_table(disp_table, pName, &found_name);
+ if (found_name) {
return addr;
}
- if (disp_table->GetInstanceProcAddr == NULL) {
- return NULL;
- }
- return disp_table->GetInstanceProcAddr(inst, pName);
+ // Don't call down the chain, this would be an infinite loop
+ loader_log(NULL, VK_DEBUG_REPORT_WARNING_BIT_EXT, 0,
+ "loader_gpa_instance_internal() unrecognized name %s", pName);
+ return NULL;
}
/**
@@ -2647,15 +2655,15 @@ static void loader_init_dispatch_dev_ext_entry(struct loader_instance *inst,
} else {
for (uint32_t i = 0; i < inst->total_icd_count; i++) {
struct loader_icd *icd = &inst->icds[i];
- struct loader_device *dev = icd->logical_device_list;
- while (dev) {
+ struct loader_device *ldev = icd->logical_device_list;
+ while (ldev) {
gdpa_value =
- dev->loader_dispatch.core_dispatch.GetDeviceProcAddr(
- dev->device, funcName);
+ ldev->loader_dispatch.core_dispatch.GetDeviceProcAddr(
+ ldev->device, funcName);
if (gdpa_value != NULL)
- dev->loader_dispatch.ext_dispatch.DevExt[idx] =
+ ldev->loader_dispatch.ext_dispatch.DevExt[idx] =
(PFN_vkDevExt)gdpa_value;
- dev = dev->next;
+ ldev = ldev->next;
}
}
}
@@ -2666,8 +2674,8 @@ static void loader_init_dispatch_dev_ext_entry(struct loader_instance *inst,
* for dev for each of those extension entrypoints found in hash table.
*/
-static void loader_init_dispatch_dev_ext(struct loader_instance *inst,
- struct loader_device *dev) {
+void loader_init_dispatch_dev_ext(struct loader_instance *inst,
+ struct loader_device *dev) {
for (uint32_t i = 0; i < MAX_NUM_DEV_EXTS; i++) {
if (inst->disp_hash[i].func_name != NULL)
loader_init_dispatch_dev_ext_entry(inst, dev, i,
@@ -2689,6 +2697,43 @@ static bool loader_check_icds_for_address(struct loader_instance *inst,
return false;
}
+static bool loader_check_layer_list_for_address(const struct loader_layer_list *const layers,
+ const char *funcName){
+ // Iterate over the layers.
+ for (uint32_t layer = 0; layer < layers->count; ++layer)
+ {
+ // Iterate over the extensions.
+ const struct loader_device_extension_list *const extensions = &(layers->list[layer].device_extension_list);
+ for(uint32_t extension = 0; extension < extensions->count; ++extension)
+ {
+ // Iterate over the entry points.
+ const struct loader_dev_ext_props *const property = &(extensions->list[extension]);
+ for(uint32_t entry = 0; entry < property->entrypoint_count; ++entry)
+ {
+ if(strcmp(property->entrypoints[entry], funcName) == 0)
+ {
+ return true;
+ }
+ }
+ }
+ }
+
+ return false;
+}
+
+static bool loader_check_layers_for_address(const struct loader_instance *const inst,
+ const char *funcName){
+ if(loader_check_layer_list_for_address(&inst->instance_layer_list, funcName)) {
+ return true;
+ }
+
+ if(loader_check_layer_list_for_address(&inst->device_layer_list, funcName)) {
+ return true;
+ }
+
+ return false;
+}
+
static void loader_free_dev_ext_table(struct loader_instance *inst) {
for (uint32_t i = 0; i < MAX_NUM_DEV_EXTS; i++) {
loader_heap_free(inst, inst->disp_hash[i].func_name);
@@ -2820,8 +2865,8 @@ void *loader_dev_ext_gpa(struct loader_instance *inst, const char *funcName) {
return loader_get_dev_ext_trampoline(idx);
// Check if funcName is supported in either ICDs or a layer library
- if (!loader_check_icds_for_address(inst, funcName)) {
- // TODO Add check in layer libraries for support of address
+ if (!loader_check_icds_for_address(inst, funcName) &&
+ !loader_check_layers_for_address(inst, funcName)) {
// if support found in layers continue on
return NULL;
}
@@ -3244,6 +3289,7 @@ VkResult loader_create_instance_chain(const VkInstanceCreateInfo *pCreateInfo,
} else {
loader_init_instance_core_dispatch_table(inst->disp, nextGIPA,
*created_instance);
+ inst->instance = *created_instance;
}
return res;
@@ -3256,7 +3302,7 @@ void loader_activate_instance_layer_extensions(struct loader_instance *inst,
inst->disp, inst->disp->GetInstanceProcAddr, created_inst);
}
-static VkResult
+VkResult
loader_enable_device_layers(const struct loader_instance *inst,
struct loader_icd *icd,
struct loader_layer_list *activated_layer_list,
@@ -3296,94 +3342,10 @@ loader_enable_device_layers(const struct loader_instance *inst,
return err;
}
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_create_device_terminator(VkPhysicalDevice physicalDevice,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice) {
- struct loader_physical_device *phys_dev;
- phys_dev = loader_get_physical_device(physicalDevice);
-
- VkLayerDeviceCreateInfo *chain_info =
- (VkLayerDeviceCreateInfo *)pCreateInfo->pNext;
- while (chain_info &&
- !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO &&
- chain_info->function == VK_LAYER_DEVICE_INFO)) {
- chain_info = (VkLayerDeviceCreateInfo *)chain_info->pNext;
- }
- assert(chain_info != NULL);
-
- struct loader_device *dev =
- (struct loader_device *)chain_info->u.deviceInfo.device_info;
- PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr =
- chain_info->u.deviceInfo.pfnNextGetInstanceProcAddr;
- PFN_vkCreateDevice fpCreateDevice =
- (PFN_vkCreateDevice)fpGetInstanceProcAddr(phys_dev->this_icd->instance,
- "vkCreateDevice");
- if (fpCreateDevice == NULL) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- VkDeviceCreateInfo localCreateInfo;
- memcpy(&localCreateInfo, pCreateInfo, sizeof(localCreateInfo));
- localCreateInfo.pNext = loader_strip_create_extensions(pCreateInfo->pNext);
-
- /*
- * NOTE: Need to filter the extensions to only those
- * supported by the ICD.
- * No ICD will advertise support for layers. An ICD
- * library could support a layer, but it would be
- * independent of the actual ICD, just in the same library.
- */
- char **filtered_extension_names = NULL;
- filtered_extension_names =
- loader_stack_alloc(pCreateInfo->enabledExtensionCount * sizeof(char *));
- if (!filtered_extension_names) {
- return VK_ERROR_OUT_OF_HOST_MEMORY;
- }
-
- localCreateInfo.enabledLayerCount = 0;
- localCreateInfo.ppEnabledLayerNames = NULL;
-
- localCreateInfo.enabledExtensionCount = 0;
- localCreateInfo.ppEnabledExtensionNames =
- (const char *const *)filtered_extension_names;
-
- for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
- const char *extension_name = pCreateInfo->ppEnabledExtensionNames[i];
- VkExtensionProperties *prop = get_extension_property(
- extension_name, &phys_dev->device_extension_cache);
- if (prop) {
- filtered_extension_names[localCreateInfo.enabledExtensionCount] =
- (char *)extension_name;
- localCreateInfo.enabledExtensionCount++;
- }
- }
-
- VkDevice localDevice;
- // TODO: Why does fpCreateDevice behave differently than
- // this_icd->CreateDevice?
- // VkResult res = fpCreateDevice(phys_dev->phys_dev, &localCreateInfo,
- // pAllocator, &localDevice);
- VkResult res = phys_dev->this_icd->CreateDevice(
- phys_dev->phys_dev, &localCreateInfo, pAllocator, &localDevice);
-
- if (res != VK_SUCCESS) {
- return res;
- }
-
- *pDevice = localDevice;
-
- /* Init dispatch pointer in new device object */
- loader_init_dispatch(*pDevice, &dev->loader_dispatch);
-
- return res;
-}
-
-VkResult loader_create_device_chain(VkPhysicalDevice physicalDevice,
+VkResult loader_create_device_chain(const struct loader_physical_device *pd,
const VkDeviceCreateInfo *pCreateInfo,
const VkAllocationCallbacks *pAllocator,
- struct loader_instance *inst,
+ const struct loader_instance *inst,
struct loader_icd *icd,
struct loader_device *dev) {
uint32_t activated_layers = 0;
@@ -3495,9 +3457,9 @@ VkResult loader_create_device_chain(VkPhysicalDevice physicalDevice,
}
PFN_vkCreateDevice fpCreateDevice =
- (PFN_vkCreateDevice)nextGIPA((VkInstance)inst, "vkCreateDevice");
+ (PFN_vkCreateDevice)nextGIPA(inst->instance, "vkCreateDevice");
if (fpCreateDevice) {
- res = fpCreateDevice(physicalDevice, &loader_create_info, pAllocator,
+ res = fpCreateDevice(pd->phys_dev, &loader_create_info, pAllocator,
&dev->device);
} else {
// Couldn't find CreateDevice function!
@@ -3596,6 +3558,7 @@ VkResult loader_validate_instance_extensions(
VkResult loader_validate_device_extensions(
struct loader_physical_device *phys_dev,
const struct loader_layer_list *activated_device_layers,
+ const struct loader_extension_list *icd_exts,
const VkDeviceCreateInfo *pCreateInfo) {
VkExtensionProperties *extension_prop;
struct loader_layer_properties *layer_prop;
@@ -3605,15 +3568,15 @@ VkResult loader_validate_device_extensions(
VkStringErrorFlags result = vk_string_validate(
MaxLoaderStringLength, pCreateInfo->ppEnabledExtensionNames[i]);
if (result != VK_STRING_ERROR_NONE) {
- loader_log(phys_dev->this_instance, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- 0, "Loader: Device ppEnabledExtensionNames contains "
- "string that is too long or is badly formed");
+ loader_log(phys_dev->this_icd->this_instance,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT, 0,
+ "Loader: Device ppEnabledExtensionNames contains "
+ "string that is too long or is badly formed");
return VK_ERROR_EXTENSION_NOT_PRESENT;
}
const char *extension_name = pCreateInfo->ppEnabledExtensionNames[i];
- extension_prop = get_extension_property(
- extension_name, &phys_dev->device_extension_cache);
+ extension_prop = get_extension_property(extension_name, icd_exts);
if (extension_prop) {
continue;
@@ -3641,10 +3604,14 @@ VkResult loader_validate_device_extensions(
return VK_SUCCESS;
}
+/**
+ * Terminator functions for the Instance chain
+ * All named terminator_<Vulkan API name>
+ */
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkInstance *pInstance) {
+terminator_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkInstance *pInstance) {
struct loader_icd *icd;
VkExtensionProperties *prop;
char **filtered_extension_names = NULL;
@@ -3705,13 +3672,13 @@ loader_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
icd->this_icd_lib->EnumerateInstanceExtensionProperties,
icd->this_icd_lib->lib_name, &icd_exts);
- for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
+ for (uint32_t j = 0; j < pCreateInfo->enabledExtensionCount; j++) {
prop = get_extension_property(
- pCreateInfo->ppEnabledExtensionNames[i], &icd_exts);
+ pCreateInfo->ppEnabledExtensionNames[j], &icd_exts);
if (prop) {
filtered_extension_names[icd_create_info
.enabledExtensionCount] =
- (char *)pCreateInfo->ppEnabledExtensionNames[i];
+ (char *)pCreateInfo->ppEnabledExtensionNames[j];
icd_create_info.enabledExtensionCount++;
}
}
@@ -3753,8 +3720,8 @@ loader_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
}
VKAPI_ATTR void VKAPI_CALL
-loader_DestroyInstance(VkInstance instance,
- const VkAllocationCallbacks *pAllocator) {
+terminator_DestroyInstance(VkInstance instance,
+ const VkAllocationCallbacks *pAllocator) {
struct loader_instance *ptr_instance = loader_instance(instance);
struct loader_icd *icds = ptr_instance->icds;
struct loader_icd *next_icd;
@@ -3792,125 +3759,212 @@ loader_DestroyInstance(VkInstance instance,
loader_scanned_icd_clear(ptr_instance, &ptr_instance->icd_libs);
loader_destroy_generic_list(
ptr_instance, (struct loader_generic_list *)&ptr_instance->ext_list);
- for (uint32_t i = 0; i < ptr_instance->total_gpu_count; i++)
- loader_destroy_generic_list(
- ptr_instance,
- (struct loader_generic_list *)&ptr_instance->phys_devs[i]
- .device_extension_cache);
- loader_heap_free(ptr_instance, ptr_instance->phys_devs);
+ if (ptr_instance->phys_devs_term)
+ loader_heap_free(ptr_instance, ptr_instance->phys_devs_term);
loader_free_dev_ext_table(ptr_instance);
}
-VkResult
-loader_init_physical_device_info(struct loader_instance *ptr_instance) {
- struct loader_icd *icd;
- uint32_t i, j, idx, count = 0;
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDevice(VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkDevice *pDevice) {
+ struct loader_physical_device *phys_dev;
+ phys_dev = (struct loader_physical_device *)physicalDevice;
+
+ VkLayerDeviceCreateInfo *chain_info =
+ (VkLayerDeviceCreateInfo *)pCreateInfo->pNext;
+ while (chain_info &&
+ !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO &&
+ chain_info->function == VK_LAYER_DEVICE_INFO)) {
+ chain_info = (VkLayerDeviceCreateInfo *)chain_info->pNext;
+ }
+ assert(chain_info != NULL);
+
+ struct loader_device *dev =
+ (struct loader_device *)chain_info->u.deviceInfo.device_info;
+ PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr =
+ chain_info->u.deviceInfo.pfnNextGetInstanceProcAddr;
+ PFN_vkCreateDevice fpCreateDevice =
+ (PFN_vkCreateDevice)fpGetInstanceProcAddr(phys_dev->this_icd->instance,
+ "vkCreateDevice");
+ if (fpCreateDevice == NULL) {
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ VkDeviceCreateInfo localCreateInfo;
+ memcpy(&localCreateInfo, pCreateInfo, sizeof(localCreateInfo));
+ localCreateInfo.pNext = loader_strip_create_extensions(pCreateInfo->pNext);
+
+ /*
+ * NOTE: Need to filter the extensions to only those
+ * supported by the ICD.
+ * No ICD will advertise support for layers. An ICD
+ * library could support a layer, but it would be
+ * independent of the actual ICD, just in the same library.
+ */
+ char **filtered_extension_names = NULL;
+ filtered_extension_names =
+ loader_stack_alloc(pCreateInfo->enabledExtensionCount * sizeof(char *));
+ if (!filtered_extension_names) {
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ localCreateInfo.enabledLayerCount = 0;
+ localCreateInfo.ppEnabledLayerNames = NULL;
+
+ localCreateInfo.enabledExtensionCount = 0;
+ localCreateInfo.ppEnabledExtensionNames =
+ (const char *const *)filtered_extension_names;
+
+ /* Get the physical device (ICD) extensions */
+ struct loader_extension_list icd_exts;
VkResult res;
+ if (!loader_init_generic_list(phys_dev->this_icd->this_instance,
+ (struct loader_generic_list *)&icd_exts,
+ sizeof(VkExtensionProperties))) {
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ res = loader_add_device_extensions(
+ phys_dev->this_icd->this_instance, phys_dev->this_icd,
+ phys_dev->phys_dev, phys_dev->this_icd->this_icd_lib->lib_name,
+ &icd_exts);
+ if (res != VK_SUCCESS) {
+ return res;
+ }
+
+ for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
+ const char *extension_name = pCreateInfo->ppEnabledExtensionNames[i];
+ VkExtensionProperties *prop =
+ get_extension_property(extension_name, &icd_exts);
+ if (prop) {
+ filtered_extension_names[localCreateInfo.enabledExtensionCount] =
+ (char *)extension_name;
+ localCreateInfo.enabledExtensionCount++;
+ }
+ }
+
+ VkDevice localDevice;
+ // TODO: Why does fpCreateDevice behave differently than
+ // this_icd->CreateDevice?
+ // VkResult res = fpCreateDevice(phys_dev->phys_dev, &localCreateInfo,
+ // pAllocator, &localDevice);
+ res = phys_dev->this_icd->CreateDevice(phys_dev->phys_dev, &localCreateInfo,
+ pAllocator, &localDevice);
+
+ if (res != VK_SUCCESS) {
+ return res;
+ }
+
+ *pDevice = localDevice;
+
+ /* Init dispatch pointer in new device object */
+ loader_init_dispatch(*pDevice, &dev->loader_dispatch);
+
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_EnumeratePhysicalDevices(VkInstance instance,
+ uint32_t *pPhysicalDeviceCount,
+ VkPhysicalDevice *pPhysicalDevices) {
+ uint32_t i;
+ struct loader_instance *inst = (struct loader_instance *)instance;
+ VkResult res = VK_SUCCESS;
+
+ struct loader_icd *icd;
struct loader_phys_dev_per_icd *phys_devs;
- ptr_instance->total_gpu_count = 0;
+ inst->total_gpu_count = 0;
phys_devs = (struct loader_phys_dev_per_icd *)loader_stack_alloc(
- sizeof(struct loader_phys_dev_per_icd) * ptr_instance->total_icd_count);
+ sizeof(struct loader_phys_dev_per_icd) * inst->total_icd_count);
if (!phys_devs)
return VK_ERROR_OUT_OF_HOST_MEMORY;
- icd = ptr_instance->icds;
- for (i = 0; i < ptr_instance->total_icd_count; i++) {
+ icd = inst->icds;
+ for (i = 0; i < inst->total_icd_count; i++) {
assert(icd);
res = icd->EnumeratePhysicalDevices(icd->instance, &phys_devs[i].count,
NULL);
if (res != VK_SUCCESS)
return res;
- count += phys_devs[i].count;
icd = icd->next;
}
- ptr_instance->phys_devs =
- (struct loader_physical_device *)loader_heap_alloc(
- ptr_instance, count * sizeof(struct loader_physical_device),
- VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
- if (!ptr_instance->phys_devs)
- return VK_ERROR_OUT_OF_HOST_MEMORY;
-
- icd = ptr_instance->icds;
-
- struct loader_physical_device *inst_phys_devs = ptr_instance->phys_devs;
- idx = 0;
- for (i = 0; i < ptr_instance->total_icd_count; i++) {
+ icd = inst->icds;
+ for (i = 0; i < inst->total_icd_count; i++) {
assert(icd);
-
phys_devs[i].phys_devs = (VkPhysicalDevice *)loader_stack_alloc(
phys_devs[i].count * sizeof(VkPhysicalDevice));
if (!phys_devs[i].phys_devs) {
- loader_heap_free(ptr_instance, ptr_instance->phys_devs);
- ptr_instance->phys_devs = NULL;
return VK_ERROR_OUT_OF_HOST_MEMORY;
}
res = icd->EnumeratePhysicalDevices(
icd->instance, &(phys_devs[i].count), phys_devs[i].phys_devs);
if ((res == VK_SUCCESS)) {
- ptr_instance->total_gpu_count += phys_devs[i].count;
- for (j = 0; j < phys_devs[i].count; j++) {
-
- // initialize the loader's physicalDevice object
- loader_set_dispatch((void *)&inst_phys_devs[idx],
- ptr_instance->disp);
- inst_phys_devs[idx].this_instance = ptr_instance;
- inst_phys_devs[idx].this_icd = icd;
- inst_phys_devs[idx].phys_dev = phys_devs[i].phys_devs[j];
- memset(&inst_phys_devs[idx].device_extension_cache, 0,
- sizeof(struct loader_extension_list));
-
- idx++;
- }
+ inst->total_gpu_count += phys_devs[i].count;
} else {
- loader_heap_free(ptr_instance, ptr_instance->phys_devs);
- ptr_instance->phys_devs = NULL;
return res;
}
-
+ phys_devs[i].this_icd = icd;
icd = icd->next;
}
- return VK_SUCCESS;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumeratePhysicalDevices(VkInstance instance,
- uint32_t *pPhysicalDeviceCount,
- VkPhysicalDevice *pPhysicalDevices) {
- uint32_t i;
- uint32_t copy_count = 0;
- struct loader_instance *ptr_instance = (struct loader_instance *)instance;
- VkResult res = VK_SUCCESS;
-
- if (ptr_instance->total_gpu_count == 0) {
- res = loader_init_physical_device_info(ptr_instance);
- }
-
- *pPhysicalDeviceCount = ptr_instance->total_gpu_count;
+ *pPhysicalDeviceCount = inst->total_gpu_count;
if (!pPhysicalDevices) {
return res;
}
- copy_count = (ptr_instance->total_gpu_count < *pPhysicalDeviceCount)
- ? ptr_instance->total_gpu_count
+ /* Initialize the output pPhysicalDevices with wrapped loader terminator
+ * physicalDevice objects; save this list of wrapped objects in instance
+ * struct for later cleanup and use by trampoline code */
+ uint32_t j, idx = 0;
+ uint32_t copy_count = 0;
+
+ copy_count = (inst->total_gpu_count < *pPhysicalDeviceCount)
+ ? inst->total_gpu_count
: *pPhysicalDeviceCount;
- for (i = 0; i < copy_count; i++) {
- pPhysicalDevices[i] = (VkPhysicalDevice)&ptr_instance->phys_devs[i];
+
+ // phys_devs_term is used to pass the "this_icd" info to trampoline code
+ if (inst->phys_devs_term)
+ loader_heap_free(inst, inst->phys_devs_term);
+ inst->phys_devs_term = loader_heap_alloc(
+ inst, sizeof(struct loader_physical_device) * copy_count,
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (!inst->phys_devs_term)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+
+ for (i = 0; idx < copy_count && i < inst->total_icd_count; i++) {
+ icd = phys_devs[i].this_icd;
+ if (icd->phys_devs != NULL) {
+ loader_heap_free(inst, icd->phys_devs);
+ }
+ icd->phys_devs = loader_heap_alloc(inst,
+ sizeof(VkPhysicalDevice) * phys_devs[i].count,
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+
+ for (j = 0; j < phys_devs[i].count && idx < copy_count; j++) {
+ loader_set_dispatch((void *)&inst->phys_devs_term[idx], inst->disp);
+ inst->phys_devs_term[idx].this_icd = phys_devs[i].this_icd;
+ inst->phys_devs_term[idx].phys_dev = phys_devs[i].phys_devs[j];
+ icd->phys_devs[j] = phys_devs[i].phys_devs[j];
+ pPhysicalDevices[idx] =
+ (VkPhysicalDevice)&inst->phys_devs_term[idx];
+ idx++;
+ }
}
*pPhysicalDeviceCount = copy_count;
- if (copy_count < ptr_instance->total_gpu_count) {
+ if (copy_count < inst->total_gpu_count) {
+ inst->total_gpu_count = copy_count;
return VK_INCOMPLETE;
}
-
return res;
}
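The clamp-and-copy logic above follows Vulkan's standard two-call enumeration idiom: query the count with a NULL output array, then fill as many entries as fit and report `VK_INCOMPLETE` on truncation. A minimal self-contained sketch of the same pattern (all names here are hypothetical stand-ins, not the loader's actual API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical result codes mirroring VK_SUCCESS / VK_INCOMPLETE. */
#define SKETCH_SUCCESS 0
#define SKETCH_INCOMPLETE 5

/* Two-call enumeration: with pItems == NULL just report the count;
 * otherwise copy what fits, clamp *pCount, and signal truncation the
 * same way the terminator above does for physical devices. */
static int enumerate_clamp(const int *available, uint32_t available_count,
                           uint32_t *pCount, int *pItems) {
    if (pItems == NULL) {
        *pCount = available_count;
        return SKETCH_SUCCESS;
    }
    uint32_t copy = (*pCount < available_count) ? *pCount : available_count;
    for (uint32_t i = 0; i < copy; i++)
        pItems[i] = available[i];
    *pCount = copy;
    return (copy < available_count) ? SKETCH_INCOMPLETE : SKETCH_SUCCESS;
}
```

Note the diff also shrinks `inst->total_gpu_count` on truncation, so a later second call stays consistent with the wrapped-object array it allocated.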
-VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceProperties *pProperties) {
+VKAPI_ATTR void VKAPI_CALL terminator_GetPhysicalDeviceProperties(
+ VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
struct loader_icd *icd = phys_dev->this_icd;
@@ -3919,7 +3973,7 @@ loader_GetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice,
icd->GetPhysicalDeviceProperties(phys_dev->phys_dev, pProperties);
}
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceQueueFamilyProperties(
+VKAPI_ATTR void VKAPI_CALL terminator_GetPhysicalDeviceQueueFamilyProperties(
VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount,
VkQueueFamilyProperties *pProperties) {
struct loader_physical_device *phys_dev =
@@ -3931,7 +3985,7 @@ VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceQueueFamilyProperties(
phys_dev->phys_dev, pQueueFamilyPropertyCount, pProperties);
}
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceMemoryProperties(
+VKAPI_ATTR void VKAPI_CALL terminator_GetPhysicalDeviceMemoryProperties(
VkPhysicalDevice physicalDevice,
VkPhysicalDeviceMemoryProperties *pProperties) {
struct loader_physical_device *phys_dev =
@@ -3943,8 +3997,8 @@ VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceMemoryProperties(
}
VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceFeatures *pFeatures) {
+terminator_GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceFeatures *pFeatures) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
struct loader_icd *icd = phys_dev->this_icd;
@@ -3954,9 +4008,9 @@ loader_GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
}
VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkFormatProperties *pFormatInfo) {
+terminator_GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkFormatProperties *pFormatInfo) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
struct loader_icd *icd = phys_dev->this_icd;
@@ -3966,7 +4020,8 @@ loader_GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
pFormatInfo);
}
-VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceImageFormatProperties(
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceImageFormatProperties(
VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags,
VkImageFormatProperties *pImageFormatProperties) {
@@ -3982,7 +4037,8 @@ VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceImageFormatProperties(
pImageFormatProperties);
}
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceSparseImageFormatProperties(
+VKAPI_ATTR void VKAPI_CALL
+terminator_GetPhysicalDeviceSparseImageFormatProperties(
VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
VkSampleCountFlagBits samples, VkImageUsageFlags usage,
VkImageTiling tiling, uint32_t *pNumProperties,
@@ -3997,530 +4053,119 @@ VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceSparseImageFormatProperties(
pNumProperties, pProperties);
}
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateDevice(VkPhysicalDevice physicalDevice,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice) {
+VKAPI_ATTR VkResult VKAPI_CALL terminator_EnumerateDeviceExtensionProperties(
+ VkPhysicalDevice physicalDevice, const char *pLayerName,
+ uint32_t *pPropertyCount, VkExtensionProperties *pProperties) {
struct loader_physical_device *phys_dev;
- struct loader_icd *icd;
- struct loader_device *dev;
- struct loader_instance *inst;
- struct loader_layer_list activated_layer_list = {0};
- VkResult res;
-
- assert(pCreateInfo->queueCreateInfoCount >= 1);
-
- // TODO this only works for one physical device per instance
- // once CreateDevice layer bootstrapping is done via DeviceCreateInfo
- // hopefully don't need this anymore in trampoline code
- phys_dev = loader_get_physical_device(physicalDevice);
- icd = phys_dev->this_icd;
- if (!icd)
- return VK_ERROR_INITIALIZATION_FAILED;
-
- inst = phys_dev->this_instance;
- if (!icd->CreateDevice) {
- return VK_ERROR_INITIALIZATION_FAILED;
- }
-
- /* validate any app enabled layers are available */
- if (pCreateInfo->enabledLayerCount > 0) {
- res = loader_validate_layers(inst, pCreateInfo->enabledLayerCount,
- pCreateInfo->ppEnabledLayerNames,
- &inst->device_layer_list);
- if (res != VK_SUCCESS) {
- return res;
- }
- }
-
- /* Get the physical device extensions if they haven't been retrieved yet */
- if (phys_dev->device_extension_cache.capacity == 0) {
- if (!loader_init_generic_list(
- inst,
- (struct loader_generic_list *)&phys_dev->device_extension_cache,
- sizeof(VkExtensionProperties))) {
- return VK_ERROR_OUT_OF_HOST_MEMORY;
- }
-
- res = loader_add_device_extensions(
- inst, icd, phys_dev->phys_dev,
- phys_dev->this_icd->this_icd_lib->lib_name,
- &phys_dev->device_extension_cache);
- if (res != VK_SUCCESS) {
- return res;
- }
- }
-
- /* convert any meta layers to the actual layers makes a copy of layer name*/
- uint32_t saved_layer_count = pCreateInfo->enabledLayerCount;
- char **saved_layer_names;
- char **saved_layer_ptr;
- saved_layer_names =
- loader_stack_alloc(sizeof(char *) * pCreateInfo->enabledLayerCount);
- for (uint32_t i = 0; i < saved_layer_count; i++) {
- saved_layer_names[i] = (char *)pCreateInfo->ppEnabledLayerNames[i];
- }
- saved_layer_ptr = (char **)pCreateInfo->ppEnabledLayerNames;
-
- loader_expand_layer_names(
- inst, std_validation_str,
- sizeof(std_validation_names) / sizeof(std_validation_names[0]),
- std_validation_names, (uint32_t *)&pCreateInfo->enabledLayerCount,
- (char ***)&pCreateInfo->ppEnabledLayerNames);
-
- /* fetch a list of all layers activated, explicit and implicit */
- res = loader_enable_device_layers(inst, icd, &activated_layer_list,
- pCreateInfo, &inst->device_layer_list);
- if (res != VK_SUCCESS) {
- loader_unexpand_dev_layer_names(inst, saved_layer_count,
- saved_layer_names, saved_layer_ptr,
- pCreateInfo);
- return res;
- }
-
- /* make sure requested extensions to be enabled are supported */
- res = loader_validate_device_extensions(phys_dev, &activated_layer_list,
- pCreateInfo);
- if (res != VK_SUCCESS) {
- loader_unexpand_dev_layer_names(inst, saved_layer_count,
- saved_layer_names, saved_layer_ptr,
- pCreateInfo);
- loader_destroy_generic_list(
- inst, (struct loader_generic_list *)&activated_layer_list);
- return res;
- }
+ struct loader_layer_list implicit_layer_list;
- dev = loader_add_logical_device(inst, &icd->logical_device_list);
- if (dev == NULL) {
- loader_unexpand_dev_layer_names(inst, saved_layer_count,
- saved_layer_names, saved_layer_ptr,
- pCreateInfo);
- loader_destroy_generic_list(
- inst, (struct loader_generic_list *)&activated_layer_list);
- return VK_ERROR_OUT_OF_HOST_MEMORY;
- }
+ assert(pLayerName == NULL || strlen(pLayerName) == 0);
- /* move the locally filled layer list into the device, and pass ownership of
- * the memory */
- dev->activated_layer_list.capacity = activated_layer_list.capacity;
- dev->activated_layer_list.count = activated_layer_list.count;
- dev->activated_layer_list.list = activated_layer_list.list;
- memset(&activated_layer_list, 0, sizeof(activated_layer_list));
+ /* Any layer or trampoline wrapping should have been removed by this
+ * point, so we can simply cast to the expected type for VkPhysicalDevice. */
+ phys_dev = (struct loader_physical_device *)physicalDevice;
- /* activate any layers on device chain which terminates with device*/
- res = loader_enable_device_layers(inst, icd, &dev->activated_layer_list,
- pCreateInfo, &inst->device_layer_list);
- if (res != VK_SUCCESS) {
- loader_unexpand_dev_layer_names(inst, saved_layer_count,
- saved_layer_names, saved_layer_ptr,
- pCreateInfo);
- loader_remove_logical_device(inst, icd, dev);
- return res;
- }
+ /* This case occurs during the call down the instance chain with
+ * pLayerName == NULL. */
+ struct loader_icd *icd = phys_dev->this_icd;
+ uint32_t icd_ext_count = *pPropertyCount;
+ VkResult res;
- res = loader_create_device_chain(physicalDevice, pCreateInfo, pAllocator,
- inst, icd, dev);
- if (res != VK_SUCCESS) {
- loader_unexpand_dev_layer_names(inst, saved_layer_count,
- saved_layer_names, saved_layer_ptr,
- pCreateInfo);
- loader_remove_logical_device(inst, icd, dev);
+ /* get device extensions */
+ res = icd->EnumerateDeviceExtensionProperties(phys_dev->phys_dev, NULL,
+ &icd_ext_count, pProperties);
+ if (res != VK_SUCCESS)
return res;
- }
-
- *pDevice = dev->device;
-
- /* initialize any device extension dispatch entry's from the instance list*/
- loader_init_dispatch_dev_ext(inst, dev);
-
- /* initialize WSI device extensions as part of core dispatch since loader
- * has
- * dedicated trampoline code for these*/
- loader_init_device_extension_dispatch_table(
- &dev->loader_dispatch,
- dev->loader_dispatch.core_dispatch.GetDeviceProcAddr, *pDevice);
-
- loader_unexpand_dev_layer_names(inst, saved_layer_count, saved_layer_names,
- saved_layer_ptr, pCreateInfo);
- return res;
-}
-
-/**
- * Get an instance level or global level entry point address.
- * @param instance
- * @param pName
- * @return
- * If instance == NULL returns a global level functions only
- * If instance is valid returns a trampoline entry point for all dispatchable
- * Vulkan
- * functions both core and extensions.
- */
-LOADER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
-vkGetInstanceProcAddr(VkInstance instance, const char *pName) {
-
- void *addr;
- addr = globalGetProcAddr(pName);
- if (instance == VK_NULL_HANDLE) {
- // get entrypoint addresses that are global (no dispatchable object)
+ loader_init_layer_list(icd->this_instance, &implicit_layer_list);
- return addr;
- } else {
- // if a global entrypoint return NULL
- if (addr)
- return NULL;
- }
-
- struct loader_instance *ptr_instance = loader_get_instance(instance);
- if (ptr_instance == NULL)
- return NULL;
- // Return trampoline code for non-global entrypoints including any
- // extensions.
- // Device extensions are returned if a layer or ICD supports the extension.
- // Instance extensions are returned if the extension is enabled and the
- // loader
- // or someone else supports the extension
- return trampolineGetProcAddr(ptr_instance, pName);
-}
-
-/**
- * Get a device level or global level entry point address.
- * @param device
- * @param pName
- * @return
- * If device is valid, returns a device relative entry point for device level
- * entry points both core and extensions.
- * Device relative means call down the device chain.
- */
-LOADER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
-vkGetDeviceProcAddr(VkDevice device, const char *pName) {
- void *addr;
-
- /* for entrypoints that loader must handle (ie non-dispatchable or create
- object)
- make sure the loader entrypoint is returned */
- addr = loader_non_passthrough_gdpa(pName);
- if (addr) {
- return addr;
- }
-
- /* Although CreateDevice is on device chain it's dispatchable object isn't
- * a VkDevice or child of VkDevice so return NULL.
+ loader_add_layer_implicit(
+ icd->this_instance, VK_LAYER_TYPE_INSTANCE_IMPLICIT,
+ &implicit_layer_list, &icd->this_instance->instance_layer_list);
+ /* We need to determine which implicit layers are active and then add
+ * their extensions. This can't be cached, as it depends on the values
+ * of environment variables (which can change).
*/
- if (!strcmp(pName, "CreateDevice"))
- return NULL;
-
- /* return the dispatch table entrypoint for the fastest case */
- const VkLayerDispatchTable *disp_table = *(VkLayerDispatchTable **)device;
- if (disp_table == NULL)
- return NULL;
-
- addr = loader_lookup_device_dispatch_table(disp_table, pName);
- if (addr)
- return addr;
-
- if (disp_table->GetDeviceProcAddr == NULL)
- return NULL;
- return disp_table->GetDeviceProcAddr(device, pName);
-}
-
-LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkEnumerateInstanceExtensionProperties(const char *pLayerName,
- uint32_t *pPropertyCount,
- VkExtensionProperties *pProperties) {
- struct loader_extension_list *global_ext_list = NULL;
- struct loader_layer_list instance_layers;
- struct loader_extension_list icd_extensions;
- struct loader_icd_libs icd_libs;
- uint32_t copy_size;
-
- tls_instance = NULL;
- memset(&icd_extensions, 0, sizeof(icd_extensions));
- memset(&instance_layers, 0, sizeof(instance_layers));
- loader_platform_thread_once(&once_init, loader_initialize);
-
- /* get layer libraries if needed */
- if (pLayerName && strlen(pLayerName) != 0) {
- if (vk_string_validate(MaxLoaderStringLength, pLayerName) ==
- VK_STRING_ERROR_NONE) {
- loader_layer_scan(NULL, &instance_layers, NULL);
- for (uint32_t i = 0; i < instance_layers.count; i++) {
- struct loader_layer_properties *props =
- &instance_layers.list[i];
- if (strcmp(props->info.layerName, pLayerName) == 0) {
- global_ext_list = &props->instance_extension_list;
- }
- }
- } else {
- assert(VK_FALSE && "vkEnumerateInstanceExtensionProperties: "
- "pLayerName is too long or is badly formed");
- return VK_ERROR_EXTENSION_NOT_PRESENT;
- }
- } else {
- /* Scan/discover all ICD libraries */
- memset(&icd_libs, 0, sizeof(struct loader_icd_libs));
- loader_icd_scan(NULL, &icd_libs);
- /* get extensions from all ICD's, merge so no duplicates */
- loader_get_icd_loader_instance_extensions(NULL, &icd_libs,
- &icd_extensions);
- loader_scanned_icd_clear(NULL, &icd_libs);
- global_ext_list = &icd_extensions;
- }
-
- if (global_ext_list == NULL) {
- loader_destroy_layer_list(NULL, &instance_layers);
- return VK_ERROR_LAYER_NOT_PRESENT;
- }
-
- if (pProperties == NULL) {
- *pPropertyCount = global_ext_list->count;
- loader_destroy_layer_list(NULL, &instance_layers);
- loader_destroy_generic_list(
- NULL, (struct loader_generic_list *)&icd_extensions);
- return VK_SUCCESS;
- }
-
- copy_size = *pPropertyCount < global_ext_list->count
- ? *pPropertyCount
- : global_ext_list->count;
- for (uint32_t i = 0; i < copy_size; i++) {
- memcpy(&pProperties[i], &global_ext_list->list[i],
- sizeof(VkExtensionProperties));
- }
- *pPropertyCount = copy_size;
- loader_destroy_generic_list(NULL,
- (struct loader_generic_list *)&icd_extensions);
-
- if (copy_size < global_ext_list->count) {
- loader_destroy_layer_list(NULL, &instance_layers);
- return VK_INCOMPLETE;
- }
-
- loader_destroy_layer_list(NULL, &instance_layers);
- return VK_SUCCESS;
-}
-
-LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkEnumerateInstanceLayerProperties(uint32_t *pPropertyCount,
- VkLayerProperties *pProperties) {
-
- struct loader_layer_list instance_layer_list;
- tls_instance = NULL;
-
- loader_platform_thread_once(&once_init, loader_initialize);
-
- uint32_t copy_size;
-
- /* get layer libraries */
- memset(&instance_layer_list, 0, sizeof(instance_layer_list));
- loader_layer_scan(NULL, &instance_layer_list, NULL);
-
- if (pProperties == NULL) {
- *pPropertyCount = instance_layer_list.count;
- loader_destroy_layer_list(NULL, &instance_layer_list);
- return VK_SUCCESS;
- }
-
- copy_size = (*pPropertyCount < instance_layer_list.count)
- ? *pPropertyCount
- : instance_layer_list.count;
- for (uint32_t i = 0; i < copy_size; i++) {
- memcpy(&pProperties[i], &instance_layer_list.list[i].info,
- sizeof(VkLayerProperties));
- }
-
- *pPropertyCount = copy_size;
- loader_destroy_layer_list(NULL, &instance_layer_list);
-
- if (copy_size < instance_layer_list.count) {
- return VK_INCOMPLETE;
- }
-
- return VK_SUCCESS;
-}
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
- const char *pLayerName,
- uint32_t *pPropertyCount,
- VkExtensionProperties *pProperties) {
- struct loader_physical_device *phys_dev;
- uint32_t copy_size;
-
- uint32_t count;
- struct loader_device_extension_list *dev_ext_list = NULL;
- struct loader_layer_list implicit_layer_list;
-
- // TODO fix this aliases physical devices
- phys_dev = loader_get_physical_device(physicalDevice);
-
- /* get layer libraries if needed */
- if (pLayerName && strlen(pLayerName) != 0) {
- if (vk_string_validate(MaxLoaderStringLength, pLayerName) ==
- VK_STRING_ERROR_NONE) {
- for (uint32_t i = 0;
- i < phys_dev->this_instance->device_layer_list.count; i++) {
- struct loader_layer_properties *props =
- &phys_dev->this_instance->device_layer_list.list[i];
- if (strcmp(props->info.layerName, pLayerName) == 0) {
- dev_ext_list = &props->device_extension_list;
- }
- }
- count = (dev_ext_list == NULL) ? 0 : dev_ext_list->count;
- if (pProperties == NULL) {
- *pPropertyCount = count;
- return VK_SUCCESS;
- }
-
- copy_size = *pPropertyCount < count ? *pPropertyCount : count;
- for (uint32_t i = 0; i < copy_size; i++) {
- memcpy(&pProperties[i], &dev_ext_list->list[i].props,
- sizeof(VkExtensionProperties));
- }
- *pPropertyCount = copy_size;
-
- if (copy_size < count) {
- return VK_INCOMPLETE;
- }
- } else {
- loader_log(phys_dev->this_instance, VK_DEBUG_REPORT_ERROR_BIT_EXT,
- 0, "vkEnumerateDeviceExtensionProperties: pLayerName "
- "is too long or is badly formed");
- return VK_ERROR_EXTENSION_NOT_PRESENT;
- }
- return VK_SUCCESS;
- } else {
- /* this case is during the call down the instance chain with pLayerName
- * == NULL*/
- struct loader_icd *icd = phys_dev->this_icd;
- uint32_t icd_ext_count = *pPropertyCount;
- VkResult res;
-
- /* get device extensions */
- res = icd->EnumerateDeviceExtensionProperties(
- phys_dev->phys_dev, NULL, &icd_ext_count, pProperties);
+ if (pProperties != NULL) {
+ struct loader_extension_list icd_exts;
+ /* initialize dev_extension list within the physicalDevice object */
+ res = loader_init_device_extensions(icd->this_instance, phys_dev,
+ icd_ext_count, pProperties,
+ &icd_exts);
if (res != VK_SUCCESS)
return res;
- loader_init_layer_list(phys_dev->this_instance, &implicit_layer_list);
-
- loader_add_layer_implicit(
- phys_dev->this_instance, VK_LAYER_TYPE_INSTANCE_IMPLICIT,
- &implicit_layer_list,
- &phys_dev->this_instance->instance_layer_list);
/* we need to determine which implicit layers are active,
* and then add their extensions. This can't be cached as
- * it depends on results of environment variables (which can change).
+ * it depends on results of environment variables (which can
+ * change).
*/
- if (pProperties != NULL) {
- /* initialize dev_extension list within the physicalDevice object */
- res = loader_init_device_extensions(
- phys_dev->this_instance, phys_dev, icd_ext_count, pProperties,
- &phys_dev->device_extension_cache);
- if (res != VK_SUCCESS)
- return res;
+ struct loader_extension_list all_exts = {0};
+ loader_add_to_ext_list(icd->this_instance, &all_exts, icd_exts.count,
+ icd_exts.list);
- /* we need to determine which implicit layers are active,
- * and then add their extensions. This can't be cached as
- * it depends on results of environment variables (which can
- * change).
- */
- struct loader_extension_list all_exts = {0};
- loader_add_to_ext_list(phys_dev->this_instance, &all_exts,
- phys_dev->device_extension_cache.count,
- phys_dev->device_extension_cache.list);
-
- loader_init_layer_list(phys_dev->this_instance,
- &implicit_layer_list);
-
- loader_add_layer_implicit(
- phys_dev->this_instance, VK_LAYER_TYPE_INSTANCE_IMPLICIT,
- &implicit_layer_list,
- &phys_dev->this_instance->instance_layer_list);
-
- for (uint32_t i = 0; i < implicit_layer_list.count; i++) {
- for (
- uint32_t j = 0;
- j < implicit_layer_list.list[i].device_extension_list.count;
- j++) {
- loader_add_to_ext_list(phys_dev->this_instance, &all_exts,
- 1,
- &implicit_layer_list.list[i]
- .device_extension_list.list[j]
- .props);
- }
- }
- uint32_t capacity = *pPropertyCount;
- VkExtensionProperties *props = pProperties;
+ loader_init_layer_list(icd->this_instance, &implicit_layer_list);
- for (uint32_t i = 0; i < all_exts.count && i < capacity; i++) {
- props[i] = all_exts.list[i];
- }
- /* wasn't enough space for the extensions, we did partial copy now
- * return VK_INCOMPLETE */
- if (capacity < all_exts.count) {
- res = VK_INCOMPLETE;
- } else {
- *pPropertyCount = all_exts.count;
- }
- loader_destroy_generic_list(
- phys_dev->this_instance,
- (struct loader_generic_list *)&all_exts);
- } else {
- /* just return the count; need to add in the count of implicit layer
- * extensions
- * don't worry about duplicates being added in the count */
- *pPropertyCount = icd_ext_count;
-
- for (uint32_t i = 0; i < implicit_layer_list.count; i++) {
- *pPropertyCount +=
- implicit_layer_list.list[i].device_extension_list.count;
+ loader_add_layer_implicit(
+ icd->this_instance, VK_LAYER_TYPE_INSTANCE_IMPLICIT,
+ &implicit_layer_list, &icd->this_instance->instance_layer_list);
+
+ for (uint32_t i = 0; i < implicit_layer_list.count; i++) {
+ for (uint32_t j = 0;
+ j < implicit_layer_list.list[i].device_extension_list.count;
+ j++) {
+ loader_add_to_ext_list(icd->this_instance, &all_exts, 1,
+ &implicit_layer_list.list[i]
+ .device_extension_list.list[j]
+ .props);
}
- res = VK_SUCCESS;
}
+ uint32_t capacity = *pPropertyCount;
+ VkExtensionProperties *props = pProperties;
- loader_destroy_generic_list(
- phys_dev->this_instance,
- (struct loader_generic_list *)&implicit_layer_list);
- return res;
+ for (uint32_t i = 0; i < all_exts.count && i < capacity; i++) {
+ props[i] = all_exts.list[i];
+ }
+ /* There wasn't enough space for all the extensions; we made a partial
+ * copy, so return VK_INCOMPLETE. */
+ if (capacity < all_exts.count) {
+ res = VK_INCOMPLETE;
+ } else {
+ *pPropertyCount = all_exts.count;
+ }
+ loader_destroy_generic_list(icd->this_instance,
+ (struct loader_generic_list *)&all_exts);
+ } else {
+ /* Just return the count, adding in the count of implicit-layer
+ * extensions; don't worry about duplicates inflating the count. */
+ *pPropertyCount = icd_ext_count;
+
+ for (uint32_t i = 0; i < implicit_layer_list.count; i++) {
+ *pPropertyCount +=
+ implicit_layer_list.list[i].device_extension_list.count;
+ }
+ res = VK_SUCCESS;
}
+
+ loader_destroy_generic_list(
+ icd->this_instance, (struct loader_generic_list *)&implicit_layer_list);
+ return res;
}
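The terminator above merges ICD extensions with the active implicit layers' extensions via `loader_add_to_ext_list`, which skips names already present so the merged list has no duplicates. A hedged stand-in illustrating that de-duplicating append (struct and function names here are hypothetical, not the loader's real types):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NAME_SIZE 64

/* Toy extension record and list, standing in for
 * VkExtensionProperties / struct loader_extension_list. */
struct ext { char name[NAME_SIZE]; };
struct ext_list { struct ext items[16]; uint32_t count; };

/* Append an extension only if its name is not already in the list,
 * the way the loader merges ICD and implicit-layer extensions. */
static void add_unique(struct ext_list *list, const char *name) {
    for (uint32_t i = 0; i < list->count; i++)
        if (strcmp(list->items[i].name, name) == 0)
            return; /* already present: skip the duplicate */
    strncpy(list->items[list->count].name, name, NAME_SIZE - 1);
    list->items[list->count].name[NAME_SIZE - 1] = '\0';
    list->count++;
}
```

This also explains the count-only path in the diff: it deliberately overcounts (ICD count plus every implicit layer's count) rather than de-duplicating, which is legal because the reported count is an upper bound for the caller's allocation.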
VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice,
- uint32_t *pPropertyCount,
- VkLayerProperties *pProperties) {
- uint32_t copy_size;
- struct loader_physical_device *phys_dev;
- // TODO fix this, aliases physical devices
- phys_dev = loader_get_physical_device(physicalDevice);
- uint32_t count = phys_dev->this_instance->device_layer_list.count;
-
- if (pProperties == NULL) {
- *pPropertyCount = count;
- return VK_SUCCESS;
- }
-
- copy_size = (*pPropertyCount < count) ? *pPropertyCount : count;
- for (uint32_t i = 0; i < copy_size; i++) {
- memcpy(&pProperties[i],
- &(phys_dev->this_instance->device_layer_list.list[i].info),
- sizeof(VkLayerProperties));
- }
- *pPropertyCount = copy_size;
-
- if (copy_size < count) {
- return VK_INCOMPLETE;
- }
+terminator_EnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice,
+ uint32_t *pPropertyCount,
+ VkLayerProperties *pProperties) {
- return VK_SUCCESS;
+ // Should never get here; this call isn't dispatched down the chain.
+ return VK_ERROR_INITIALIZATION_FAILED;
}
VkStringErrorFlags vk_string_validate(const int max_length, const char *utf8) {
VkStringErrorFlags result = VK_STRING_ERROR_NONE;
- int num_char_bytes;
+ int num_char_bytes = 0;
int i, j;
for (i = 0; i < max_length; i++) {
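`vk_string_validate` (truncated by the hunk above) walks the string byte by byte, classifying each lead byte with mask/code comparisons against the `UTF8_*` constants defined in `loader.h`. A self-contained sketch of that lead-byte classification (the helper name is hypothetical; the loader's real code uses its named mask constants and also validates the trailing data bytes):

```c
#include <assert.h>

/* Classify a UTF-8 lead byte: return how many continuation (data)
 * bytes must follow, or -1 for an invalid lead. Sequences longer than
 * three bytes are treated as invalid, matching the masks the loader
 * defines (one-, two-, and three-byte forms plus data bytes). */
static int utf8_continuation_bytes(unsigned char c) {
    if ((c & 0x80) == 0x00) return 0;  /* 0xxxxxxx: ASCII, no data bytes */
    if ((c & 0xE0) == 0xC0) return 1;  /* 110xxxxx: two-byte sequence   */
    if ((c & 0xF0) == 0xE0) return 2;  /* 1110xxxx: three-byte sequence */
    return -1;                         /* bare data byte or longer form */
}
```

The diff's one-line fix (`int num_char_bytes = 0;`) matters here: without the initializer, a string whose first byte is a data byte could leave the expected-continuation counter uninitialized.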
diff --git a/loader/loader.h b/loader/loader.h
index 345c1891a..b0b366320 100644
--- a/loader/loader.h
+++ b/loader/loader.h
@@ -36,8 +36,7 @@
#define LOADER_H
#include <vulkan/vulkan.h>
-#include <vk_loader_platform.h>
-
+#include "vk_loader_platform.h"
#include <vulkan/vk_layer.h>
#include <vulkan/vk_icd.h>
@@ -83,12 +82,11 @@ static const char UTF8_THREE_BYTE_MASK = 0xF8;
static const char UTF8_DATA_BYTE_CODE = 0x80;
static const char UTF8_DATA_BYTE_MASK = 0xC0;
-static const char std_validation_names[9][VK_MAX_EXTENSION_NAME_SIZE] = {
- "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_param_checker",
+static const char std_validation_names[8][VK_MAX_EXTENSION_NAME_SIZE] = {
+ "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation",
"VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker",
- "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_mem_tracker",
- "VK_LAYER_LUNARG_draw_state", "VK_LAYER_LUNARG_swapchain",
- "VK_LAYER_GOOGLE_unique_objects"};
+ "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation",
+ "VK_LAYER_LUNARG_swapchain", "VK_LAYER_GOOGLE_unique_objects"};
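The updated `std_validation_names` table lists the component layers the loader substitutes when an application enables the standard-validation meta-layer (see `loader_expand_layer_names` in the `loader_CreateDevice` hunk above). A hedged sketch of that expansion, assuming the conventional meta-layer name `VK_LAYER_LUNARG_standard_validation` (the function name below is hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Component layers of the standard-validation meta-layer, mirroring
 * the 8-entry std_validation_names table (order is significant). */
static const char *const k_std_validation[8] = {
    "VK_LAYER_GOOGLE_threading",     "VK_LAYER_LUNARG_parameter_validation",
    "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker",
    "VK_LAYER_LUNARG_image",         "VK_LAYER_LUNARG_core_validation",
    "VK_LAYER_LUNARG_swapchain",     "VK_LAYER_GOOGLE_unique_objects",
};

/* Expand one requested layer name: the meta-layer maps to its eight
 * components; any other name maps to itself. Returns the number of
 * names written into out[] (caller provides room for 8). */
static int expand_layer(const char *requested, const char *out[8]) {
    if (strcmp(requested, "VK_LAYER_LUNARG_standard_validation") == 0) {
        for (int i = 0; i < 8; i++)
            out[i] = k_std_validation[i];
        return 8;
    }
    out[0] = requested;
    return 1;
}
```

This is why the diff also saves and later "unexpands" the app's original `ppEnabledLayerNames`: expansion mutates the create-info in place and must be undone before returning to the application.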
// form of all dynamic lists/arrays
// only the list element should be changed
@@ -201,7 +199,7 @@ struct loader_icd {
// pointers to find other structs
const struct loader_scanned_icds *this_icd_lib;
const struct loader_instance *this_instance;
-
+ VkPhysicalDevice *phys_devs; // physicalDevice object from icd
struct loader_device *logical_device_list;
VkInstance instance; // instance object from the icd
PFN_vkGetDeviceProcAddr GetDeviceProcAddr;
@@ -248,7 +246,20 @@ struct loader_icd {
PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR
GetPhysicalDeviceXlibPresentationSupportKHR;
#endif
-
+ PFN_vkGetPhysicalDeviceDisplayPropertiesKHR
+ GetPhysicalDeviceDisplayPropertiesKHR;
+ PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR
+ GetPhysicalDeviceDisplayPlanePropertiesKHR;
+ PFN_vkGetDisplayPlaneSupportedDisplaysKHR
+ GetDisplayPlaneSupportedDisplaysKHR;
+ PFN_vkGetDisplayModePropertiesKHR
+ GetDisplayModePropertiesKHR;
+ PFN_vkCreateDisplayModeKHR
+ CreateDisplayModeKHR;
+ PFN_vkGetDisplayPlaneCapabilitiesKHR
+ GetDisplayPlaneCapabilitiesKHR;
+ PFN_vkCreateDisplayPlaneSurfaceKHR
+ CreateDisplayPlaneSurfaceKHR;
struct loader_icd *next;
};
@@ -263,8 +274,9 @@ struct loader_icd_libs {
struct loader_instance {
VkLayerInstanceDispatchTable *disp; // must be first entry in structure
- uint32_t total_gpu_count;
- struct loader_physical_device *phys_devs;
+ uint32_t total_gpu_count; // count of the next two arrays
+ struct loader_physical_device *phys_devs_term;
+ struct loader_physical_device *phys_devs; // tramp wrapped physDev obj list
uint32_t total_icd_count;
struct loader_icd *icds;
struct loader_instance *next;
@@ -278,7 +290,7 @@ struct loader_instance {
struct loader_layer_list activated_layer_list;
- VkInstance instance;
+ VkInstance instance; // layers/ICD instance returned to trampoline
bool debug_report_enabled;
VkLayerDbgFunctionNode *DbgFunctionHead;
@@ -304,19 +316,29 @@ struct loader_instance {
#ifdef VK_USE_PLATFORM_ANDROID_KHR
bool wsi_android_surface_enabled;
#endif
+ bool wsi_display_enabled;
};
-/* per enumerated PhysicalDevice structure */
+/* VkPhysicalDevice requires special treatment by the loader. First, the
+ * terminator code must be able to get the struct loader_icd in order to call
+ * into the proper driver (the multiple-ICD/GPU case). This is accomplished by
+ * wrapping the created VkPhysicalDevice in
+ * terminator_EnumeratePhysicalDevices().
+ * Second, the loader must be able to find the instance and ICD in trampoline
+ * code.
+ * Third, the loader must be able to handle a VkPhysicalDevice that has been
+ * wrapped by a layer. This implies that the trampoline code must also wrap
+ * the VkPhysicalDevice object, so the loader ends up wrapping the created
+ * VkPhysicalDevice twice. The trampoline code can't rely on the terminator's
+ * wrapping, since a layer may also wrap the handle. Because the trampoline
+ * wraps the VkPhysicalDevice, all loader trampoline code that passes a
+ * VkPhysicalDevice down must first unwrap it. */
+
+/* Per enumerated PhysicalDevice structure; the same structure is used to
+ * wrap the handle in both trampoline and terminator code. */
struct loader_physical_device {
VkLayerInstanceDispatchTable *disp; // must be first entry in structure
- struct loader_instance *this_instance;
struct loader_icd *this_icd;
- VkPhysicalDevice phys_dev; // object from ICD
- /*
- * Fill in the cache of available device extensions from
- * this physical device. This cache can be used during CreateDevice
- */
- struct loader_extension_list device_extension_cache;
+ VkPhysicalDevice phys_dev; // object from ICD/layers/loader terminator
};
struct loader_struct {
@@ -344,6 +366,13 @@ static inline struct loader_instance *loader_instance(VkInstance instance) {
return (struct loader_instance *)instance;
}
+static inline VkPhysicalDevice
+loader_unwrap_physical_device(VkPhysicalDevice physicalDevice) {
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ return phys_dev->phys_dev;
+}
+
static inline void loader_set_dispatch(void *obj, const void *data) {
*((const void **)obj) = data;
}
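`loader_set_dispatch` and the `// must be first entry in structure` comments rely on the same invariant: every dispatchable loader object begins with a dispatch-table pointer, so any such object can be treated as a `void *` whose first word is the table. A minimal reproduction of that pattern (struct names here are invented for illustration):

```c
#include <assert.h>

/* Toy dispatch table, standing in for VkLayerInstanceDispatchTable. */
struct fake_dispatch_table { int id; };

struct fake_phys_dev {
    const struct fake_dispatch_table *disp; /* must be first member */
    int payload;
};

/* Same trick as loader_set_dispatch: overwrite the object's first
 * pointer-sized word with the dispatch-table pointer. */
static void set_dispatch(void *obj, const void *data) {
    *((const void **)obj) = data;
}

/* Read the first word back, regardless of the concrete struct type. */
static const void *get_dispatch(const void *obj) {
    return *((const void *const *)obj);
}
```

This is exactly what the terminator hunk above does with `loader_set_dispatch((void *)&inst->phys_devs_term[idx], inst->disp)`: the wrapped object is usable wherever a dispatchable handle is expected.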
@@ -386,8 +415,18 @@ struct loader_msg_callback_map_entry {
VkDebugReportCallbackEXT loader_obj;
};
+/* helper function definitions */
+void *loader_heap_alloc(const struct loader_instance *instance, size_t size,
+ VkSystemAllocationScope allocationScope);
+
+void loader_heap_free(const struct loader_instance *instance, void *pMemory);
+
+void *loader_tls_heap_alloc(size_t size);
+
+void loader_tls_heap_free(void *pMemory);
+
void loader_log(const struct loader_instance *inst, VkFlags msg_type,
- int32_t msg_code, const char *format, ...);
+ int32_t msg_code, const char *format, ...);
bool compare_vk_extension_properties(const VkExtensionProperties *op1,
const VkExtensionProperties *op2);
@@ -403,75 +442,6 @@ VkResult loader_validate_instance_extensions(
const struct loader_layer_list *instance_layer,
const VkInstanceCreateInfo *pCreateInfo);
-/* instance layer chain termination entrypoint definitions */
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkInstance *pInstance);
-
-VKAPI_ATTR void VKAPI_CALL
-loader_DestroyInstance(VkInstance instance,
- const VkAllocationCallbacks *pAllocator);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumeratePhysicalDevices(VkInstance instance,
- uint32_t *pPhysicalDeviceCount,
- VkPhysicalDevice *pPhysicalDevices);
-
-VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceFeatures *pFeatures);
-
-VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
- VkFormat format,
- VkFormatProperties *pFormatInfo);
-
-VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceImageFormatProperties(
- VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
- VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags,
- VkImageFormatProperties *pImageFormatProperties);
-
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceSparseImageFormatProperties(
- VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
- VkSampleCountFlagBits samples, VkImageUsageFlags usage,
- VkImageTiling tiling, uint32_t *pNumProperties,
- VkSparseImageFormatProperties *pProperties);
-
-VKAPI_ATTR void VKAPI_CALL
-loader_GetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceProperties *pProperties);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
- const char *pLayerName,
- uint32_t *pCount,
- VkExtensionProperties *pProperties);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_EnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice,
- uint32_t *pCount,
- VkLayerProperties *pProperties);
-
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice physicalDevice, uint32_t *pCount,
- VkQueueFamilyProperties *pProperties);
-
-VKAPI_ATTR void VKAPI_CALL loader_GetPhysicalDeviceMemoryProperties(
- VkPhysicalDevice physicalDevice,
- VkPhysicalDeviceMemoryProperties *pProperties);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_create_device_terminator(VkPhysicalDevice physicalDevice,
- const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkDevice *pDevice);
-
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
- const VkAllocationCallbacks *pAllocator, VkDevice *pDevice);
-
-/* helper function definitions */
void loader_initialize(void);
bool has_vk_extension_property_array(const VkExtensionProperties *vk_ext_prop,
const uint32_t count,
@@ -483,8 +453,18 @@ VkResult loader_add_to_ext_list(const struct loader_instance *inst,
struct loader_extension_list *ext_list,
uint32_t prop_list_count,
const VkExtensionProperties *props);
+VkResult loader_add_device_extensions(const struct loader_instance *inst,
+ struct loader_icd *icd,
+ VkPhysicalDevice physical_device,
+ const char *lib_name,
+ struct loader_extension_list *ext_list);
+bool loader_init_generic_list(const struct loader_instance *inst,
+ struct loader_generic_list *list_info,
+ size_t element_size);
void loader_destroy_generic_list(const struct loader_instance *inst,
struct loader_generic_list *list);
+void loader_destroy_layer_list(const struct loader_instance *inst,
+ struct loader_layer_list *layer_list);
void loader_delete_layer_properties(const struct loader_instance *inst,
struct loader_layer_list *layer_list);
void loader_expand_layer_names(
@@ -516,9 +496,14 @@ void loader_get_icd_loader_instance_extensions(
struct loader_extension_list *inst_exts);
struct loader_icd *loader_get_icd_and_device(const VkDevice device,
struct loader_device **found_dev);
+void loader_init_dispatch_dev_ext(struct loader_instance *inst,
+ struct loader_device *dev);
void *loader_dev_ext_gpa(struct loader_instance *inst, const char *funcName);
void *loader_get_dev_ext_trampoline(uint32_t index);
struct loader_instance *loader_get_instance(const VkInstance instance);
+struct loader_device *
+loader_add_logical_device(const struct loader_instance *inst,
+ struct loader_device **device_list);
void loader_remove_logical_device(const struct loader_instance *inst,
struct loader_icd *icd,
struct loader_device *found_dev);
@@ -535,15 +520,88 @@ VkResult loader_create_instance_chain(const VkInstanceCreateInfo *pCreateInfo,
void loader_activate_instance_layer_extensions(struct loader_instance *inst,
VkInstance created_inst);
+VkResult
+loader_enable_device_layers(const struct loader_instance *inst,
+ struct loader_icd *icd,
+ struct loader_layer_list *activated_layer_list,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const struct loader_layer_list *device_layers);
+
+VkResult loader_create_device_chain(const struct loader_physical_device *pd,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ const struct loader_instance *inst,
+ struct loader_icd *icd,
+ struct loader_device *dev);
+VkResult loader_validate_device_extensions(
+ struct loader_physical_device *phys_dev,
+ const struct loader_layer_list *activated_device_layers,
+ const struct loader_extension_list *icd_exts,
+ const VkDeviceCreateInfo *pCreateInfo);
-void *loader_heap_alloc(const struct loader_instance *instance, size_t size,
- VkSystemAllocationScope allocationScope);
+/* instance layer chain termination entrypoint definitions */
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateInstance(const VkInstanceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkInstance *pInstance);
-void loader_heap_free(const struct loader_instance *instance, void *pMemory);
+VKAPI_ATTR void VKAPI_CALL
+terminator_DestroyInstance(VkInstance instance,
+ const VkAllocationCallbacks *pAllocator);
-void *loader_tls_heap_alloc(size_t size);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_EnumeratePhysicalDevices(VkInstance instance,
+ uint32_t *pPhysicalDeviceCount,
+ VkPhysicalDevice *pPhysicalDevices);
-void loader_tls_heap_free(void *pMemory);
+VKAPI_ATTR void VKAPI_CALL
+terminator_GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceFeatures *pFeatures);
+
+VKAPI_ATTR void VKAPI_CALL
+terminator_GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkFormatProperties *pFormatInfo);
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceImageFormatProperties(
+ VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+ VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags,
+ VkImageFormatProperties *pImageFormatProperties);
+
+VKAPI_ATTR void VKAPI_CALL
+terminator_GetPhysicalDeviceSparseImageFormatProperties(
+ VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type,
+ VkSampleCountFlagBits samples, VkImageUsageFlags usage,
+ VkImageTiling tiling, uint32_t *pNumProperties,
+ VkSparseImageFormatProperties *pProperties);
+
+VKAPI_ATTR void VKAPI_CALL
+terminator_GetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceProperties *pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL terminator_EnumerateDeviceExtensionProperties(
+ VkPhysicalDevice physicalDevice, const char *pLayerName, uint32_t *pCount,
+ VkExtensionProperties *pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_EnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice,
+ uint32_t *pCount,
+ VkLayerProperties *pProperties);
+
+VKAPI_ATTR void VKAPI_CALL terminator_GetPhysicalDeviceQueueFamilyProperties(
+ VkPhysicalDevice physicalDevice, uint32_t *pCount,
+ VkQueueFamilyProperties *pProperties);
+
+VKAPI_ATTR void VKAPI_CALL terminator_GetPhysicalDeviceMemoryProperties(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceMemoryProperties *pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDevice(VkPhysicalDevice gpu,
+ const VkDeviceCreateInfo *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkDevice *pDevice);
VkStringErrorFlags vk_string_validate(const int max_length,
const char *char_array);
diff --git a/loader/table_ops.h b/loader/table_ops.h
index 4bf8b410a..a216e9388 100644
--- a/loader/table_ops.h
+++ b/loader/table_ops.h
@@ -626,14 +626,38 @@ static inline void loader_init_instance_extension_dispatch_table(
(PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(
inst, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
#endif
+ table->GetPhysicalDeviceDisplayPropertiesKHR =
+ (PFN_vkGetPhysicalDeviceDisplayPropertiesKHR) gpa(inst,
+ "vkGetPhysicalDeviceDisplayPropertiesKHR");
+ table->GetPhysicalDeviceDisplayPlanePropertiesKHR =
+ (PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR) gpa(inst,
+ "vkGetPhysicalDeviceDisplayPlanePropertiesKHR");
+ table->GetDisplayPlaneSupportedDisplaysKHR =
+ (PFN_vkGetDisplayPlaneSupportedDisplaysKHR) gpa(inst,
+ "vkGetDisplayPlaneSupportedDisplaysKHR");
+ table->GetDisplayModePropertiesKHR =
+ (PFN_vkGetDisplayModePropertiesKHR) gpa(inst,
+ "vkGetDisplayModePropertiesKHR");
+ table->CreateDisplayModeKHR =
+ (PFN_vkCreateDisplayModeKHR) gpa(inst,
+ "vkCreateDisplayModeKHR");
+ table->GetDisplayPlaneCapabilitiesKHR =
+ (PFN_vkGetDisplayPlaneCapabilitiesKHR) gpa(inst,
+ "vkGetDisplayPlaneCapabilitiesKHR");
+ table->CreateDisplayPlaneSurfaceKHR =
+ (PFN_vkCreateDisplayPlaneSurfaceKHR) gpa(inst,
+ "vkCreateDisplayPlaneSurfaceKHR");
}
static inline void *
loader_lookup_instance_dispatch_table(const VkLayerInstanceDispatchTable *table,
- const char *name) {
- if (!name || name[0] != 'v' || name[1] != 'k')
+ const char *name, bool *found_name) {
+ if (!name || name[0] != 'v' || name[1] != 'k') {
+ *found_name = false;
return NULL;
+ }
+ *found_name = true;
name += 2;
if (!strcmp(name, "DestroyInstance"))
return (void *)table->DestroyInstance;
@@ -699,6 +723,21 @@ loader_lookup_instance_dispatch_table(const VkLayerInstanceDispatchTable *table,
if (!strcmp(name, "GetPhysicalDeviceXlibPresentationSupportKHR"))
return (void *)table->GetPhysicalDeviceXlibPresentationSupportKHR;
#endif
+ if (!strcmp(name, "GetPhysicalDeviceDisplayPropertiesKHR"))
+ return (void *)table->GetPhysicalDeviceDisplayPropertiesKHR;
+ if (!strcmp(name, "GetPhysicalDeviceDisplayPlanePropertiesKHR"))
+ return (void *)table->GetPhysicalDeviceDisplayPlanePropertiesKHR;
+ if (!strcmp(name, "GetDisplayPlaneSupportedDisplaysKHR"))
+ return (void *)table->GetDisplayPlaneSupportedDisplaysKHR;
+ if (!strcmp(name, "GetDisplayModePropertiesKHR"))
+ return (void *)table->GetDisplayModePropertiesKHR;
+ if (!strcmp(name, "CreateDisplayModeKHR"))
+ return (void *)table->CreateDisplayModeKHR;
+ if (!strcmp(name, "GetDisplayPlaneCapabilitiesKHR"))
+ return (void *)table->GetDisplayPlaneCapabilitiesKHR;
+ if (!strcmp(name, "CreateDisplayPlaneSurfaceKHR"))
+ return (void *)table->CreateDisplayPlaneSurfaceKHR;
+
if (!strcmp(name, "CreateDebugReportCallbackEXT"))
return (void *)table->CreateDebugReportCallbackEXT;
if (!strcmp(name, "DestroyDebugReportCallbackEXT"))
@@ -706,5 +745,6 @@ loader_lookup_instance_dispatch_table(const VkLayerInstanceDispatchTable *table,
if (!strcmp(name, "DebugReportMessageEXT"))
return (void *)table->DebugReportMessageEXT;
+ *found_name = false;
return NULL;
}
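The `found_name` out-parameter added above lets callers distinguish "name not recognized at all" from "name recognized but the table entry is NULL (extension unsupported)". A standalone sketch of the same pattern, using a hypothetical two-entry table rather than the loader's `VkLayerInstanceDispatchTable`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical dispatch table; entries may be NULL when unsupported. */
struct demo_table {
    void *DestroyInstance;
    void *EnumerateThings;
};

static void *demo_lookup(const struct demo_table *table, const char *name,
                         bool *found_name) {
    /* Reject anything that is not a "vk"-prefixed entrypoint. */
    if (!name || name[0] != 'v' || name[1] != 'k') {
        *found_name = false;
        return NULL;
    }
    *found_name = true;
    name += 2;
    if (!strcmp(name, "DestroyInstance"))
        return table->DestroyInstance;
    if (!strcmp(name, "EnumerateThings"))
        return table->EnumerateThings;
    /* Known prefix but unknown command: report "not found". */
    *found_name = false;
    return NULL;
}
```

Note the asymmetry: a matched name with a NULL table entry still reports `found_name == true`, which is exactly what lets the caller fall back differently for unknown names versus unsupported-but-known commands.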
diff --git a/loader/trampoline.c b/loader/trampoline.c
index dfd2c0001..a26d8c5a4 100644
--- a/loader/trampoline.c
+++ b/loader/trampoline.c
@@ -37,8 +37,204 @@
#include "loader.h"
#include "debug_report.h"
#include "wsi.h"
+#include "gpa_helper.h"
+#include "table_ops.h"
+
+/* Trampoline entrypoints for core Vulkan commands are in this file */
+/**
+ * Get an instance-level or global-level entry point address.
+ * @param instance
+ * @param pName
+ * @return
+ *    If instance == VK_NULL_HANDLE, returns global-level functions only.
+ *    If instance is valid, returns a trampoline entry point for all
+ *    dispatchable Vulkan functions, both core and extensions.
+ */
+LOADER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
+vkGetInstanceProcAddr(VkInstance instance, const char *pName) {
+
+ void *addr;
+
+ addr = globalGetProcAddr(pName);
+ if (instance == VK_NULL_HANDLE) {
+ // get entrypoint addresses that are global (no dispatchable object)
+
+ return addr;
+ } else {
+        // if a global entrypoint was requested with a valid instance,
+        // return NULL
+ if (addr)
+ return NULL;
+ }
+
+ struct loader_instance *ptr_instance = loader_get_instance(instance);
+ if (ptr_instance == NULL)
+ return NULL;
+    // Return trampoline code for non-global entrypoints, including any
+    // extensions.
+    // Device extensions are returned if a layer or ICD supports the extension.
+    // Instance extensions are returned if the extension is enabled and the
+    // loader or someone else supports the extension.
+ return trampolineGetProcAddr(ptr_instance, pName);
+}
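The global-versus-instance branching above is easy to get backwards: with a NULL instance only global commands resolve, and with a valid instance a global command must return NULL. A minimal standalone sketch of that decision logic (the `demo_*` names and stand-in lookup helpers are hypothetical, not the loader's real symbols):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for globalGetProcAddr / trampolineGetProcAddr. */
static void *demo_global_gpa(const char *name) {
    static int create_instance;
    return strcmp(name, "vkCreateInstance") ? NULL : (void *)&create_instance;
}
static void *demo_trampoline_gpa(const char *name) {
    static int destroy_instance;
    return strcmp(name, "vkDestroyInstance") ? NULL : (void *)&destroy_instance;
}

/* Mirrors vkGetInstanceProcAddr's branching: a NULL instance reaches only
 * global commands; a valid instance makes global commands return NULL and
 * routes everything else through the trampoline lookup. */
static void *demo_get_instance_proc_addr(void *instance, const char *name) {
    void *addr = demo_global_gpa(name);
    if (instance == NULL)
        return addr;   /* only globals are reachable without an instance */
    if (addr)
        return NULL;   /* globals are invalid once an instance exists */
    return demo_trampoline_gpa(name);
}
```

The sketch compresses the real function, which also validates the instance handle before consulting the trampoline table.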
+
+/**
+ * Get a device-level or global-level entry point address.
+ * @param device
+ * @param pName
+ * @return
+ * If device is valid, returns a device-relative entry point for device-level
+ * entry points, both core and extensions.
+ * Device-relative means the call goes down the device chain.
+ */
+LOADER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
+vkGetDeviceProcAddr(VkDevice device, const char *pName) {
+ void *addr;
+
+    /* for entrypoints that the loader must handle (i.e. non-dispatchable or
+       create-object commands), make sure the loader entrypoint is returned */
+ addr = loader_non_passthrough_gdpa(pName);
+ if (addr) {
+ return addr;
+ }
+
+    /* Although CreateDevice is on the device chain, its dispatchable object
+     * isn't a VkDevice or a child of VkDevice, so return NULL.
+ */
+ if (!strcmp(pName, "CreateDevice"))
+ return NULL;
+
+ /* return the dispatch table entrypoint for the fastest case */
+ const VkLayerDispatchTable *disp_table = *(VkLayerDispatchTable **)device;
+ if (disp_table == NULL)
+ return NULL;
+
+ addr = loader_lookup_device_dispatch_table(disp_table, pName);
+ if (addr)
+ return addr;
+
+ if (disp_table->GetDeviceProcAddr == NULL)
+ return NULL;
+ return disp_table->GetDeviceProcAddr(device, pName);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceExtensionProperties(const char *pLayerName,
+ uint32_t *pPropertyCount,
+ VkExtensionProperties *pProperties) {
+ struct loader_extension_list *global_ext_list = NULL;
+ struct loader_layer_list instance_layers;
+ struct loader_extension_list icd_extensions;
+ struct loader_icd_libs icd_libs;
+ uint32_t copy_size;
+
+ tls_instance = NULL;
+ memset(&icd_extensions, 0, sizeof(icd_extensions));
+ memset(&instance_layers, 0, sizeof(instance_layers));
+ loader_platform_thread_once(&once_init, loader_initialize);
+
+ /* get layer libraries if needed */
+ if (pLayerName && strlen(pLayerName) != 0) {
+ if (vk_string_validate(MaxLoaderStringLength, pLayerName) ==
+ VK_STRING_ERROR_NONE) {
+ loader_layer_scan(NULL, &instance_layers, NULL);
+ for (uint32_t i = 0; i < instance_layers.count; i++) {
+ struct loader_layer_properties *props =
+ &instance_layers.list[i];
+ if (strcmp(props->info.layerName, pLayerName) == 0) {
+ global_ext_list = &props->instance_extension_list;
+ }
+ }
+ } else {
+ assert(VK_FALSE && "vkEnumerateInstanceExtensionProperties: "
+ "pLayerName is too long or is badly formed");
+ return VK_ERROR_EXTENSION_NOT_PRESENT;
+ }
+ } else {
+ /* Scan/discover all ICD libraries */
+ memset(&icd_libs, 0, sizeof(struct loader_icd_libs));
+ loader_icd_scan(NULL, &icd_libs);
+ /* get extensions from all ICD's, merge so no duplicates */
+ loader_get_icd_loader_instance_extensions(NULL, &icd_libs,
+ &icd_extensions);
+ loader_scanned_icd_clear(NULL, &icd_libs);
+ global_ext_list = &icd_extensions;
+ }
+
+ if (global_ext_list == NULL) {
+ loader_destroy_layer_list(NULL, &instance_layers);
+ return VK_ERROR_LAYER_NOT_PRESENT;
+ }
+
+ if (pProperties == NULL) {
+ *pPropertyCount = global_ext_list->count;
+ loader_destroy_layer_list(NULL, &instance_layers);
+ loader_destroy_generic_list(
+ NULL, (struct loader_generic_list *)&icd_extensions);
+ return VK_SUCCESS;
+ }
+
+ copy_size = *pPropertyCount < global_ext_list->count
+ ? *pPropertyCount
+ : global_ext_list->count;
+ for (uint32_t i = 0; i < copy_size; i++) {
+ memcpy(&pProperties[i], &global_ext_list->list[i],
+ sizeof(VkExtensionProperties));
+ }
+ *pPropertyCount = copy_size;
+ loader_destroy_generic_list(NULL,
+ (struct loader_generic_list *)&icd_extensions);
+
+ if (copy_size < global_ext_list->count) {
+ loader_destroy_layer_list(NULL, &instance_layers);
+ return VK_INCOMPLETE;
+ }
+
+ loader_destroy_layer_list(NULL, &instance_layers);
+ return VK_SUCCESS;
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkEnumerateInstanceLayerProperties(uint32_t *pPropertyCount,
+ VkLayerProperties *pProperties) {
+
+ struct loader_layer_list instance_layer_list;
+ tls_instance = NULL;
+
+ loader_platform_thread_once(&once_init, loader_initialize);
+
+ uint32_t copy_size;
+
+ /* get layer libraries */
+ memset(&instance_layer_list, 0, sizeof(instance_layer_list));
+ loader_layer_scan(NULL, &instance_layer_list, NULL);
+
+ if (pProperties == NULL) {
+ *pPropertyCount = instance_layer_list.count;
+ loader_destroy_layer_list(NULL, &instance_layer_list);
+ return VK_SUCCESS;
+ }
+
+ copy_size = (*pPropertyCount < instance_layer_list.count)
+ ? *pPropertyCount
+ : instance_layer_list.count;
+ for (uint32_t i = 0; i < copy_size; i++) {
+ memcpy(&pProperties[i], &instance_layer_list.list[i].info,
+ sizeof(VkLayerProperties));
+ }
+
+ *pPropertyCount = copy_size;
+ loader_destroy_layer_list(NULL, &instance_layer_list);
+
+ if (copy_size < instance_layer_list.count) {
+ return VK_INCOMPLETE;
+ }
+
+ return VK_SUCCESS;
+}
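Both enumerate functions above follow Vulkan's two-call idiom: a NULL output pointer means "query the count", otherwise the loader copies at most `*pPropertyCount` entries and reports `VK_INCOMPLETE` on truncation. A generic sketch of that clamp-and-copy core, with `int` payloads standing in for the property structs:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Two-call enumeration idiom: out == NULL queries the count; otherwise
 * copy up to *count entries, write back how many were copied, and return
 * 1 (standing in for VK_INCOMPLETE) when the output array was too small. */
static int demo_enumerate(const int *src, uint32_t src_count,
                          uint32_t *count, int *out) {
    if (out == NULL) {
        *count = src_count;
        return 0;                        /* VK_SUCCESS */
    }
    uint32_t copy = (*count < src_count) ? *count : src_count;
    memcpy(out, src, copy * sizeof(int));
    *count = copy;
    return (copy < src_count) ? 1 : 0;   /* VK_INCOMPLETE : VK_SUCCESS */
}
```

Callers typically invoke it twice: once with NULL to size the array, then again with storage of that size.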
-/* Trampoline entrypoints */
LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo,
const VkAllocationCallbacks *pAllocator,
@@ -256,6 +452,8 @@ vkDestroyInstance(VkInstance instance,
disp->DestroyInstance(instance, pAllocator);
loader_deactivate_instance_layers(ptr_instance);
+ if (ptr_instance->phys_devs)
+ loader_heap_free(ptr_instance, ptr_instance->phys_devs);
loader_heap_free(ptr_instance, ptr_instance->disp);
loader_heap_free(ptr_instance, ptr_instance);
loader_platform_thread_unlock_mutex(&loader_lock);
@@ -266,31 +464,74 @@ vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount,
VkPhysicalDevice *pPhysicalDevices) {
const VkLayerInstanceDispatchTable *disp;
VkResult res;
+ uint32_t count, i;
+ struct loader_instance *inst;
disp = loader_get_instance_dispatch(instance);
loader_platform_thread_lock_mutex(&loader_lock);
res = disp->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount,
pPhysicalDevices);
+
+ if (res != VK_SUCCESS && res != VK_INCOMPLETE) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ if (!pPhysicalDevices) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ // wrap the PhysDev object for loader usage, return wrapped objects
+ inst = loader_get_instance(instance);
+ if (!inst) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ if (inst->phys_devs)
+ loader_heap_free(inst, inst->phys_devs);
+ count = inst->total_gpu_count;
+ inst->phys_devs = (struct loader_physical_device *)loader_heap_alloc(
+ inst, count * sizeof(struct loader_physical_device),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (!inst->phys_devs) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ for (i = 0; i < count; i++) {
+
+ // initialize the loader's physicalDevice object
+ loader_set_dispatch((void *)&inst->phys_devs[i], inst->disp);
+ inst->phys_devs[i].this_icd = inst->phys_devs_term[i].this_icd;
+ inst->phys_devs[i].phys_dev = pPhysicalDevices[i];
+
+ // copy wrapped object into Application provided array
+ pPhysicalDevices[i] = (VkPhysicalDevice)&inst->phys_devs[i];
+ }
loader_platform_thread_unlock_mutex(&loader_lock);
return res;
}
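The wrapping loop above works because a dispatchable Vulkan handle is defined by its first pointer-sized field: the dispatch table pointer. The loader can therefore hand the application a pointer to its own struct, keep the ICD's real handle alongside, and unwrap on the way down. A simplified sketch (the `demo_*` types are illustrative, not the loader's `struct loader_physical_device`):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified wrapped physical device. The dispatch pointer must stay the
 * first member so the wrapper is still a valid dispatchable handle. */
struct demo_phys_dev {
    const void *disp;   /* loader's instance dispatch table */
    void *icd_handle;   /* the ICD's real VkPhysicalDevice */
};

/* Wrap: record dispatch + real handle, hand back the wrapper. */
static void *demo_wrap(struct demo_phys_dev *slot, const void *disp,
                       void *icd_handle) {
    slot->disp = disp;
    slot->icd_handle = icd_handle;
    return slot;
}

/* Unwrap: recover the ICD handle before calling down the chain. */
static void *demo_unwrap(void *handle) {
    return ((struct demo_phys_dev *)handle)->icd_handle;
}
```

This is the same shape as `loader_unwrap_physical_device`, which the trampolines below use before dispatching each `vkGetPhysicalDevice*` call.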
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL
-vkGetPhysicalDeviceFeatures(VkPhysicalDevice gpu,
+vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
VkPhysicalDeviceFeatures *pFeatures) {
const VkLayerInstanceDispatchTable *disp;
-
- disp = loader_get_instance_dispatch(gpu);
- disp->GetPhysicalDeviceFeatures(gpu, pFeatures);
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ disp = loader_get_instance_dispatch(physicalDevice);
+ disp->GetPhysicalDeviceFeatures(unwrapped_phys_dev, pFeatures);
}
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL
-vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice gpu, VkFormat format,
+vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice,
+ VkFormat format,
VkFormatProperties *pFormatInfo) {
const VkLayerInstanceDispatchTable *disp;
-
- disp = loader_get_instance_dispatch(gpu);
- disp->GetPhysicalDeviceFormatProperties(gpu, format, pFormatInfo);
+ VkPhysicalDevice unwrapped_pd =
+ loader_unwrap_physical_device(physicalDevice);
+ disp = loader_get_instance_dispatch(physicalDevice);
+ disp->GetPhysicalDeviceFormatProperties(unwrapped_pd, format, pFormatInfo);
}
LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
@@ -299,49 +540,200 @@ vkGetPhysicalDeviceImageFormatProperties(
VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags,
VkImageFormatProperties *pImageFormatProperties) {
const VkLayerInstanceDispatchTable *disp;
-
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
disp = loader_get_instance_dispatch(physicalDevice);
return disp->GetPhysicalDeviceImageFormatProperties(
- physicalDevice, format, type, tiling, usage, flags,
+ unwrapped_phys_dev, format, type, tiling, usage, flags,
pImageFormatProperties);
}
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL
-vkGetPhysicalDeviceProperties(VkPhysicalDevice gpu,
+vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice,
VkPhysicalDeviceProperties *pProperties) {
const VkLayerInstanceDispatchTable *disp;
-
- disp = loader_get_instance_dispatch(gpu);
- disp->GetPhysicalDeviceProperties(gpu, pProperties);
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ disp = loader_get_instance_dispatch(physicalDevice);
+ disp->GetPhysicalDeviceProperties(unwrapped_phys_dev, pProperties);
}
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkGetPhysicalDeviceQueueFamilyProperties(
- VkPhysicalDevice gpu, uint32_t *pQueueFamilyPropertyCount,
+ VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount,
VkQueueFamilyProperties *pQueueProperties) {
const VkLayerInstanceDispatchTable *disp;
-
- disp = loader_get_instance_dispatch(gpu);
- disp->GetPhysicalDeviceQueueFamilyProperties(gpu, pQueueFamilyPropertyCount,
- pQueueProperties);
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ disp = loader_get_instance_dispatch(physicalDevice);
+ disp->GetPhysicalDeviceQueueFamilyProperties(
+ unwrapped_phys_dev, pQueueFamilyPropertyCount, pQueueProperties);
}
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(
- VkPhysicalDevice gpu, VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
const VkLayerInstanceDispatchTable *disp;
-
- disp = loader_get_instance_dispatch(gpu);
- disp->GetPhysicalDeviceMemoryProperties(gpu, pMemoryProperties);
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ disp = loader_get_instance_dispatch(physicalDevice);
+ disp->GetPhysicalDeviceMemoryProperties(unwrapped_phys_dev,
+ pMemoryProperties);
}
LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
-vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
+vkCreateDevice(VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo *pCreateInfo,
const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
VkResult res;
+ struct loader_physical_device *phys_dev;
+ struct loader_icd *icd;
+ struct loader_device *dev;
+ struct loader_instance *inst;
+ struct loader_layer_list activated_layer_list = {0};
+
+ assert(pCreateInfo->queueCreateInfoCount >= 1);
loader_platform_thread_lock_mutex(&loader_lock);
- res = loader_CreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
+ phys_dev = (struct loader_physical_device *)physicalDevice;
+ icd = phys_dev->this_icd;
+ if (!icd) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ inst = (struct loader_instance *)phys_dev->this_icd->this_instance;
+
+ if (!icd->CreateDevice) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ /* validate any app enabled layers are available */
+ if (pCreateInfo->enabledLayerCount > 0) {
+ res = loader_validate_layers(inst, pCreateInfo->enabledLayerCount,
+ pCreateInfo->ppEnabledLayerNames,
+ &inst->device_layer_list);
+ if (res != VK_SUCCESS) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+ }
+
+ /* Get the physical device (ICD) extensions */
+ struct loader_extension_list icd_exts;
+ if (!loader_init_generic_list(inst, (struct loader_generic_list *)&icd_exts,
+ sizeof(VkExtensionProperties))) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ //TODO handle more than one phys dev per icd (icd->phys_devs[0])
+ res = loader_add_device_extensions(
+ inst, icd, icd->phys_devs[0],
+ phys_dev->this_icd->this_icd_lib->lib_name, &icd_exts);
+ if (res != VK_SUCCESS) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+    /* convert any meta-layers to the actual layers; this makes a copy of the
+     * layer names */
+ uint32_t saved_layer_count = pCreateInfo->enabledLayerCount;
+ char **saved_layer_names;
+ char **saved_layer_ptr;
+ saved_layer_names =
+ loader_stack_alloc(sizeof(char *) * pCreateInfo->enabledLayerCount);
+ for (uint32_t i = 0; i < saved_layer_count; i++) {
+ saved_layer_names[i] = (char *)pCreateInfo->ppEnabledLayerNames[i];
+ }
+ saved_layer_ptr = (char **)pCreateInfo->ppEnabledLayerNames;
+
+ loader_expand_layer_names(
+ inst, std_validation_str,
+ sizeof(std_validation_names) / sizeof(std_validation_names[0]),
+ std_validation_names, (uint32_t *)&pCreateInfo->enabledLayerCount,
+ (char ***)&pCreateInfo->ppEnabledLayerNames);
+
+ /* fetch a list of all layers activated, explicit and implicit */
+ res = loader_enable_device_layers(inst, icd, &activated_layer_list,
+ pCreateInfo, &inst->device_layer_list);
+ if (res != VK_SUCCESS) {
+ loader_unexpand_dev_layer_names(inst, saved_layer_count,
+ saved_layer_names, saved_layer_ptr,
+ pCreateInfo);
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ /* make sure requested extensions to be enabled are supported */
+ res = loader_validate_device_extensions(phys_dev, &activated_layer_list,
+ &icd_exts, pCreateInfo);
+ if (res != VK_SUCCESS) {
+ loader_unexpand_dev_layer_names(inst, saved_layer_count,
+ saved_layer_names, saved_layer_ptr,
+ pCreateInfo);
+ loader_destroy_generic_list(
+ inst, (struct loader_generic_list *)&activated_layer_list);
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ dev = loader_add_logical_device(inst, &icd->logical_device_list);
+ if (dev == NULL) {
+ loader_unexpand_dev_layer_names(inst, saved_layer_count,
+ saved_layer_names, saved_layer_ptr,
+ pCreateInfo);
+ loader_destroy_generic_list(
+ inst, (struct loader_generic_list *)&activated_layer_list);
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ /* move the locally filled layer list into the device, and pass ownership of
+ * the memory */
+ dev->activated_layer_list.capacity = activated_layer_list.capacity;
+ dev->activated_layer_list.count = activated_layer_list.count;
+ dev->activated_layer_list.list = activated_layer_list.list;
+ memset(&activated_layer_list, 0, sizeof(activated_layer_list));
+
+    /* activate any layers on the device chain, which terminates with the
+     * device */
+ res = loader_enable_device_layers(inst, icd, &dev->activated_layer_list,
+ pCreateInfo, &inst->device_layer_list);
+ if (res != VK_SUCCESS) {
+ loader_unexpand_dev_layer_names(inst, saved_layer_count,
+ saved_layer_names, saved_layer_ptr,
+ pCreateInfo);
+ loader_remove_logical_device(inst, icd, dev);
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ res = loader_create_device_chain(phys_dev, pCreateInfo, pAllocator, inst,
+ icd, dev);
+ if (res != VK_SUCCESS) {
+ loader_unexpand_dev_layer_names(inst, saved_layer_count,
+ saved_layer_names, saved_layer_ptr,
+ pCreateInfo);
+ loader_remove_logical_device(inst, icd, dev);
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return res;
+ }
+
+ *pDevice = dev->device;
+
+    /* initialize any device extension dispatch entries from the instance
+     * list */
+ loader_init_dispatch_dev_ext(inst, dev);
+
+    /* initialize WSI device extensions as part of the core dispatch, since the
+     * loader has dedicated trampoline code for these */
+ loader_init_device_extension_dispatch_table(
+ &dev->loader_dispatch,
+ dev->loader_dispatch.core_dispatch.GetDeviceProcAddr, *pDevice);
+
+ loader_unexpand_dev_layer_names(inst, saved_layer_count, saved_layer_names,
+ saved_layer_ptr, pCreateInfo);
loader_platform_thread_unlock_mutex(&loader_lock);
return res;
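The save/expand/restore dance running through `vkCreateDevice` above is worth isolating: the app's `ppEnabledLayerNames` is saved, a meta-layer name is expanded into its component layers for activation, and the original array is restored on every exit path so the caller never observes the expansion. A simplified sketch with hypothetical layer names (the real loader expands `VK_LAYER_LUNARG_standard_validation` and copies names individually):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct demo_create_info {
    uint32_t enabledLayerCount;
    const char **ppEnabledLayerNames;
};

/* Hypothetical component layers of a meta-layer. */
static const char *k_expanded[] = { "VK_LAYER_demo_a", "VK_LAYER_demo_b" };

/* If the meta-layer name appears, swap in the expanded component list. */
static void demo_expand(struct demo_create_info *ci) {
    for (uint32_t i = 0; i < ci->enabledLayerCount; i++) {
        if (!strcmp(ci->ppEnabledLayerNames[i], "VK_LAYER_demo_meta")) {
            ci->ppEnabledLayerNames = k_expanded;
            ci->enabledLayerCount = 2;
            return;
        }
    }
}

/* Restore the app's original list before returning on any path. */
static void demo_restore(struct demo_create_info *ci, uint32_t saved_count,
                         const char **saved_names) {
    ci->enabledLayerCount = saved_count;
    ci->ppEnabledLayerNames = saved_names;
}
```

The important invariant, matched by `loader_unexpand_dev_layer_names` being called on every error return above, is that restore runs no matter which step fails.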
@@ -370,7 +762,9 @@ vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
const char *pLayerName,
uint32_t *pPropertyCount,
VkExtensionProperties *pProperties) {
- VkResult res;
+ VkResult res = VK_SUCCESS;
+ struct loader_physical_device *phys_dev;
+ phys_dev = (struct loader_physical_device *)physicalDevice;
loader_platform_thread_lock_mutex(&loader_lock);
@@ -383,10 +777,48 @@ vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
disp = loader_get_instance_dispatch(physicalDevice);
res = disp->EnumerateDeviceExtensionProperties(
- physicalDevice, NULL, pPropertyCount, pProperties);
+ phys_dev->phys_dev, NULL, pPropertyCount, pProperties);
} else {
- res = loader_EnumerateDeviceExtensionProperties(
- physicalDevice, pLayerName, pPropertyCount, pProperties);
+
+ uint32_t count;
+ uint32_t copy_size;
+ const struct loader_instance *inst = phys_dev->this_icd->this_instance;
+ if (vk_string_validate(MaxLoaderStringLength, pLayerName) ==
+ VK_STRING_ERROR_NONE) {
+
+ struct loader_device_extension_list *dev_ext_list = NULL;
+ for (uint32_t i = 0; i < inst->device_layer_list.count; i++) {
+ struct loader_layer_properties *props =
+ &inst->device_layer_list.list[i];
+ if (strcmp(props->info.layerName, pLayerName) == 0) {
+ dev_ext_list = &props->device_extension_list;
+ }
+ }
+ count = (dev_ext_list == NULL) ? 0 : dev_ext_list->count;
+ if (pProperties == NULL) {
+ *pPropertyCount = count;
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_SUCCESS;
+ }
+
+ copy_size = *pPropertyCount < count ? *pPropertyCount : count;
+ for (uint32_t i = 0; i < copy_size; i++) {
+ memcpy(&pProperties[i], &dev_ext_list->list[i].props,
+ sizeof(VkExtensionProperties));
+ }
+ *pPropertyCount = copy_size;
+
+ if (copy_size < count) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_INCOMPLETE;
+ }
+ } else {
+ loader_log(inst, VK_DEBUG_REPORT_ERROR_BIT_EXT, 0,
+ "vkEnumerateDeviceExtensionProperties: pLayerName "
+ "is too long or is badly formed");
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_ERROR_EXTENSION_NOT_PRESENT;
+ }
}
loader_platform_thread_unlock_mutex(&loader_lock);
@@ -397,16 +829,38 @@ LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice,
uint32_t *pPropertyCount,
VkLayerProperties *pProperties) {
- VkResult res;
+ uint32_t copy_size;
+ struct loader_physical_device *phys_dev;
loader_platform_thread_lock_mutex(&loader_lock);
/* Don't dispatch this call down the instance chain, want all device layers
enumerated and instance chain may not contain all device layers */
- res = loader_EnumerateDeviceLayerProperties(physicalDevice, pPropertyCount,
- pProperties);
+
+ phys_dev = (struct loader_physical_device *)physicalDevice;
+ const struct loader_instance *inst = phys_dev->this_icd->this_instance;
+ uint32_t count = inst->device_layer_list.count;
+
+ if (pProperties == NULL) {
+ *pPropertyCount = count;
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_SUCCESS;
+ }
+
+ copy_size = (*pPropertyCount < count) ? *pPropertyCount : count;
+ for (uint32_t i = 0; i < copy_size; i++) {
+ memcpy(&pProperties[i], &(inst->device_layer_list.list[i].info),
+ sizeof(VkLayerProperties));
+ }
+ *pPropertyCount = copy_size;
+
+ if (copy_size < count) {
+ loader_platform_thread_unlock_mutex(&loader_lock);
+ return VK_INCOMPLETE;
+ }
+
loader_platform_thread_unlock_mutex(&loader_lock);
- return res;
+ return VK_SUCCESS;
}
LOADER_EXPORT VKAPI_ATTR void VKAPI_CALL
@@ -577,12 +1031,13 @@ vkGetPhysicalDeviceSparseImageFormatProperties(
VkImageTiling tiling, uint32_t *pPropertyCount,
VkSparseImageFormatProperties *pProperties) {
const VkLayerInstanceDispatchTable *disp;
-
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
disp = loader_get_instance_dispatch(physicalDevice);
disp->GetPhysicalDeviceSparseImageFormatProperties(
- physicalDevice, format, type, samples, usage, tiling, pPropertyCount,
- pProperties);
+ unwrapped_phys_dev, format, type, samples, usage, tiling,
+ pPropertyCount, pProperties);
}
LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
diff --git a/loader/vk-loader-generate.py b/loader/vk-loader-generate.py
index e1226f599..235851c15 100755
--- a/loader/vk-loader-generate.py
+++ b/loader/vk-loader-generate.py
@@ -453,6 +453,16 @@ class LoaderGetProcAddrSubcommand(Subcommand):
return "\n".join(body)
def main():
+
+ wsi = {
+ "Win32",
+ "Android",
+ "Xcb",
+ "Xlib",
+ "Wayland",
+ "Mir"
+ }
+
subcommands = {
"dev-ext-trampoline": DevExtTrampolineSubcommand,
"loader-entrypoints": LoaderEntrypointsSubcommand,
@@ -461,13 +471,14 @@ def main():
"loader-get-proc-addr": LoaderGetProcAddrSubcommand,
}
- if len(sys.argv) < 2 or sys.argv[1] not in subcommands:
- print("Usage: %s <subcommand> [options]" % sys.argv[0])
+ if len(sys.argv) < 3 or sys.argv[1] not in wsi or sys.argv[2] not in subcommands:
+ print("Usage: %s <wsi> <subcommand> [options]" % sys.argv[0])
print
- print("Available sucommands are: %s" % " ".join(subcommands))
+    print("Available wsi (display servers) are: %s" % " ".join(wsi))
+ print("Available subcommands are: %s" % " ".join(subcommands))
exit(1)
- subcmd = subcommands[sys.argv[1]](sys.argv[2:])
+ subcmd = subcommands[sys.argv[2]](sys.argv[3:])
subcmd.run()
if __name__ == "__main__":
diff --git a/loader/wsi.c b/loader/wsi.c
index 05945fb50..d53ce4a9f 100644
--- a/loader/wsi.c
+++ b/loader/wsi.c
@@ -84,6 +84,9 @@ static const VkExtensionProperties wsi_android_surface_extension_info = {
};
#endif // VK_USE_PLATFORM_ANDROID_KHR
+// Note: for VK_KHR_display, don't advertise support from the loader itself;
+// support really needs to come from the ICD, which the loader then supplements.
+
void wsi_add_instance_extensions(const struct loader_instance *inst,
struct loader_extension_list *ext_list) {
loader_add_to_ext_list(inst, ext_list, 1, &wsi_surface_extension_info);
@@ -115,7 +118,7 @@ void wsi_create_instance(struct loader_instance *ptr_instance,
ptr_instance->wsi_surface_enabled = false;
#ifdef VK_USE_PLATFORM_WIN32_KHR
- ptr_instance->wsi_win32_surface_enabled = true;
+ ptr_instance->wsi_win32_surface_enabled = false;
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
ptr_instance->wsi_mir_surface_enabled = false;
@@ -133,6 +136,8 @@ void wsi_create_instance(struct loader_instance *ptr_instance,
ptr_instance->wsi_android_surface_enabled = false;
#endif // VK_USE_PLATFORM_ANDROID_KHR
+ ptr_instance->wsi_display_enabled = false;
+
for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
if (strcmp(pCreateInfo->ppEnabledExtensionNames[i],
VK_KHR_SURFACE_EXTENSION_NAME) == 0) {
@@ -181,6 +186,11 @@ void wsi_create_instance(struct loader_instance *ptr_instance,
continue;
}
#endif // VK_USE_PLATFORM_ANDROID_KHR
+ if (strcmp(pCreateInfo->ppEnabledExtensionNames[i],
+ VK_KHR_DISPLAY_EXTENSION_NAME) == 0) {
+ ptr_instance->wsi_display_enabled = true;
+ continue;
+ }
}
}
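The hunk above extends `wsi_create_instance` to latch a per-extension boolean (here `wsi_display_enabled`) by scanning `ppEnabledExtensionNames` once. A condensed sketch of that scan, using hypothetical flag and extension names rather than the loader's real `loader_instance` fields:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SURFACE_EXT "VK_KHR_surface"
#define DISPLAY_EXT "VK_KHR_display"

/* Hypothetical subset of the per-instance WSI flags. */
struct instance_flags { bool surface; bool display; };

/* Reset all flags, then walk the requested extension names once,
 * setting the matching flag and moving on to the next name. */
void scan_extensions(struct instance_flags *f,
                     uint32_t count, const char **names) {
    f->surface = false;
    f->display = false;
    for (uint32_t i = 0; i < count; i++) {
        if (strcmp(names[i], SURFACE_EXT) == 0) { f->surface = true; continue; }
        if (strcmp(names[i], DISPLAY_EXT) == 0) { f->display = true; continue; }
    }
}
```

Unrecognized names are simply ignored, matching the diff: the loop only sets flags for extensions the loader knows about.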
@@ -200,13 +210,14 @@ vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface,
disp->DestroySurfaceKHR(instance, surface, pAllocator);
}
+// TODO: locking is probably needed around all the loader_get_instance() calls.
/*
* This is the instance chain terminator function
* for DestroySurfaceKHR
*/
VKAPI_ATTR void VKAPI_CALL
-loader_DestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface,
- const VkAllocationCallbacks *pAllocator) {
+terminator_DestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface,
+ const VkAllocationCallbacks *pAllocator) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
loader_heap_free(ptr_instance, (void *)surface);
@@ -222,9 +233,11 @@ vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
VkSurfaceKHR surface,
VkBool32 *pSupported) {
const VkLayerInstanceDispatchTable *disp;
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
disp = loader_get_instance_dispatch(physicalDevice);
VkResult res = disp->GetPhysicalDeviceSurfaceSupportKHR(
- physicalDevice, queueFamilyIndex, surface, pSupported);
+ unwrapped_phys_dev, queueFamilyIndex, surface, pSupported);
return res;
}
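This hunk (and the many like it below) shows the recurring trampoline change in this merge: the application-facing entrypoint now unwraps the loader's `VkPhysicalDevice` wrapper before dispatching down the instance chain, while the renamed `terminator_*` function at the bottom of the chain casts the handle back to `struct loader_physical_device` and calls the ICD. A toy sketch of that wrap/unwrap flow, with illustrative struct names that are not the loader's real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* The ICD's actual handle (illustrative). */
typedef struct real_phys_dev { uint32_t id; } real_phys_dev;

/* What the loader hands back to the application: a wrapper whose
 * first field would normally point at a dispatch table. */
typedef struct wrapped_phys_dev {
    void *dispatch;
    real_phys_dev *phys_dev;
} wrapped_phys_dev;

static real_phys_dev *unwrap(wrapped_phys_dev *w) { return w->phys_dev; }

/* "Terminator": bottom of the chain, takes the unwrapped handle. */
static uint32_t terminator_query_id(real_phys_dev *p) { return p->id; }

/* "Trampoline": unwraps, then dispatches down the chain. */
uint32_t trampoline_query_id(wrapped_phys_dev *w) {
    return terminator_query_id(unwrap(w));
}
```

In the real loader the trampoline dispatches through `loader_get_instance_dispatch(physicalDevice)` so layers still see the call; only the handle passed down is unwrapped.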
@@ -233,10 +246,10 @@ vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
* for GetPhysicalDeviceSurfaceSupportKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_GetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- VkSurfaceKHR surface,
- VkBool32 *pSupported) {
+terminator_GetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ VkSurfaceKHR surface,
+ VkBool32 *pSupported) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
struct loader_icd *icd = phys_dev->this_icd;
@@ -260,10 +273,13 @@ LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(
VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
VkSurfaceCapabilitiesKHR *pSurfaceCapabilities) {
+
const VkLayerInstanceDispatchTable *disp;
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
disp = loader_get_instance_dispatch(physicalDevice);
VkResult res = disp->GetPhysicalDeviceSurfaceCapabilitiesKHR(
- physicalDevice, surface, pSurfaceCapabilities);
+ unwrapped_phys_dev, surface, pSurfaceCapabilities);
return res;
}
@@ -271,7 +287,8 @@ vkGetPhysicalDeviceSurfaceCapabilitiesKHR(
* This is the instance chain terminator function
* for GetPhysicalDeviceSurfaceCapabilitiesKHR
*/
-VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceSurfaceCapabilitiesKHR(
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceSurfaceCapabilitiesKHR(
VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
VkSurfaceCapabilitiesKHR *pSurfaceCapabilities) {
struct loader_physical_device *phys_dev =
@@ -297,10 +314,12 @@ vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice,
VkSurfaceKHR surface,
uint32_t *pSurfaceFormatCount,
VkSurfaceFormatKHR *pSurfaceFormats) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkResult res = disp->GetPhysicalDeviceSurfaceFormatsKHR(
- physicalDevice, surface, pSurfaceFormatCount, pSurfaceFormats);
+ unwrapped_phys_dev, surface, pSurfaceFormatCount, pSurfaceFormats);
return res;
}
@@ -308,11 +327,9 @@ vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice,
* This is the instance chain terminator function
* for GetPhysicalDeviceSurfaceFormatsKHR
*/
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_GetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- uint32_t *pSurfaceFormatCount,
- VkSurfaceFormatKHR *pSurfaceFormats) {
+VKAPI_ATTR VkResult VKAPI_CALL terminator_GetPhysicalDeviceSurfaceFormatsKHR(
+ VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
+ uint32_t *pSurfaceFormatCount, VkSurfaceFormatKHR *pSurfaceFormats) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
struct loader_icd *icd = phys_dev->this_icd;
@@ -337,10 +354,12 @@ vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice,
VkSurfaceKHR surface,
uint32_t *pPresentModeCount,
VkPresentModeKHR *pPresentModes) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkResult res = disp->GetPhysicalDeviceSurfacePresentModesKHR(
- physicalDevice, surface, pPresentModeCount, pPresentModes);
+ unwrapped_phys_dev, surface, pPresentModeCount, pPresentModes);
return res;
}
@@ -348,7 +367,8 @@ vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice,
* This is the instance chain terminator function
* for GetPhysicalDeviceSurfacePresentModesKHR
*/
-VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceSurfacePresentModesKHR(
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceSurfacePresentModesKHR(
VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
uint32_t *pPresentModeCount, VkPresentModeKHR *pPresentModes) {
struct loader_physical_device *phys_dev =
@@ -468,10 +488,10 @@ vkCreateWin32SurfaceKHR(VkInstance instance,
* for CreateWin32SurfaceKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateWin32SurfaceKHR(VkInstance instance,
- const VkWin32SurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+terminator_CreateWin32SurfaceKHR(VkInstance instance,
+ const VkWin32SurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceWin32 *pIcdSurface = NULL;
@@ -497,10 +517,12 @@ loader_CreateWin32SurfaceKHR(VkInstance instance,
LOADER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
vkGetPhysicalDeviceWin32PresentationSupportKHR(VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkBool32 res = disp->GetPhysicalDeviceWin32PresentationSupportKHR(
- physicalDevice, queueFamilyIndex);
+ unwrapped_phys_dev, queueFamilyIndex);
return res;
}
@@ -509,7 +531,7 @@ vkGetPhysicalDeviceWin32PresentationSupportKHR(VkPhysicalDevice physicalDevice,
* for GetPhysicalDeviceWin32PresentationSupportKHR
*/
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceWin32PresentationSupportKHR(
+terminator_GetPhysicalDeviceWin32PresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex) {
struct loader_physical_device *phys_dev =
(struct loader_physical_device *)physicalDevice;
@@ -553,10 +575,10 @@ vkCreateMirSurfaceKHR(VkInstance instance,
* for CreateMirSurfaceKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateMirSurfaceKHR(VkInstance instance,
- const VkMirSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+terminator_CreateMirSurfaceKHR(VkInstance instance,
+ const VkMirSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceMir *pIcdSurface = NULL;
@@ -583,10 +605,12 @@ LOADER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
vkGetPhysicalDeviceMirPresentationSupportKHR(VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
MirConnection *connection) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkBool32 res = disp->GetPhysicalDeviceMirPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, connection);
+ unwrapped_phys_dev, queueFamilyIndex, connection);
return res;
}
@@ -595,7 +619,7 @@ vkGetPhysicalDeviceMirPresentationSupportKHR(VkPhysicalDevice physicalDevice,
* for GetPhysicalDeviceMirPresentationSupportKHR
*/
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceMirPresentationSupportKHR(
+terminator_GetPhysicalDeviceMirPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
MirConnection *connection) {
struct loader_physical_device *phys_dev =
@@ -623,7 +647,7 @@ loader_GetPhysicalDeviceMirPresentationSupportKHR(
*/
LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateWaylandSurfaceKHR(VkInstance instance,
- const VkMirSurfaceCreateInfoKHR *pCreateInfo,
+ const VkWaylandSurfaceCreateInfoKHR *pCreateInfo,
const VkAllocationCallbacks *pAllocator,
VkSurfaceKHR *pSurface) {
const VkLayerInstanceDispatchTable *disp;
@@ -637,13 +661,11 @@ vkCreateWaylandSurfaceKHR(VkInstance instance,
/*
* This is the instance chain terminator function
- * for CreateXlibSurfaceKHR
+ * for CreateWaylandSurfaceKHR
*/
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateWaylandSurfaceKHR(VkInstance instance,
- const VkMirSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+VKAPI_ATTR VkResult VKAPI_CALL terminator_CreateWaylandSurfaceKHR(
+ VkInstance instance, const VkWaylandSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceWayland *pIcdSurface = NULL;
@@ -670,10 +692,12 @@ LOADER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
vkGetPhysicalDeviceWaylandPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
struct wl_display *display) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkBool32 res = disp->GetPhysicalDeviceWaylandPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, display);
+ unwrapped_phys_dev, queueFamilyIndex, display);
return res;
}
@@ -682,7 +706,7 @@ vkGetPhysicalDeviceWaylandPresentationSupportKHR(
* for GetPhysicalDeviceWaylandPresentationSupportKHR
*/
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceWaylandPresentationSupportKHR(
+terminator_GetPhysicalDeviceWaylandPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
struct wl_display *display) {
struct loader_physical_device *phys_dev =
@@ -727,10 +751,10 @@ vkCreateXcbSurfaceKHR(VkInstance instance,
* for CreateXcbSurfaceKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateXcbSurfaceKHR(VkInstance instance,
- const VkXcbSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+terminator_CreateXcbSurfaceKHR(VkInstance instance,
+ const VkXcbSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceXcb *pIcdSurface = NULL;
@@ -758,10 +782,12 @@ vkGetPhysicalDeviceXcbPresentationSupportKHR(VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
xcb_connection_t *connection,
xcb_visualid_t visual_id) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkBool32 res = disp->GetPhysicalDeviceXcbPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, connection, visual_id);
+ unwrapped_phys_dev, queueFamilyIndex, connection, visual_id);
return res;
}
@@ -770,7 +796,7 @@ vkGetPhysicalDeviceXcbPresentationSupportKHR(VkPhysicalDevice physicalDevice,
* for GetPhysicalDeviceXcbPresentationSupportKHR
*/
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceXcbPresentationSupportKHR(
+terminator_GetPhysicalDeviceXcbPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
xcb_connection_t *connection, xcb_visualid_t visual_id) {
struct loader_physical_device *phys_dev =
@@ -815,10 +841,10 @@ vkCreateXlibSurfaceKHR(VkInstance instance,
* for CreateXlibSurfaceKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateXlibSurfaceKHR(VkInstance instance,
- const VkXlibSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+terminator_CreateXlibSurfaceKHR(VkInstance instance,
+ const VkXlibSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceXlib *pIcdSurface = NULL;
@@ -845,10 +871,12 @@ LOADER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL
vkGetPhysicalDeviceXlibPresentationSupportKHR(VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
Display *dpy, VisualID visualID) {
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
const VkLayerInstanceDispatchTable *disp;
disp = loader_get_instance_dispatch(physicalDevice);
VkBool32 res = disp->GetPhysicalDeviceXlibPresentationSupportKHR(
- physicalDevice, queueFamilyIndex, dpy, visualID);
+ unwrapped_phys_dev, queueFamilyIndex, dpy, visualID);
return res;
}
@@ -857,7 +885,7 @@ vkGetPhysicalDeviceXlibPresentationSupportKHR(VkPhysicalDevice physicalDevice,
* for GetPhysicalDeviceXlibPresentationSupportKHR
*/
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceXlibPresentationSupportKHR(
+terminator_GetPhysicalDeviceXlibPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display *dpy,
VisualID visualID) {
struct loader_physical_device *phys_dev =
@@ -900,9 +928,9 @@ vkCreateAndroidSurfaceKHR(VkInstance instance, ANativeWindow *window,
* for CreateAndroidSurfaceKHR
*/
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateAndroidSurfaceKHR(VkInstance instance, Window window,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface) {
+terminator_CreateAndroidSurfaceKHR(VkInstance instance, ANativeWindow *window,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface) {
struct loader_instance *ptr_instance = loader_get_instance(instance);
VkIcdSurfaceAndroid *pIcdSurface = NULL;
@@ -923,6 +951,264 @@ loader_CreateAndroidSurfaceKHR(VkInstance instance, Window window,
#endif // VK_USE_PLATFORM_ANDROID_KHR
+
+/*
+ * Functions for the VK_KHR_display instance extension:
+ */
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceDisplayPropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPropertiesKHR* pProperties)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->GetPhysicalDeviceDisplayPropertiesKHR(
+ unwrapped_phys_dev, pPropertyCount, pProperties);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceDisplayPropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPropertiesKHR* pProperties)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->GetPhysicalDeviceDisplayPropertiesKHR &&
+ "loader: null GetPhysicalDeviceDisplayPropertiesKHR ICD pointer");
+
+ return icd->GetPhysicalDeviceDisplayPropertiesKHR(
+ phys_dev->phys_dev, pPropertyCount, pProperties);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetPhysicalDeviceDisplayPlanePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPlanePropertiesKHR* pProperties)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->GetPhysicalDeviceDisplayPlanePropertiesKHR(
+ unwrapped_phys_dev, pPropertyCount, pProperties);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceDisplayPlanePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPlanePropertiesKHR* pProperties)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->GetPhysicalDeviceDisplayPlanePropertiesKHR &&
+ "loader: null GetPhysicalDeviceDisplayPlanePropertiesKHR ICD pointer");
+
+ return icd->GetPhysicalDeviceDisplayPlanePropertiesKHR(
+ phys_dev->phys_dev, pPropertyCount, pProperties);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetDisplayPlaneSupportedDisplaysKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t planeIndex,
+ uint32_t* pDisplayCount,
+ VkDisplayKHR* pDisplays)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->GetDisplayPlaneSupportedDisplaysKHR(
+ unwrapped_phys_dev, planeIndex, pDisplayCount, pDisplays);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayPlaneSupportedDisplaysKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t planeIndex,
+ uint32_t* pDisplayCount,
+ VkDisplayKHR* pDisplays)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->GetDisplayPlaneSupportedDisplaysKHR &&
+ "loader: null GetDisplayPlaneSupportedDisplaysKHR ICD pointer");
+
+ return icd->GetDisplayPlaneSupportedDisplaysKHR(
+ phys_dev->phys_dev, planeIndex, pDisplayCount, pDisplays);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetDisplayModePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ uint32_t* pPropertyCount,
+ VkDisplayModePropertiesKHR* pProperties)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->GetDisplayModePropertiesKHR(
+ unwrapped_phys_dev, display, pPropertyCount, pProperties);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayModePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ uint32_t* pPropertyCount,
+ VkDisplayModePropertiesKHR* pProperties)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->GetDisplayModePropertiesKHR &&
+ "loader: null GetDisplayModePropertiesKHR ICD pointer");
+
+ return icd->GetDisplayModePropertiesKHR(
+ phys_dev->phys_dev, display, pPropertyCount, pProperties);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDisplayModeKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ const VkDisplayModeCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDisplayModeKHR* pMode)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->CreateDisplayModeKHR(
+ unwrapped_phys_dev, display, pCreateInfo, pAllocator, pMode);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDisplayModeKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ const VkDisplayModeCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDisplayModeKHR* pMode)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->CreateDisplayModeKHR &&
+ "loader: null CreateDisplayModeKHR ICD pointer");
+
+ return icd->CreateDisplayModeKHR(
+ phys_dev->phys_dev, display, pCreateInfo, pAllocator, pMode);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkGetDisplayPlaneCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayModeKHR mode,
+ uint32_t planeIndex,
+ VkDisplayPlaneCapabilitiesKHR* pCapabilities)
+{
+ VkPhysicalDevice unwrapped_phys_dev =
+ loader_unwrap_physical_device(physicalDevice);
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(physicalDevice);
+ VkResult res = disp->GetDisplayPlaneCapabilitiesKHR(
+ unwrapped_phys_dev, mode, planeIndex, pCapabilities);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayPlaneCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayModeKHR mode,
+ uint32_t planeIndex,
+ VkDisplayPlaneCapabilitiesKHR* pCapabilities)
+{
+ struct loader_physical_device *phys_dev =
+ (struct loader_physical_device *)physicalDevice;
+ struct loader_icd *icd = phys_dev->this_icd;
+
+ assert(
+ icd->GetDisplayPlaneCapabilitiesKHR &&
+ "loader: null GetDisplayPlaneCapabilitiesKHR ICD pointer");
+
+ return icd->GetDisplayPlaneCapabilitiesKHR(
+ phys_dev->phys_dev, mode, planeIndex, pCapabilities);
+}
+
+LOADER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
+vkCreateDisplayPlaneSurfaceKHR(
+ VkInstance instance,
+ const VkDisplaySurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface)
+{
+ const VkLayerInstanceDispatchTable *disp;
+ disp = loader_get_instance_dispatch(instance);
+ VkResult res;
+
+ res = disp->CreateDisplayPlaneSurfaceKHR(instance, pCreateInfo, pAllocator,
+ pSurface);
+ return res;
+}
+
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDisplayPlaneSurfaceKHR(
+ VkInstance instance,
+ const VkDisplaySurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface)
+{
+ struct loader_instance *inst = loader_get_instance(instance);
+ VkIcdSurfaceDisplay *pIcdSurface = NULL;
+
+ pIcdSurface = loader_heap_alloc(inst, sizeof(VkIcdSurfaceDisplay),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (pIcdSurface == NULL) {
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ pIcdSurface->base.platform = VK_ICD_WSI_PLATFORM_DISPLAY;
+ pIcdSurface->displayMode = pCreateInfo->displayMode;
+ pIcdSurface->planeIndex = pCreateInfo->planeIndex;
+ pIcdSurface->planeStackIndex = pCreateInfo->planeStackIndex;
+ pIcdSurface->transform = pCreateInfo->transform;
+ pIcdSurface->globalAlpha = pCreateInfo->globalAlpha;
+ pIcdSurface->alphaMode = pCreateInfo->alphaMode;
+ pIcdSurface->imageExtent = pCreateInfo->imageExtent;
+
+ *pSurface = (VkSurfaceKHR)pIcdSurface;
+
+ return VK_SUCCESS;
+}
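`terminator_CreateDisplayPlaneSurfaceKHR` above is the one WSI terminator that doesn't call into an ICD: it heap-allocates a platform-tagged `VkIcdSurface*` struct, copies the create-info fields into it, and returns the pointer itself as the opaque `VkSurfaceKHR` handle. A sketch of that pattern with illustrative names (the real structs live in `vk_icd.h`):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative platform tag and surface structs. */
enum wsi_platform { WSI_PLATFORM_DISPLAY = 4 };

struct icd_surface_base { enum wsi_platform platform; };
struct icd_surface_display {
    struct icd_surface_base base;   /* tag must come first */
    uint32_t plane_index;
};

typedef uint64_t SurfaceHandle;     /* opaque handle, like VkSurfaceKHR */

int create_display_surface(uint32_t plane_index, SurfaceHandle *out) {
    struct icd_surface_display *s = malloc(sizeof *s);
    if (s == NULL)
        return -1;                        /* OUT_OF_HOST_MEMORY analogue */
    s->base.platform = WSI_PLATFORM_DISPLAY;
    s->plane_index = plane_index;
    *out = (SurfaceHandle)(uintptr_t)s;   /* handle is just the pointer */
    return 0;
}
```

Because the handle is just the pointer, `terminator_DestroySurfaceKHR` can free it directly, which is exactly what the earlier hunk in this diff does with `loader_heap_free`.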
+
bool wsi_swapchain_instance_gpa(struct loader_instance *ptr_instance,
const char *name, void **addr) {
*addr = NULL;
@@ -1021,72 +1307,115 @@ bool wsi_swapchain_instance_gpa(struct loader_instance *ptr_instance,
? (void *)vkGetPhysicalDeviceMirPresentationSupportKHR
: NULL;
return true;
+ }
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
- /*
- * Functions for the VK_KHR_wayland_surface extension:
- */
- if (!strcmp("vkCreateWaylandSurfaceKHR", name)) {
- *addr = ptr_instance->wsi_wayland_surface_enabled
- ? (void *)vkCreateWaylandSurfaceKHR
- : NULL;
- return true;
- }
- if (!strcmp("vkGetPhysicalDeviceWaylandPresentationSupportKHR", name)) {
- *addr =
- ptr_instance->wsi_wayland_surface_enabled
+ /*
+ * Functions for the VK_KHR_wayland_surface extension:
+ */
+ if (!strcmp("vkCreateWaylandSurfaceKHR", name)) {
+ *addr = ptr_instance->wsi_wayland_surface_enabled
+ ? (void *)vkCreateWaylandSurfaceKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetPhysicalDeviceWaylandPresentationSupportKHR", name)) {
+ *addr = ptr_instance->wsi_wayland_surface_enabled
? (void *)vkGetPhysicalDeviceWaylandPresentationSupportKHR
: NULL;
- return true;
+ return true;
+ }
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
- /*
- * Functions for the VK_KHR_xcb_surface extension:
- */
- if (!strcmp("vkCreateXcbSurfaceKHR", name)) {
- *addr = ptr_instance->wsi_xcb_surface_enabled
- ? (void *)vkCreateXcbSurfaceKHR
- : NULL;
- return true;
- }
- if (!strcmp("vkGetPhysicalDeviceXcbPresentationSupportKHR", name)) {
- *addr =
- ptr_instance->wsi_xcb_surface_enabled
- ? (void *)vkGetPhysicalDeviceXcbPresentationSupportKHR
- : NULL;
- return true;
- }
+ /*
+ * Functions for the VK_KHR_xcb_surface extension:
+ */
+ if (!strcmp("vkCreateXcbSurfaceKHR", name)) {
+ *addr = ptr_instance->wsi_xcb_surface_enabled
+ ? (void *)vkCreateXcbSurfaceKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetPhysicalDeviceXcbPresentationSupportKHR", name)) {
+ *addr = ptr_instance->wsi_xcb_surface_enabled
+ ? (void *)vkGetPhysicalDeviceXcbPresentationSupportKHR
+ : NULL;
+ return true;
+ }
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
- /*
- * Functions for the VK_KHR_xlib_surface extension:
- */
- if (!strcmp("vkCreateXlibSurfaceKHR", name)) {
- *addr = ptr_instance->wsi_xlib_surface_enabled
- ? (void *)vkCreateXlibSurfaceKHR
- : NULL;
- return true;
- }
- if (!strcmp("vkGetPhysicalDeviceXlibPresentationSupportKHR",
- name)) {
- *addr =
- ptr_instance->wsi_xlib_surface_enabled
- ? (void *)vkGetPhysicalDeviceXlibPresentationSupportKHR
- : NULL;
- return true;
- }
+ /*
+ * Functions for the VK_KHR_xlib_surface extension:
+ */
+ if (!strcmp("vkCreateXlibSurfaceKHR", name)) {
+ *addr = ptr_instance->wsi_xlib_surface_enabled
+ ? (void *)vkCreateXlibSurfaceKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetPhysicalDeviceXlibPresentationSupportKHR", name)) {
+ *addr = ptr_instance->wsi_xlib_surface_enabled
+ ? (void *)vkGetPhysicalDeviceXlibPresentationSupportKHR
+ : NULL;
+ return true;
+ }
#endif // VK_USE_PLATFORM_XLIB_KHR
#ifdef VK_USE_PLATFORM_ANDROID_KHR
- /*
- * Functions for the VK_KHR_android_surface extension:
- */
- if (!strcmp("vkCreateAndroidSurfaceKHR", name)) {
- *addr = ptr_instance->wsi_xlib_surface_enabled
- ? (void *)vkCreateAndroidSurfaceKHR
- : NULL;
- return true;
- }
+ /*
+ * Functions for the VK_KHR_android_surface extension:
+ */
+ if (!strcmp("vkCreateAndroidSurfaceKHR", name)) {
+        *addr = ptr_instance->wsi_android_surface_enabled
+ ? (void *)vkCreateAndroidSurfaceKHR
+ : NULL;
+ return true;
+ }
#endif // VK_USE_PLATFORM_ANDROID_KHR
- return false;
- }
+ /*
+     * Functions for the VK_KHR_display extension:
+ */
+ if (!strcmp("vkGetPhysicalDeviceDisplayPropertiesKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkGetPhysicalDeviceDisplayPropertiesKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetPhysicalDeviceDisplayPlanePropertiesKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkGetPhysicalDeviceDisplayPlanePropertiesKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetDisplayPlaneSupportedDisplaysKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkGetDisplayPlaneSupportedDisplaysKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetDisplayModePropertiesKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkGetDisplayModePropertiesKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkCreateDisplayModeKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkCreateDisplayModeKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkGetDisplayPlaneCapabilitiesKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkGetDisplayPlaneCapabilitiesKHR
+ : NULL;
+ return true;
+ }
+ if (!strcmp("vkCreateDisplayPlaneSurfaceKHR", name)) {
+ *addr = ptr_instance->wsi_display_enabled
+ ? (void *)vkCreateDisplayPlaneSurfaceKHR
+ : NULL;
+ return true;
+ }
+ return false;
+}
\ No newline at end of file
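The `wsi_swapchain_instance_gpa` body above (including the new `VK_KHR_display` cases) uses one consistent lookup contract: match the name with `strcmp`; if it is a recognized WSI entrypoint, write the address into `*addr` — NULL when the owning extension was not enabled at instance creation — and return true; return false only for names this file doesn't own, so the caller continues the search elsewhere. A compact sketch of that contract with an illustrative entrypoint:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static void fake_entrypoint(void) {}   /* stand-in trampoline */

/* Recognized name => set *addr (NULL if the extension is disabled)
 * and return true. Unknown name => leave *addr NULL, return false. */
bool wsi_gpa(bool display_enabled, const char *name, void **addr) {
    *addr = NULL;
    if (strcmp(name, "vkCreateDisplayPlaneSurfaceKHR") == 0) {
        *addr = display_enabled ? (void *)fake_entrypoint : NULL;
        return true;   /* name recognized, even if disabled */
    }
    return false;      /* not a WSI entrypoint; keep searching */
}
```

The true-with-NULL case matters: it tells the loader the name is a WSI function that must not be resolved further down the chain when the extension wasn't enabled.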
diff --git a/loader/wsi.h b/loader/wsi.h
index c0213313d..88540f5d4 100644
--- a/loader/wsi.h
+++ b/loader/wsi.h
@@ -38,83 +38,120 @@ void wsi_create_instance(struct loader_instance *ptr_instance,
const VkInstanceCreateInfo *pCreateInfo);
VKAPI_ATTR void VKAPI_CALL
-loader_DestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface,
- const VkAllocationCallbacks *pAllocator);
+terminator_DestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface,
+ const VkAllocationCallbacks *pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL
-loader_GetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
- uint32_t queueFamilyIndex,
- VkSurfaceKHR surface,
- VkBool32 *pSupported);
+terminator_GetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ VkSurfaceKHR surface,
+ VkBool32 *pSupported);
-VKAPI_ATTR VkResult VKAPI_CALL loader_GetPhysicalDeviceSurfaceCapabilitiesKHR(
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceSurfaceCapabilitiesKHR(
VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
VkSurfaceCapabilitiesKHR *pSurfaceCapabilities);
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_GetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- uint32_t *pSurfaceFormatCount,
- VkSurfaceFormatKHR *pSurfaceFormats);
+VKAPI_ATTR VkResult VKAPI_CALL terminator_GetPhysicalDeviceSurfaceFormatsKHR(
+ VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
+ uint32_t *pSurfaceFormatCount, VkSurfaceFormatKHR *pSurfaceFormats);
VKAPI_ATTR VkResult VKAPI_CALL
-loader_GetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice,
- VkSurfaceKHR surface,
- uint32_t *pPresentModeCount,
- VkPresentModeKHR *pPresentModes);
+terminator_GetPhysicalDeviceSurfacePresentModesKHR(
+ VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
+ uint32_t *pPresentModeCount, VkPresentModeKHR *pPresentModes);
#ifdef VK_USE_PLATFORM_WIN32_KHR
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateWin32SurfaceKHR(VkInstance instance,
- const VkWin32SurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface);
+terminator_CreateWin32SurfaceKHR(VkInstance instance,
+ const VkWin32SurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceWin32PresentationSupportKHR(
+terminator_GetPhysicalDeviceWin32PresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex);
#endif
#ifdef VK_USE_PLATFORM_MIR_KHR
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateMirSurfaceKHR(VkInstance instance,
- const VkMirSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface);
+terminator_CreateMirSurfaceKHR(VkInstance instance,
+ const VkMirSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceMirPresentationSupportKHR(
+terminator_GetPhysicalDeviceMirPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
MirConnection *connection);
#endif
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
-VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateWaylandSurfaceKHR(VkInstance instance,
- const VkWaylandSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface);
+VKAPI_ATTR VkResult VKAPI_CALL terminator_CreateWaylandSurfaceKHR(
+ VkInstance instance, const VkWaylandSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceWaylandPresentationSupportKHR(
+terminator_GetPhysicalDeviceWaylandPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
struct wl_display *display);
#endif
#ifdef VK_USE_PLATFORM_XCB_KHR
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateXcbSurfaceKHR(VkInstance instance,
- const VkXcbSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface);
+terminator_CreateXcbSurfaceKHR(VkInstance instance,
+ const VkXcbSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceXcbPresentationSupportKHR(
+terminator_GetPhysicalDeviceXcbPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex,
xcb_connection_t *connection, xcb_visualid_t visual_id);
#endif
#ifdef VK_USE_PLATFORM_XLIB_KHR
VKAPI_ATTR VkResult VKAPI_CALL
-loader_CreateXlibSurfaceKHR(VkInstance instance,
- const VkXlibSurfaceCreateInfoKHR *pCreateInfo,
- const VkAllocationCallbacks *pAllocator,
- VkSurfaceKHR *pSurface);
+terminator_CreateXlibSurfaceKHR(VkInstance instance,
+ const VkXlibSurfaceCreateInfoKHR *pCreateInfo,
+ const VkAllocationCallbacks *pAllocator,
+ VkSurfaceKHR *pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL
-loader_GetPhysicalDeviceXlibPresentationSupportKHR(
+terminator_GetPhysicalDeviceXlibPresentationSupportKHR(
VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display *dpy,
VisualID visualID);
#endif
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceDisplayPropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPropertiesKHR* pProperties);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetPhysicalDeviceDisplayPlanePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPlanePropertiesKHR* pProperties);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayPlaneSupportedDisplaysKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t planeIndex,
+ uint32_t* pDisplayCount,
+ VkDisplayKHR* pDisplays);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayModePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ uint32_t* pPropertyCount,
+ VkDisplayModePropertiesKHR* pProperties);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDisplayModeKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ const VkDisplayModeCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDisplayModeKHR* pMode);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_GetDisplayPlaneCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayModeKHR mode,
+ uint32_t planeIndex,
+ VkDisplayPlaneCapabilitiesKHR* pCapabilities);
+VKAPI_ATTR VkResult VKAPI_CALL
+terminator_CreateDisplayPlaneSurfaceKHR(
+ VkInstance instance,
+ const VkDisplaySurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
\ No newline at end of file
diff --git a/spirv-tools_revision b/spirv-tools_revision
index 00add5a22..e83c901e8 100644
--- a/spirv-tools_revision
+++ b/spirv-tools_revision
@@ -1 +1 @@
-7ef6da7b7f9175da509b4d71
+9149a66ca406d86967b104cac209bad309fd2c33
diff --git a/tests/CMakeLists.txt b/tests/CMakeLists.txt
index 914d56fb8..de028a8b9 100644
--- a/tests/CMakeLists.txt
+++ b/tests/CMakeLists.txt
@@ -38,49 +38,11 @@ set(COMMON_CPP
test_environment.cpp
)
-set(TEST_LIBRARIES
- glslang
- OGLCompiler
- OSDependent
- SPIRV
- )
-
-add_library(glslang STATIC IMPORTED)
-add_library(OGLCompiler STATIC IMPORTED)
-add_library(OSDependent STATIC IMPORTED)
-add_library(SPIRV STATIC IMPORTED)
-
-# On Windows, we must pair Debug and Release appropriately
-if (WIN32)
-
- set_target_properties(glslang PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/glslang/Release/glslang.lib"
- IMPORTED_LOCATION_DEBUG "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/glslang/Debug/glslang.lib")
- set_target_properties(OGLCompiler PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/OGLCompilersDLL/Release/OGLCompiler.lib"
- IMPORTED_LOCATION_DEBUG "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/OGLCompilersDLL/Debug/OGLCompiler.lib")
- set_target_properties(OSDependent PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Release/OSDependent.lib"
- IMPORTED_LOCATION_DEBUG "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Debug/OSDependent.lib")
- set_target_properties(SPIRV PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/SPIRV/Release/SPIRV.lib"
- IMPORTED_LOCATION_DEBUG "${GLSLANG_PREFIX}/${BUILDTGT_DIR}/SPIRV/Debug/SPIRV.lib")
-else ()
- set_target_properties(glslang PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/build/install/lib/libglslang.a")
- set_target_properties(OGLCompiler PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/build/install/lib/libOGLCompiler.a")
- set_target_properties(OSDependent PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/build/install/lib/libOSDependent.a")
- set_target_properties(SPIRV PROPERTIES
- IMPORTED_LOCATION "${GLSLANG_PREFIX}/build/install/lib/libSPIRV.a")
-endif()
-
include_directories(
"${PROJECT_SOURCE_DIR}/tests/gtest-1.7.0/include"
"${PROJECT_SOURCE_DIR}/icd/common"
"${PROJECT_SOURCE_DIR}/layers"
- ${GLSLANG_PREFIX}
+ ${GLSLANG_SPIRV_INCLUDE_DIR}
${LIBGLM_INCLUDE_DIR}
)
@@ -115,6 +77,6 @@ add_executable(vk_layer_validation_tests layer_validation_tests.cpp ${COMMON_CPP
set_target_properties(vk_layer_validation_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
-target_link_libraries(vk_layer_validation_tests ${LIBVK} gtest gtest_main layer_utils ${TEST_LIBRARIES})
+target_link_libraries(vk_layer_validation_tests ${LIBVK} gtest gtest_main layer_utils ${GLSLANG_LIBRARIES})
add_subdirectory(gtest-1.7.0)
diff --git a/tests/layer_validation_tests.cpp b/tests/layer_validation_tests.cpp
index 73475ae1b..c81a6c4da 100644
--- a/tests/layer_validation_tests.cpp
+++ b/tests/layer_validation_tests.cpp
@@ -264,16 +264,14 @@ class VkLayerTest : public VkRenderFramework {
// ThreadCommandBufferCollision test
instance_layer_names.push_back("VK_LAYER_GOOGLE_threading");
instance_layer_names.push_back("VK_LAYER_LUNARG_object_tracker");
- instance_layer_names.push_back("VK_LAYER_LUNARG_mem_tracker");
- instance_layer_names.push_back("VK_LAYER_LUNARG_draw_state");
+ instance_layer_names.push_back("VK_LAYER_LUNARG_core_validation");
instance_layer_names.push_back("VK_LAYER_LUNARG_device_limits");
instance_layer_names.push_back("VK_LAYER_LUNARG_image");
instance_layer_names.push_back("VK_LAYER_GOOGLE_unique_objects");
device_layer_names.push_back("VK_LAYER_GOOGLE_threading");
device_layer_names.push_back("VK_LAYER_LUNARG_object_tracker");
- device_layer_names.push_back("VK_LAYER_LUNARG_mem_tracker");
- device_layer_names.push_back("VK_LAYER_LUNARG_draw_state");
+ device_layer_names.push_back("VK_LAYER_LUNARG_core_validation");
device_layer_names.push_back("VK_LAYER_LUNARG_device_limits");
device_layer_names.push_back("VK_LAYER_LUNARG_image");
device_layer_names.push_back("VK_LAYER_GOOGLE_unique_objects");
@@ -284,7 +282,7 @@ class VkLayerTest : public VkRenderFramework {
this->app_info.applicationVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = VK_API_VERSION;
+ this->app_info.apiVersion = VK_API_VERSION_1_0;
m_errorMonitor = new ErrorMonitor;
InitFramework(instance_layer_names, device_layer_names,
@@ -1382,118 +1380,6 @@ TEST_F(VkLayerTest, CommandBufferTwoSubmits) {
}
}
-TEST_F(VkLayerTest, BindPipelineNoRenderPass) {
- // Initiate Draw w/o a PSO bound
- VkResult err;
-
- m_errorMonitor->SetDesiredFailureMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT,
- "vkCmdBindPipeline: This call must be "
- "issued inside an active render pass");
-
- ASSERT_NO_FATAL_FAILURE(InitState());
- ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
-
- VkDescriptorPoolSize ds_type_count = {};
- ds_type_count.type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
- ds_type_count.descriptorCount = 1;
-
- VkDescriptorPoolCreateInfo ds_pool_ci = {};
- ds_pool_ci.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
- ds_pool_ci.pNext = NULL;
- ds_pool_ci.maxSets = 1;
- ds_pool_ci.poolSizeCount = 1;
- ds_pool_ci.pPoolSizes = &ds_type_count;
-
- VkDescriptorPool ds_pool;
- err =
- vkCreateDescriptorPool(m_device->device(), &ds_pool_ci, NULL, &ds_pool);
- ASSERT_VK_SUCCESS(err);
-
- VkDescriptorSetLayoutBinding dsl_binding = {};
- dsl_binding.binding = 0;
- dsl_binding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
- dsl_binding.descriptorCount = 1;
- dsl_binding.stageFlags = VK_SHADER_STAGE_ALL;
- dsl_binding.pImmutableSamplers = NULL;
-
- VkDescriptorSetLayoutCreateInfo ds_layout_ci = {};
- ds_layout_ci.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
- ds_layout_ci.pNext = NULL;
- ds_layout_ci.bindingCount = 1;
- ds_layout_ci.pBindings = &dsl_binding;
-
- VkDescriptorSetLayout ds_layout;
- err = vkCreateDescriptorSetLayout(m_device->device(), &ds_layout_ci, NULL,
- &ds_layout);
- ASSERT_VK_SUCCESS(err);
-
- VkDescriptorSet descriptorSet;
- VkDescriptorSetAllocateInfo alloc_info = {};
- alloc_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
- alloc_info.descriptorSetCount = 1;
- alloc_info.descriptorPool = ds_pool;
- alloc_info.pSetLayouts = &ds_layout;
- err = vkAllocateDescriptorSets(m_device->device(), &alloc_info,
- &descriptorSet);
- ASSERT_VK_SUCCESS(err);
- VkPipelineMultisampleStateCreateInfo pipe_ms_state_ci = {};
- pipe_ms_state_ci.sType =
- VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
- pipe_ms_state_ci.pNext = NULL;
- pipe_ms_state_ci.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
- pipe_ms_state_ci.sampleShadingEnable = 0;
- pipe_ms_state_ci.minSampleShading = 1.0;
- pipe_ms_state_ci.pSampleMask = NULL;
-
- VkPipelineLayoutCreateInfo pipeline_layout_ci = {};
- pipeline_layout_ci.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
- pipeline_layout_ci.pNext = NULL;
- pipeline_layout_ci.setLayoutCount = 1;
- pipeline_layout_ci.pSetLayouts = &ds_layout;
- VkPipelineLayout pipeline_layout;
-
- err = vkCreatePipelineLayout(m_device->device(), &pipeline_layout_ci, NULL,
- &pipeline_layout);
- ASSERT_VK_SUCCESS(err);
-
- VkShaderObj vs(m_device, bindStateVertShaderText,
- VK_SHADER_STAGE_VERTEX_BIT, this);
- VkShaderObj fs(m_device, bindStateFragShaderText,
- VK_SHADER_STAGE_FRAGMENT_BIT,
- this); // TODO - We shouldn't need a fragment shader
- // but add it to be able to run on more devices
- VkPipelineObj pipe(m_device);
- pipe.AddShader(&vs);
- pipe.AddShader(&fs);
- pipe.SetMSAA(&pipe_ms_state_ci);
- pipe.CreateVKPipeline(pipeline_layout, renderPass());
-
- // Calls AllocateCommandBuffers
- VkCommandBufferObj commandBuffer(m_device, m_commandPool);
- VkCommandBufferBeginInfo cmd_buf_info = {};
- memset(&cmd_buf_info, 0, sizeof(VkCommandBufferBeginInfo));
- VkCommandBufferInheritanceInfo cmd_buf_hinfo = {};
- memset(&cmd_buf_hinfo, 0, sizeof(VkCommandBufferInheritanceInfo));
- cmd_buf_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
- cmd_buf_info.pNext = NULL;
- cmd_buf_info.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
- cmd_buf_info.pInheritanceInfo = &cmd_buf_hinfo;
-
- vkBeginCommandBuffer(commandBuffer.GetBufferHandle(), &cmd_buf_info);
- vkCmdBindPipeline(commandBuffer.GetBufferHandle(),
- VK_PIPELINE_BIND_POINT_GRAPHICS, pipe.handle());
-
- if (!m_errorMonitor->DesiredMsgFound()) {
- FAIL() << "Did not receive Error 'vkCmdBindPipeline: This call must be "
- "issued inside an active render pass'";
- m_errorMonitor->DumpFailureMsgs();
- }
-
- vkDestroyPipelineLayout(m_device->device(), pipeline_layout, NULL);
- vkDestroyDescriptorSetLayout(m_device->device(), ds_layout, NULL);
- vkDestroyDescriptorPool(m_device->device(), ds_pool, NULL);
-}
-
TEST_F(VkLayerTest, AllocDescriptorFromEmptyPool) {
// Initiate Draw w/o a PSO bound
VkResult err;
@@ -2039,6 +1925,81 @@ TEST_F(VkLayerTest, InvalidDynamicOffsetCases) {
vkDestroyDescriptorPool(m_device->device(), ds_pool, NULL);
}
+TEST_F(VkLayerTest, InvalidPushConstants) {
+ // Hit push constant error cases:
+ // 1. Create PipelineLayout where push constant ranges overstep maxPushConstantsSize
+ // 2. Incorrectly set push constant size to 0
+ // 3. Incorrectly set push constant size to a non-multiple of 4
+ // 4. Attempt a push constant update that exceeds maxPushConstantsSize
+ VkResult err;
+ m_errorMonitor->SetDesiredFailureMsg(
+ VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "vkCreatePipelineLayout() call has push constants with offset ");
+
+ ASSERT_NO_FATAL_FAILURE(InitState());
+ ASSERT_NO_FATAL_FAILURE(InitViewport());
+ ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
+
+ VkPushConstantRange pc_range = {};
+ pc_range.size = 0xFFFFFFFFu;
+ pc_range.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
+ VkPipelineLayoutCreateInfo pipeline_layout_ci = {};
+ pipeline_layout_ci.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
+ pipeline_layout_ci.pushConstantRangeCount = 1;
+ pipeline_layout_ci.pPushConstantRanges = &pc_range;
+
+ VkPipelineLayout pipeline_layout;
+ err = vkCreatePipelineLayout(m_device->device(), &pipeline_layout_ci, NULL,
+ &pipeline_layout);
+
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Error received was not 'vkCreatePipelineLayout() call has "
+ "push constants with offset 0...'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+ // Now cause errors due to size 0 and non-4 byte aligned size
+ pc_range.size = 0;
+ m_errorMonitor->SetDesiredFailureMsg(
+ VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "vkCreatePipelineLayout() call has push constant index 0 with size 0");
+ err = vkCreatePipelineLayout(m_device->device(), &pipeline_layout_ci, NULL,
+ &pipeline_layout);
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Error received was not 'vkCreatePipelineLayout() call has "
+ "push constant index 0 with size 0...'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+ pc_range.size = 1;
+ m_errorMonitor->SetDesiredFailureMsg(
+ VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "vkCreatePipelineLayout() call has push constant index 0 with size 1");
+ err = vkCreatePipelineLayout(m_device->device(), &pipeline_layout_ci, NULL,
+ &pipeline_layout);
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Error received was not 'vkCreatePipelineLayout() call has "
+ "push constant index 0 with size 1...'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+ // Cause error due to bad size in vkCmdPushConstants() call
+ m_errorMonitor->SetDesiredFailureMsg(
+ VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "vkCmdPushConstants() call has push constants with offset ");
+ pipeline_layout_ci.pushConstantRangeCount = 0;
+ pipeline_layout_ci.pPushConstantRanges = NULL;
+ err = vkCreatePipelineLayout(m_device->device(), &pipeline_layout_ci, NULL,
+ &pipeline_layout);
+ ASSERT_VK_SUCCESS(err);
+ BeginCommandBuffer();
+ vkCmdPushConstants(m_commandBuffer->GetBufferHandle(), pipeline_layout,
+ VK_SHADER_STAGE_VERTEX_BIT, 0, 0xFFFFFFFFu, NULL);
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Error received was not 'vkCmdPushConstants() call has push "
+ "constants with offset 0...'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+ vkDestroyPipelineLayout(m_device->device(), pipeline_layout, NULL);
+}
+
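The new `InvalidPushConstants` test above exercises three distinct rules on a `VkPushConstantRange`: the range must fit under the device's `maxPushConstantsSize` limit, the size must be non-zero, and offset and size must be multiples of 4. A standalone sketch of those checks, using 128 (the spec-guaranteed minimum for `maxPushConstantsSize`) as an assumed limit rather than a queried one; the check order matches the test, where the oversized range reports the bounds error first:

```cpp
#include <cassert>
#include <cstdint>

enum class PcError { None, OutOfBounds, ZeroSize, NotMultipleOfFour };

// Illustrative re-statement of the rules the layer test triggers; not the
// layer's actual implementation.
PcError validate_push_constant_range(uint32_t offset, uint32_t size,
                                     uint32_t maxPushConstantsSize = 128) {
    // Overflow-safe bounds check: never compute offset + size directly,
    // since size may be 0xFFFFFFFF as in the test.
    if (offset > maxPushConstantsSize || size > maxPushConstantsSize - offset)
        return PcError::OutOfBounds;
    if (size == 0)
        return PcError::ZeroSize;
    if (size % 4 != 0 || offset % 4 != 0)
        return PcError::NotMultipleOfFour;
    return PcError::None;
}
```

Each branch corresponds to one of the error messages the test asserts on (offset overstep, size 0, size 1).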
TEST_F(VkLayerTest, DescriptorSetCompatibility) {
// Test various descriptorSet errors with bad binding combinations
VkResult err;
@@ -2682,9 +2643,19 @@ TEST_F(VkLayerTest, InvalidPipelineCreateState) {
vp_state_ci.viewportCount = 1;
vp_state_ci.pViewports = &vp;
+ VkPipelineRasterizationStateCreateInfo rs_state_ci = {};
+ rs_state_ci.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
+ rs_state_ci.polygonMode = VK_POLYGON_MODE_FILL;
+ rs_state_ci.cullMode = VK_CULL_MODE_BACK_BIT;
+ rs_state_ci.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
+ rs_state_ci.depthClampEnable = VK_FALSE;
+ rs_state_ci.rasterizerDiscardEnable = VK_FALSE;
+ rs_state_ci.depthBiasEnable = VK_FALSE;
+
VkGraphicsPipelineCreateInfo gp_ci = {};
gp_ci.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
gp_ci.pViewportState = &vp_state_ci;
+ gp_ci.pRasterizationState = &rs_state_ci;
gp_ci.flags = VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT;
gp_ci.layout = pipeline_layout;
gp_ci.renderPass = renderPass();
@@ -2929,6 +2900,15 @@ TEST_F(VkLayerTest, PSOViewportScissorCountMismatch) {
vp_state_ci.viewportCount = 1; // Count mismatch should cause error
vp_state_ci.pViewports = &vp;
+ VkPipelineRasterizationStateCreateInfo rs_state_ci = {};
+ rs_state_ci.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
+ rs_state_ci.polygonMode = VK_POLYGON_MODE_FILL;
+ rs_state_ci.cullMode = VK_CULL_MODE_BACK_BIT;
+ rs_state_ci.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
+ rs_state_ci.depthClampEnable = VK_FALSE;
+ rs_state_ci.rasterizerDiscardEnable = VK_FALSE;
+ rs_state_ci.depthBiasEnable = VK_FALSE;
+
VkPipelineShaderStageCreateInfo shaderStages[2];
memset(&shaderStages, 0, 2 * sizeof(VkPipelineShaderStageCreateInfo));
@@ -2946,6 +2926,7 @@ TEST_F(VkLayerTest, PSOViewportScissorCountMismatch) {
gp_ci.stageCount = 2;
gp_ci.pStages = shaderStages;
gp_ci.pViewportState = &vp_state_ci;
+ gp_ci.pRasterizationState = &rs_state_ci;
gp_ci.flags = VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT;
gp_ci.layout = pipeline_layout;
gp_ci.renderPass = renderPass();
@@ -3057,10 +3038,21 @@ TEST_F(VkLayerTest, PSOViewportStateNotSet) {
shaderStages[0] = vs.GetStageCreateInfo();
shaderStages[1] = fs.GetStageCreateInfo();
+
+ VkPipelineRasterizationStateCreateInfo rs_state_ci = {};
+ rs_state_ci.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
+ rs_state_ci.polygonMode = VK_POLYGON_MODE_FILL;
+ rs_state_ci.cullMode = VK_CULL_MODE_BACK_BIT;
+ rs_state_ci.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
+ rs_state_ci.depthClampEnable = VK_FALSE;
+ rs_state_ci.rasterizerDiscardEnable = VK_FALSE;
+ rs_state_ci.depthBiasEnable = VK_FALSE;
+
VkGraphicsPipelineCreateInfo gp_ci = {};
gp_ci.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
gp_ci.stageCount = 2;
gp_ci.pStages = shaderStages;
+ gp_ci.pRasterizationState = &rs_state_ci;
gp_ci.pViewportState = NULL; // Not setting VP state w/o dynamic vp state
// should cause validation error
gp_ci.pDynamicState = &dyn_state_ci;
@@ -3720,6 +3712,33 @@ TEST_F(VkLayerTest, IdxBufferAlignmentError) {
vkDestroyBuffer(m_device->device(), ib, NULL);
}
+TEST_F(VkLayerTest, InvalidQueueFamilyIndex) {
+ // Create an out-of-range queueFamilyIndex
+ m_errorMonitor->SetDesiredFailureMsg(
+ VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "vkCreateBuffer has QueueFamilyIndex greater than");
+
+ ASSERT_NO_FATAL_FAILURE(InitState());
+ ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
+ VkBufferCreateInfo buffCI = {};
+ buffCI.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buffCI.size = 1024;
+ buffCI.usage = VK_BUFFER_USAGE_INDEX_BUFFER_BIT;
+ buffCI.queueFamilyIndexCount = 1;
+ // Introduce failure by specifying invalid queue_family_index
+ uint32_t qfi = 777;
+ buffCI.pQueueFamilyIndices = &qfi;
+
+ VkBuffer ib;
+ vkCreateBuffer(m_device->device(), &buffCI, NULL, &ib);
+
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Did not receive Error 'vkCreateBuffer has "
+ "QueueFamilyIndex greater than...'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+}
+
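The `InvalidQueueFamilyIndex` test above feeds `queueFamilyIndex = 777` into `VkBufferCreateInfo` to trip the tightened validation from the LX450 fix in this merge. The rule being checked is simple: every entry in `pQueueFamilyIndices` must be below the queue-family count reported by `vkGetPhysicalDeviceQueueFamilyProperties`. A minimal sketch, with the function name and signature as assumptions:

```cpp
#include <cassert>
#include <cstdint>

// Every supplied index must be strictly less than the device's
// queue-family count; 777 in the test is far out of range for real
// devices, which typically report only a handful of families.
bool queue_family_indices_valid(const uint32_t *indices, uint32_t count,
                                uint32_t family_count) {
    for (uint32_t i = 0; i < count; ++i)
        if (indices[i] >= family_count)
            return false;
    return true;
}
```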
TEST_F(VkLayerTest, ExecuteCommandsPrimaryCB) {
// Attempt vkCmdExecuteCommands w/ a primary cmd buffer (should only be
// secondary)
@@ -4611,7 +4630,7 @@ TEST_F(VkLayerTest, ClearCmdNoDraw) {
// TODO: verify that this matches layer
m_errorMonitor->SetDesiredFailureMsg(
- VK_DEBUG_REPORT_WARNING_BIT_EXT,
+ VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
"vkCmdClearAttachments() issued on CB object ");
ASSERT_NO_FATAL_FAILURE(InitState());
@@ -5146,7 +5165,7 @@ TEST_F(VkLayerTest, CreatePipelineFragmentInputNotProvidedInBlock) {
TEST_F(VkLayerTest, CreatePipelineVsFsTypeMismatchArraySize) {
m_errorMonitor->SetDesiredFailureMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT,
- "Type mismatch on location 0: 'ptr to "
+ "Type mismatch on location 0.0: 'ptr to "
"output arr[2] of float32' vs 'ptr to "
"input arr[3] of float32'");
@@ -5193,7 +5212,7 @@ TEST_F(VkLayerTest, CreatePipelineVsFsTypeMismatchArraySize) {
if (!m_errorMonitor->DesiredMsgFound()) {
m_errorMonitor->DumpFailureMsgs();
- FAIL() << "Did not receive Error 'Type mismatch on location 0: 'ptr to "
+ FAIL() << "Did not receive Error 'Type mismatch on location 0.0: 'ptr to "
"output arr[2] of float32' vs 'ptr to input arr[3] of "
"float32''";
}
@@ -5296,8 +5315,110 @@ TEST_F(VkLayerTest, CreatePipelineVsFsTypeMismatchInBlock) {
pipe.CreateVKPipeline(descriptorSet.GetPipelineLayout(), renderPass());
if (!m_errorMonitor->DesiredMsgFound()) {
+ m_errorMonitor->DumpFailureMsgs();
FAIL() << "Did not receive Error 'Type mismatch on location 0'";
+ }
+}
+
+TEST_F(VkLayerTest, CreatePipelineVsFsMismatchByLocation) {
+ m_errorMonitor->SetDesiredFailureMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "location 0.0 which is not written by vertex shader");
+
+ ASSERT_NO_FATAL_FAILURE(InitState());
+ ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
+
+ char const *vsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "out block { layout(location=1) float x; } outs;\n"
+ "out gl_PerVertex {\n"
+ " vec4 gl_Position;\n"
+ "};\n"
+ "void main(){\n"
+ " outs.x = 0;\n"
+ " gl_Position = vec4(1);\n"
+ "}\n";
+ char const *fsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "in block { layout(location=0) float x; } ins;\n"
+ "layout(location=0) out vec4 color;\n"
+ "void main(){\n"
+ " color = vec4(ins.x);\n"
+ "}\n";
+
+ VkShaderObj vs(m_device, vsSource, VK_SHADER_STAGE_VERTEX_BIT, this);
+ VkShaderObj fs(m_device, fsSource, VK_SHADER_STAGE_FRAGMENT_BIT, this);
+
+ VkPipelineObj pipe(m_device);
+ pipe.AddColorAttachment();
+ pipe.AddShader(&vs);
+ pipe.AddShader(&fs);
+
+ VkDescriptorSetObj descriptorSet(m_device);
+ descriptorSet.AppendDummy();
+ descriptorSet.CreateVKDescriptorSet(m_commandBuffer);
+
+ pipe.CreateVKPipeline(descriptorSet.GetPipelineLayout(), renderPass());
+
+ if (!m_errorMonitor->DesiredMsgFound()) {
m_errorMonitor->DumpFailureMsgs();
+ FAIL() << "Did not receive Error 'location 0.0 which is not written by vertex shader'";
+ }
+}
+
+TEST_F(VkLayerTest, CreatePipelineVsFsMismatchByComponent) {
+ m_errorMonitor->SetDesiredFailureMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "location 0.1 which is not written by vertex shader");
+
+ ASSERT_NO_FATAL_FAILURE(InitState());
+ ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
+
+ char const *vsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "out block { layout(location=0, component=0) float x; } outs;\n"
+ "out gl_PerVertex {\n"
+ " vec4 gl_Position;\n"
+ "};\n"
+ "void main(){\n"
+ " outs.x = 0;\n"
+ " gl_Position = vec4(1);\n"
+ "}\n";
+ char const *fsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "in block { layout(location=0, component=1) float x; } ins;\n"
+ "layout(location=0) out vec4 color;\n"
+ "void main(){\n"
+ " color = vec4(ins.x);\n"
+ "}\n";
+
+ VkShaderObj vs(m_device, vsSource, VK_SHADER_STAGE_VERTEX_BIT, this);
+ VkShaderObj fs(m_device, fsSource, VK_SHADER_STAGE_FRAGMENT_BIT, this);
+
+ VkPipelineObj pipe(m_device);
+ pipe.AddColorAttachment();
+ pipe.AddShader(&vs);
+ pipe.AddShader(&fs);
+
+ VkDescriptorSetObj descriptorSet(m_device);
+ descriptorSet.AppendDummy();
+ descriptorSet.CreateVKDescriptorSet(m_commandBuffer);
+
+ pipe.CreateVKPipeline(descriptorSet.GetPipelineLayout(), renderPass());
+
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ m_errorMonitor->DumpFailureMsgs();
+ FAIL() << "Did not receive Error 'location 0.1 which is not written by vertex shader'";
}
}
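The "location 0.0" / "location 0.1" notation in the two new interface-mismatch tests is `location.component`: the shader-checker now matches vertex outputs against fragment inputs at component granularity, not just per location. A sketch of that matching under assumed container types (`std::set` of slots stands in for whatever the layer uses internally):

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>
#include <utility>

using Slot = std::pair<uint32_t, uint32_t>;  // (location, component)

// Returns the first fragment-shader input slot that the vertex shader
// never writes, formatted "location.component" as in the layer messages;
// empty string means the interfaces match.
std::string first_unwritten_slot(const std::set<Slot> &vs_outputs,
                                 const std::set<Slot> &fs_inputs) {
    for (const Slot &s : fs_inputs)
        if (!vs_outputs.count(s))
            return std::to_string(s.first) + "." + std::to_string(s.second);
    return "";
}
```

With this encoding, the first test (VS writes location 1, FS reads location 0) reports "0.0", and the second (same location, components 0 vs 1) reports "0.1".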
@@ -5594,10 +5715,6 @@ TEST_F(VkLayerTest, CreatePipelineAttribMatrixType) {
}
}
-/*
- * Would work, but not supported by glslang! This is similar to the matrix case
-above.
- *
TEST_F(VkLayerTest, CreatePipelineAttribArrayType)
{
m_errorMonitor->SetDesiredFailureMsg(~0u, "");
@@ -5661,7 +5778,6 @@ m_errorMonitor->GetFailureMsg();
m_errorMonitor->DumpFailureMsgs();
}
}
-*/
TEST_F(VkLayerTest, CreatePipelineAttribBindingConflict) {
m_errorMonitor->SetDesiredFailureMsg(
@@ -5932,6 +6048,57 @@ TEST_F(VkLayerTest, CreatePipelineUniformBlockNotProvided) {
}
}
+TEST_F(VkLayerTest, CreatePipelinePushConstantsNotInLayout) {
+ m_errorMonitor->SetDesiredFailureMsg(VK_DEBUG_REPORT_ERROR_BIT_EXT,
+ "not declared in layout");
+
+ ASSERT_NO_FATAL_FAILURE(InitState());
+
+ char const *vsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "layout(push_constant, std430) uniform foo { float x; } consts;\n"
+ "out gl_PerVertex {\n"
+ " vec4 gl_Position;\n"
+ "};\n"
+ "void main(){\n"
+ " gl_Position = vec4(consts.x);\n"
+ "}\n";
+ char const *fsSource =
+ "#version 450\n"
+ "#extension GL_ARB_separate_shader_objects: require\n"
+ "#extension GL_ARB_shading_language_420pack: require\n"
+ "\n"
+ "layout(location=0) out vec4 x;\n"
+ "void main(){\n"
+ " x = vec4(1);\n"
+ "}\n";
+
+ VkShaderObj vs(m_device, vsSource, VK_SHADER_STAGE_VERTEX_BIT, this);
+ VkShaderObj fs(m_device, fsSource, VK_SHADER_STAGE_FRAGMENT_BIT, this);
+
+ VkPipelineObj pipe(m_device);
+ pipe.AddShader(&vs);
+ pipe.AddShader(&fs);
+
+ /* set up CB 0; type is UNORM by default */
+ pipe.AddColorAttachment();
+ ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
+
+ VkDescriptorSetObj descriptorSet(m_device);
+ descriptorSet.CreateVKDescriptorSet(m_commandBuffer);
+
+ pipe.CreateVKPipeline(descriptorSet.GetPipelineLayout(), renderPass());
+
+ /* should have generated an error -- no push constant ranges provided! */
+ if (!m_errorMonitor->DesiredMsgFound()) {
+ FAIL() << "Did not receive Error 'not declared in pipeline layout'";
+ m_errorMonitor->DumpFailureMsgs();
+ }
+}
+
#endif // SHADER_CHECKER_TESTS
#if DEVICE_LIMITS_TESTS
diff --git a/tests/test_environment.cpp b/tests/test_environment.cpp
index c21651809..f74ac3b07 100644
--- a/tests/test_environment.cpp
+++ b/tests/test_environment.cpp
@@ -51,7 +51,7 @@ Environment::Environment() : default_dev_(0) {
app_.applicationVersion = 1;
app_.pEngineName = "vk_testing";
app_.engineVersion = 1;
- app_.apiVersion = VK_API_VERSION;
+ app_.apiVersion = VK_API_VERSION_1_0;
app_.pNext = NULL;
}
diff --git a/tests/vk_layer_settings.txt b/tests/vk_layer_settings.txt
index cc072ccdc..88ebae024 100644
--- a/tests/vk_layer_settings.txt
+++ b/tests/vk_layer_settings.txt
@@ -1,16 +1,14 @@
-MemTrackerReportFlags = error
-MemTrackerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-DrawStateReportFlags = error
-DrawStateDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ObjectTrackerReportFlags = error
-ObjectTrackerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ParamCheckerReportFlags = error
-ParamCheckerDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ThreadingReportFlags = error
-ThreadingDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-DeviceLimitsReportFlags = error
-DeviceLimitsDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-SwapchainReportFlags = error
-SwapchainDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
-ImageReportFlags = error
-ImageDebugAction = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_core_validation.report_flags = error
+lunarg_core_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_object_tracker.report_flags = error
+lunarg_object_tracker.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_parameter_validation.report_flags = error
+lunarg_parameter_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+google_threading.report_flags = error
+google_threading.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_device_limits.report_flags = error
+lunarg_device_limits.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_swapchain.report_flags = error
+lunarg_swapchain.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
+lunarg_image.report_flags = error
+lunarg_image.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
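The rewritten `vk_layer_settings.txt` above switches from CamelCase keys (`MemTrackerReportFlags`) to a namespaced `<layer>.<setting> = <value>` form. A minimal parse of one such line, assuming a single `.` and `=` separator per line as in the file shown; the `Setting` struct and function name are illustrative, not the layers' actual parser:

```cpp
#include <cassert>
#include <string>

struct Setting { std::string layer, key, value; };

// Split "layer.key = value" into its three parts, trimming surrounding
// spaces and tabs from each piece.
Setting parse_setting_line(const std::string &line) {
    size_t dot = line.find('.');
    size_t eq = line.find('=');
    auto trim = [](std::string s) {
        size_t b = s.find_first_not_of(" \t");
        size_t e = s.find_last_not_of(" \t");
        return b == std::string::npos ? std::string() : s.substr(b, e - b + 1);
    };
    return { trim(line.substr(0, dot)),
             trim(line.substr(dot + 1, eq - dot - 1)),
             trim(line.substr(eq + 1)) };
}
```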
diff --git a/tests/vkrenderframework.cpp b/tests/vkrenderframework.cpp
index edcc93c29..bc66bf8fb 100644
--- a/tests/vkrenderframework.cpp
+++ b/tests/vkrenderframework.cpp
@@ -162,8 +162,7 @@ void VkRenderFramework::InitFramework(
}
void VkRenderFramework::ShutdownFramework() {
- if (m_commandBuffer)
- delete m_commandBuffer;
+ delete m_commandBuffer;
if (m_commandPool)
vkDestroyCommandPool(device(), m_commandPool, NULL);
if (m_framebuffer)
@@ -419,7 +418,11 @@ void VkDeviceObj::get_device_queue() {
VkDescriptorSetObj::VkDescriptorSetObj(VkDeviceObj *device)
: m_device(device), m_nextSlot(0) {}
-VkDescriptorSetObj::~VkDescriptorSetObj() { delete m_set; }
+VkDescriptorSetObj::~VkDescriptorSetObj() {
+ if (m_set) {
+ delete m_set;
+ }
+}
int VkDescriptorSetObj::AppendDummy() {
/* request a descriptor but do not update it */
@@ -477,13 +480,16 @@ VkDescriptorSet VkDescriptorSetObj::GetDescriptorSetHandle() const {
void VkDescriptorSetObj::CreateVKDescriptorSet(
VkCommandBufferObj *commandBuffer) {
- // create VkDescriptorPool
- VkDescriptorPoolCreateInfo pool = {};
- pool.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
- pool.poolSizeCount = m_type_counts.size();
- pool.maxSets = 1;
- pool.pPoolSizes = m_type_counts.data();
- init(*m_device, pool);
+
+ if (m_type_counts.size()) {
+ // create VkDescriptorPool
+ VkDescriptorPoolCreateInfo pool = {};
+ pool.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
+ pool.poolSizeCount = m_type_counts.size();
+ pool.maxSets = 1;
+ pool.pPoolSizes = m_type_counts.data();
+ init(*m_device, pool);
+ }
// create VkDescriptorSetLayout
vector<VkDescriptorSetLayoutBinding> bindings;
@@ -514,20 +520,22 @@ void VkDescriptorSetObj::CreateVKDescriptorSet(
m_pipeline_layout.init(*m_device, pipeline_layout, layouts);
- // create VkDescriptorSet
- m_set = alloc_sets(*m_device, m_layout);
+ if (m_type_counts.size()) {
+ // create VkDescriptorSet
+ m_set = alloc_sets(*m_device, m_layout);
+
+ // build the update array
+ size_t imageSamplerCount = 0;
+ for (std::vector<VkWriteDescriptorSet>::iterator it = m_writes.begin();
+ it != m_writes.end(); it++) {
+ it->dstSet = m_set->handle();
+ if (it->descriptorType == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER)
+ it->pImageInfo = &m_imageSamplerDescriptors[imageSamplerCount++];
+ }
- // build the update array
- size_t imageSamplerCount = 0;
- for (std::vector<VkWriteDescriptorSet>::iterator it = m_writes.begin();
- it != m_writes.end(); it++) {
- it->dstSet = m_set->handle();
- if (it->descriptorType == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER)
- it->pImageInfo = &m_imageSamplerDescriptors[imageSamplerCount++];
+ // do the updates
+ m_device->update_descriptor_sets(m_writes);
}
-
- // do the updates
- m_device->update_descriptor_sets(m_writes);
}
VkImageObj::VkImageObj(VkDeviceObj *dev) {
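The hunk above wraps both the pool creation and the set allocation in a `m_type_counts.size()` guard, so nothing is created when no descriptors were requested. A minimal language-agnostic sketch of that guarded path (function and callback names are hypothetical, not part of the framework):

```python
def create_descriptor_resources(type_counts, alloc_pool, alloc_set):
    """Create the descriptor pool and set only when descriptors were
    actually requested; creating an empty pool would be invalid."""
    if not type_counts:
        return None  # nothing to allocate; callers must tolerate None
    pool = alloc_pool(max_sets=1, pool_sizes=type_counts)
    return alloc_set(pool)
```

With this shape, the later update loop can likewise be skipped whenever the function returned `None`.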
@@ -598,23 +606,39 @@ void VkImageObj::SetLayout(VkCommandBufferObj *cmd_buf,
src_mask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
else
src_mask = VK_ACCESS_TRANSFER_WRITE_BIT;
- dst_mask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_MEMORY_READ_BIT;
+ dst_mask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_TRANSFER_READ_BIT;
break;
case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL:
if (m_descriptorImageInfo.imageLayout ==
VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL)
src_mask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
+ else if (m_descriptorImageInfo.imageLayout ==
+ VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL)
+ src_mask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
else
src_mask = VK_ACCESS_TRANSFER_WRITE_BIT;
- dst_mask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_MEMORY_READ_BIT;
+ dst_mask = VK_ACCESS_TRANSFER_WRITE_BIT;
break;
case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
- src_mask = VK_ACCESS_TRANSFER_WRITE_BIT;
+ if (m_descriptorImageInfo.imageLayout ==
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL)
+ src_mask = VK_ACCESS_TRANSFER_WRITE_BIT;
+ else
+ src_mask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
dst_mask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_MEMORY_READ_BIT;
break;
+ case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
+ if (m_descriptorImageInfo.imageLayout ==
+ VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL)
+ src_mask = VK_ACCESS_TRANSFER_READ_BIT;
+ else
+ src_mask = 0;
+ dst_mask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
+ break;
+
default:
src_mask = all_cache_outputs;
dst_mask = all_cache_inputs;
@@ -664,20 +688,9 @@ bool VkImageObj::IsCompatible(VkFlags usage, VkFlags features) {
void VkImageObj::init(uint32_t w, uint32_t h, VkFormat fmt, VkFlags usage,
VkImageTiling requested_tiling,
VkMemoryPropertyFlags reqs) {
- uint32_t mipCount;
VkFormatProperties image_fmt;
VkImageTiling tiling = VK_IMAGE_TILING_OPTIMAL;
- mipCount = 0;
-
- uint32_t _w = w;
- uint32_t _h = h;
- while ((_w > 0) || (_h > 0)) {
- _w >>= 1;
- _h >>= 1;
- mipCount++;
- }
-
vkGetPhysicalDeviceFormatProperties(m_device->phy().handle(), fmt,
&image_fmt);
@@ -699,35 +712,29 @@ void VkImageObj::init(uint32_t w, uint32_t h, VkFormat fmt, VkFlags usage,
<< "Error: Cannot find requested tiling configuration";
}
- VkImageFormatProperties imageFormatProperties;
- vkGetPhysicalDeviceImageFormatProperties(m_device->phy().handle(), fmt,
- VK_IMAGE_TYPE_2D, tiling, usage,
- 0, // VkImageCreateFlags
- &imageFormatProperties);
- if (imageFormatProperties.maxMipLevels < mipCount) {
- mipCount = imageFormatProperties.maxMipLevels;
- }
-
VkImageCreateInfo imageCreateInfo = vk_testing::Image::create_info();
imageCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imageCreateInfo.format = fmt;
imageCreateInfo.extent.width = w;
imageCreateInfo.extent.height = h;
- imageCreateInfo.mipLevels = mipCount;
+ imageCreateInfo.mipLevels = 1;
imageCreateInfo.tiling = tiling;
- if (usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT)
- imageCreateInfo.initialLayout =
- VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
- else if (usage & VK_IMAGE_USAGE_SAMPLED_BIT)
- imageCreateInfo.initialLayout =
- VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
- else
- imageCreateInfo.initialLayout = m_descriptorImageInfo.imageLayout;
+ imageCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
layout(imageCreateInfo.initialLayout);
imageCreateInfo.usage = usage;
vk_testing::Image::init(*m_device, imageCreateInfo, reqs);
+
+ VkImageLayout newLayout;
+ if (usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT)
+ newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
+ else if (usage & VK_IMAGE_USAGE_SAMPLED_BIT)
+ newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
+ else
+ newLayout = m_descriptorImageInfo.imageLayout;
+
+ SetLayout(VK_IMAGE_ASPECT_COLOR_BIT, newLayout);
}
VkResult VkImageObj::CopyImage(VkImageObj &src_image) {
@@ -759,12 +766,14 @@ VkResult VkImageObj::CopyImage(VkImageObj &src_image) {
copy_region.srcSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
copy_region.srcSubresource.baseArrayLayer = 0;
copy_region.srcSubresource.mipLevel = 0;
+ copy_region.srcSubresource.layerCount = 1;
copy_region.srcOffset.x = 0;
copy_region.srcOffset.y = 0;
copy_region.srcOffset.z = 0;
copy_region.dstSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
copy_region.dstSubresource.baseArrayLayer = 0;
copy_region.dstSubresource.mipLevel = 0;
+ copy_region.dstSubresource.layerCount = 1;
copy_region.dstOffset.x = 0;
copy_region.dstOffset.y = 0;
copy_region.dstOffset.z = 0;
@@ -826,6 +835,7 @@ VkTextureObj::VkTextureObj(VkDeviceObj *device, uint32_t *colors)
init(16, 16, tex_format,
VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
VK_IMAGE_TILING_OPTIMAL);
+ stagingImage.SetLayout(VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_GENERAL);
/* create image view */
view.image = handle();
@@ -840,6 +850,7 @@ VkTextureObj::VkTextureObj(VkDeviceObj *device, uint32_t *colors)
row[x] = colors[(x & 1) ^ (y & 1)];
}
stagingImage.UnmapMemory();
+ stagingImage.SetLayout(VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL);
VkImageObj::CopyImage(stagingImage);
}
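The `SetLayout` changes above replace catch-all read masks with access masks matched to how the image will actually be used after the transition. A sketch of that selection logic, using illustrative bit values rather than the real `VK_ACCESS_*` enums:

```python
# Illustrative bit values; the real VK_ACCESS_* flag values differ.
TRANSFER_WRITE = 1 << 0
TRANSFER_READ = 1 << 1
SHADER_READ = 1 << 2
MEMORY_READ = 1 << 3
COLOR_ATTACHMENT_WRITE = 1 << 4

def dst_access_for(new_layout):
    """Choose a destination access mask for a layout transition based on
    the new layout's intended use, instead of a broad read mask."""
    masks = {
        "TRANSFER_SRC_OPTIMAL": SHADER_READ | TRANSFER_READ,
        "TRANSFER_DST_OPTIMAL": TRANSFER_WRITE,
        "SHADER_READ_ONLY_OPTIMAL": SHADER_READ | MEMORY_READ,
        "COLOR_ATTACHMENT_OPTIMAL": COLOR_ATTACHMENT_WRITE,
    }
    return masks.get(new_layout, SHADER_READ)
```

The source mask is chosen the same way from the *old* layout, which is why the diff adds extra branches on `m_descriptorImageInfo.imageLayout`.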
diff --git a/tests/vkrenderframework.h b/tests/vkrenderframework.h
index 7f0d4960c..6a0a2f3de 100644
--- a/tests/vkrenderframework.h
+++ b/tests/vkrenderframework.h
@@ -34,7 +34,6 @@ class VkImageObj;
#else
#include "vktestframework.h"
#endif
-#include "vulkan/vk_lunarg_debug_marker.h"
#include <vector>
@@ -155,7 +154,7 @@ class VkRenderFramework : public VkTestFramework {
this->app_info.applicationVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = VK_API_VERSION;
+ this->app_info.apiVersion = VK_API_VERSION_1_0;
InitFramework();
}
@@ -380,6 +379,7 @@ class VkDescriptorSetObj : public vk_testing::DescriptorPool {
VkDescriptorSet GetDescriptorSetHandle() const;
VkPipelineLayout GetPipelineLayout() const;
+ int GetTypeCounts() {return m_type_counts.size();}
protected:
VkDeviceObj *m_device;
@@ -391,7 +391,7 @@ class VkDescriptorSetObj : public vk_testing::DescriptorPool {
vk_testing::DescriptorSetLayout m_layout;
vk_testing::PipelineLayout m_pipeline_layout;
- vk_testing::DescriptorSet *m_set;
+ vk_testing::DescriptorSet *m_set = NULL;
};
class VkShaderObj : public vk_testing::ShaderModule {
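The header change above initializes `m_set` to `NULL` in the class definition, so the guarded cleanup added to the destructor is well-defined even when allocation never happened. The same pattern, sketched in Python with hypothetical names:

```python
class DescriptorSetHolder:
    """Hold an optionally-allocated resource; cleanup must be safe even
    when allocation never happened (the m_set = NULL idiom)."""

    def __init__(self):
        self.m_set = None  # allocated lazily, may stay None

    def allocate(self, factory):
        self.m_set = factory()

    def close(self):
        if self.m_set is not None:
            self.m_set.release()
            self.m_set = None  # make close() idempotent
```

In C++ the guard on `delete` is technically redundant (`delete` on a null pointer is a no-op), but the default initializer is what makes the destructor safe at all.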
diff --git a/update_external_sources.bat b/update_external_sources.bat
index ca4fbb0ce..cf81a82b9 100755
--- a/update_external_sources.bat
+++ b/update_external_sources.bat
@@ -1,5 +1,5 @@
@echo off
-REM Update source for glslang, LunarGLASS, spirv-tools
+REM Update source for glslang, spirv-tools
REM Determine the appropriate CMake strings for the current version of Visual Studio
echo Determining VS version
@@ -10,19 +10,11 @@ echo Detected Visual Studio Version as %VS_VERSION%
REM Cleanup the file we used to collect the VS version output since it's no longer needed.
del /Q /F vsversion.tmp
-REM Determine if SVN exists, this is a requirement for LunarGLASS
-set SVN_EXE_FOUND=0
-for %%X in (svn.exe) do (set FOUND=%%~$PATH:X)
-if defined FOUND (
- set SVN_EXE_FOUND=1
-)
-
setlocal EnableDelayedExpansion
set errorCode=0
set BUILD_DIR=%~dp0
set BASE_DIR=%BUILD_DIR%..
set GLSLANG_DIR=%BASE_DIR%\glslang
-set LUNARGLASS_DIR=%BASE_DIR%\LunarGLASS
set SPIRV_TOOLS_DIR=%BASE_DIR%\spirv-tools
REM // ======== Parameter parsing ======== //
@@ -32,24 +24,18 @@ REM // ======== Parameter parsing ======== //
echo.
echo Available options:
echo --sync-glslang just pull glslang_revision
- echo --sync-LunarGLASS just pull LunarGLASS_revision
echo --sync-spirv-tools just pull spirv-tools_revision
echo --build-glslang pulls glslang_revision, configures CMake, builds Release and Debug
- echo --build-LunarGLASS pulls LunarGLASS_revision, configures CMake, builds Release and Debug
echo --build-spirv-tools pulls spirv-tools_revision, configures CMake, builds Release and Debug
echo --all sync and build glslang, LunarGLASS, spirv-tools
goto:finish
)
set sync-glslang=0
- set sync-LunarGLASS=0
set sync-spirv-tools=0
set build-glslang=0
- set build-LunarGLASS=0
set build-spirv-tools=0
set check-glslang-build-dependencies=0
- set check-LunarGLASS-fetch-dependencies=0
- set check-LunarGLASS-build-dependencies=0
:parameterLoop
@@ -61,13 +47,6 @@ REM // ======== Parameter parsing ======== //
goto:parameterLoop
)
- if "%1" == "--sync-LunarGLASS" (
- set sync-LunarGLASS=1
- set check-LunarGLASS-fetch-dependencies=1
- shift
- goto:parameterLoop
- )
-
if "%1" == "--sync-spirv-tools" (
set sync-spirv-tools=1
shift
@@ -82,15 +61,6 @@ REM // ======== Parameter parsing ======== //
goto:parameterLoop
)
- if "%1" == "--build-LunarGLASS" (
- set sync-LunarGLASS=1
- set check-LunarGLASS-fetch-dependencies=1
- set check-LunarGLASS-build-dependencies=1
- set build-LunarGLASS=1
- shift
- goto:parameterLoop
- )
-
if "%1" == "--build-spirv-tools" (
set sync-spirv-tools=1
REM glslang has the same needs as spirv-tools
@@ -106,14 +76,6 @@ REM // ======== Parameter parsing ======== //
set build-glslang=1
set build-spirv-tools=1
set check-glslang-build-dependencies=1
-
- REM Only attempt to build LunarGLASS if we find SVN
- if %SVN_EXE_FOUND% equ 1 (
- set sync-LunarGLASS=1
- set build-LunarGLASS=1
- set check-LunarGLASS-fetch-dependencies=1
- set check-LunarGLASS-build-dependencies=1
- )
shift
goto:parameterLoop
)
@@ -137,46 +99,6 @@ REM // ======== Dependency checking ======== //
set errorCode=1
)
- if %check-LunarGLASS-fetch-dependencies% equ 1 (
- if %SVN_EXE_FOUND% equ 0 (
- echo Dependency check failed:
- echo svn.exe not found
- echo Get Subversion for Windows here: http://sourceforge.net/projects/win32svn/
- echo Install and ensure svn.exe makes it into your PATH, default is "C:\Program Files (x86)\Subversion\bin"
- set errorCode=1
- )
-
- for %%X in (wget.exe) do (set FOUND=%%~$PATH:X)
- if not defined FOUND (
- echo Dependency check failed:
- echo wget.exe not found
- echo Get wget for Windows here: http://gnuwin32.sourceforge.net/packages/wget.htm
- echo Easiest to select "Complete package, except sources" link which will install and setup PATH
- echo Install and ensure each makes it into your PATH, default is "C:\Program Files (x86)\GnuWin32\bin"
- set errorCode=1
- )
-
- for %%X in (gzip.exe) do (set FOUND=%%~$PATH:X)
- if not defined FOUND (
- echo Dependency check failed:
- echo gzip.exe not found
- echo Get gzip for Windows here: http://gnuwin32.sourceforge.net/packages/gzip.htm
- echo Easiest to select "Complete package, except sources" link which will install and setup PATH
- echo Install and ensure each makes it into your PATH, default is "C:\Program Files (x86)\GnuWin32\bin"
- set errorCode=1
- )
-
- for %%X in (tar.exe) do (set FOUND=%%~$PATH:X)
- if not defined FOUND (
- echo Dependency check failed:
- echo tar.exe not found
- echo Get tar for Windows here: http://gnuwin32.sourceforge.net/packages/gtar.htm
- echo Easiest to select Binaries/Setup link which will install and setup PATH
- echo Install and ensure each makes it into your PATH, default is "C:\Program Files (x86)\GnuWin32\bin"
- set errorCode=1
- )
- )
-
if %check-glslang-build-dependencies% equ 1 (
for %%X in (cmake.exe) do (set FOUND=%%~$PATH:X)
if not defined FOUND (
@@ -188,25 +110,6 @@ REM // ======== Dependency checking ======== //
)
)
- if %check-LunarGLASS-build-dependencies% equ 1 (
- for %%X in (python.exe) do (set FOUND=%%~$PATH:X)
- if not defined FOUND (
- echo Dependency check failed:
- echo python.exe not found
- echo Get python 2.7x for Windows here: http://www.python.org/download/releases/2.7.6/
- echo Install and ensure each makes it into your PATH, default is "C:\Python27"
- set errorCode=1
- )
-
- for %%X in (cmake.exe) do (set FOUND=%%~$PATH:X)
- if not defined FOUND (
- echo Dependency check failed:
- echo cmake.exe not found
-            echo Get CMake 2.8 for Windows here: http://www.cmake.org/cmake/resources/software.html
- echo Install and ensure each makes it into your PATH, default is "C:\Program Files (x86)\CMake\bin"
- set errorCode=1
- )
- )
REM goto:main
@@ -217,12 +120,6 @@ REM // ======== end Dependency checking ======== //
if %errorCode% neq 0 (goto:error)
REM Read the target versions from external file, which is shared with Linux script
-if not exist LunarGLASS_revision (
- echo.
- echo Missing LunarGLASS_revision file! Place it next to this script with target version in it.
- set errorCode=1
- goto:error
-)
if not exist glslang_revision (
echo.
@@ -238,17 +135,13 @@ if not exist spirv-tools_revision (
goto:error
)
-set /p LUNARGLASS_REVISION= < LunarGLASS_revision
set /p GLSLANG_REVISION= < glslang_revision
set /p SPIRV_TOOLS_REVISION= < spirv-tools_revision
-echo LUNARGLASS_REVISION=%LUNARGLASS_REVISION%
echo GLSLANG_REVISION=%GLSLANG_REVISION%
echo SPIRV_TOOLS_REVISION=%SPIRV_TOOLS_REVISION%
-set /p LUNARGLASS_REVISION_R32= < LunarGLASS_revision_R32
-echo LUNARGLASS_REVISION_R32=%LUNARGLASS_REVISION_R32%
-echo Creating and/or updating glslang, LunarGLASS, spirv-tools in %BASE_DIR%
+echo Creating and/or updating glslang, spirv-tools in %BASE_DIR%
if %sync-glslang% equ 1 (
if exist %GLSLANG_DIR% (
@@ -262,18 +155,6 @@ if %sync-glslang% equ 1 (
if %errorCode% neq 0 (goto:error)
)
-if %sync-LunarGLASS% equ 1 (
- if exist %LUNARGLASS_DIR% (
- rd /S /Q %LUNARGLASS_DIR%
- )
- if not exist %LUNARGLASS_DIR% (
- call:create_LunarGLASS
- )
- if %errorCode% neq 0 (goto:error)
- call:update_LunarGLASS
- if %errorCode% neq 0 (goto:error)
-)
-
if %sync-spirv-tools% equ 1 (
if exist %SPIRV_TOOLS_DIR% (
rd /S /Q %SPIRV_TOOLS_DIR%
@@ -292,11 +173,6 @@ if %build-glslang% equ 1 (
if %errorCode% neq 0 (goto:error)
)
-if %build-LunarGLASS% equ 1 (
- call:build_LunarGLASS
- if %errorCode% neq 0 (goto:error)
-)
-
if %build-spirv-tools% equ 1 (
call:build_spirv-tools
if %errorCode% neq 0 (goto:error)
@@ -341,64 +217,6 @@ goto:eof
git checkout %GLSLANG_REVISION%
goto:eof
-:create_LunarGLASS
- REM Windows complains if it can't find the directory below, no need to call
- REM rd /S /Q %LUNARGLASS_DIR%
- echo.
- echo Creating local LunarGLASS repository %LUNARGLASS_DIR%)
- mkdir %LUNARGLASS_DIR%
- cd %LUNARGLASS_DIR%
- git clone https://github.com/LunarG/LunarGLASS.git .
- git checkout %LUNARGLASS_REVISION%
- cd Core\LLVM
- echo.
- echo Downloading LLVM archive...
- wget http://llvm.org/releases/3.4/llvm-3.4.src.tar.gz
- REM tar on windows can't filter through gzip, so the below line doesn't work
- REM tar --gzip -xf llvm-3.4.src.tar.gz
- echo.
- echo Unzipping the archive...
- echo gzip --decompress --verbose --keep llvm-3.4.src.tar.gz
- gzip --decompress --verbose --keep llvm-3.4.src.tar.gz
- echo.
- echo Extracting the archive... (this is slow)
- echo tar -xf llvm-3.4.src.tar
- tar -xf llvm-3.4.src.tar
- if not exist %LUNARGLASS_DIR%\Core\LLVM\llvm-3.4\lib (
- echo .
- echo LLVM source download failed!
- echo Delete LunarGLASS directory and try again
- set errorCode=1
- goto:eof
- )
- echo.
- echo Syncing LunarGLASS source...
- cd %LUNARGLASS_DIR%
- REM put back the LunarGLASS github versions of some LLVM files
- git checkout -f .
- REM overwrite with private gitlab versions of some files
- svn checkout -r %LUNARGLASS_REVISION_R32% --force https://cvs.khronos.org/svn/repos/SPIRV/trunk/LunarGLASS/ .
- svn revert -R .
- if not exist %LUNARGLASS_DIR%\Frontends\SPIRV (
- echo.
- echo LunarGLASS source download failed!
- set errorCode=1
- )
-goto:eof
-
-:update_LunarGLASS
- echo.
- echo Updating %LUNARGLASS_DIR%
- cd %LUNARGLASS_DIR%
- git fetch --all
- git checkout -f %LUNARGLASS_REVISION% .
- if not exist %LUNARGLASS_DIR%\.svn (
- svn checkout -r %LUNARGLASS_REVISION_R32% --force https://cvs.khronos.org/svn/repos/SPIRV/trunk/LunarGLASS/ .
- )
-   svn update -r %LUNARGLASS_REVISION_R32%
- svn revert -R .
-goto:eof
-
:create_spirv-tools
echo.
echo Creating local spirv-tools repository %SPIRV_TOOLS_DIR%)
@@ -492,163 +310,6 @@ goto:eof
)
goto:eof
-:build_LunarGLASS
- echo.
- echo Building %LUNARGLASS_DIR%
- set LLVM_DIR=%LUNARGLASS_DIR%\Core\LLVM\llvm-3.4
- cd %LLVM_DIR%
-
- REM Cleanup any old directories lying around.
- if exist build32 (
- rmdir /s /q build32
- )
- if exist build (
- rmdir /s /q build
- )
-
- echo Making 32-bit LLVM
- echo *************************
- mkdir build32
- set LLVM_BUILD_DIR=%LLVM_DIR%\build32
- cd %LLVM_BUILD_DIR%
-
- echo Generating 32-bit LLVM CMake files for Visual Studio %VS_VERSION% -DCMAKE_INSTALL_PREFIX=install ..
- cmake -G "Visual Studio %VS_VERSION%" -DCMAKE_INSTALL_PREFIX=install ..
-
- echo Building 32-bit LLVM: MSBuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Release
- msbuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Release /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- if not exist %LLVM_BUILD_DIR%\lib\Release\LLVMCore.lib (
- echo.
- echo LLVM 32-bit Release build failed!
- set errorCode=1
- goto:eof
- )
- REM disable Debug build of LLVM until LunarGLASS cmake files are updated to
- REM handle Debug and Release builds of glslang simultaneously, instead of
- REM whatever last lands in "./build32/install"
- REM echo Building 32-bit LLVM: MSBuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Debug
- REM msbuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Debug /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- REM if not exist %LLVM_BUILD_DIR%\lib\Debug\LLVMCore.lib (
- REM echo.
- REM echo LLVM 32-bit Debug build failed!
- REM set errorCode=1
- REM goto:eof
- REM )
-
- cd ..
-
- echo Making 64-bit LLVM
- echo *************************
- mkdir build
- set LLVM_BUILD_DIR=%LLVM_DIR%\build
- cd %LLVM_BUILD_DIR%
-
- echo Generating 64-bit LLVM CMake files for Visual Studio %VS_VERSION% -DCMAKE_INSTALL_PREFIX=install ..
- cmake -G "Visual Studio %VS_VERSION% Win64" -DCMAKE_INSTALL_PREFIX=install ..
-
- echo Building 64-bit LLVM: MSBuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Release
- msbuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Release /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- if not exist %LLVM_BUILD_DIR%\lib\Release\LLVMCore.lib (
- echo.
- echo LLVM 64-bit Release build failed!
- set errorCode=1
- goto:eof
- )
- REM disable Debug build of LLVM until LunarGLASS cmake files are updated to
- REM handle Debug and Release builds of glslang simultaneously, instead of
- REM whatever last lands in "./build/install"
- REM echo Building 64-bit LLVM: MSBuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Debug
- REM msbuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Debug /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- REM if not exist %LLVM_BUILD_DIR%\lib\Debug\LLVMCore.lib (
- REM echo.
- REM echo LLVM 64-bit Debug build failed!
- REM set errorCode=1
- REM goto:eof
- REM )
-
- cd %LUNARGLASS_DIR%
-
- REM Cleanup any old directories lying around.
- if exist build32 (
- rmdir /s /q build32
- )
- if exist build (
- rmdir /s /q build
- )
-
- echo Making 32-bit LunarGLASS
- echo *************************
- mkdir build32
- set LUNARGLASS_BUILD_DIR=%LUNARGLASS_DIR%\build32
- cd %LUNARGLASS_BUILD_DIR%
-
- echo Generating 32-bit LunarGlass CMake files for Visual Studio %VS_VERSION% -DCMAKE_INSTALL_PREFIX=install ..
- cmake -G "Visual Studio %VS_VERSION%" -DCMAKE_INSTALL_PREFIX=install ..
-
- echo Building 32-bit LunarGlass: MSBuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Release
- msbuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Release /verbosity:quiet
-
- REM Check for existence of one lib, even though we should check for all results
- if not exist %LUNARGLASS_BUILD_DIR%\Core\Release\core.lib (
- echo.
- echo LunarGLASS 32-bit Release build failed!
- set errorCode=1
- goto:eof
- )
-
- REM disable Debug build of LunarGLASS until its cmake file can be updated to
- REM handle Debug and Release builds of glslang simultaneously, instead of
- REM whatever last lands in "./build/install"
- REM echo Building 32-bit LunarGlass: MSBuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Debug
- REM msbuild INSTALL.vcxproj /p:Platform=x86 /p:Configuration=Debug /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- REM if not exist %LUNARGLASS_BUILD_DIR%\Core\Debug\core.lib (
- REM echo.
- REM echo LunarGLASS 32-bit Debug build failed!
- REM set errorCode=1
- REM goto:eof
- REM )
-
- cd ..
-
- echo Making 64-bit LunarGLASS
- echo *************************
- mkdir build
- set LUNARGLASS_BUILD_DIR=%LUNARGLASS_DIR%\build
- cd %LUNARGLASS_BUILD_DIR%
-
- echo Generating 64-bit LunarGlass CMake files for Visual Studio %VS_VERSION% -DCMAKE_INSTALL_PREFIX=install ..
- cmake -G "Visual Studio %VS_VERSION% Win64" -DCMAKE_INSTALL_PREFIX=install ..
-
- echo Building 64-bit LunarGlass: MSBuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Release
- msbuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Release /verbosity:quiet
-
- REM Check for existence of one lib, even though we should check for all results
- if not exist %LUNARGLASS_BUILD_DIR%\Core\Release\core.lib (
- echo.
- echo LunarGLASS 64-bit Release build failed!
- set errorCode=1
- goto:eof
- )
-
- REM disable Debug build of LunarGLASS until its cmake file can be updated to
- REM handle Debug and Release builds of glslang simultaneously, instead of
- REM whatever last lands in "./build/install"
- REM echo Building 64-bit LunarGlass: MSBuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Debug
- REM msbuild INSTALL.vcxproj /p:Platform=x64 /p:Configuration=Debug /verbosity:quiet
- REM Check for existence of one lib, even though we should check for all results
- REM if not exist %LUNARGLASS_BUILD_DIR%\Core\Debug\core.lib (
- REM echo.
- REM echo LunarGLASS 64-bit Debug build failed!
- REM set errorCode=1
- REM goto:eof
- REM )
-goto:eof
-
:build_spirv-tools
echo.
echo Building %SPIRV_TOOLS_DIR%
diff --git a/update_external_sources.sh b/update_external_sources.sh
index 2464513b7..d3702ec95 100755
--- a/update_external_sources.sh
+++ b/update_external_sources.sh
@@ -3,16 +3,11 @@
set -e
-LUNARGLASS_REVISION=$(cat $PWD/LunarGLASS_revision)
GLSLANG_REVISION=$(cat $PWD/glslang_revision)
SPIRV_TOOLS_REVISION=$(cat $PWD/spirv-tools_revision)
-echo "LUNARGLASS_REVISION=$LUNARGLASS_REVISION"
echo "GLSLANG_REVISION=$GLSLANG_REVISION"
echo "SPIRV_TOOLS_REVISION=$SPIRV_TOOLS_REVISION"
-LUNARGLASS_REVISION_R32=$(cat $PWD/LunarGLASS_revision_R32)
-echo "LUNARGLASS_REVISION_R32=$LUNARGLASS_REVISION_R32"
-
BUILDDIR=$PWD
BASEDIR=$BUILDDIR/..
@@ -32,41 +27,6 @@ function update_glslang () {
git checkout $GLSLANG_REVISION
}
-function create_LunarGLASS () {
- rm -rf $BASEDIR/LunarGLASS
- echo "Creating local LunarGLASS repository ($BASEDIR/LunarGLASS)."
- mkdir -p $BASEDIR/LunarGLASS
- cd $BASEDIR/LunarGLASS
- git clone https://github.com/LunarG/LunarGLASS.git .
- mkdir -p Core/LLVM
- cd Core/LLVM
- wget http://llvm.org/releases/3.4/llvm-3.4.src.tar.gz
- tar --gzip -xf llvm-3.4.src.tar.gz
- git checkout -f . # put back the LunarGLASS versions of some LLVM files
- git checkout $LUNARGLASS_REVISION
- svn checkout -r $LUNARGLASS_REVISION_R32 --force https://cvs.khronos.org/svn/repos/SPIRV/trunk/LunarGLASS/ .
- svn revert -R .
-}
-
-function update_LunarGLASS () {
- echo "Updating $BASEDIR/LunarGLASS"
- cd $BASEDIR/LunarGLASS
- git fetch
- git checkout -f .
- git checkout $LUNARGLASS_REVISION
- # Figure out how to do this with git
- #git checkout $LUNARGLASS_REVISION |& tee gitout
- #if grep --quiet LLVM gitout ; then
- # rm -rf $BASEDIR/LunarGLASS/Core/LLVM/llvm-3.4/build
- #fi
- #rm -rf gitout
- if [ ! -d "$BASEDIR/LunarGLASS/.svn" ]; then
- svn checkout -r $LUNARGLASS_REVISION_R32 --force https://cvs.khronos.org/svn/repos/SPIRV/trunk/LunarGLASS/ .
- fi
- svn update -r $LUNARGLASS_REVISION_R32
- svn revert -R .
-}
-
function create_spirv-tools () {
rm -rf $BASEDIR/spirv-tools
echo "Creating local spirv-tools repository ($BASEDIR/spirv-tools)."
@@ -94,24 +54,6 @@ function build_glslang () {
make install
}
-function build_LunarGLASS () {
- echo "Building $BASEDIR/LunarGLASS"
- cd $BASEDIR/LunarGLASS/Core/LLVM/llvm-3.4
- if [ ! -d "$BASEDIR/LunarGLASS/Core/LLVM/llvm-3.4/build" ]; then
- mkdir -p build
- cd build
- ../configure --enable-terminfo=no --enable-curses=no
- REQUIRES_RTTI=1 make -j $(nproc) && make install DESTDIR=`pwd`/install
- fi
- cd $BASEDIR/LunarGLASS
- mkdir -p build
- cd build
- cmake -D CMAKE_BUILD_TYPE=Release ..
- cmake -D CMAKE_BUILD_TYPE=Release ..
- make
- make install
-}
-
function build_spirv-tools () {
echo "Building $BASEDIR/spirv-tools"
cd $BASEDIR/spirv-tools
@@ -124,13 +66,11 @@ function build_spirv-tools () {
# If any options are provided, just compile those tools
# If no options are provided, build everything
INCLUDE_GLSLANG=false
-INCLUDE_LUNARGLASS=false
INCLUDE_SPIRV_TOOLS=false
if [ "$#" == 0 ]; then
- echo "Building glslang, LunarGLASS, spirv-tools"
+ echo "Building glslang, spirv-tools"
INCLUDE_GLSLANG=true
- INCLUDE_LUNARGLASS=true
INCLUDE_SPIRV_TOOLS=true
else
# Parse options
@@ -144,11 +84,6 @@ else
INCLUDE_GLSLANG=true
echo "Building glslang ($option)"
;;
- # options to specify build of LunarGLASS components
- -l|--LunarGLASS)
- INCLUDE_LUNARGLASS=true
- echo "Building LunarGLASS ($option)"
- ;;
# options to specify build of spirv-tools components
-s|--spirv-tools)
INCLUDE_SPIRV_TOOLS=true
@@ -158,7 +93,6 @@ else
echo "Unrecognized option: $option"
echo "Try the following:"
echo " -g | --glslang # enable glslang"
- echo " -l | --LunarGLASS # enable LunarGLASS"
echo " -s | --spirv-tools # enable spirv-tools"
exit 1
;;
@@ -176,14 +110,6 @@ if [ $INCLUDE_GLSLANG == "true" ]; then
fi
-if [ $INCLUDE_LUNARGLASS == "true" ]; then
- if [ ! -d "$BASEDIR/LunarGLASS" -o ! -d "$BASEDIR/LunarGLASS/.git" ]; then
- create_LunarGLASS
- fi
- update_LunarGLASS
- build_LunarGLASS
-fi
-
if [ $INCLUDE_SPIRV_TOOLS == "true" ]; then
if [ ! -d "$BASEDIR/spirv-tools" -o ! -d "$BASEDIR/spirv-tools/.git" ]; then
create_spirv-tools
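Both the batch and shell scripts above follow the same per-component flow after the LunarGLASS removal: re-clone when the checkout is missing, always update to the pinned revision, and build only when the matching flag was passed. A condensed sketch (helper names are hypothetical):

```python
import os

def sync_component(base_dir, name, create, update, build, do_build=False):
    """Re-create the repo only when its .git directory is absent, then
    always update; building is opt-in, matching the -g/-s flags."""
    repo = os.path.join(base_dir, name)
    if not os.path.isdir(os.path.join(repo, ".git")):
        create(repo)
    update(repo)
    if do_build:
        build(repo)
    return repo
```

Dropping LunarGLASS removes the only component that needed SVN, wget, gzip, and tar on Windows, which is why the dependency checks disappear wholesale.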
diff --git a/vk-generate.py b/vk-generate.py
index b255d7f32..6f478a6f9 100755
--- a/vk-generate.py
+++ b/vk-generate.py
@@ -27,6 +27,7 @@
# Author: Chia-I Wu <olv@lunarg.com>
# Author: Courtney Goeltzenleuchter <courtney@LunarG.com>
# Author: Jon Ashburn <jon@lunarg.com>
+# Author: Gwan-gyeong Mun <kk.moon@samsung.com>
import sys
@@ -241,18 +242,26 @@ class WinDefFileSubcommand(Subcommand):
return "\n".join(body)
def main():
+ wsi = {
+ "Win32",
+ "Android",
+ "Xcb",
+ "Xlib",
+ "Wayland",
+ "Mir"
+ }
subcommands = {
"dispatch-table-ops": DispatchTableOpsSubcommand,
"win-def-file": WinDefFileSubcommand,
}
- if len(sys.argv) < 2 or sys.argv[1] not in subcommands:
- print("Usage: %s <subcommand> [options]" % sys.argv[0])
+ if len(sys.argv) < 3 or sys.argv[1] not in wsi or sys.argv[2] not in subcommands:
+ print("Usage: %s <wsi> <subcommand> [options]" % sys.argv[0])
print
     print("Available subcommands are: %s" % " ".join(subcommands))
exit(1)
- subcmd = subcommands[sys.argv[1]](sys.argv[2:])
+ subcmd = subcommands[sys.argv[2]](sys.argv[3:])
subcmd.run()
if __name__ == "__main__":
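The `vk-generate.py` change above inserts a WSI platform name as the first positional argument, shifting the subcommand and its options one slot to the right. A minimal sketch of the new dispatch (the lambda subcommand below is a stand-in, not the real `Subcommand` classes):

```python
WSI_PLATFORMS = {"Win32", "Android", "Xcb", "Xlib", "Wayland", "Mir"}

def dispatch(argv, subcommands):
    """argv: [prog, wsi, subcommand, options...]. Returns the constructed
    subcommand, or None on a usage error (the caller prints usage and exits)."""
    if len(argv) < 3 or argv[1] not in WSI_PLATFORMS or argv[2] not in subcommands:
        return None
    return subcommands[argv[2]](argv[3:])
```

The generators themselves pick up the platform via `sys.argv[1]`, as the `self.wsi = sys.argv[1]` addition in `vk-layer-generate.py` shows.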
diff --git a/vk-layer-generate.py b/vk-layer-generate.py
index d03029d3a..3e25a8030 100755
--- a/vk-layer-generate.py
+++ b/vk-layer-generate.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/env python3
#
# VK
#
@@ -33,6 +33,7 @@
# Author: Mike Stroyan <stroyan@google.com>
# Author: Tony Barbour <tony@LunarG.com>
# Author: Chia-I Wu <olv@google.com>
+# Author: Gwan-gyeong Mun <kk.moon@samsung.com>
import sys
import os
@@ -177,6 +178,7 @@ class Subcommand(object):
self.no_addr = False
self.layer_name = ""
self.lineinfo = sourcelineinfo()
+ self.wsi = sys.argv[1]
def run(self):
print(self.generate())
@@ -304,7 +306,7 @@ class Subcommand(object):
r_body.append(' VkDebugReportCallbackEXT* pCallback)')
r_body.append('{')
# Switch to this code section for the new per-instance storage and debug callbacks
- if self.layer_name in ['object_tracker', 'threading', 'unique_objects']:
+ if self.layer_name in ['object_tracker', 'unique_objects']:
r_body.append(' VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(%s_instance_table_map, instance);' % self.layer_name )
r_body.append(' VkResult result = pInstanceTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pCallback);')
r_body.append(' if (VK_SUCCESS == result) {')
@@ -331,7 +333,7 @@ class Subcommand(object):
r_body.append('VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT msgCallback, const VkAllocationCallbacks *pAllocator)')
r_body.append('{')
# Switch to this code section for the new per-instance storage and debug callbacks
- if self.layer_name in ['object_tracker', 'threading', 'unique_objects']:
+ if self.layer_name in ['object_tracker', 'unique_objects']:
r_body.append(' VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(%s_instance_table_map, instance);' % self.layer_name )
else:
r_body.append(' VkLayerInstanceDispatchTable *pInstanceTable = instance_dispatch_table(instance);')
@@ -347,7 +349,7 @@ class Subcommand(object):
r_body.append('VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg)')
r_body.append('{')
# Switch to this code section for the new per-instance storage and debug callbacks
- if self.layer_name == 'object_tracker' or self.layer_name == 'threading':
+ if self.layer_name == 'object_tracker':
r_body.append(' VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(%s_instance_table_map, instance);' % self.layer_name )
else:
r_body.append(' VkLayerInstanceDispatchTable *pInstanceTable = instance_dispatch_table(instance);')
@@ -361,7 +363,7 @@ class Subcommand(object):
ggep_body.append('%s' % self.lineinfo.get())
ggep_body.append('')
- if self.layer_name == 'object_tracker' or self.layer_name == 'threading':
+ if self.layer_name == 'object_tracker':
ggep_body.append('static const VkExtensionProperties instance_extensions[] = {')
ggep_body.append(' {')
ggep_body.append(' VK_EXT_DEBUG_REPORT_EXTENSION_NAME,')
@@ -370,7 +372,7 @@ class Subcommand(object):
ggep_body.append('};')
ggep_body.append('VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties* pProperties)')
ggep_body.append('{')
- if self.layer_name == 'object_tracker' or self.layer_name == 'threading':
+ if self.layer_name == 'object_tracker':
ggep_body.append(' return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);')
else:
ggep_body.append(' return util_GetExtensionProperties(0, NULL, pCount, pProperties);')
@@ -384,14 +386,14 @@ class Subcommand(object):
ggep_body.append('%s' % self.lineinfo.get())
ggep_body.append('static const VkLayerProperties globalLayerProps[] = {')
ggep_body.append(' {')
- if self.layer_name in ['threading', 'unique_objects']:
+ if self.layer_name in ['unique_objects']:
ggep_body.append(' "VK_LAYER_GOOGLE_%s",' % layer)
- ggep_body.append(' VK_API_VERSION, // specVersion')
+ ggep_body.append(' VK_LAYER_API_VERSION, // specVersion')
ggep_body.append(' 1, // implementationVersion')
ggep_body.append(' "Google Validation Layer"')
else:
ggep_body.append(' "VK_LAYER_LUNARG_%s",' % layer)
- ggep_body.append(' VK_API_VERSION, // specVersion')
+ ggep_body.append(' VK_LAYER_API_VERSION, // specVersion')
ggep_body.append(' 1, // implementationVersion')
ggep_body.append(' "LunarG Validation Layer"')
ggep_body.append(' }')
@@ -410,14 +412,14 @@ class Subcommand(object):
gpdlp_body.append('%s' % self.lineinfo.get())
gpdlp_body.append('static const VkLayerProperties deviceLayerProps[] = {')
gpdlp_body.append(' {')
- if self.layer_name in ['threading', 'unique_objects']:
+ if self.layer_name in ['unique_objects']:
gpdlp_body.append(' "VK_LAYER_GOOGLE_%s",' % layer)
- gpdlp_body.append(' VK_API_VERSION, // specVersion')
+ gpdlp_body.append(' VK_LAYER_API_VERSION, // specVersion')
gpdlp_body.append(' 1, // implementationVersion')
gpdlp_body.append(' "Google Validation Layer"')
else:
gpdlp_body.append(' "VK_LAYER_LUNARG_%s",' % layer)
- gpdlp_body.append(' VK_API_VERSION, // specVersion')
+ gpdlp_body.append(' VK_LAYER_API_VERSION, // specVersion')
gpdlp_body.append(' 1, // implementationVersion')
gpdlp_body.append(' "LunarG Validation Layer"')
gpdlp_body.append(' }')
@@ -522,7 +524,7 @@ class Subcommand(object):
#
# New style of GPA Functions for the new layer_data/layer_logging changes
#
- if self.layer_name in ['object_tracker', 'threading', 'unique_objects']:
+ if self.layer_name in ['object_tracker', 'unique_objects']:
func_body.append("VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char* funcName)\n"
"{\n"
" PFN_vkVoidFunction addr;\n"
@@ -700,70 +702,8 @@ class Subcommand(object):
'{\n' % self.layer_name)
if init_opts:
func_body.append('%s' % self.lineinfo.get())
- func_body.append(' uint32_t report_flags = 0;')
- func_body.append(' uint32_t debug_action = 0;')
- func_body.append(' FILE *log_output = NULL;')
- func_body.append(' const char *option_str;\n')
- func_body.append(' // initialize %s options' % self.layer_name)
- func_body.append(' report_flags = getLayerOptionFlags("%sReportFlags", 0);' % self.layer_name)
- func_body.append(' getLayerOptionEnum("%sDebugAction", (uint32_t *) &debug_action);' % self.layer_name)
func_body.append('')
- func_body.append(' if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)')
- func_body.append(' {')
- func_body.append(' option_str = getLayerOption("%sLogFilename");' % self.layer_name)
- func_body.append(' log_output = getLayerLogOutput(option_str,"%s");' % self.layer_name)
- func_body.append(' VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;')
- func_body.append(' memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));')
- func_body.append(' dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;')
- func_body.append(' dbgCreateInfo.flags = report_flags;')
- func_body.append(' dbgCreateInfo.pfnCallback = log_callback;')
- func_body.append(' dbgCreateInfo.pUserData = NULL;')
- func_body.append(' layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator,')
- func_body.append(' &my_data->logging_callback);')
- func_body.append(' }')
- func_body.append('')
- if lockname is not None:
- func_body.append('%s' % self.lineinfo.get())
- func_body.append(" if (!%sLockInitialized)" % lockname)
- func_body.append(" {")
- func_body.append(" // TODO/TBD: Need to delete this mutex sometime. How???")
- func_body.append(" loader_platform_thread_create_mutex(&%sLock);" % lockname)
- if condname is not None:
- func_body.append(" loader_platform_thread_init_cond(&%sCond);" % condname)
- func_body.append(" %sLockInitialized = 1;" % lockname)
- func_body.append(" }")
- func_body.append("}\n")
- func_body.append('')
- return "\n".join(func_body)
-
- def _generate_new_layer_initialization(self, init_opts=False, prefix='vk', lockname=None, condname=None):
- func_body = ["#include \"vk_dispatch_table_helper.h\""]
- func_body.append('%s' % self.lineinfo.get())
- func_body.append('static void init_%s(layer_data *my_data, const VkAllocationCallbacks *pAllocator)\n'
- '{\n' % self.layer_name)
- if init_opts:
- func_body.append('%s' % self.lineinfo.get())
- func_body.append(' uint32_t report_flags = 0;')
- func_body.append(' uint32_t debug_action = 0;')
- func_body.append(' FILE *log_output = NULL;')
- func_body.append(' const char *strOpt;')
- func_body.append(' // initialize %s options' % self.layer_name)
- func_body.append(' report_flags = getLayerOptionFlags("%sReportFlags", 0);' % self.layer_name)
- func_body.append(' getLayerOptionEnum("%sDebugAction", (uint32_t *) &debug_action);' % self.layer_name)
- func_body.append('')
- func_body.append(' if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG)')
- func_body.append(' {')
- func_body.append(' strOpt = getLayerOption("%sLogFilename");' % self.layer_name)
- func_body.append(' log_output = getLayerLogOutput(strOpt, "%s");' % self.layer_name)
- func_body.append(' VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;')
- func_body.append(' memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo));')
- func_body.append(' dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;')
- func_body.append(' dbgCreateInfo.flags = report_flags;')
- func_body.append(' dbgCreateInfo.pfnCallback = log_callback;')
- func_body.append(' dbgCreateInfo.pUserData = log_output;')
- func_body.append(' layer_create_msg_callback(my_data->report_data, &dbgCreateInfo, pAllocator,')
- func_body.append(' &my_data->logging_callback);')
- func_body.append(' }')
+ func_body.append(' layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_%s");' % self.layer_name)
func_body.append('')
if lockname is not None:
func_body.append('%s' % self.lineinfo.get())
@@ -854,6 +794,7 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append(' (uint64_t)(vkObj));')
procs_txt.append('')
procs_txt.append(' OBJTRACK_NODE* pNewObjNode = new OBJTRACK_NODE;')
+ procs_txt.append(' pNewObjNode->belongsTo = (uint64_t)dispatchable_object;')
procs_txt.append(' pNewObjNode->objType = objType;')
procs_txt.append(' pNewObjNode->status = OBJSTATUS_NONE;')
procs_txt.append(' pNewObjNode->vkObj = (uint64_t)(vkObj);')
@@ -870,8 +811,9 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append('static void destroy_%s(VkDevice dispatchable_object, %s object)' % (name, o))
procs_txt.append('{')
procs_txt.append(' uint64_t object_handle = (uint64_t)(object);')
- procs_txt.append(' if (%sMap.find(object_handle) != %sMap.end()) {' % (o, o))
- procs_txt.append(' OBJTRACK_NODE* pNode = %sMap[(uint64_t)object];' % (o))
+ procs_txt.append(' auto it = %sMap.find(object_handle);' % o)
+ procs_txt.append(' if (it != %sMap.end()) {' % o)
+ procs_txt.append(' OBJTRACK_NODE* pNode = it->second;')
procs_txt.append(' uint32_t objIndex = objTypeToIndex(pNode->objType);')
procs_txt.append(' assert(numTotalObjs > 0);')
procs_txt.append(' numTotalObjs--;')
@@ -882,7 +824,7 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append(' string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(object), numTotalObjs, numObjs[objIndex],')
procs_txt.append(' string_VkDebugReportObjectTypeEXT(pNode->objType));')
procs_txt.append(' delete pNode;')
- procs_txt.append(' %sMap.erase(object_handle);' % (o))
+ procs_txt.append(' %sMap.erase(it);' % (o))
procs_txt.append(' } else {')
procs_txt.append(' log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT ) 0, object_handle, __LINE__, OBJTRACK_NONE, "OBJTRACK",')
procs_txt.append(' "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",')
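The hunks above replace a double map lookup in the generated C++ (`find()` followed by `operator[]` on the same key) with a single `find()` whose iterator is reused, and `erase()` is then called on that iterator rather than re-hashing the key. A minimal Python sketch of the same single-lookup idea, using `dict.get()` (the names `object_map` and `destroy_object` are illustrative, not from the generator):

```python
def destroy_object(object_map, handle):
    """Remove a tracked object, looking the handle up only once.

    Returns True if the handle was tracked, False otherwise.
    """
    node = object_map.get(handle)  # one lookup instead of `in` check + indexing
    if node is None:
        return False
    del object_map[handle]
    return True
```

In the generated C++ the equivalent is caching `auto it = Map.find(object_handle)` and then using `it->second` and `Map.erase(it)`, which avoids hashing and searching for the same key twice.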
@@ -898,9 +840,9 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append('{')
procs_txt.append(' if (object != VK_NULL_HANDLE) {')
procs_txt.append(' uint64_t object_handle = (uint64_t)(object);')
- procs_txt.append(' if (%sMap.find(object_handle) != %sMap.end()) {' % (o, o))
- procs_txt.append(' OBJTRACK_NODE* pNode = %sMap[object_handle];' % (o))
- procs_txt.append(' pNode->status |= status_flag;')
+ procs_txt.append(' auto it = %sMap.find(object_handle);' % o)
+ procs_txt.append(' if (it != %sMap.end()) {' % o)
+ procs_txt.append(' it->second->status |= status_flag;')
procs_txt.append(' }')
procs_txt.append(' else {')
procs_txt.append(' // If we do not find it print an error')
@@ -926,8 +868,9 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append(' const char *fail_msg)')
procs_txt.append('{')
procs_txt.append(' uint64_t object_handle = (uint64_t)(object);')
- procs_txt.append(' if (%sMap.find(object_handle) != %sMap.end()) {' % (o, o))
- procs_txt.append(' OBJTRACK_NODE* pNode = %sMap[object_handle];' % (o))
+ procs_txt.append(' auto it = %sMap.find(object_handle);' % o)
+ procs_txt.append(' if (it != %sMap.end()) {' % o)
+ procs_txt.append(' OBJTRACK_NODE* pNode = it->second;')
procs_txt.append(' if ((pNode->status & status_mask) != status_flag) {')
procs_txt.append(' log_msg(mdd(dispatchable_object), msg_flags, pNode->objType, object_handle, __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK",')
procs_txt.append(' "OBJECT VALIDATION WARNING: %s object 0x%" PRIxLEAST64 ": %s", string_VkDebugReportObjectTypeEXT(objType),')
@@ -952,9 +895,9 @@ class ObjectTrackerSubcommand(Subcommand):
procs_txt.append('static VkBool32 reset_%s_status(VkDevice dispatchable_object, %s object, VkDebugReportObjectTypeEXT objType, ObjectStatusFlags status_flag)' % (name, o))
procs_txt.append('{')
procs_txt.append(' uint64_t object_handle = (uint64_t)(object);')
- procs_txt.append(' if (%sMap.find(object_handle) != %sMap.end()) {' % (o, o))
- procs_txt.append(' OBJTRACK_NODE* pNode = %sMap[object_handle];' % (o))
- procs_txt.append(' pNode->status &= ~status_flag;')
+ procs_txt.append(' auto it = %sMap.find(object_handle);' % o)
+ procs_txt.append(' if (it != %sMap.end()) {' % o)
+ procs_txt.append(' it->second->status &= ~status_flag;')
procs_txt.append(' }')
procs_txt.append(' else {')
procs_txt.append(' // If we do not find it print an error')
@@ -1024,25 +967,43 @@ class ObjectTrackerSubcommand(Subcommand):
gedi_txt.append('')
gedi_txt.append(' destroy_instance(instance, instance);')
gedi_txt.append(' // Report any remaining objects in LL')
+ gedi_txt.append('')
+ gedi_txt.append(' for (auto iit = VkDeviceMap.begin(); iit != VkDeviceMap.end();) {')
+ gedi_txt.append(' OBJTRACK_NODE* pNode = iit->second;')
+ gedi_txt.append(' if (pNode->belongsTo == (uint64_t)instance) {')
+ gedi_txt.append(' log_msg(mid(instance), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, pNode->vkObj, __LINE__, OBJTRACK_OBJECT_LEAK, "OBJTRACK",')
+ gedi_txt.append(' "OBJ ERROR : %s object 0x%" PRIxLEAST64 " has not been destroyed.", string_VkDebugReportObjectTypeEXT(pNode->objType),')
+ gedi_txt.append(' pNode->vkObj);')
for o in vulkan.core.objects:
- if o in ['VkInstance', 'VkPhysicalDevice', 'VkQueue']:
+ if o in ['VkInstance', 'VkPhysicalDevice', 'VkQueue', 'VkDevice']:
continue
- gedi_txt.append(' for (auto it = %sMap.begin(); it != %sMap.end(); ++it) {' % (o, o))
- gedi_txt.append(' OBJTRACK_NODE* pNode = it->second;')
- gedi_txt.append(' log_msg(mid(instance), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, pNode->vkObj, __LINE__, OBJTRACK_OBJECT_LEAK, "OBJTRACK",')
- gedi_txt.append(' "OBJ ERROR : %s object 0x%" PRIxLEAST64 " has not been destroyed.", string_VkDebugReportObjectTypeEXT(pNode->objType),')
- gedi_txt.append(' pNode->vkObj);')
- gedi_txt.append(' }')
- gedi_txt.append(' %sMap.clear();' % (o))
- gedi_txt.append('')
+ gedi_txt.append(' for (auto idt = %sMap.begin(); idt != %sMap.end();) {' % (o, o))
+ gedi_txt.append(' OBJTRACK_NODE* pNode = idt->second;')
+ gedi_txt.append(' if (pNode->belongsTo == iit->first) {')
+ gedi_txt.append(' log_msg(mid(instance), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, pNode->vkObj, __LINE__, OBJTRACK_OBJECT_LEAK, "OBJTRACK",')
+ gedi_txt.append(' "OBJ ERROR : %s object 0x%" PRIxLEAST64 " has not been destroyed.", string_VkDebugReportObjectTypeEXT(pNode->objType),')
+ gedi_txt.append(' pNode->vkObj);')
+ gedi_txt.append(' %sMap.erase(idt++);' % o )
+ gedi_txt.append(' } else {')
+ gedi_txt.append(' ++idt;')
+ gedi_txt.append(' }')
+ gedi_txt.append(' }')
+ gedi_txt.append(' VkDeviceMap.erase(iit++);')
+ gedi_txt.append(' } else {')
+ gedi_txt.append(' ++iit;')
+ gedi_txt.append(' }')
+ gedi_txt.append(' }')
+ gedi_txt.append('')
gedi_txt.append(' dispatch_key key = get_dispatch_key(instance);')
gedi_txt.append(' VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(object_tracker_instance_table_map, instance);')
gedi_txt.append(' pInstanceTable->DestroyInstance(instance, pAllocator);')
gedi_txt.append('')
- gedi_txt.append(' // Clean up logging callback, if any')
gedi_txt.append(' layer_data *my_data = get_my_data_ptr(key, layer_data_map);')
- gedi_txt.append(' if (my_data->logging_callback) {')
- gedi_txt.append(' layer_destroy_msg_callback(my_data->report_data, my_data->logging_callback, pAllocator);')
+ gedi_txt.append(' // Clean up logging callback, if any')
+ gedi_txt.append(' while (my_data->logging_callback.size() > 0) {')
+ gedi_txt.append(' VkDebugReportCallbackEXT callback = my_data->logging_callback.back();')
+ gedi_txt.append(' layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);')
+ gedi_txt.append(' my_data->logging_callback.pop_back();')
gedi_txt.append(' }')
gedi_txt.append('')
gedi_txt.append(' layer_debug_report_destroy_instance(mid(instance));')
@@ -1072,18 +1033,22 @@ class ObjectTrackerSubcommand(Subcommand):
gedd_txt.append(' validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);')
gedd_txt.append('')
gedd_txt.append(' destroy_device(device, device);')
- gedd_txt.append(' // Report any remaining objects in LL')
+ gedd_txt.append(' // Report any remaining objects associated with this VkDevice object in LL')
for o in vulkan.core.objects:
# DescriptorSets and Command Buffers are destroyed through their pools, not explicitly
if o in ['VkInstance', 'VkPhysicalDevice', 'VkQueue', 'VkDevice', 'VkDescriptorSet', 'VkCommandBuffer']:
continue
- gedd_txt.append(' for (auto it = %sMap.begin(); it != %sMap.end(); ++it) {' % (o, o))
+ gedd_txt.append(' for (auto it = %sMap.begin(); it != %sMap.end();) {' % (o, o))
gedd_txt.append(' OBJTRACK_NODE* pNode = it->second;')
- gedd_txt.append(' log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, pNode->vkObj, __LINE__, OBJTRACK_OBJECT_LEAK, "OBJTRACK",')
- gedd_txt.append(' "OBJ ERROR : %s object 0x%" PRIxLEAST64 " has not been destroyed.", string_VkDebugReportObjectTypeEXT(pNode->objType),')
- gedd_txt.append(' pNode->vkObj);')
+ gedd_txt.append(' if (pNode->belongsTo == (uint64_t)device) {')
+ gedd_txt.append(' log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, pNode->vkObj, __LINE__, OBJTRACK_OBJECT_LEAK, "OBJTRACK",')
+ gedd_txt.append(' "OBJ ERROR : %s object 0x%" PRIxLEAST64 " has not been destroyed.", string_VkDebugReportObjectTypeEXT(pNode->objType),')
+ gedd_txt.append(' pNode->vkObj);')
+ gedd_txt.append(' %sMap.erase(it++);' % o )
+ gedd_txt.append(' } else {')
+ gedd_txt.append(' ++it;')
+ gedd_txt.append(' }')
gedd_txt.append(' }')
- gedd_txt.append(' %sMap.clear();' % (o))
gedd_txt.append('')
gedd_txt.append(" // Clean up Queue's MemRef Linked Lists")
gedd_txt.append(' destroyQueueMemRefLists();')
@@ -1397,33 +1362,45 @@ class ObjectTrackerSubcommand(Subcommand):
['vkCreateSwapchainKHR',
'vkDestroySwapchainKHR', 'vkGetSwapchainImagesKHR',
'vkAcquireNextImageKHR', 'vkQueuePresentKHR'])]
- if sys.platform.startswith('win32'):
+ if self.wsi == 'Win32':
instance_extensions=[('msg_callback_get_proc_addr', []),
('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
'vkGetPhysicalDeviceSurfacePresentModesKHR',
'vkCreateWin32SurfaceKHR',
'vkGetPhysicalDeviceWin32PresentationSupportKHR'])]
- elif sys.platform.startswith('linux'):
+ elif self.wsi == 'Android':
instance_extensions=[('msg_callback_get_proc_addr', []),
('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
'vkGetPhysicalDeviceSurfacePresentModesKHR',
- 'vkCreateXcbSurfaceKHR',
- 'vkCreateAndroidSurfaceKHR',
- 'vkGetPhysicalDeviceXcbPresentationSupportKHR'])]
- # TODO: Add cases for Mir, Wayland and Xlib
- else: # android
+ 'vkCreateAndroidSurfaceKHR'])]
+ elif self.wsi == 'Xcb' or self.wsi == 'Xlib' or self.wsi == 'Wayland' or self.wsi == 'Mir':
instance_extensions=[('msg_callback_get_proc_addr', []),
('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
- 'vkGetPhysicalDeviceSurfacePresentModesKHR'])]
+ 'vkGetPhysicalDeviceSurfacePresentModesKHR',
+ 'vkCreateXcbSurfaceKHR',
+ 'vkGetPhysicalDeviceXcbPresentationSupportKHR',
+ 'vkCreateXlibSurfaceKHR',
+ 'vkGetPhysicalDeviceXlibPresentationSupportKHR',
+ 'vkCreateWaylandSurfaceKHR',
+ 'vkGetPhysicalDeviceWaylandPresentationSupportKHR',
+ 'vkCreateMirSurfaceKHR',
+ 'vkGetPhysicalDeviceMirPresentationSupportKHR'])]
+ else:
+ print('Error: Undefined DisplayServer')
+ instance_extensions=[]
+
body = [self.generate_maps(),
self.generate_procs(),
self.generate_destroy_instance(),
@@ -1560,7 +1537,9 @@ class UniqueObjectsSubcommand(Subcommand):
'CreateGraphicsPipelines'
]
# TODO : This is hacky, need to make this a more general-purpose solution for all layers
- ifdef_dict = {'CreateXcbSurfaceKHR': 'VK_USE_PLATFORM_XCB_KHR', 'CreateAndroidSurfaceKHR': 'VK_USE_PLATFORM_ANDROID_KHR'}
+ ifdef_dict = {'CreateXcbSurfaceKHR': 'VK_USE_PLATFORM_XCB_KHR',
+ 'CreateAndroidSurfaceKHR': 'VK_USE_PLATFORM_ANDROID_KHR',
+ 'CreateWin32SurfaceKHR': 'VK_USE_PLATFORM_WIN32_KHR'}
# Give special treatment to create functions that return multiple new objects
# This dict stores array name and size of array
custom_create_dict = {'pDescriptorSets' : 'pAllocateInfo->descriptorSetCount'}
@@ -1684,337 +1663,66 @@ class UniqueObjectsSubcommand(Subcommand):
['vkCreateSwapchainKHR',
'vkDestroySwapchainKHR', 'vkGetSwapchainImagesKHR',
'vkAcquireNextImageKHR', 'vkQueuePresentKHR'])]
- if sys.platform.startswith('win32'):
+ if self.wsi == 'Win32':
instance_extensions=[('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
'vkGetPhysicalDeviceSurfacePresentModesKHR',
'vkCreateWin32SurfaceKHR'
])]
- elif sys.platform.startswith('linux'):
+ elif self.wsi == 'Android':
instance_extensions=[('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
'vkGetPhysicalDeviceSurfacePresentModesKHR',
- 'vkCreateXcbSurfaceKHR',
- 'vkCreateAndroidSurfaceKHR'
- ])]
- # TODO: Add cases for Mir, Wayland and Xlib
- else: # android
+ 'vkCreateAndroidSurfaceKHR'])]
+ elif self.wsi == 'Xcb' or self.wsi == 'Xlib' or self.wsi == 'Wayland' or self.wsi == 'Mir':
instance_extensions=[('wsi_enabled',
- ['vkGetPhysicalDeviceSurfaceSupportKHR',
+ ['vkDestroySurfaceKHR',
+ 'vkGetPhysicalDeviceSurfaceSupportKHR',
'vkGetPhysicalDeviceSurfaceCapabilitiesKHR',
'vkGetPhysicalDeviceSurfaceFormatsKHR',
- 'vkGetPhysicalDeviceSurfacePresentModesKHR'])]
+ 'vkGetPhysicalDeviceSurfacePresentModesKHR',
+ 'vkCreateXcbSurfaceKHR',
+ 'vkCreateXlibSurfaceKHR',
+ 'vkCreateWaylandSurfaceKHR',
+ 'vkCreateMirSurfaceKHR'
+ ])]
+ else:
+ print('Error: Undefined DisplayServer')
+ instance_extensions=[]
+
body = [self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function(extensions,
instance_extensions)]
return "\n\n".join(body)
-class ThreadingSubcommand(Subcommand):
- thread_check_dispatchable_objects = [
- "VkQueue",
- "VkCommandBuffer",
- ]
- thread_check_nondispatchable_objects = [
- "VkDeviceMemory",
- "VkBuffer",
- "VkImage",
- "VkDescriptorSet",
- "VkDescriptorPool",
- "VkSemaphore"
- ]
- thread_check_object_types = {
- 'VkInstance' : 'VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT',
- 'VkPhysicalDevice' : 'VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT',
- 'VkDevice' : 'VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT',
- 'VkQueue' : 'VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT',
- 'VkCommandBuffer' : 'VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT',
- 'VkFence' : 'VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT',
- 'VkDeviceMemory' : 'VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT',
- 'VkBuffer' : 'VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT',
- 'VkImage' : 'VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT',
- 'VkSemaphore' : 'VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT',
- 'VkEvent' : 'VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT',
- 'VkQueryPool' : 'VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT',
- 'VkBufferView' : 'VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT',
- 'VkImageView' : 'VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT',
- 'VkShaderModule' : 'VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT',
- 'VkShader' : 'VK_DEBUG_REPORT_OBJECT_TYPE_SHADER',
- 'VkPipelineCache' : 'VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT',
- 'VkPipelineLayout' : 'VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT',
- 'VkRenderPass' : 'VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT',
- 'VkPipeline' : 'VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT',
- 'VkDescriptorSetLayout' : 'VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT',
- 'VkSampler' : 'VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT',
- 'VkDescriptorPool' : 'VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT',
- 'VkDescriptorSet' : 'VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT',
- 'VkFramebuffer' : 'VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT',
- 'VkCommandPool' : 'VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT',
+def main():
+ wsi = {
+ "Win32",
+ "Android",
+ "Xcb",
+ "Xlib",
+ "Wayland",
+ "Mir",
}
- def generate_useObject(self, ty):
- obj_type = self.thread_check_object_types[ty]
- key = "object"
- msg_object = "(uint64_t)(object)"
- header_txt = []
- header_txt.append('%s' % self.lineinfo.get())
- header_txt.append('static void use%s(const void* dispatchable_object, %s object)' % (ty, ty))
- header_txt.append('{')
- header_txt.append(' loader_platform_thread_id tid = loader_platform_get_thread_id();')
- header_txt.append(' loader_platform_thread_lock_mutex(&threadingLock);')
- header_txt.append(' if (%sObjectsInUse.find(%s) == %sObjectsInUse.end()) {' % (ty, key, ty))
- header_txt.append(' %sObjectsInUse[%s] = tid;' % (ty, key))
- header_txt.append(' } else {')
- header_txt.append(' if (%sObjectsInUse[%s] != tid) {' % (ty, key))
- header_txt.append(' log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, %s, %s,' % (obj_type, msg_object))
- header_txt.append(' __LINE__, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",')
- header_txt.append(' "THREADING ERROR : object of type %s is simultaneously used in thread %%ld and thread %%ld",' % (ty))
- header_txt.append(' %sObjectsInUse[%s], tid);' % (ty, key))
- header_txt.append(' // Wait for thread-safe access to object')
- header_txt.append(' while (%sObjectsInUse.find(%s) != %sObjectsInUse.end()) {' % (ty, key, ty))
- header_txt.append(' loader_platform_thread_cond_wait(&threadingCond, &threadingLock);')
- header_txt.append(' }')
- header_txt.append(' %sObjectsInUse[%s] = tid;' % (ty, key))
- header_txt.append(' } else {')
- header_txt.append(' log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, %s, %s,' % (obj_type, msg_object))
- header_txt.append(' __LINE__, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",')
- header_txt.append(' "THREADING ERROR : object of type %s is recursively used in thread %%ld",' % (ty))
- header_txt.append(' tid);')
- header_txt.append(' }')
- header_txt.append(' }')
- header_txt.append(' loader_platform_thread_unlock_mutex(&threadingLock);')
- header_txt.append('}')
- return "\n".join(header_txt)
- def generate_finishUsingObject(self, ty):
- key = "object"
- header_txt = []
- header_txt.append('%s' % self.lineinfo.get())
- header_txt.append('static void finishUsing%s(%s object)' % (ty, ty))
- header_txt.append('{')
- header_txt.append(' // Object is no longer in use')
- header_txt.append(' loader_platform_thread_lock_mutex(&threadingLock);')
- header_txt.append(' %sObjectsInUse.erase(%s);' % (ty, key))
- header_txt.append(' loader_platform_thread_cond_broadcast(&threadingCond);')
- header_txt.append(' loader_platform_thread_unlock_mutex(&threadingLock);')
- header_txt.append('}')
- return "\n".join(header_txt)
- def generate_header(self):
- header_txt = []
- header_txt.append('%s' % self.lineinfo.get())
- header_txt.append('#include <stdio.h>')
- header_txt.append('#include <stdlib.h>')
- header_txt.append('#include <string.h>')
- header_txt.append('#include <unordered_map>')
- header_txt.append('#include "vk_loader_platform.h"')
- header_txt.append('#include "vulkan/vk_layer.h"')
- header_txt.append('#include "threading.h"')
- header_txt.append('#include "vk_layer_config.h"')
- header_txt.append('#include "vk_layer_extension_utils.h"')
- header_txt.append('#include "vk_enum_validate_helper.h"')
- header_txt.append('#include "vk_struct_validate_helper.h"')
- header_txt.append('#include "vk_layer_table.h"')
- header_txt.append('#include "vk_layer_logging.h"')
- header_txt.append('')
- header_txt.append('')
- header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(initOnce);')
- header_txt.append('')
- header_txt.append('using namespace std;')
- for ty in self.thread_check_dispatchable_objects:
- header_txt.append('static unordered_map<%s, loader_platform_thread_id> %sObjectsInUse;' % (ty, ty))
- for ty in self.thread_check_nondispatchable_objects:
- header_txt.append('static unordered_map<%s, loader_platform_thread_id> %sObjectsInUse;' % (ty, ty))
- header_txt.append('static int threadingLockInitialized = 0;')
- header_txt.append('static loader_platform_thread_mutex threadingLock;')
- header_txt.append('static loader_platform_thread_cond threadingCond;')
- header_txt.append('%s' % self.lineinfo.get())
- for ty in self.thread_check_dispatchable_objects + self.thread_check_nondispatchable_objects:
- header_txt.append(self.generate_useObject(ty))
- header_txt.append(self.generate_finishUsingObject(ty))
- header_txt.append('%s' % self.lineinfo.get())
- return "\n".join(header_txt)
-
- def generate_intercept(self, proto, qual):
- if proto.name in [ 'CreateDebugReportCallbackEXT' ]:
- # use default version
- return None
- decl = proto.c_func(prefix="vk", attr="VKAPI")
- ret_val = ''
- stmt = ''
- funcs = []
- table = 'device'
- if proto.ret != "void":
- ret_val = "%s result = " % proto.ret
- stmt = " return result;\n"
- if proto_is_global(proto):
- table = 'instance'
- # Memory range calls are special in needed thread checking within structs
- if proto.name in ["FlushMappedMemoryRanges","InvalidateMappedMemoryRanges"]:
- funcs.append('%s' % self.lineinfo.get())
- funcs.append('%s%s\n' % (qual, decl) +
- '{\n'
- ' for (uint32_t i=0; i<memoryRangeCount; i++) {\n'
- ' useVkDeviceMemory((const void *) %s, pMemoryRanges[i].memory);\n' % proto.params[0].name +
- ' }\n'
- ' VkLayerDispatchTable *pDeviceTable = get_dispatch_table(threading_%s_table_map, %s);\n' % (table, proto.params[0].name) +
- ' %s pDeviceTable->%s;\n' % (ret_val, proto.c_call()) +
- ' for (uint32_t i=0; i<memoryRangeCount; i++) {\n'
- ' finishUsingVkDeviceMemory(pMemoryRanges[i].memory);\n'
- ' }\n'
- '%s' % (stmt) +
- '}')
- return "\n".join(funcs)
- # All functions that do a Get are thread safe
- if 'Get' in proto.name:
- return None
- # All WSI functions are thread safe
- if 'KHR' in proto.name:
- return None
- # Initialize in early calls
- if proto.name == "CreateDevice":
- funcs.append('%s' % self.lineinfo.get())
- funcs.append('%s%s\n' % (qual, decl) +
- '{\n'
- ' VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);\n'
- ' PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;\n'
- ' PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;\n'
- ' PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice) fpGetInstanceProcAddr(NULL, "vkCreateDevice");\n'
- ' if (fpCreateDevice == NULL) {\n'
- ' return VK_ERROR_INITIALIZATION_FAILED;\n'
- ' }\n'
- ' // Advance the link info for the next element on the chain\n'
- ' chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;\n'
- ' VkResult result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice);\n'
- ' if (result != VK_SUCCESS) {\n'
- ' return result;\n'
- ' }\n'
- ' layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);\n'
- ' layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);\n'
- ' initDeviceTable(*pDevice, fpGetDeviceProcAddr, threading_device_table_map);\n'
- ' my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);\n'
- ' return result;\n'
- '}\n')
- return "\n".join(funcs)
- elif proto.params[0].ty == "VkPhysicalDevice":
- return None
- # Functions changing command buffers need thread safe use of first parameter
- if proto.params[0].ty == "VkCommandBuffer":
- funcs.append('%s' % self.lineinfo.get())
- funcs.append('%s%s\n' % (qual, decl) +
- '{\n'
- ' use%s((const void *) %s, %s);\n' % (proto.params[0].ty, proto.params[0].name, proto.params[0].name) +
- ' VkLayerDispatchTable *pDeviceTable = get_dispatch_table(threading_%s_table_map, %s);\n' % (table, proto.params[0].name) +
- ' %spDeviceTable->%s;\n' % (ret_val, proto.c_call()) +
- ' finishUsing%s(%s);\n' % (proto.params[0].ty, proto.params[0].name) +
- '%s' % stmt +
- '}')
- return "\n".join(funcs)
- # Non-Cmd functions that do a Wait are thread safe
- if 'Wait' in proto.name:
- return None
- # Watch use of certain types of objects passed as any parameter
- checked_params = []
- for param in proto.params:
- if param.ty in self.thread_check_dispatchable_objects or param.ty in self.thread_check_nondispatchable_objects:
- checked_params.append(param)
- if proto.name == "DestroyDevice":
- funcs.append('%s%s\n' % (qual, decl) +
- '{\n'
- ' dispatch_key key = get_dispatch_key(device);\n'
- ' VkLayerDispatchTable *pDeviceTable = get_dispatch_table(threading_%s_table_map, %s);\n' % (table, proto.params[0].name) +
- ' %spDeviceTable->%s;\n' % (ret_val, proto.c_call()) +
- ' threading_device_table_map.erase(key);\n'
- '}\n')
- return "\n".join(funcs);
- elif proto.name == "DestroyInstance":
- funcs.append('%s%s\n' % (qual, decl) +
- '{\n'
- ' dispatch_key key = get_dispatch_key(instance);\n'
- ' VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(threading_instance_table_map, %s);\n' % proto.params[0].name +
- ' %spInstanceTable->%s;\n' % (ret_val, proto.c_call()) +
- ' destroy_dispatch_table(threading_instance_table_map, key);\n'
- '\n'
- ' // Clean up logging callback, if any\n'
- ' layer_data *my_data = get_my_data_ptr(key, layer_data_map);\n'
- ' if (my_data->logging_callback) {\n'
- ' layer_destroy_msg_callback(my_data->report_data, my_data->logging_callback, pAllocator);\n'
- ' }\n'
- '\n'
- ' layer_debug_report_destroy_instance(my_data->report_data);\n'
- ' layer_data_map.erase(pInstanceTable);\n'
- '\n'
- ' threading_instance_table_map.erase(key);\n'
- '}\n')
- return "\n".join(funcs);
- elif proto.name == "CreateInstance":
- funcs.append('%s%s\n'
- '{\n'
- ' VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);\n'
- ' PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;\n'
- ' PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance) fpGetInstanceProcAddr(NULL, "vkCreateInstance");\n'
- ' if (fpCreateInstance == NULL) {\n'
- ' return VK_ERROR_INITIALIZATION_FAILED;\n'
- ' }\n'
- ' // Advance the link info for the next element on the chain\n'
- ' chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;\n'
- ' VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);\n'
- ' if (result != VK_SUCCESS) {\n'
- ' return result;\n'
- ' }\n'
- ' VkLayerInstanceDispatchTable *pTable = initInstanceTable(*pInstance, fpGetInstanceProcAddr, threading_instance_table_map);\n'
- ' layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);\n'
- ' my_data->report_data = debug_report_create_instance(\n'
- ' pTable,\n'
- ' *pInstance,\n'
- ' pCreateInfo->enabledExtensionCount,\n'
- ' pCreateInfo->ppEnabledExtensionNames);\n'
- ' init_threading(my_data, pAllocator);\n'
- ' return result;\n'
- '}\n' % (qual, decl))
- return "\n".join(funcs);
- if len(checked_params) == 0:
- return None
- # Surround call with useObject and finishUsingObject for each checked_param
- funcs.append('%s' % self.lineinfo.get())
- funcs.append('%s%s' % (qual, decl))
- funcs.append('{')
- for param in checked_params:
- funcs.append(' use%s((const void *) %s, %s);' % (param.ty, proto.params[0].name, param.name))
- funcs.append(' VkLayerDispatchTable *pDeviceTable = get_dispatch_table(threading_%s_table_map, %s);' % (table, proto.params[0].name));
- funcs.append(' %spDeviceTable->%s;' % (ret_val, proto.c_call()))
- for param in checked_params:
- funcs.append(' finishUsing%s(%s);' % (param.ty, param.name))
- funcs.append('%s'
- '}' % stmt)
- return "\n".join(funcs)
-
- def generate_body(self):
- self.layer_name = "threading"
- body = [self._generate_new_layer_initialization(True, lockname='threading', condname='threading'),
- self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
- self._generate_layer_gpa_function(extensions=[],
- instance_extensions=[('msg_callback_get_proc_addr', [])]),
- self._gen_create_msg_callback(),
- self._gen_destroy_msg_callback(),
- self._gen_debug_report_msg()]
- return "\n\n".join(body)
-
-def main():
subcommands = {
"object_tracker" : ObjectTrackerSubcommand,
- "threading" : ThreadingSubcommand,
"unique_objects" : UniqueObjectsSubcommand,
}
- if len(sys.argv) < 3 or sys.argv[1] not in subcommands or not os.path.exists(sys.argv[2]):
- print("Usage: %s <subcommand> <input_header> [options]" % sys.argv[0])
+ if len(sys.argv) < 4 or sys.argv[1] not in wsi or sys.argv[2] not in subcommands or not os.path.exists(sys.argv[3]):
+ print("Usage: %s <wsi> <subcommand> <input_header> [options]" % sys.argv[0])
print
print("Available subcommands are: %s" % " ".join(subcommands))
exit(1)
- hfp = vk_helper.HeaderFileParser(sys.argv[2])
+ hfp = vk_helper.HeaderFileParser(sys.argv[3])
hfp.parse()
vk_helper.enum_val_dict = hfp.get_enum_val_dict()
vk_helper.enum_type_dict = hfp.get_enum_type_dict()
@@ -2023,7 +1731,7 @@ def main():
vk_helper.typedef_rev_dict = hfp.get_typedef_rev_dict()
vk_helper.types_dict = hfp.get_types_dict()
- subcmd = subcommands[sys.argv[1]](sys.argv[2:])
+ subcmd = subcommands[sys.argv[2]](sys.argv[3:])
subcmd.run()
if __name__ == "__main__":
diff --git a/vk.xml b/vk.xml
index 515f2d199..c863c5a82 100644
--- a/vk.xml
+++ b/vk.xml
@@ -488,7 +488,7 @@ maintained in the master branch of the Khronos Vulkan Github project.
<member optional="true" len="enabledLayerCount,null-terminated">const <type>char</type>* const* <name>ppEnabledLayerNames</name></member> <!-- Ordered list of layer names to be enabled -->
<member optional="true"><type>uint32_t</type> <name>enabledExtensionCount</name></member>
<member optional="true" len="enabledExtensionCount,null-terminated">const <type>char</type>* const* <name>ppEnabledExtensionNames</name></member>
- <member>const <type>VkPhysicalDeviceFeatures</type>* <name>pEnabledFeatures</name></member>
+ <member optional="true">const <type>VkPhysicalDeviceFeatures</type>* <name>pEnabledFeatures</name></member>
<validity>
<usage>Any given element of pname:ppEnabledLayerNames must: be the name of a layer present on the system, exactly matching a string returned in the sname:VkLayerProperties structure by fname:vkEnumerateDeviceLayerProperties</usage>
<usage>Any given element of pname:ppEnabledExtensionNames must: be the name of an extension present on the system, exactly matching a string returned in the sname:VkExtensionProperties structure by fname:vkEnumerateDeviceExtensionProperties</usage>
@@ -1174,7 +1174,7 @@ maintained in the master branch of the Khronos Vulkan Github project.
</validity>
</type>
<type category="struct" name="VkPipelineInputAssemblyStateCreateInfo">
- <member><type>VkStructureType</type> <name>sType</name></member> <!-- Must be VK_STRUCTURE_TYPE_PIPELINE_IINPUT_ASSEMBLY_STATE_CREATE_INFO -->
+ <member><type>VkStructureType</type> <name>sType</name></member> <!-- Must be VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO -->
<member>const <type>void</type>* <name>pNext</name></member> <!-- Pointer to next structure -->
<member optional="true"><type>VkPipelineInputAssemblyStateCreateFlags</type> <name>flags</name></member> <!-- Reserved -->
<member><type>VkPrimitiveTopology</type> <name>topology</name></member>
diff --git a/vk_helper.py b/vk_helper.py
index 87140ef52..88f1fa202 100755
--- a/vk_helper.py
+++ b/vk_helper.py
@@ -877,7 +877,11 @@ class StructWrapperGen:
if (typedef_fwd_dict[s] not in exclude_struct_list):
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#ifdef VK_USE_PLATFORM_XCB_KHR")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#ifdef VK_USE_PLATFORM_WIN32_KHR")
sh_funcs.append('string %s(const %s* pStruct, const string prefix);' % (self._get_sh_func_name(s), typedef_fwd_dict[s]))
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#endif //VK_USE_PLATFORM_WIN32_KHR")
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#endif //VK_USE_PLATFORM_XCB_KHR")
sh_funcs.append('\n')
@@ -896,6 +900,8 @@ class StructWrapperGen:
sh_funcs.append('%s' % lineinfo.get())
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#ifdef VK_USE_PLATFORM_XCB_KHR")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#ifdef VK_USE_PLATFORM_WIN32_KHR")
sh_funcs.append('string %s(const %s* pStruct, const string prefix)\n{' % (self._get_sh_func_name(s), typedef_fwd_dict[s]))
sh_funcs.append('%s' % lineinfo.get())
indent = ' '
@@ -975,10 +981,10 @@ class StructWrapperGen:
else:
sh_funcs.append('%s' % lineinfo.get())
addr_char = ''
+ sh_funcs.append('%sss[%u] << %spStruct->%s[i];' % (indent, index, addr_char, stp_list[index]['name']))
if stp_list[index]['type'] in vulkan.core.objects:
sh_funcs.append('%sstp_strs[%u] += " " + prefix + "%s[" + index_ss.str() + "].handle = " + ss[%u].str() + "\\n";' % (indent, index, stp_list[index]['name'], index))
else:
- sh_funcs.append('%sss[%u] << %spStruct->%s[i];' % (indent, index, addr_char, stp_list[index]['name']))
sh_funcs.append('%sstp_strs[%u] += " " + prefix + "%s[" + index_ss.str() + "] = " + ss[%u].str() + "\\n";' % (indent, index, stp_list[index]['name'], index))
sh_funcs.append('%s' % lineinfo.get())
sh_funcs.append('%sss[%u].str("");' % (indent, index))
@@ -1057,7 +1063,11 @@ class StructWrapperGen:
sh_funcs.append(' ss[%u].str("address");' % (index))
elif 'char' in self.struct_dict[s][m]['type'].lower() and self.struct_dict[s][m]['ptr']:
sh_funcs.append('%s' % lineinfo.get())
- sh_funcs.append(' ss[%u] << pStruct->%s;' % (index, self.struct_dict[s][m]['name']))
+ sh_funcs.append(' if (pStruct->%s != NULL) {' % self.struct_dict[s][m]['name'])
+ sh_funcs.append(' ss[%u] << pStruct->%s;' % (index, self.struct_dict[s][m]['name']))
+ sh_funcs.append(' } else {')
+ sh_funcs.append(' ss[%u] << "";' % index)
+ sh_funcs.append(' }')
else:
sh_funcs.append('%s' % lineinfo.get())
(po, pa) = self._get_struct_print_formatted(self.struct_dict[s][m])
@@ -1089,6 +1099,8 @@ class StructWrapperGen:
sh_funcs.append('%s' % lineinfo.get())
sh_funcs.append(' final_str = %s;' % final_str)
sh_funcs.append(' return final_str;\n}')
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#endif //VK_USE_PLATFORM_WIN32_KHR")
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#endif //VK_USE_PLATFORM_XCB_KHR")
# Add function to return a string value for input void*
@@ -1296,7 +1308,11 @@ class StructWrapperGen:
if (typedef_fwd_dict[s] not in exclude_struct_list):
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#ifdef VK_USE_PLATFORM_XCB_KHR")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#ifdef VK_USE_PLATFORM_WIN32_KHR")
sh_funcs.append('uint32_t %s(const %s* pStruct);' % (self._get_vh_func_name(s), typedef_fwd_dict[s]))
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#endif //VK_USE_PLATFORM_WIN32_KHR")
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#endif //VK_USE_PLATFORM_XCB_KHR")
sh_funcs.append('\n')
@@ -1305,6 +1321,8 @@ class StructWrapperGen:
continue
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#ifdef VK_USE_PLATFORM_XCB_KHR")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#ifdef VK_USE_PLATFORM_WIN32_KHR")
sh_funcs.append('uint32_t %s(const %s* pStruct)\n{' % (self._get_vh_func_name(s), typedef_fwd_dict[s]))
for m in sorted(self.struct_dict[s]):
# TODO : Need to handle arrays of enums like in VkRenderPassCreateInfo struct
@@ -1317,6 +1335,8 @@ class StructWrapperGen:
else:
sh_funcs.append(' if (!%s((const %s*)&pStruct->%s))\n return 0;' % (self._get_vh_func_name(self.struct_dict[s][m]['type']), self.struct_dict[s][m]['type'], self.struct_dict[s][m]['name']))
sh_funcs.append(" return 1;\n}")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#endif //VK_USE_PLATFORM_WIN32_KHR")
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#endif //VK_USE_PLATFORM_XCB_KHR")
@@ -1347,7 +1367,11 @@ class StructWrapperGen:
if (typedef_fwd_dict[s] not in exclude_struct_list):
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#ifdef VK_USE_PLATFORM_XCB_KHR")
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#ifdef VK_USE_PLATFORM_WIN32_KHR")
sh_funcs.append('size_t %s(const %s* pStruct);' % (self._get_size_helper_func_name(s), typedef_fwd_dict[s]))
+ if (re.match(r'.*Win32.*', typedef_fwd_dict[s])):
+ sh_funcs.append("#endif //VK_USE_PLATFORM_WIN32_KHR")
if (re.match(r'.*Xcb.*', typedef_fwd_dict[s])):
sh_funcs.append("#endif //VK_USE_PLATFORM_XCB_KHR")
return "\n".join(sh_funcs)
@@ -2102,8 +2126,6 @@ def main(argv=None):
input_header = os.path.basename(opts.input_file)
if 'vulkan.h' == input_header:
input_header = "vulkan/vulkan.h"
- if 'vk_lunarg_debug_marker.h' == input_header:
- input_header = "vulkan/vk_lunarg_debug_marker.h"
prefix = os.path.basename(opts.input_file).strip(".h")
if prefix == "vulkan":
diff --git a/vk_layer_documentation_generate.py b/vk_layer_documentation_generate.py
index 253696e84..4469bcee4 100755
--- a/vk_layer_documentation_generate.py
+++ b/vk_layer_documentation_generate.py
@@ -56,16 +56,16 @@ import platform
# TODO : Need list of known validation layers to use as default input
# Just a couple of flat lists right now, but may need to make this input file
# or at least a more dynamic data structure
-layer_inputs = { 'draw_state' : {'header' : 'layers/draw_state.h',
- 'source' : 'layers/draw_state.cpp',
+layer_inputs = { 'draw_state' : {'header' : 'layers/core_validation.h',
+ 'source' : 'layers/core_validation.cpp',
'generated' : False,
'error_enum' : 'DRAW_STATE_ERROR'},
- 'shader_checker' : {'header' : 'layers/draw_state.h',
- 'source' : 'layers/draw_state.cpp',
+ 'shader_checker' : {'header' : 'layers/core_validation.h',
+ 'source' : 'layers/core_validation.cpp',
'generated' : False,
'error_enum' : 'SHADER_CHECKER_ERROR'},
- 'mem_tracker' : {'header' : 'layers/mem_tracker.h',
- 'source' : 'layers/mem_tracker.cpp',
+ 'mem_tracker' : {'header' : 'layers/core_validation.h',
+ 'source' : 'layers/core_validation.cpp',
'generated' : False,
'error_enum' : 'MEM_TRACK_ERROR'},
'threading' : {'header' : 'layers/threading.h',
@@ -282,8 +282,7 @@ class LayerDoc:
wsi_s_names = [p.name for p in vulkan.ext_khr_surface.protos]
wsi_ds_names = [p.name for p in vulkan.ext_khr_device_swapchain.protos]
dbg_rpt_names = [p.name for p in vulkan.lunarg_debug_report.protos]
- dbg_mrk_names = [p.name for p in vulkan.lunarg_debug_marker.protos]
- api_names = core_api_names + wsi_s_names + wsi_ds_names + dbg_rpt_names + dbg_mrk_names
+ api_names = core_api_names + wsi_s_names + wsi_ds_names + dbg_rpt_names
for ln in self.layer_doc_dict:
for chk in self.layer_doc_dict[ln]:
if chk in ['overview', 'pending']:
diff --git a/vulkan.py b/vulkan.py
index 189be419c..11d196ab3 100755
--- a/vulkan.py
+++ b/vulkan.py
@@ -29,6 +29,7 @@
# Author: Courtney Goeltzenleuchter <courtney@LunarG.com>
# Author: Tobin Ehlis <tobin@lunarg.com>
# Author: Tony Barbour <tony@LunarG.com>
+# Author: Gwan-gyeong Mun <kk.moon@samsung.com>
class Param(object):
"""A function parameter."""
@@ -1114,6 +1115,58 @@ ext_khr_xcb_surface = Extension(
Param("xcb_visualid_t", "visual_id")]),
],
)
+ext_khr_xlib_surface = Extension(
+ name="VK_KHR_xlib_surface",
+ headers=["vulkan/vulkan.h"],
+ objects=[],
+ protos=[
+ Proto("VkResult", "CreateXlibSurfaceKHR",
+ [Param("VkInstance", "instance"),
+ Param("const VkXlibSurfaceCreateInfoKHR*", "pCreateInfo"),
+ Param("const VkAllocationCallbacks*", "pAllocator"),
+ Param("VkSurfaceKHR*", "pSurface")]),
+
+ Proto("VkBool32", "GetPhysicalDeviceXlibPresentationSupportKHR",
+ [Param("VkPhysicalDevice", "physicalDevice"),
+ Param("uint32_t", "queueFamilyIndex"),
+ Param("Display*", "dpy"),
+ Param("VisualID", "visualID")]),
+ ],
+)
+ext_khr_wayland_surface = Extension(
+ name="VK_KHR_wayland_surface",
+ headers=["vulkan/vulkan.h"],
+ objects=[],
+ protos=[
+ Proto("VkResult", "CreateWaylandSurfaceKHR",
+ [Param("VkInstance", "instance"),
+ Param("const VkWaylandSurfaceCreateInfoKHR*", "pCreateInfo"),
+ Param("const VkAllocationCallbacks*", "pAllocator"),
+ Param("VkSurfaceKHR*", "pSurface")]),
+
+ Proto("VkBool32", "GetPhysicalDeviceWaylandPresentationSupportKHR",
+ [Param("VkPhysicalDevice", "physicalDevice"),
+ Param("uint32_t", "queueFamilyIndex"),
+ Param("struct wl_display*", "display")]),
+ ],
+)
+ext_khr_mir_surface = Extension(
+ name="VK_KHR_mir_surface",
+ headers=["vulkan/vulkan.h"],
+ objects=[],
+ protos=[
+ Proto("VkResult", "CreateMirSurfaceKHR",
+ [Param("VkInstance", "instance"),
+ Param("const VkMirSurfaceCreateInfoKHR*", "pCreateInfo"),
+ Param("const VkAllocationCallbacks*", "pAllocator"),
+ Param("VkSurfaceKHR*", "pSurface")]),
+
+ Proto("VkBool32", "GetPhysicalDeviceMirPresentationSupportKHR",
+ [Param("VkPhysicalDevice", "physicalDevice"),
+ Param("uint32_t", "queueFamilyIndex"),
+ Param("MirConnection*", "connection")]),
+ ],
+)
ext_khr_android_surface = Extension(
name="VK_KHR_android_surface",
headers=["vulkan/vulkan.h"],
@@ -1171,41 +1224,43 @@ lunarg_debug_report = Extension(
Param("const char *", "pMsg")]),
],
)
-lunarg_debug_marker = Extension(
- name="VK_LUNARG_DEBUG_MARKER",
- headers=["vulkan/vk_lunarg_debug_marker.h"],
- objects=[],
- protos=[
- Proto("void", "CmdDbgMarkerBegin",
- [Param("VkCommandBuffer", "commandBuffer"),
- Param("const char*", "pMarker")]),
-
- Proto("void", "CmdDbgMarkerEnd",
- [Param("VkCommandBuffer", "commandBuffer")]),
-
- Proto("VkResult", "DbgSetObjectTag",
- [Param("VkDevice", "device"),
- Param("VkDebugReportObjectTypeEXT", "objType"),
- Param("uint64_t", "object"),
- Param("size_t", "tagSize"),
- Param("const void*", "pTag")]),
-
- Proto("VkResult", "DbgSetObjectName",
- [Param("VkDevice", "device"),
- Param("VkDebugReportObjectTypeEXT", "objType"),
- Param("uint64_t", "object"),
- Param("size_t", "nameSize"),
- Param("const char*", "pName")]),
- ],
-)
import sys
-if sys.platform.startswith('win32'):
- extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface]
- extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface, lunarg_debug_report, lunarg_debug_marker]
-else: # linux & android
- extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface, ext_khr_android_surface]
- extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface, ext_khr_android_surface, lunarg_debug_report, lunarg_debug_marker]
+
+if len(sys.argv) > 3:
+# TODO : Need to clean this up to more seamlessly handle building different targets than the platform you're on
+ if sys.platform.startswith('win32') and sys.argv[1] != 'Android':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface, lunarg_debug_report]
+ elif sys.platform.startswith('linux') and sys.argv[1] != 'Android':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface, lunarg_debug_report]
+ else: # android
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_android_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_android_surface, lunarg_debug_report]
+else:
+ if sys.argv[1] == 'Win32':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_win32_surface, lunarg_debug_report]
+ elif sys.argv[1] == 'Android':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_android_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_android_surface, lunarg_debug_report]
+ elif sys.argv[1] == 'Xcb':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xcb_surface, lunarg_debug_report]
+ elif sys.argv[1] == 'Xlib':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xlib_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_xlib_surface, lunarg_debug_report]
+ elif sys.argv[1] == 'Wayland':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_wayland_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_wayland_surface, lunarg_debug_report]
+ elif sys.argv[1] == 'Mir':
+ extensions = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_mir_surface]
+ extensions_all = [core, ext_khr_surface, ext_khr_device_swapchain, ext_khr_mir_surface, lunarg_debug_report]
+ else:
+ print('Error: Undefined DisplayServer')
+ extensions = []
+ extensions_all = []
object_dispatch_list = [
"VkInstance",
diff --git a/windowsRuntimeInstaller/ConfigLayersAndVulkanDLL.ps1 b/windowsRuntimeInstaller/ConfigLayersAndVulkanDLL.ps1
index ad31cd294..9a6665d66 100644
--- a/windowsRuntimeInstaller/ConfigLayersAndVulkanDLL.ps1
+++ b/windowsRuntimeInstaller/ConfigLayersAndVulkanDLL.ps1
@@ -49,6 +49,16 @@ Param(
$vulkandll = "vulkan-"+$majorabi+".dll"
$windrive = $env:SYSTEMDRIVE
$winfolder = $env:SYSTEMROOT
+$script:VulkanDllList=@()
+
+function notNumeric ($x) {
+ try {
+ 0 + $x | Out-Null
+ return $false
+ } catch {
+ return $true
+ }
+}
# The name of the versioned vulkan dll file is one of the following:
#
@@ -80,17 +90,30 @@ $winfolder = $env:SYSTEMROOT
# from the file name. They are used later to find the path to the SDK
# install directory for the given filename.
+
function UpdateVulkanSysFolder([string]$dir, [int]$writeSdkName)
{
# Push the current path on the stack and go to $dir
Push-Location -Path $dir
- # Create a list for all the DLLs in the folder
- $VulkanDllList=@()
+ # Create a list for all the DLLs in the folder.
+ # First Initialize the list to empty
+ $script:VulkanDllList = @()
# Find all DLL objects in this directory
dir -name vulkan-$majorabi-*.dll |
ForEach-Object {
+ if ($_ -match "=" -or
+ $_ -match "@" -or
+ $_ -match " " -or
+ ($_.Split('-').count -lt 6) -or
+ ($_.Split('-').count -gt 8))
+ {
+                # If a file name contains "=", "@", or " ", or it contains fewer than 5 dashes or more than
+                # 7 dashes, it wasn't installed by the Vulkan Run Time.
+                # Note that return inside of ForEach-Object continues with the next iteration.
+ return
+ }
$major=$_.Split('-')[2]
$majorOrig=$major
$minor=$_.Split('-')[3]
@@ -100,17 +123,16 @@ function UpdateVulkanSysFolder([string]$dir, [int]$writeSdkName)
$buildno=$_.Split('-')[5]
if ($buildno -match ".dll") {
- # <prerelease> and <prebuildno> are not in the name
- $buildno=$buildno -replace ".dll",""
- $buildnoOrig=$buildno
- $prerelease="z"*10
- $prereleaseOrig=""
- $prebuildno="z"*10
- $prebuildnoOrig=""
+ # prerelease and prebuildno are not in the name
+ # Extract buildno, and set prerelease and prebuildno to "z"s
+ $buildno=$buildno -replace ".dll",""
+ $buildnoOrig=$buildno
+ $prerelease="z"*10
+ $prereleaseOrig=""
+ $prebuildno="z"*10
+ $prebuildnoOrig=""
} else {
-
- # We assume we don't have more than 5 dashes
-
+ # Extract buildno, prerelease, and prebuildno
$f=$_ -replace ".dll",""
$buildno=$f.Split('-')[5]
$buildnoOrig=$buildno
@@ -134,6 +156,15 @@ function UpdateVulkanSysFolder([string]$dir, [int]$writeSdkName)
}
}
+ # Make sure fields that are supposed to be numbers are numbers
+ if (notNumeric($major)) {return}
+ if (notNumeric($minor)) {return}
+ if (notNumeric($patch)) {return}
+ if (notNumeric($buildno)) {return}
+ if (notNumeric($prebuildno)) {
+ if ($prebuildno -ne "z"*10) {return}
+ }
+
$major = $major.padleft(10,'0')
$minor = $minor.padleft(10,'0')
$patch = $patch.padleft(10,'0')
@@ -142,22 +173,21 @@ function UpdateVulkanSysFolder([string]$dir, [int]$writeSdkName)
$prebuildno = $prebuildno.padleft(10,'0')
# Add a new element to the $VulkanDllList array
- $VulkanDllList+="$major=$minor=$patch=$buildno=$prerelease=$prebuildno= $_ @$majorOrig@$minorOrig@$patchOrig@$buildnoOrig@$prereleaseOrig@$prebuildnoOrig@"
+ $script:VulkanDllList+="$major=$minor=$patch=$buildno=$prerelease=$prebuildno= $_ @$majorOrig@$minorOrig@$patchOrig@$buildnoOrig@$prereleaseOrig@$prebuildnoOrig@"
}
# If $VulkanDllList contains at least one element, there's at least one vulkan*.dll file.
# Copy the most recent vulkan*.dll (named in the last element of $VulkanDllList) to vulkan-$majorabi.dll.
- # TODO: In the future, also copy the corresponding vulkaninfo-*.exe to vulkaninfo.exe.
- if ($VulkanDllList.Length -gt 0) {
+ if ($script:VulkanDllList.Length -gt 0) {
# Sort the list. The most recent vulkan-*.dll will be in the last element of the list.
- [array]::sort($VulkanDllList)
+ [array]::sort($script:VulkanDllList)
# Put the name of the most recent vulkan-*.dll in $mrVulkanDLL.
# The most recent vulkanDLL is the second word in the last element of the
# sorted $VulkanDllList. Copy it to $vulkandll.
- $mrVulkanDll=$VulkanDllList[-1].Split(' ')[1]
+ $mrVulkanDll=$script:VulkanDllList[-1].Split(' ')[1]
copy $mrVulkanDll $vulkandll
# Copy the most recent version of vulkaninfo-<abimajor>-*.exe to vulkaninfo.exe.
@@ -167,12 +197,12 @@ function UpdateVulkanSysFolder([string]$dir, [int]$writeSdkName)
copy $mrVulkaninfo vulkaninfo.exe
# Create the name used in the registry for the SDK associated with $mrVulkanDll.
- $major=$VulkanDLLList[-1].Split('@')[1]
- $minor=$VulkanDLLList[-1].Split('@')[2]
- $patch=$VulkanDLLList[-1].Split('@')[3]
- $buildno=$VulkanDLLList[-1].Split('@')[4]
- $prerelease=$VulkanDLLList[-1].Split('@')[5]
- $prebuildno=$VulkanDLLList[-1].Split('@')[6]
+ $major=$script:VulkanDllList[-1].Split('@')[1]
+ $minor=$script:VulkanDllList[-1].Split('@')[2]
+ $patch=$script:VulkanDllList[-1].Split('@')[3]
+ $buildno=$script:VulkanDllList[-1].Split('@')[4]
+ $prerelease=$script:VulkanDllList[-1].Split('@')[5]
+ $prebuildno=$script:VulkanDllList[-1].Split('@')[6]
$sdktempname="VulkanSDK"+$major + "." + $minor + "." + $patch + "." + $buildno
if ($prerelease -ne "") {
@@ -223,6 +253,42 @@ Get-ChildItem -Path Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Curr
}
}
+
+# Search list of sdk install dirs for an sdk compatible with $script:sdkname.
+# We go backwards through VulkanDllList to generate SDK names, because we want the most recent SDK.
+if ($mrVulkanDllInstallDir -eq "") {
+ ForEach ($idx in ($script:VulkanDllList.Length-1)..0) {
+ $vulkanDllMajor=$script:VulkanDllList[$idx].Split('@')[1]
+ $vulkanDllMinor=$script:VulkanDllList[$idx].Split('@')[2]
+ $vulkanDllPatch=$script:VulkanDllList[$idx].Split('@')[3]
+ $vulkanDllBuildno=$script:VulkanDllList[$idx].Split('@')[4]
+ $vulkanDllPrerelease=$script:VulkanDllList[$idx].Split('@')[5]
+ $vulkanDllPrebuildno=$script:VulkanDllList[$idx].Split('@')[6]
+ $regEntry="VulkanSDK"+$vulkanDllMajor+"."+$vulkanDllMinor+"."+$vulkanDllPatch+"."+$vulkanDllBuildno
+ if ($vulkanDllPrerelease) {
+ $regEntry=$regEntry+"."+$vulkanDllPrerelease
+ }
+ if ($vulkanDllPrebuildno) {
+ $regEntry=$regEntry+"."+$vulkanDllPrebuildno
+ }
+ $rval=Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$regEntry -ErrorAction SilentlyContinue
+ $instDir=$rval
+ $instDir=$instDir -replace "\\Uninstall.exe.*",""
+ $instDir=$instDir -replace ".*=.",""
+ if ($rval) {
+ $rval=$rval -replace ".* DisplayVersion=",""
+ $rval=$rval -replace ";.*",""
+ $reMajor=$rval.Split('.')[0]
+ $reMinor=$rval.Split('.')[1]
+ $rePatch=$rval.Split('.')[2]
+ if ($reMajor+$reMinor+$rePatch -eq $vulkanDllMajor+$vulkanDllMinor+$vulkanDllPatch) {
+ $mrVulkanDllInstallDir=$instDir
+ break
+ }
+ }
+ }
+}
+
# Add C:\Vulkan\SDK\0.9.3 to list of SDK install dirs.
# We do this because there is a bug in SDK 0.9.3 in which layer
# reg entries were not removed on uninstall. So we'll try to clean up
@@ -291,3 +357,158 @@ if ($mrVulkanDllInstallDir -ne "") {
}
}
+
+# SIG # Begin signature block
+# MIIcZgYJKoZIhvcNAQcCoIIcVzCCHFMCAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
+# gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
+# AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQUdeZMvyfevbCm2d9Sn02g0L39
+# 6EKggheVMIIFHjCCBAagAwIBAgIQDmYEpPtQ2iBY4vC2AGq6uzANBgkqhkiG9w0B
+# AQsFADByMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYD
+# VQQLExB3d3cuZGlnaWNlcnQuY29tMTEwLwYDVQQDEyhEaWdpQ2VydCBTSEEyIEFz
+# c3VyZWQgSUQgQ29kZSBTaWduaW5nIENBMB4XDTE1MDQzMDAwMDAwMFoXDTE2MDcw
+# NjEyMDAwMFowZTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCENvbG9yYWRvMRUwEwYD
+# VQQHEwxGb3J0IENvbGxpbnMxFTATBgNVBAoTDEx1bmFyRywgSW5jLjEVMBMGA1UE
+# AxMMTHVuYXJHLCBJbmMuMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
+# jaLJm/Joxsn/IeopiGY3XPeSZeIhjSjSlQUkDyIyDGcBG7CvfiSsXIw3EgdGkjQg
+# yBcW5YsPz9bPGPUjo5go7CwZaRkhW7/LmSkAlx0UAv8EMLuJrAZ3jBNZvpPPqfWd
+# zgi/Rkm2gWQ6eSKouy7IjcLk+EwkeBbB+UBnYfMp0BfCPzR3mPgGAJH6efAmEaqQ
+# FBCrX97joYgDqp3v8u42jALLl/Ict/GNMHLxP+QWagIHIICCRgS6s02OsildLF6R
+# nqJOOG/43f2qUD4Cab65kzlI+0+uQyOl1UlxNxp0XareghGTqECsYA03j64Esxyo
+# 2xrNbV2LJm9crTX6QthxywIDAQABo4IBuzCCAbcwHwYDVR0jBBgwFoAUWsS5eyoK
+# o6XqcQPAYPkt9mV1DlgwHQYDVR0OBBYEFOSdVsqodGWApfCjHtAcn8sAzLBGMA4G
+# A1UdDwEB/wQEAwIHgDATBgNVHSUEDDAKBggrBgEFBQcDAzB3BgNVHR8EcDBuMDWg
+# M6Axhi9odHRwOi8vY3JsMy5kaWdpY2VydC5jb20vc2hhMi1hc3N1cmVkLWNzLWcx
+# LmNybDA1oDOgMYYvaHR0cDovL2NybDQuZGlnaWNlcnQuY29tL3NoYTItYXNzdXJl
+# ZC1jcy1nMS5jcmwwQgYDVR0gBDswOTA3BglghkgBhv1sAwEwKjAoBggrBgEFBQcC
+# ARYcaHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzCBhAYIKwYBBQUHAQEEeDB2
+# MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wTgYIKwYBBQUH
+# MAKGQmh0dHA6Ly9jYWNlcnRzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydFNIQTJBc3N1
+# cmVkSURDb2RlU2lnbmluZ0NBLmNydDAMBgNVHRMBAf8EAjAAMA0GCSqGSIb3DQEB
+# CwUAA4IBAQCIt1S8zfvzMQEVmdAssjrwqBaq78xhtGPLjkNF06EvtWoV6VMLI/A6
+# 45KoULsaXeYuszLxNI+OT/b4HfD0e2LxImaTDZRmCLeIs+2pMLSlWDSV4okm8Vk2
+# rObLBlgiI1x0PiMa1le9D832COWM4EJcH7pxM+9JfiHYMLlZbcfNEVgv6Dhhl4MG
+# mOTMTl7vQNNQaJ1coNVf9m5Bez1DV9Iu2Cgd8BHp1oLVCQCHjVv0Ifj48RIPi4SQ
+# khzalrnrf+L/BWRDhpLnxYasazdV5WfrMHurPuBvYUiLQNkU9SqKgRk9XrzDAfMe
+# gPbGybMr0kqtbE/A/cDcTVnvRuTZnhXSMIIFMDCCBBigAwIBAgIQBAkYG1/Vu2Z1
+# U0O1b5VQCDANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEVMBMGA1UEChMM
+# RGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQD
+# ExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwHhcNMTMxMDIyMTIwMDAwWhcN
+# MjgxMDIyMTIwMDAwWjByMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQg
+# SW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQuY29tMTEwLwYDVQQDEyhEaWdpQ2Vy
+# dCBTSEEyIEFzc3VyZWQgSUQgQ29kZSBTaWduaW5nIENBMIIBIjANBgkqhkiG9w0B
+# AQEFAAOCAQ8AMIIBCgKCAQEA+NOzHH8OEa9ndwfTCzFJGc/Q+0WZsTrbRPV/5aid
+# 2zLXcep2nQUut4/6kkPApfmJ1DcZ17aq8JyGpdglrA55KDp+6dFn08b7KSfH03sj
+# lOSRI5aQd4L5oYQjZhJUM1B0sSgmuyRpwsJS8hRniolF1C2ho+mILCCVrhxKhwjf
+# DPXiTWAYvqrEsq5wMWYzcT6scKKrzn/pfMuSoeU7MRzP6vIK5Fe7SrXpdOYr/mzL
+# fnQ5Ng2Q7+S1TqSp6moKq4TzrGdOtcT3jNEgJSPrCGQ+UpbB8g8S9MWOD8Gi6CxR
+# 93O8vYWxYoNzQYIH5DiLanMg0A9kczyen6Yzqf0Z3yWT0QIDAQABo4IBzTCCAckw
+# EgYDVR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwEwYDVR0lBAwwCgYI
+# KwYBBQUHAwMweQYIKwYBBQUHAQEEbTBrMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz
+# cC5kaWdpY2VydC5jb20wQwYIKwYBBQUHMAKGN2h0dHA6Ly9jYWNlcnRzLmRpZ2lj
+# ZXJ0LmNvbS9EaWdpQ2VydEFzc3VyZWRJRFJvb3RDQS5jcnQwgYEGA1UdHwR6MHgw
+# OqA4oDaGNGh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEFzc3VyZWRJ
+# RFJvb3RDQS5jcmwwOqA4oDaGNGh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9EaWdp
+# Q2VydEFzc3VyZWRJRFJvb3RDQS5jcmwwTwYDVR0gBEgwRjA4BgpghkgBhv1sAAIE
+# MCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LmRpZ2ljZXJ0LmNvbS9DUFMwCgYI
+# YIZIAYb9bAMwHQYDVR0OBBYEFFrEuXsqCqOl6nEDwGD5LfZldQ5YMB8GA1UdIwQY
+# MBaAFEXroq/0ksuCMS1Ri6enIZ3zbcgPMA0GCSqGSIb3DQEBCwUAA4IBAQA+7A1a
+# JLPzItEVyCx8JSl2qB1dHC06GsTvMGHXfgtg/cM9D8Svi/3vKt8gVTew4fbRknUP
+# UbRupY5a4l4kgU4QpO4/cY5jDhNLrddfRHnzNhQGivecRk5c/5CxGwcOkRX7uq+1
+# UcKNJK4kxscnKqEpKBo6cSgCPC6Ro8AlEeKcFEehemhor5unXCBc2XGxDI+7qPjF
+# Emifz0DLQESlE/DmZAwlCEIysjaKJAL+L3J+HNdJRZboWR3p+nRka7LrZkPas7CM
+# 1ekN3fYBIM6ZMWM9CBoYs4GbT8aTEAb8B4H6i9r5gkn3Ym6hU/oSlBiFLpKR6mhs
+# RDKyZqHnGKSaZFHvMIIGajCCBVKgAwIBAgIQAwGaAjr/WLFr1tXq5hfwZjANBgkq
+# hkiG9w0BAQUFADBiMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5j
+# MRkwFwYDVQQLExB3d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBB
+# c3N1cmVkIElEIENBLTEwHhcNMTQxMDIyMDAwMDAwWhcNMjQxMDIyMDAwMDAwWjBH
+# MQswCQYDVQQGEwJVUzERMA8GA1UEChMIRGlnaUNlcnQxJTAjBgNVBAMTHERpZ2lD
+# ZXJ0IFRpbWVzdGFtcCBSZXNwb25kZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
+# ggEKAoIBAQCjZF38fLPggjXg4PbGKuZJdTvMbuBTqZ8fZFnmfGt/a4ydVfiS457V
+# WmNbAklQ2YPOb2bu3cuF6V+l+dSHdIhEOxnJ5fWRn8YUOawk6qhLLJGJzF4o9GS2
+# ULf1ErNzlgpno75hn67z/RJ4dQ6mWxT9RSOOhkRVfRiGBYxVh3lIRvfKDo2n3k5f
+# 4qi2LVkCYYhhchhoubh87ubnNC8xd4EwH7s2AY3vJ+P3mvBMMWSN4+v6GYeofs/s
+# jAw2W3rBerh4x8kGLkYQyI3oBGDbvHN0+k7Y/qpA8bLOcEaD6dpAoVk62RUJV5lW
+# MJPzyWHM0AjMa+xiQpGsAsDvpPCJEY93AgMBAAGjggM1MIIDMTAOBgNVHQ8BAf8E
+# BAMCB4AwDAYDVR0TAQH/BAIwADAWBgNVHSUBAf8EDDAKBggrBgEFBQcDCDCCAb8G
+# A1UdIASCAbYwggGyMIIBoQYJYIZIAYb9bAcBMIIBkjAoBggrBgEFBQcCARYcaHR0
+# cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzCCAWQGCCsGAQUFBwICMIIBVh6CAVIA
+# QQBuAHkAIAB1AHMAZQAgAG8AZgAgAHQAaABpAHMAIABDAGUAcgB0AGkAZgBpAGMA
+# YQB0AGUAIABjAG8AbgBzAHQAaQB0AHUAdABlAHMAIABhAGMAYwBlAHAAdABhAG4A
+# YwBlACAAbwBmACAAdABoAGUAIABEAGkAZwBpAEMAZQByAHQAIABDAFAALwBDAFAA
+# UwAgAGEAbgBkACAAdABoAGUAIABSAGUAbAB5AGkAbgBnACAAUABhAHIAdAB5ACAA
+# QQBnAHIAZQBlAG0AZQBuAHQAIAB3AGgAaQBjAGgAIABsAGkAbQBpAHQAIABsAGkA
+# YQBiAGkAbABpAHQAeQAgAGEAbgBkACAAYQByAGUAIABpAG4AYwBvAHIAcABvAHIA
+# YQB0AGUAZAAgAGgAZQByAGUAaQBuACAAYgB5ACAAcgBlAGYAZQByAGUAbgBjAGUA
+# LjALBglghkgBhv1sAxUwHwYDVR0jBBgwFoAUFQASKxOYspkH7R7for5XDStnAs0w
+# HQYDVR0OBBYEFGFaTSS2STKdSip5GoNL9B6Jwcp9MH0GA1UdHwR2MHQwOKA2oDSG
+# Mmh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEFzc3VyZWRJRENBLTEu
+# Y3JsMDigNqA0hjJodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vRGlnaUNlcnRBc3N1
+# cmVkSURDQS0xLmNybDB3BggrBgEFBQcBAQRrMGkwJAYIKwYBBQUHMAGGGGh0dHA6
+# Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBBBggrBgEFBQcwAoY1aHR0cDovL2NhY2VydHMu
+# ZGlnaWNlcnQuY29tL0RpZ2lDZXJ0QXNzdXJlZElEQ0EtMS5jcnQwDQYJKoZIhvcN
+# AQEFBQADggEBAJ0lfhszTbImgVybhs4jIA+Ah+WI//+x1GosMe06FxlxF82pG7xa
+# FjkAneNshORaQPveBgGMN/qbsZ0kfv4gpFetW7easGAm6mlXIV00Lx9xsIOUGQVr
+# NZAQoHuXx/Y/5+IRQaa9YtnwJz04HShvOlIJ8OxwYtNiS7Dgc6aSwNOOMdgv420X
+# Ewbu5AO2FKvzj0OncZ0h3RTKFV2SQdr5D4HRmXQNJsQOfxu19aDxxncGKBXp2JPl
+# VRbwuwqrHNtcSCdmyKOLChzlldquxC5ZoGHd2vNtomHpigtt7BIYvfdVVEADkitr
+# wlHCCkivsNRu4PQUCjob4489yq9qjXvc2EQwggbNMIIFtaADAgECAhAG/fkDlgOt
+# 6gAK6z8nu7obMA0GCSqGSIb3DQEBBQUAMGUxCzAJBgNVBAYTAlVTMRUwEwYDVQQK
+# EwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xJDAiBgNV
+# BAMTG0RpZ2lDZXJ0IEFzc3VyZWQgSUQgUm9vdCBDQTAeFw0wNjExMTAwMDAwMDBa
+# Fw0yMTExMTAwMDAwMDBaMGIxCzAJBgNVBAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2Vy
+# dCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xITAfBgNVBAMTGERpZ2lD
+# ZXJ0IEFzc3VyZWQgSUQgQ0EtMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
+# ggEBAOiCLZn5ysJClaWAc0Bw0p5WVFypxNJBBo/JM/xNRZFcgZ/tLJz4FlnfnrUk
+# FcKYubR3SdyJxArar8tea+2tsHEx6886QAxGTZPsi3o2CAOrDDT+GEmC/sfHMUiA
+# fB6iD5IOUMnGh+s2P9gww/+m9/uizW9zI/6sVgWQ8DIhFonGcIj5BZd9o8dD3QLo
+# Oz3tsUGj7T++25VIxO4es/K8DCuZ0MZdEkKB4YNugnM/JksUkK5ZZgrEjb7Szgau
+# rYRvSISbT0C58Uzyr5j79s5AXVz2qPEvr+yJIvJrGGWxwXOt1/HYzx4KdFxCuGh+
+# t9V3CidWfA9ipD8yFGCV/QcEogkCAwEAAaOCA3owggN2MA4GA1UdDwEB/wQEAwIB
+# hjA7BgNVHSUENDAyBggrBgEFBQcDAQYIKwYBBQUHAwIGCCsGAQUFBwMDBggrBgEF
+# BQcDBAYIKwYBBQUHAwgwggHSBgNVHSAEggHJMIIBxTCCAbQGCmCGSAGG/WwAAQQw
+# ggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3LmRpZ2ljZXJ0LmNvbS9zc2wtY3Bz
+# LXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUHAgIwggFWHoIBUgBBAG4AeQAgAHUA
+# cwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQByAHQAaQBmAGkAYwBhAHQAZQAgAGMA
+# bwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBjAGUAcAB0AGEAbgBjAGUAIABvAGYA
+# IAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAgAEMAUAAvAEMAUABTACAAYQBuAGQA
+# IAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQAGEAcgB0AHkAIABBAGcAcgBlAGUA
+# bQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBtAGkAdAAgAGwAaQBhAGIAaQBsAGkA
+# dAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBjAG8AcgBwAG8AcgBhAHQAZQBkACAA
+# aABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBlAHIAZQBuAGMAZQAuMAsGCWCGSAGG
+# /WwDFTASBgNVHRMBAf8ECDAGAQH/AgEAMHkGCCsGAQUFBwEBBG0wazAkBggrBgEF
+# BQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQuY29tMEMGCCsGAQUFBzAChjdodHRw
+# Oi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGlnaUNlcnRBc3N1cmVkSURSb290Q0Eu
+# Y3J0MIGBBgNVHR8EejB4MDqgOKA2hjRodHRwOi8vY3JsMy5kaWdpY2VydC5jb20v
+# RGlnaUNlcnRBc3N1cmVkSURSb290Q0EuY3JsMDqgOKA2hjRodHRwOi8vY3JsNC5k
+# aWdpY2VydC5jb20vRGlnaUNlcnRBc3N1cmVkSURSb290Q0EuY3JsMB0GA1UdDgQW
+# BBQVABIrE5iymQftHt+ivlcNK2cCzTAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun
+# pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEARlA+ybcoJKc4HbZbKa9Sz1LpMUer
+# Vlx71Q0LQbPv7HUfdDjyslxhopyVw1Dkgrkj0bo6hnKtOHisdV0XFzRyR4WUVtHr
+# uzaEd8wkpfMEGVWp5+Pnq2LN+4stkMLA0rWUvV5PsQXSDj0aqRRbpoYxYqioM+Sb
+# OafE9c4deHaUJXPkKqvPnHZL7V/CSxbkS3BMAIke/MV5vEwSV/5f4R68Al2o/vsH
+# OE8Nxl2RuQ9nRc3Wg+3nkg2NsWmMT/tZ4CMP0qquAHzunEIOz5HXJ7cW7g/DvXwK
+# oO4sCFWFIrjrGBpN/CohrUkxg0eVd3HcsRtLSxwQnHcUwZ1PL1qVCCkQJjGCBDsw
+# ggQ3AgEBMIGGMHIxCzAJBgNVBAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMx
+# GTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xMTAvBgNVBAMTKERpZ2lDZXJ0IFNI
+# QTIgQXNzdXJlZCBJRCBDb2RlIFNpZ25pbmcgQ0ECEA5mBKT7UNogWOLwtgBqursw
+# CQYFKw4DAhoFAKB4MBgGCisGAQQBgjcCAQwxCjAIoAKAAKECgAAwGQYJKoZIhvcN
+# AQkDMQwGCisGAQQBgjcCAQQwHAYKKwYBBAGCNwIBCzEOMAwGCisGAQQBgjcCARUw
+# IwYJKoZIhvcNAQkEMRYEFCQBfl/Xm3/R6yW/EO6kbSmkdowDMA0GCSqGSIb3DQEB
+# AQUABIIBADCbC3HqswOLfqwjX9+TM0hW9sG02WMHPbz0fFBTH5J/tck4wZECl9ct
+# DK0pUzHoJBY9EuBnH9OD46MiVCIYwYHQ9w/xiaypUNRbfXYEwSVL9EXCIcYkkqAN
+# pSpDrQJu0TzmGyvN1fSvYj/qahvIVKz/cxbzzQbYl4NqNXRfiD26Pa5JOdNABP8g
+# WL5Ruk/MPvMJE0dIW3em40hoanGKQhP0xgQ/BGJygumYrZsigENfhQkRVngH/aUP
+# f5k78VKL3DFoCMmneIxAfIwspTC37izb/AjlqDNUbqEmfBBIsbLgu6teZVIyPBI/
+# nktk5kwOOhzuyeQxLAcn0z+8ToF5frKhggIPMIICCwYJKoZIhvcNAQkGMYIB/DCC
+# AfgCAQEwdjBiMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkw
+# FwYDVQQLExB3d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBBc3N1
+# cmVkIElEIENBLTECEAMBmgI6/1ixa9bV6uYX8GYwCQYFKw4DAhoFAKBdMBgGCSqG
+# SIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE2MDMyMjIwMjM0
+# N1owIwYJKoZIhvcNAQkEMRYEFM6NeSjPd00j7j25copMrjENL7GXMA0GCSqGSIb3
+# DQEBAQUABIIBAHJbUlt2mxIX5hbiigRw3kIoug57G5sDYWQK8rcTjHUif6PAdEqj
+# 5c1UhxQHJxEasddUAqbEtCsG8qiz1lq76KKiwaWxffSRQ2JwjYEvnYQ2TK9rtnMs
+# zeYnQajrIUP44z7ysqoikB0bEgup0QVDScm4SSa1SmqQzHMsUX5rCygsM3PlpF5K
+# dH2u3eSK4zDhGiye6/SQkcddvsI2lLFRcxQIyfUD4+W9oFdXuYkKhNBGPLUlOH9V
+# DEDQG9zH6CAzvla/r1iYnX8RZ4rz7yacdrMBq5g92HAEcuXFTBQfaeAZSGQBhNSn
+# p1rVWgLb0T3a/5zlOtZvp+bLyDRbms+w8BY=
+# SIG # End signature block
diff --git a/windowsRuntimeInstaller/CreateInstallerRT.sh b/windowsRuntimeInstaller/CreateInstallerRT.sh
new file mode 100644
index 000000000..f272d98e2
--- /dev/null
+++ b/windowsRuntimeInstaller/CreateInstallerRT.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+# Bash script to create the Vulkan Runtime Installer.
+
+# Create the uninstaller
+makensis /DUNINSTALLER InstallerRT.nsi
+$TEMP/tempinstaller.exe
+mv $TEMP/UninstallVulkanRT.exe .
+
+# Sign the Uninstaller
+# Replace SIGNFILE with your command and necessary args for
+# signing an executable. If you don't need to sign the uninstaller,
+# you can comment out this line.
+./SIGNFILE ./UninstallVulkanRT.exe
+
+# Create the RT Installer, using the signed uninstaller
+makensis InstallerRT.nsi
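The two-pass flow in CreateInstallerRT.sh (build a throwaway installer whose only job is to write out the uninstaller, sign it, then rebuild with the signed uninstaller embedded) can be sketched generically. `makensis` and `sign_file` below are stand-in functions, not the real NSIS compiler or signing tool, so the sketch runs anywhere:

```shell
#!/bin/bash
# Sketch of the two-pass NSIS build used above. Pass 1 (/DUNINSTALLER)
# emits a temporary installer that writes out the uninstaller and quits;
# pass 2 packages the signed uninstaller into the real installer.
# makensis and sign_file are stand-ins, not the real tools.
set -e
makensis() { echo "makensis $*"; }
sign_file() { echo "signing $*"; }

makensis /DUNINSTALLER InstallerRT.nsi   # pass 1: builds tempinstaller.exe
sign_file ./UninstallVulkanRT.exe        # sign the extracted uninstaller
makensis InstallerRT.nsi                 # pass 2: embeds the signed exe
```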
diff --git a/windowsRuntimeInstaller/InstallerRT.nsi b/windowsRuntimeInstaller/InstallerRT.nsi
index cd46271c7..a0291a76e 100644
--- a/windowsRuntimeInstaller/InstallerRT.nsi
+++ b/windowsRuntimeInstaller/InstallerRT.nsi
@@ -63,8 +63,19 @@ Icon ${ICOFILE}
UninstallIcon ${ICOFILE}
WindowIcon off
-# Define name of installer
-OutFile "VulkanRT-${PRODUCTVERSION}-Installer.exe"
+# If /DUNINSTALLER was specified, create the uninstaller
+!ifdef UNINSTALLER
+ !echo "Creating RT uninstaller...."
+ OutFile "$%TEMP%\tempinstaller.exe"
+ SetCompress off
+!else
+ !echo "Creating RT installer...."
+
+ # Define name of installer
+ OutFile "VulkanRT-${PRODUCTVERSION}-Installer.exe"
+ SetCompressor /SOLID lzma
+
+!endif
# Define default installation directory
InstallDir "$PROGRAMFILES\${PRODUCTNAME}\${PRODUCTVERSION}"
@@ -75,7 +86,9 @@ Var FileVersion
# Directory RT was installed to.
# The uninstaller can't just use $INSTDIR because it is set to the
# directory the uninstaller exe file is located in.
+!ifdef UNINSTALLER
Var IDir
+!endif
# Install count
Var IC
@@ -191,6 +204,12 @@ RequestExecutionLevel admin
Function .onInit
+!ifdef UNINSTALLER
+ ; Write out the uninstaller and quit
+ WriteUninstaller "$%TEMP%\Uninstall${PRODUCTNAME}.exe"
+ Quit
+!endif
+
FunctionEnd
AddBrandingImage left 150
@@ -226,6 +245,20 @@ Section
${Endif}
+ # Create our temp directory, with minimal permissions
+ SetOutPath "$TEMP\VulkanRT"
+ AccessControl::DisableFileInheritance $TEMP\VulkanRT
+ AccessControl::SetFileOwner $TEMP\VulkanRT "Administrators"
+ AccessControl::ClearOnFile $TEMP\VulkanRT "Administrators" "FullAccess"
+ AccessControl::SetOnFile $TEMP\VulkanRT "SYSTEM" "FullAccess"
+ AccessControl::GrantOnFile $TEMP\VulkanRT "Everyone" "ListDirectory"
+ AccessControl::GrantOnFile $TEMP\VulkanRT "Everyone" "GenericExecute"
+ AccessControl::GrantOnFile $TEMP\VulkanRT "Everyone" "GenericRead"
+ AccessControl::GrantOnFile $TEMP\VulkanRT "Everyone" "ReadAttributes"
+ StrCpy $1 10
+ Call CheckForError
+
+ # Check the registry to see if we are already installed
ReadRegStr $0 HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}" "InstallDir"
# If the registry entry isn't there, it will throw an error as well as return a blank value. So, clear the errors.
@@ -249,16 +282,28 @@ Section
${EndIf}
SetOutPath "$INSTDIR"
+ AccessControl::DisableFileInheritance $INSTDIR
+ AccessControl::SetFileOwner $INSTDIR "Administrators"
+ AccessControl::ClearOnFile $INSTDIR "Administrators" "FullAccess"
+ AccessControl::SetOnFile $INSTDIR "SYSTEM" "FullAccess"
+ AccessControl::GrantOnFile $INSTDIR "Everyone" "ListDirectory"
+ AccessControl::GrantOnFile $INSTDIR "Everyone" "GenericExecute"
+ AccessControl::GrantOnFile $INSTDIR "Everyone" "GenericRead"
+ AccessControl::GrantOnFile $INSTDIR "Everyone" "ReadAttributes"
File ${ICOFILE}
File VULKANRT_LICENSE.RTF
File LICENSE.txt
File ConfigLayersAndVulkanDLL.ps1
- StrCpy $1 10
+ StrCpy $1 15
Call CheckForError
- # Create the uninstaller
- WriteUninstaller "$INSTDIR\Uninstall${PRODUCTNAME}.exe"
- StrCpy $1 11
+ # Add the signed uninstaller
+ !ifndef UNINSTALLER
+ SetOutPath $INSTDIR
+ File "Uninstall${PRODUCTNAME}.exe"
+ !endif
+
+ StrCpy $1 20
Call CheckForError
# Reference count the number of times we have been installed.
@@ -326,13 +371,23 @@ Section
WriteRegDword HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}" "SystemComponent" 0
${EndIf}
- StrCpy $1 12
+ StrCpy $1 25
Call CheckForError
# Set up version number for file names
${StrRep} $0 ${VERSION_BUILDNO} "." "-"
StrCpy $FileVersion ${VERSION_ABI_MAJOR}-${VERSION_API_MAJOR}-${VERSION_MINOR}-${VERSION_PATCH}-$0
+ # Remove vulkaninfo from Start Menu
+ SetShellVarContext all
+ Delete "$SMPROGRAMS\Vulkan\vulkaninfo32.lnk"
+ Delete "$SMPROGRAMS\Vulkan\vulkaninfo.lnk"
+ ClearErrors
+
+ # Create Vulkan in the Start Menu
+ CreateDirectory "$SMPROGRAMS\Vulkan"
+ ClearErrors
+
# If running on a 64-bit OS machine
${If} ${RunningX64}
@@ -341,14 +396,14 @@ Section
SetOutPath $WINDIR\SysWow64
File /oname=vulkan-$FileVersion.dll ..\build32\loader\Release\vulkan-${VERSION_ABI_MAJOR}.dll
File /oname=vulkaninfo-$FileVersion.exe ..\build32\demos\Release\vulkaninfo.exe
- StrCpy $1 13
+ StrCpy $1 30
Call CheckForError
# 64-bit DLLs/EXEs
##########################################
SetOutPath $WINDIR\System32
File /oname=vulkan-$FileVersion.dll ..\build\loader\Release\vulkan-${VERSION_ABI_MAJOR}.dll
- StrCpy $1 14
+ StrCpy $1 35
Call CheckForError
# vulkaninfo.exe
@@ -356,12 +411,7 @@ Section
SetOutPath "$INSTDIR"
File ..\build\demos\Release\vulkaninfo.exe
File /oname=vulkaninfo32.exe ..\build32\demos\Release\vulkaninfo.exe
- SetShellVarContext all
- CreateDirectory "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}"
- CreateDirectory "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos"
- CreateShortCut "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos\vulkaninfo32.lnk" "$INSTDIR\vulkaninfo32.exe"
- CreateShortCut "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos\vulkaninfo.lnk" "$INSTDIR\vulkaninfo.exe"
- StrCpy $1 15
+ StrCpy $1 40
Call CheckForError
# Run the ConfigLayersAndVulkanDLL.ps1 script to copy the most recent version of
@@ -372,7 +422,7 @@ Section
${If} $PsErr != 0
SetErrors
${EndIf}
- StrCpy $1 16
+ StrCpy $1 45
Call CheckForError
# Else, running on a 32-bit OS machine
@@ -382,18 +432,14 @@ Section
##########################################
SetOutPath $WINDIR\System32
File /oname=vulkan-$FileVersion.dll ..\build32\loader\Release\vulkan-${VERSION_ABI_MAJOR}.dll
- StrCpy $1 17
+ StrCpy $1 50
Call CheckForError
# vulkaninfo.exe
File /oname=vulkaninfo-$FileVersion.exe ..\build32\demos\Release\vulkaninfo.exe
SetOutPath "$INSTDIR"
File ..\build32\demos\Release\vulkaninfo.exe
- SetShellVarContext all
- CreateDirectory "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}"
- CreateDirectory "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos"
- CreateShortCut "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos\vulkaninfo.lnk" "$INSTDIR\vulkaninfo.exe"
- StrCpy $1 18
+ StrCpy $1 55
Call CheckForError
# Run the ConfigLayersAndVulkanDLL.ps1 script to copy the most recent version of
@@ -404,7 +450,7 @@ Section
${If} $PsErr != 0
SetErrors
${EndIf}
- StrCpy $1 19
+ StrCpy $1 60
Call CheckForError
${Endif}
@@ -413,49 +459,49 @@ Section
# by the uninstaller when it needs to be run again during uninstall.
Delete ConfigLayersAndVulkanDLL.ps1
+ # Add vulkaninfo to Start Menu
+ SetShellVarContext all
+ IfFileExists $WINDIR\System32\vulkaninfo.exe 0 +2
+ CreateShortCut "$SMPROGRAMS\Vulkan\vulkaninfo.lnk" "$WINDIR\System32\vulkaninfo.exe"
+ IfFileExists $WINDIR\SysWow64\vulkaninfo.exe 0 +2
+ CreateShortCut "$SMPROGRAMS\Vulkan\vulkaninfo32.lnk" "$WINDIR\SysWow64\vulkaninfo.exe"
+
# Possibly install MSVC 2013 redistributables
+ ClearErrors
${If} ${RunningX64}
-
- # If running on a 64-bit OS machine, we need the 64-bit Visual Studio re-distributable. Install it if it's not already present.
- ReadRegDword $1 HKLM "SOFTWARE\Microsoft\DevDiv\vc\Servicing\12.0\RuntimeMinimum" "Install"
- ClearErrors
- IntCmp $1 1 RedistributablesInstalled6464 InstallRedistributables6464 InstallRedistributables6464
- InstallRedistributables6464:
- SetOutPath "$TEMP"
-
- File vcredist_x64.exe
- ExecWait '"$TEMP\vcredist_x64.exe" /quiet /norestart'
-
- RedistributablesInstalled6464:
-
- # We also need the 32-bit Visual Studio re-distributable. Install it as well if it's not present
ReadRegDword $1 HKLM "SOFTWARE\WOW6432Node\Microsoft\DevDiv\vc\Servicing\12.0\RuntimeMinimum" "Install"
- ClearErrors
- IntCmp $1 1 RedistributablesInstalled InstallRedistributables InstallRedistributables
-
+ ${If} ${Errors}
+ StrCpy $1 0
+ ClearErrors
+ ${Endif}
${Else}
-
- # Otherwise, we're running on a 32-bit OS machine, we need to install the 32-bit Visual Studio re-distributable if it's not present.
- ReadRegDword $1 HKLM "SOFTWARE\Microsoft\DevDiv\vc\Servicing\12.0\RuntimeMinimum" "Install"
- ClearErrors
- IntCmp $1 1 RedistributablesInstalled InstallRedistributables InstallRedistributables
-
+ StrCpy $1 1
${Endif}
-
- InstallRedistributables:
- SetOutPath "$TEMP"
-
- File vcredist_x86.exe
- ExecWait '"$TEMP\vcredist_x86.exe" /quiet /norestart'
-
- RedistributablesInstalled:
-
- StrCpy $1 20
+ ReadRegDword $2 HKLM "SOFTWARE\Microsoft\DevDiv\vc\Servicing\12.0\RuntimeMinimum" "Install"
+ ${If} ${Errors}
+ StrCpy $2 0
+ ClearErrors
+ ${Endif}
+ IntOp $3 $1 + $2
+ ${If} $3 <= 1
+ # If either x86 or x64 redistributables are not present, install redistributables.
+ # We install both redistributables because we have found that the x86 redist
+ # will uninstall the x64 redist if the x64 redistributable is an old version. Amazing, isn't it?
+ SetOutPath "$TEMP\VulkanRT"
+ ${If} ${RunningX64}
+ File vcredist_x64.exe
+ ExecWait '"$TEMP\VulkanRT\vcredist_x64.exe" /quiet /norestart'
+ ${Endif}
+ File vcredist_x86.exe
+ ExecWait '"$TEMP\VulkanRT\vcredist_x86.exe" /quiet /norestart'
+ ${Endif}
+ StrCpy $1 65
Call CheckForError
SectionEnd
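The redistributable check added above treats a failed registry read as 0, forces the first flag to 1 on a 32-bit OS (where only the native x86 flag matters), and installs both redistributables unless both "Install" flags are already 1, i.e. unless the sum exceeds 1. That decision rule can be sketched outside NSIS as a small shell function (a hypothetical mirror of the logic, not part of the installer; an empty argument stands in for a missing registry value):

```shell
#!/bin/bash
# Sketch of the NSIS redistributable check: $1/$2 are the two registry
# "Install" flags (empty string = read failed, counted as 0), $3 is 1
# on a 64-bit OS. Prints "yes" when the redists should be installed.
needs_redist_install() {
    local f1=${1:-0} f2=${2:-0} x64=${3:-1}
    [ "$x64" -eq 0 ] && f1=1        # 32-bit OS: only the native flag matters
    if [ $((f1 + f2)) -le 1 ]; then # at least one redist absent
        echo yes
    else
        echo no
    fi
}
```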
# Uninstaller section start
+!ifdef UNINSTALLER
Section "uninstall"
# If running on a 64-bit OS machine, disable registry re-direct since we're running as a 32-bit executable.
@@ -471,7 +517,7 @@ Section "uninstall"
ReadRegStr $0 HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}" "InstallDir"
StrCpy $IDir $0
- StrCpy $1 21
+ StrCpy $1 70
Call un.CheckForError
SetOutPath "$IDir"
@@ -490,12 +536,15 @@ Section "uninstall"
IntOp $1 $IC - 1
Rename "$IDir\Instance_$IC" "$IDir\Instance_$1"
${ElseIf} $IC = 2
- Delete /REBOOTOK "$IDir\Instance_$IC\UninstallVulkanRT.exe"
+ Delete /REBOOTOK "$IDir\Instance_$IC\Uninstall${PRODUCTNAME}.exe"
Rmdir /REBOOTOK "$IDir\Instance_$IC"
${Endif}
# Modify registry for Programs and Features
- DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}-$IC"
+
+ ${If} $IC > 1
+ DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}-$IC"
+ ${EndIf}
${If} $IC > 2
IntOp $IC $IC - 1
WriteRegDword HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}-$IC" "SystemComponent" 0
@@ -507,72 +556,79 @@ Section "uninstall"
DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCTNAME}${PRODUCTVERSION}"
${EndIf}
- # Ref count is in $1. If it is zero, uninstall everything
- ${If} $IC <= 0
- # Install the ConfigLayersAndVulkanDLL.ps1 so we can run it.
- # It will be deleted later when we remove the install directory.
- File ConfigLayersAndVulkanDLL.ps1
+ # Install the ConfigLayersAndVulkanDLL.ps1 so we can run it.
+ # It will be deleted later when we remove the install directory.
+ File ConfigLayersAndVulkanDLL.ps1
+
+ # If running on a 64-bit OS machine
+ ${If} ${RunningX64}
- # If running on a 64-bit OS machine
- ${If} ${RunningX64}
+ # Delete vulkaninfo.exe in C:\Windows\System32 and C:\Windows\SysWOW64
+ Delete /REBOOTOK $WINDIR\SysWow64\vulkaninfo.exe
+ Delete /REBOOTOK "$WINDIR\SysWow64\vulkaninfo-$FileVersion.exe"
+ Delete /REBOOTOK $WINDIR\System32\vulkaninfo.exe
+ Delete /REBOOTOK "$WINDIR\System32\vulkaninfo-$FileVersion.exe"
- # Delete vulkaninfo.exe in C:\Windows\System32 and C:\Windows\SysWOW64
- Delete /REBOOTOK $WINDIR\SysWow64\vulkaninfo.exe
- Delete /REBOOTOK "$WINDIR\SysWow64\vulkaninfo-$FileVersion.exe"
- Delete /REBOOTOK $WINDIR\System32\vulkaninfo.exe
- Delete /REBOOTOK "$WINDIR\System32\vulkaninfo-$FileVersion.exe"
-
- # Delete vullkan dll files: vulkan-<majorabi>.dll and vulkan-<majorabi>-<major>-<minor>-<patch>-<buildno>.dll
- Delete /REBOOTOK $WINDIR\SysWow64\vulkan-${VERSION_ABI_MAJOR}.dll
- Delete /REBOOTOK $WINDIR\SysWow64\vulkan-$FileVersion.dll
- Delete /REBOOTOK $WINDIR\System32\vulkan-${VERSION_ABI_MAJOR}.dll
- Delete /REBOOTOK $WINDIR\System32\vulkan-$FileVersion.dll
-
- # Run the ConfigLayersAndVulkanDLL.ps1 script to:
- # Copy the most recent version of vulkan-<abimajor>-*.dll to vulkan-<abimajor>.dll
- # Copy the most recent version of vulkaninfo-<abimajor>-*.exe to vulkaninfo.exe
- # Set up layer registry entries to use layers from the corresponding SDK
- nsExec::ExecToStack 'powershell -NoLogo -NonInteractive -WindowStyle Hidden -inputformat none -ExecutionPolicy RemoteSigned -File "$IDir\ConfigLayersAndVulkanDLL.ps1" ${VERSION_ABI_MAJOR} 64'
-
- # Else, running on a 32-bit OS machine
- ${Else}
-
- # Delete vulkaninfo.exe in C:\Windows\System32
- Delete /REBOOTOK $WINDIR\System32\vulkaninfo.exe
- Delete /REBOOTOK "$WINDIR\System32\vulkaninfo-$FileVersion.exe"
-
- # Delete vullkan dll files: vulkan-<majorabi>.dll and vulkan-<majorabi>-<major>-<minor>-<patch>-<buildno>.dll
- Delete /REBOOTOK $WINDIR\System32\vulkan-${VERSION_ABI_MAJOR}.dll
- Delete /REBOOTOK $WINDIR\System32\vulkan-$FileVersion.dll
-
- # Run the ConfigLayersAndVulkanDLL.ps1 script to:
- # Copy the most recent version of vulkan-<abimajor>-*.dll to vulkan-<abimajor>.dll
- # Copy the most recent version of vulkaninfo-<abimajor>-*.exe to vulkaninfo.exe
- # Set up layer registry entries to use layers from the corresponding SDK
- nsExec::ExecToStack 'powershell -NoLogo -NonInteractive -WindowStyle Hidden -inputformat none -ExecutionPolicy RemoteSigned -File "$IDir\ConfigLayersAndVulkanDLL.ps1" ${VERSION_ABI_MAJOR} 32'
+ # Delete vulkan dll files: vulkan-<majorabi>.dll and vulkan-<majorabi>-<major>-<minor>-<patch>-<buildno>.dll
+ Delete /REBOOTOK $WINDIR\SysWow64\vulkan-${VERSION_ABI_MAJOR}.dll
+ Delete /REBOOTOK $WINDIR\SysWow64\vulkan-$FileVersion.dll
+ Delete /REBOOTOK $WINDIR\System32\vulkan-${VERSION_ABI_MAJOR}.dll
+ Delete /REBOOTOK $WINDIR\System32\vulkan-$FileVersion.dll
- ${EndIf}
+ # Run the ConfigLayersAndVulkanDLL.ps1 script to:
+ # Copy the most recent version of vulkan-<abimajor>-*.dll to vulkan-<abimajor>.dll
+ # Copy the most recent version of vulkaninfo-<abimajor>-*.exe to vulkaninfo.exe
+ # Set up layer registry entries to use layers from the corresponding SDK
+ nsExec::ExecToStack 'powershell -NoLogo -NonInteractive -WindowStyle Hidden -inputformat none -ExecutionPolicy RemoteSigned -File "$IDir\ConfigLayersAndVulkanDLL.ps1" ${VERSION_ABI_MAJOR} 64'
+
+ # Else, running on a 32-bit OS machine
+ ${Else}
+
+ # Delete vulkaninfo.exe in C:\Windows\System32
+ Delete /REBOOTOK $WINDIR\System32\vulkaninfo.exe
+ Delete /REBOOTOK "$WINDIR\System32\vulkaninfo-$FileVersion.exe"
+
+ # Delete vulkan dll files: vulkan-<majorabi>.dll and vulkan-<majorabi>-<major>-<minor>-<patch>-<buildno>.dll
+ Delete /REBOOTOK $WINDIR\System32\vulkan-${VERSION_ABI_MAJOR}.dll
+ Delete /REBOOTOK $WINDIR\System32\vulkan-$FileVersion.dll
+
+ # Run the ConfigLayersAndVulkanDLL.ps1 script to:
+ # Copy the most recent version of vulkan-<abimajor>-*.dll to vulkan-<abimajor>.dll
+ # Copy the most recent version of vulkaninfo-<abimajor>-*.exe to vulkaninfo.exe
+ # Set up layer registry entries to use layers from the corresponding SDK
+ nsExec::ExecToStack 'powershell -NoLogo -NonInteractive -WindowStyle Hidden -inputformat none -ExecutionPolicy RemoteSigned -File "$IDir\ConfigLayersAndVulkanDLL.ps1" ${VERSION_ABI_MAJOR} 32'
+
+ ${EndIf}
+
+ # If Ref Count is zero, uninstall everything
+ ${If} $IC <= 0
# Delete vulkaninfo from start menu.
- # Delete vulkan start menu if the vulkan start menu is empty
SetShellVarContext all
- Delete "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos\vulkaninfo.lnk"
+ Delete "$SMPROGRAMS\Vulkan\vulkaninfo.lnk"
# If running on a 64-bit OS machine
${If} ${RunningX64}
- Delete "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos\vulkaninfo32.lnk"
+ Delete "$SMPROGRAMS\Vulkan\vulkaninfo32.lnk"
${EndIf}
- StrCpy $0 "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}\Demos"
- Call un.DeleteDirIfEmpty
- StrCpy $0 "$SMPROGRAMS\Vulkan ${PRODUCTVERSION}"
+ # Possibly add vulkaninfo to Start Menu
+ SetShellVarContext all
+ IfFileExists $WINDIR\System32\vulkaninfo.exe 0 +2
+ CreateShortCut "$SMPROGRAMS\Vulkan\vulkaninfo.lnk" "$WINDIR\System32\vulkaninfo.exe"
+ IfFileExists $WINDIR\SysWow64\vulkaninfo.exe 0 +2
+ CreateShortCut "$SMPROGRAMS\Vulkan\vulkaninfo32.lnk" "$WINDIR\SysWow64\vulkaninfo.exe"
+
+ # Possibly delete vulkan Start Menu
+ StrCpy $0 "$SMPROGRAMS\Vulkan"
Call un.DeleteDirIfEmpty
+ ClearErrors
# Remove files in install dir
Delete /REBOOTOK "$IDir\VULKANRT_LICENSE.rtf"
Delete /REBOOTOK "$IDir\LICENSE.txt"
- Delete /REBOOTOK "$IDir\UninstallVulkanRT.exe"
+ Delete /REBOOTOK "$IDir\Uninstall${PRODUCTNAME}.exe"
Delete /REBOOTOK "$IDir\V.ico"
Delete /REBOOTOK "$IDir\ConfigLayersAndVulkanDLL.ps1"
Delete /REBOOTOK "$IDir\vulkaninfo.exe"
@@ -582,7 +638,7 @@ Section "uninstall"
Delete /REBOOTOK "$IDir\vulkaninfo32.exe"
${EndIf}
- StrCpy $1 22
+ StrCpy $1 75
Call un.CheckForError
# Need to do a SetOutPath to something outside of install dir,
@@ -590,9 +646,11 @@ Section "uninstall"
SetOutPath "$TEMP"
# Remove install directories
- Rmdir /REBOOTOK "$IDir"
+ StrCpy $0 "$IDir"
+ Call un.DeleteDirIfEmpty
StrCpy $0 "$PROGRAMFILES\${PRODUCTNAME}"
Call un.DeleteDirIfEmpty
+ ClearErrors
# If any of the remove commands failed, request a reboot
IfRebootFlag 0 noreboot
@@ -608,10 +666,15 @@ Section "uninstall"
${Endif}
- StrCpy $1 23
+ StrCpy $1 80
Call un.CheckForError
+ # Remove temp dir
+ SetOutPath "$TEMP"
+ RmDir /R "$TEMP\VulkanRT"
+
SectionEnd
+!endif
Function brandimage
SetOutPath "$TEMP"
@@ -652,15 +715,15 @@ Function CheckForError
# Copy the uninstaller to a temp folder of our own creation so we can completely
# delete the old contents.
- SetOutPath "$TEMP\tempun"
- CopyFiles "$INSTDIR\Uninstall${PRODUCTNAME}.exe" "$TEMP\tempun"
+ SetOutPath "$TEMP\VulkanRT"
+ CopyFiles "$INSTDIR\Uninstall${PRODUCTNAME}.exe" "$TEMP\VulkanRT"
# Now uninstall using the version in the temporary folder.
- ExecWait '"$TEMP\tempun\Uninstall${PRODUCTNAME}.exe" /S _?=$INSTDIR'
+ ExecWait '"$TEMP\VulkanRT\Uninstall${PRODUCTNAME}.exe" /S _?=$INSTDIR'
# Delete the copy of the uninstaller we ran
- Delete /REBOOTOK "$TEMP\tempun\Uninstall${PRODUCTNAME}.exe"
- RmDir /R /REBOOTOK "$TEMP\tempun"
+ Delete /REBOOTOK "$TEMP\VulkanRT\Uninstall${PRODUCTNAME}.exe"
+ RmDir /R /REBOOTOK "$TEMP\VulkanRT"
# Set an error message to output
SetErrorLevel $1
diff --git a/windowsRuntimeInstaller/README.txt b/windowsRuntimeInstaller/README.txt
index 4c7ddf5e7..73f82541a 100644
--- a/windowsRuntimeInstaller/README.txt
+++ b/windowsRuntimeInstaller/README.txt
@@ -1,14 +1,17 @@
This folder contains the files required for building the Windows Vulkan
Runtime Installer Package.
-To build the Installer:
+To build the Vulkan Runtime Installer:
1. Install Nullsoft Install System version 3.0b1 or greater. (Available
from http://nsis.sourceforge.net/Download.)
- 2. Build Vulkan LoaderAndTools as described in ../BUILD.md.
+ 2. Install the NSIS AccessControl plug-in. (Available from
+ http://nsis.sourceforge.net/AccessControl_plug-in.)
- 3. Edit the InstallerRT.nsi file in this folder and modify the following
+ 3. Build Vulkan-LoaderAndValidationLayers as described in ../BUILD.md.
+
+ 4. Edit the InstallerRT.nsi file in this folder and modify the following
lines to match the version of the Windows Vulkan Runtime you wish to
build:
@@ -19,10 +22,14 @@ To build the Installer:
!define VERSION_BUILDNO
!define PUBLISHER
- 4. Right click on the InstallerRT.nsi file and select "Compile NSIS Script".
- The Windows Vulkan Runtime Installer package file will be created in
- this folder. The name of the installer file is
- VulkanRT-<version>-Installer.exe.
+ 5. Edit the CreateInstallerRT.sh file and replace SIGNFILE with your
+ command and necessary args for signing an executable. If you don't
+ wish to sign the uninstaller, you can comment out that line.
+
+ 6. Run the CreateInstallerRT.sh script from a Cygwin bash command prompt.
+ The Cygwin bash shell must be running as Administrator. The Windows
+ Vulkan Runtime Installer package file will be created in this folder.
+ The name of the installer file is VulkanRT-<version>-Installer.exe.
Some notes on the behavior of the Windows Vulkan Runtime Installer:
@@ -81,20 +88,19 @@ Some notes on the behavior of the Windows Vulkan Runtime Installer:
C:\Windows\SYSWOW64 on 64-bit Windows systems to set up the
32-bit loader.
- o The Vulkan Runtime Installer returns the following exit codes:
- 0 - Success
- 10 - Failure
- If the Installer returns an error code of 10, the Installer
- will have attempted to uninstall whatever it installed
- before it detected an error.
-
- o The Vulkan Runtime Uninstaller returns the following exit codes:
- 0 - Success
- 3 - Success, reboot required
- 10 - Failure
- If the Uninstaller returns an error code of 10, it will have
- simply exited when the failure was detected and will
- not have attempted to do further uninstall work.
+ o The Vulkan Runtime Installer returns an exit code of 0-9
+ to indicate success. All other exit codes indicate failure.
+ If the Installer returns a failure exit code, the Installer
+ will have attempted to uninstall whatever it installed before
+ it detected an error.
+
+ o The Vulkan Runtime Uninstaller returns an exit code of 0-9
+ to indicate success. An exit code of 3 indicates success, but
+ a reboot is required to complete the uninstall. All other
+ exit codes indicate failure. If the Uninstaller returns a
+ failure exit code, it will have simply exited when the failure
+ was detected and will not have attempted to do further uninstall
+ work.
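The exit-code convention above (0-9 means success, with 3 from the uninstaller meaning success but reboot required, and anything else meaning failure) could be consumed by a caller roughly as follows. This is a hypothetical wrapper for illustration; `classify_exit` is not part of the runtime installer:

```shell
#!/bin/bash
# Hypothetical helper mapping the documented installer/uninstaller exit
# codes to an outcome string: 0-9 success, 3 success-but-reboot, else failure.
classify_exit() {
    local code=$1
    if [ "$code" -ge 0 ] && [ "$code" -le 9 ]; then
        if [ "$code" -eq 3 ]; then
            echo "success-reboot-required"
        else
            echo "success"
        fi
    else
        echo "failure"
    fi
}
```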
o The ProductVersion of the installer executable (right click on
the executable, Properties, then the Details tab) can be used