Compare commits

..

52 Commits

Author SHA1 Message Date
Vitaly Provodin
8b739ff937 update exclude list on results of 21.0.4_b620.4 test runs 2024-10-07 08:12:19 +04:00
Nikita Tsarev
1e1115a28e JBR-7524: Workaround for showing window tiling actions when hovering over the maximize button on macOS 2024-10-04 11:45:31 +02:00
Maxim Kartashev
3ede03b58a JBR-7504 Use accurate event serial number with the clipboard 2024-10-04 12:59:19 +04:00
Maxim Kartashev
231126c134 JBR-7504 WLToolkit - Middle click paste doesn't work properly when pasting to other applications 2024-10-04 12:58:38 +04:00
Nikita Tsarev
cb2eb9bb7f JBR-7672: Only abort key repeat when the key that is being repeated is released [WLToolkit] 2024-09-30 10:44:59 +02:00
Nikita Tsarev
022221b0bb JBR-7662: Fix key repeat manager sometimes not cancelling properly [WLToolkit] 2024-09-30 10:41:58 +02:00
Vitaly Provodin
746032ad3e JBR-7511 migrate build platforms to OL8 for the release 2024.3 2024-09-28 05:28:12 +07:00
Vitaly Provodin
6826f09811 JBR-7511 migrate build platforms to OL8
- remove the Vulkan part that was causing builds to fail
- modify scripts for building images from Oracle Linux 8
- update jb/build/VerifyDependencies.java to check libraries have no dependency on symbols from glibc version higher than 2.28
- rename Ubuntu2004 docker files
- upgrade wayland up to wayland-devel-1.21.0-1
2024-09-28 02:26:04 +04:00
bourgesl
5157205ed8 JBR-7616: fixed typo (str) 2024-09-27 16:17:22 +02:00
Nikita Tsarev
1ccb174ecb JBR-7675: Respect disabling key repeat [WLToolkit] 2024-09-27 15:02:50 +02:00
bourgesl
226d6f982c JBR-7616: added ThreadUtilities.lwc_plog(env, formatMsg, ...) to use LWCToolkit's PlatformLogger instead of NSlog (only as fallback now) used by MTLRenderQueue, updated MTLUtils to share mtlDstTypeToStr(op)
(cherry picked from commit 6e00460b0009817593d04773516559d1ec5a9292)
2024-09-24 12:35:41 +02:00
Maxim Kartashev
5099233a6d JBR-7651 runtime/cds/appcds/complexURI/ComplexURITest.java throws java.nio.file.AccessDeniedException 2024-09-24 12:43:52 +04:00
Vladimir Dvorak
8d37639b50 JBR-7649 fix DCEVM crashes on assert() in VM_Exit 2024-09-23 19:36:39 +02:00
Vladimir Dvorak
9cfcff32c4 JBR-7523 fix ClassUnloading support in DCEVM 2024-09-23 19:36:16 +02:00
Vladimir Dvorak
ff87567544 JBR-7635 Skip rolled-back classes in search for affected classes 2024-09-23 19:35:56 +02:00
Vitaly Provodin
f912b0e071 update exclude list on results of 21.0.4_b607.1 test runs 2024-09-23 12:42:53 +04:00
Alexey Ushakov
7c057eb7f6 JBR-7588 Metal: Reuse MTLContext for all GCs of the same GPU
Implemented reference counting for shared MTLContext objects. Supported multiple display links per MTLContext. Also, works for macOS version < 10.13
2024-09-20 21:17:08 +02:00
bourgesl
d4a299a2e7 JBR-7616: improved MTLRenderQueue exception handling
(cherry picked from commit 9df0cce15d1b1773634907881d0c515a100cf412)
2024-09-16 21:57:05 +02:00
Maxim Kartashev
b5f427d580 JBR-7576 Cannot make CDS dump due to whitespaces in paths 2024-09-16 14:16:51 +04:00
Vitaly Provodin
e752f9a857 update exclude list on results of 21.0.4_b591.1 test runs 2024-09-13 04:11:28 +04:00
Maxim Kartashev
e37c2450a8 JBR-7600 Provide ability to add messages to fatal error log
Use JNU_LOG_EVENT(env, msg, ...) to save a message in the internal
JVM ring buffer that gets printed out to the Events section
of the fatal error log if JVM crashed.
2024-09-12 14:27:30 +04:00
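The mechanism described in the commit above — a bounded ring buffer of messages that gets printed to the Events section of the fatal error log — can be sketched in standalone form. This is an illustrative model only, not the JVM's actual Events implementation; the class name, capacity, and message size are assumptions:

```cpp
#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>

// Illustrative model of a crash-log event ring buffer: the newest
// N formatted messages are kept, older ones are overwritten, so the
// fatal error log stays bounded no matter how often callers log.
class EventRing {
public:
    explicit EventRing(size_t capacity) : slots_(capacity), next_(0), count_(0) {}

    // printf-style logging, analogous in spirit to JNU_LOG_EVENT(env, msg, ...).
    void log(const char* fmt, ...) {
        char buf[256];
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(buf, sizeof(buf), fmt, ap);
        va_end(ap);
        slots_[next_] = buf;
        next_ = (next_ + 1) % slots_.size();
        if (count_ < slots_.size()) ++count_;
    }

    // Oldest-to-newest dump, as a fatal error handler would print it.
    std::vector<std::string> dump() const {
        std::vector<std::string> out;
        size_t start = (count_ == slots_.size()) ? next_ : 0;
        for (size_t i = 0; i < count_; ++i)
            out.push_back(slots_[(start + i) % slots_.size()]);
        return out;
    }

private:
    std::vector<std::string> slots_;
    size_t next_, count_;
};
```

Callers log at interesting points; only the most recent entries survive to the crash report, which is exactly the trade-off a fixed-size event ring makes.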
Nikita Gubarkov
78ef30a322 JBR-7614 JBR API: fix generated proxy name clash. 2024-09-11 18:19:19 +02:00
Nikita Provotorov
6afbc34d4b JBR-5673: Wayland: support touch scrolling.
- Adding information to WLPointerEvent about wl_pointer::axis* events along the X axis;
- Introducing 'WLComponentPeer#convertPointerEventToMWEParameters' - a routine for converting WLPointerEvent parameters to parameters required for MouseWheelEvent s;
- Handling both X and Y axes within the WLPointerEvent dispatching routine.
2024-09-11 15:52:45 +02:00
Nikita Provotorov
874d5698bc JBR-7459: Wayland: touchpad scrolling is too sensitive.
- Remaking the mapping of wl_pointer::axis events values to MouseWheelEvent rotations to eliminate the touchpad scrolling behavior "the more slowly the fingers move, the more pixels are scrolled";
- Accumulating the fraction parts of wl_pointer::axis events values to improve the accuracy of touchpad scrolling;
- Distinguishing between wheel scrolling and touchpad scrolling to fine-tune MouseWheelEvent parameters for each of these cases.
2024-09-11 15:52:44 +02:00
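The fraction-accumulation idea from the commit above — keep the fractional remainder of each scroll delta so slow touchpad motion is neither dropped nor rounded up — can be shown as a small sketch. The class name and the pixels-per-click scale are assumptions for illustration, not the actual WLToolkit code:

```cpp
#include <cmath>

// Converts continuous scroll deltas (e.g. Wayland wl_pointer::axis
// values) into whole wheel "clicks", carrying the fractional
// remainder across events so that a slow, steady movement eventually
// adds up to a click instead of being truncated away each time.
class ScrollAccumulator {
public:
    explicit ScrollAccumulator(double pixelsPerClick)
        : scale_(pixelsPerClick), remainder_(0.0) {}

    // Returns the number of whole clicks to report for this delta;
    // the leftover fraction is kept for the next event.
    int accept(double deltaPixels) {
        remainder_ += deltaPixels / scale_;
        double whole = std::trunc(remainder_);
        remainder_ -= whole;
        return static_cast<int>(whole);
    }

private:
    double scale_;
    double remainder_;
};
```

Three slow 4-pixel deltas at a 10-pixels-per-click scale produce 0, 0, then 1 click — the accumulated 1.2 clicks finally crosses a whole unit, with 0.2 carried forward.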
Sergei Tachenov
dd1afa581e JBR-7586 Fix title click ungrab when an active user component is clicked
Swing requires that clicking frame decorations should cause the window
to be ungrabbed. However, if a custom title is used, and that title contains
user-provided components, then clicking such components should not
cause the window to be ungrabbed, otherwise a menu located in a custom
title behaves incorrectly.

Fix by using the same logic as for the native actions, such as moving the window.
If the native actions are allowed, then ungrabbing is allowed as well.
Otherwise, do not ungrab, let the component behave like it's located in the client area.

The fix is supplemented with a new regression test "test/jdk/jb/javax/swing/CustomTitleBar/JMenuClickToCloseTest.java".
2024-09-09 16:00:46 +02:00
Nikita Tsarev
cd5cad7154 JBR-7594 Check for LWCToolkit in JBR TextInput API 2024-09-06 13:39:02 +02:00
Sergei Tachenov
8827281c82 JBR-7484 Update the cursor on mouse entered/exited
AppKit resets the cursor on native mouse entered/exited events. Depending on the order of events, it may end up setting the wrong cursor. So update it forcibly on such events.
2024-09-05 13:49:26 +02:00
Sergei Tachenov
3c8aa5a169 JBR-7481 Work around mouse entered/exited bug
To fix missing mouse entered/exited events when
using rounded corners, we keep track of mouse moved events. When a mouse moved event is detected, and the current peer under the cursor belongs to a different window, we send fake mouse entered/exit events to the old and new windows. We also filter late mouse exited events.

The workaround is enabled by default; the VM option "awt.mac.enableMouseEnteredExitedWorkaround" can be used to disable it in case something breaks.

About the test:
Use the robot to find the points when the mouse
entered event is sent to the popup when the mouse
enters through a rounded corner, and the similar
point for entering the outer window when exiting
through such a corner.

Once the points are found, move the mouse back
and forth to that point, but not beyond.
The correct behavior is that when the mouse
enters the popup, a mouse exited event is sent
to the outer frame and vice versa.
Therefore, every mouse entered/exited event
should be received exactly once.

Use reflection to set the rounded corners,
as JBR API isn't available in tests.
2024-09-05 13:49:26 +02:00
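The core of the workaround described above — watch mouse-moved events and, when the window under the cursor changes without native enter/exit events, synthesize an exit for the old window and an enter for the new one — can be modeled as a small state machine. Window identifiers and event strings here are illustrative assumptions, not the actual AWT peer code:

```cpp
#include <string>
#include <vector>

// Simplified model of the enter/exit workaround: track which window
// is under the cursor on every mouse-move and, when it changes,
// synthesize an "exited" event for the previous window and an
// "entered" event for the new one.
class EnterExitTracker {
public:
    // Called for each mouse-moved event; returns synthesized events.
    std::vector<std::string> mouseMoved(int windowUnderCursor) {
        std::vector<std::string> events;
        if (windowUnderCursor != current_) {
            if (current_ != kNone)
                events.push_back("exited:" + std::to_string(current_));
            if (windowUnderCursor != kNone)
                events.push_back("entered:" + std::to_string(windowUnderCursor));
            current_ = windowUnderCursor;
        }
        return events;
    }

private:
    static constexpr int kNone = -1;
    int current_ = kNone;
};
```

Because events are only synthesized on a change of window, repeated moves within one window generate nothing, and crossing a rounded corner from window 1 into window 2 yields exactly one exited and one entered event.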
Vitaly Provodin
fe6d252047 update exclude list on results of 21.0.4_b589.3 test runs 2024-09-03 07:59:31 +04:00
Maxim Kartashev
08faf984f0 JBR-7493 Fixed the test 2024-08-30 11:57:42 +03:00
Nikita Gubarkov
1c95b96ff8 JBR-5973 Vulkan: Fix validation errors (#452)
- Added proper synchronization and image layout transitions.
- Refactored VKRenderer to hold per-device rendering context. Isolated surface rendering contexts.
- Implemented reusing of command buffers and semaphores
- Fixed surface resize, made surface initialization more robust.
- Added on-demand pipeline creation for actual surface formats.
- Added missing destruction logic.
- Added macros for easy checking of return codes, logging with source code location.
- Moved implementation details out of headers where possible. Stripped dead code.
- Implemented consistent OOM strategy from dynamic arrays and ring buffers.
2024-08-29 18:52:52 +02:00
Nikita Gubarkov
f1ad521666 JBR-7570 Implemented ring buffer. Added lazy implicit initialization for dynamic arrays. (#451) 2024-08-28 15:11:20 +02:00
Nikita Gubarkov
4a239d2c6e JBR-7568 Vulkan: Refactor VKLogicalDevice into VKDevice (#449)
* Renamed VKLogicalDevice to VKDevice for conformance and convenience.
* Refactored device->device to device->handle for clarity.
2024-08-28 12:35:38 +02:00
Nikita Gubarkov
47f236f3cc JBR-7569 Removed VMA-Hpp (#450) 2024-08-28 12:24:39 +02:00
Vitaly Provodin
7b21605ecb update exclude list on results of 21.0.4_b586.1 test runs 2024-08-27 01:05:12 +04:00
Dmitrii Morskii
ccba9acd9d JBR-7126 add more possible names for cursor arrow icon 2024-08-23 18:41:06 +01:00
Vitaly Provodin
869f0ca11f JBR-7517 build JBR artefacts with CDS archives 2024-08-22 18:04:56 +04:00
Maxim Kartashёv
133cf30195 JBR-7483 Monitor resolution is incorrect in jbr 21 when scaling with gtk-xft-dpi 2024-08-22 17:22:05 +04:00
Nikita Tsarev
210ed7aaba JBR-7529: Explicitly check for press-and-hold in performKeyEquivalent 2024-08-22 13:17:53 +02:00
Dmitrii Morskii
07ac32e2ad JBR-7438 tune updateCursorImmediately method 2024-08-21 14:36:01 +01:00
Maxim Kartashёv
6fe6b416ce JBR-7016 IDEA 2024.2 Wayland: UI Crash when selecting Code and pressing Alt+Enter 2024-08-21 18:24:09 +04:00
Maxim Kartashёv
0c40b9514d JBR-7493 Wayland: can't start in maximized state on WSL 2024-08-21 12:44:23 +04:00
Vitaly Provodin
70d0e98389 JBR-5989 Wayland: exclude tests failing on virtual agents 2024-08-20 14:18:55 +04:00
Vitaly Provodin
89049a7d65 update exclude list on results of 21.0.4_b575.1 test runs 2024-08-20 14:18:55 +04:00
Maxim Kartashёv
90ec8da421 JBR-7516 Wayland: DamageList_AddList: Assertion `list != add' failed 2024-08-20 11:48:48 +04:00
Vitaly Provodin
2348ef2e99 update exclude list on results of 21.0.4_b575 test runs 2024-08-17 02:08:15 +04:00
Maxim Kartashёv
54960903b1 JBR-7501 Wayland: SurfaceData.flush() method is mis-used 2024-08-15 18:58:19 +04:00
Vitaly Provodin
b95cee8fc2 update exclude list on results of 21.0.4_b569.1 test runs 2024-08-15 08:45:55 +04:00
Nikita Tsarev
4616ff7408 JBR-7478: Fix wrong timestamps on KEY_TYPED events [WLToolkit] 2024-08-13 13:52:56 +02:00
lbourges
777e396408 JBR-7461: Implement VKTexturePool for the linux vulkan pipeline:
- based on common AccelTexturePool
- new VKTexturePool instance in VKLogicalDevice
- fixed SIGSEGV in VKImage dispose
- store device in TPI
- indentation fixes
- merged with latest changes for JBR-7460
- use (ATexturePoolLock_init)(void)
- fixed logs in lock implementations + fixed indentation
- fixed MTLTexturePool to pre-processor conditions (not runtime) on USE_ACCEL_TEXTURE_POOL
2024-08-12 19:32:21 +02:00
Maxim Kartashёv
830fd66a48 JBR-7313 Wayland: error: xdg_surface buffer does not match the configured maximized state 2024-08-12 14:55:51 +04:00
Nikita Tsarev
53e7e4501e JBR-7426: Fix cancelling press-and-hold causing some future key events being swallowed 2024-08-09 22:40:16 +02:00
161 changed files with 7706 additions and 27260 deletions

View File

@@ -1,46 +0,0 @@
# NOTE: This Dockerfile is meant to be used from the mkdocker_aarch64.sh script.
# Pull a concrete version of Linux that does NOT recieve updates after it's
# been created. This is so that the image is as stable as possible to make
# image creation reproducible.
# NB: this also means there may be no security-related fixes there, need to
# move the version to the next manually.
# jetbrains/runtime:jbr17env_aarch64
FROM arm64v8/centos:7
# Install the necessary build tools
RUN yum -y update; \
yum -y install centos-release-scl; \
yum -y install devtoolset-10-10.1-0.el7; \
yum -y install \
alsa-lib-devel-1.1.8-1.el7.aarch64 \
autoconf-2.69-11.el7.noarch \
automake-1.13.4-3.el7.noarch \
bzip2-1.0.6-13.el7.aarch64 \
cups-devel-1.6.3-51.el7.aarch64 \
file-5.11-37.el7.aarch64 \
fontconfig-devel-2.13.0-4.3.el7.aarch64 \
freetype-devel-2.8-14.el7_9.1.aarch64 \
giflib-devel-4.1.6-9.el7.aarch64 \
git-1.8.3.1-24.el7_9.aarch64 \
libtool-2.4.2-22.el7_3.aarch64 \
libXi-devel-1.7.9-1.el7.aarch64 \
libXrandr-devel-1.5.1-2.el7.aarch64 \
libXrender-devel-0.9.10-1.el7.aarch64 \
libXt-devel-1.1.5-3.el7.aarch64 \
libXtst-devel-1.2.3-1.el7.aarch64 \
make-3.82-24.el7.aarch64 \
rsync-3.1.2-12.el7_9.aarch64 \
tar-1.26-35.el7.aarch64 \
unzip-6.0-24.el7_9.aarch64 \
wayland-devel-1.15.0-1.el7 \
zip-3.0-11.el7.aarch64; \
yum -y clean all
ENV PATH="/opt/rh/devtoolset-10/root/usr/bin:${PATH}"
ENV LD_LIBRARY_PATH="/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib"
ENV PKG_CONFIG_PATH="/opt/rh/devtoolset-10/root/usr/lib64/pkgconfig"
RUN git config --global user.email "teamcity@jetbrains.com" && \
git config --global user.name "builduser"

View File

@@ -5,34 +5,33 @@
# image creation reproducible.
# NB: this also means there may be no security-related fixes there, need to
# move the version to the next manually.
FROM arm64v8/alpine:3.12
FROM arm64v8/alpine:3.14
# Install the necessary build tools
RUN apk --no-cache add --update \
alsa-lib-dev=1.2.2-r0 \
autoconf=2.69-r2 \
bash=5.0.17-r0 \
build-base=0.5-r2 \
alsa-lib-dev=1.2.5-r2 \
autoconf=2.71-r0 \
bash=5.1.16-r0 \
build-base=0.5-r3 \
bzip2=1.0.8-r1 \
cups-dev=2.3.3-r0 \
file=5.38-r0 \
fontconfig=2.13.1-r2 \
fontconfig-dev=2.13.1-r2 \
freetype-dev=2.10.4-r2 \
git=2.26.3-r1 \
grep=3.4-r0 \
libx11-dev=1.6.12-r1 \
cups-dev=2.3.3-r3 \
file=5.40-r1 \
fontconfig=2.13.1-r4 \
fontconfig-dev=2.13.1-r4 \
freetype-dev=2.10.4-r3 \
git=2.32.7-r0 \
grep=3.7-r0 \
libx11-dev=1.7.3.1-r0 \
libxext-dev=1.3.4-r0 \
libxrandr-dev=1.5.2-r0 \
libxrandr-dev=1.5.2-r1 \
libxrender-dev=0.9.10-r3 \
libxt-dev=1.2.0-r0 \
libxt-dev=1.2.1-r0 \
libxtst-dev=1.2.3-r3 \
linux-headers=5.4.5-r1 \
rsync=3.1.3-r3 \
tar=1.32-r2 \
wayland-dev=1.18.0-r4 \
zip=3.0-r8
linux-headers=5.10.41-r0 \
rsync=3.2.5-r0 \
tar=1.34-r1 \
wayland-dev=1.19.0-r0 \
zip=3.0-r9
# Set up boot JDK for building
COPY boot_jdk_musl_aarch64.tar.gz /jdk20/
@@ -40,4 +39,5 @@ RUN cd /jdk20 && tar --strip-components=1 -xzf boot_jdk_musl_aarch64.tar.gz && r
ENV BOOT_JDK=/jdk20
RUN git config --global user.email "teamcity@jetbrains.com" && \
git config --global user.name "builduser"
git config --global user.name "builduser" && \
git config --global --add safe.directory '*'

View File

@@ -0,0 +1,44 @@
# NOTE: This Dockerfile is meant to be used from the mkdocker_aarch64.sh script.
# Pull a concrete version of Linux that does NOT recieve updates after it's
# been created. This is so that the image is as stable as possible to make
# image creation reproducible.
# NB: this also means there may be no security-related fixes there, need to
# move the version to the next manually.
FROM arm64v8/oraclelinux:8
# Install the necessary build tools
RUN yum -y update; \
yum -y install gcc-toolset-10-10.1-0.el8.aarch64; \
yum -y install \
alsa-lib-devel-1.1.9-4.el8.aarch64 \
autoconf-2.69-29.el8_10.1.noarch \
automake-1.16.1-6.el8.noarch \
bzip2-libs-1.0.6-26.el8.aarch64 \
cups-devel-2.2.6-60.el8_10.aarch64 \
file-5.33-26.el8.aarch64 \
fontconfig-devel-2.13.1-4.el8.aarch64 \
freetype-devel-2.9.1-9.el8.aarch64 \
gcc-c++-8.5.0-22.0.1.el8_10.aarch64 \
git-2.43.5-1.el8_10.aarch64 \
git-core-2.43.5-1.el8_10.aarch64 \
libtool-2.4.6-25.el8.aarch64 \
libXi-devel-1.7.10-1.el8.aarch64 \
libXrandr-devel-1.5.2-1.el8.aarch64 \
libXrender-devel-0.9.10-7.el8.aarch64 \
libXt-devel-1.1.5-12.el8.aarch64 \
libXtst-devel-1.2.3-7.el8.aarch64 \
make-devel-4.2.1-11.el8.aarch64 \
rsync-3.1.3-19.el8_7.1.aarch64 \
unzip-6.0-46.el8.aarch64 \
wayland-devel-1.21.0-1.el8.aarch64; \
yum -y clean all
RUN git config --global user.email "teamcity@jetbrains.com" && \
git config --global user.name "builduser"
ENV PATH="/opt/rh/devtoolset-10/root/usr/bin:${PATH}"
ENV LD_LIBRARY_PATH="/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib"
ENV PKG_CONFIG_PATH="/opt/rh/devtoolset-10/root/usr/lib64/pkgconfig"

View File

@@ -0,0 +1,45 @@
# NOTE: This Dockerfile is meant to be used from the mkdocker_x86_64.sh script.
# Pull a concrete version of Linux that does NOT recieve updates after it's
# been created. This is so that the image is as stable as possible to make
# image creation reproducible.
# NB: this also means there may be no security-related fixes there, need to
# move the version to the next manually.
FROM amd64/oraclelinux:8
# Install the necessary build tools
RUN yum -y update; \
yum -y install gcc-toolset-10-10.1-0.el8.x86_64; \
yum -y install \
alsa-lib-devel-1.1.9-4.el8.x86_64 \
autoconf-2.69-29.el8_10.1.noarch \
automake-1.16.1-6.el8.noarch \
bzip2-libs-1.0.6-26.el8.x86_64 \
cups-devel-2.2.6-60.el8_10.x86_64 \
file-5.33-26.el8.x86_64 \
fontconfig-devel-2.13.1-4.el8.x86_64 \
freetype-devel-2.9.1-9.el8.x86_64 \
gcc-c++-8.5.0-22.0.1.el8_10.x86_64 \
git-2.43.5-1.el8_10.x86_64 \
git-core-2.43.5-1.el8_10.x86_64 \
libtool-2.4.6-25.el8.x86_64 \
libXi-devel-1.7.10-1.el8.x86_64 \
libXrandr-devel-1.5.2-1.el8.x86_64 \
libXrender-devel-0.9.10-7.el8.x86_64 \
libXt-devel-1.1.5-12.el8.x86_64 \
libXtst-devel-1.2.3-7.el8.x86_64 \
make-devel-4.2.1-11.el8.x86_64 \
rsync-3.1.3-19.el8_7.1.x86_64 \
unzip-6.0-46.el8.x86_64 \
wayland-devel-1.21.0-1.el8.x86_64 \
python36-3.6.8-39.module+el8.10.0+90274+07ba55de; \
yum -y clean all
RUN git config --global user.email "teamcity@jetbrains.com" && \
git config --global user.name "builduser" && \
git config --global --add safe.directory '*'
ENV PATH="/opt/rh/devtoolset-10/root/usr/bin:${PATH}"
ENV LD_LIBRARY_PATH="/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib"
ENV PKG_CONFIG_PATH="/opt/rh/devtoolset-10/root/usr/lib64/pkgconfig"

View File

@@ -1,73 +0,0 @@
# jetbrains/runtime:jbr17env_x86_64
FROM centos:7
RUN yum -y install centos-release-scl; \
yum -y install devtoolset-10-10.1-0.el7; \
yum -y install \
alsa-lib-devel-1.1.8-1.el7 \
autoconf-2.69-11.el7 \
automake-1.13.4-3.el7 \
bzip2-1.0.6-13.el7 \
cups-devel-1.6.3-51.el7 \
file-5.11-37.el7 \
fontconfig-devel-2.13.0-4.3.el7 \
freetype-devel-2.8-14.el7_9.1 \
giflib-devel-4.1.6-9.el7 \
git-1.8.3.1-24.el7_9 \
libtool-2.4.2-22.el7_3 \
libXi-devel-1.7.9-1.el7 \
libXrandr-devel-1.5.1-2.el7 \
libXrender-devel-0.9.10-1.el7 \
libXt-devel-1.1.5-3.el7 \
libXtst-devel-1.2.3-1.el7 \
make-3.82-24.el7 \
tar-1.26-35.el7 \
unzip-6.0-24.el7_9 \
wayland-devel-1.15.0-1.el7 \
wget-1.14-18.el7_6.1 \
which-2.20-7.el7 \
zip-3.0-11.el7 \
python3-3.6.8-17.el7
RUN mkdir .git && \
git config user.email "teamcity@jetbrains.com" && \
git config user.name "builduser"
ENV LD_LIBRARY_PATH="/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib"
ENV PATH="/opt/rh/devtoolset-10/root/usr/bin::${PATH}"
ENV PKG_CONFIG_PATH="/opt/rh/devtoolset-10/root/usr/lib64/pkgconfig"
# Build GLSLC
RUN curl -OL https://github.com/Kitware/CMake/releases/download/v3.27.5/cmake-3.27.5-linux-x86_64.tar.gz \
&& echo 138c68addae825b16ed78d792dafef5e0960194833f48bd77e7e0429c6bc081c *cmake-3.27.5-linux-x86_64.tar.gz | sha256sum -c - \
&& tar -xzf cmake-3.27.5-linux-x86_64.tar.gz \
&& rm cmake-3.27.5-linux-x86_64.tar.gz \
&& git clone https://github.com/google/shaderc --branch v2023.6 \
&& cd shaderc \
&& ./utils/git-sync-deps \
&& mkdir build \
&& cd build \
&& /cmake-3.27.5-linux-x86_64/bin/cmake \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/usr/local \
-DSHADERC_SKIP_TESTS=ON \
-DSHADERC_SKIP_EXAMPLES=ON \
-DSHADERC_SKIP_COPYRIGHT_CHECK=ON \
.. \
&& make install
ENV PATH="/cmake-3.27.5-linux-x86_64/bin::${PATH}"
# Checkout Vulkan headers
RUN mkdir /vulkan \
&& cd /vulkan \
&& git init \
&& git remote add -f origin https://github.com/KhronosGroup/Vulkan-Headers.git \
&& git fetch origin \
&& git checkout v1.3.265 -- include \
&& rm -r .git \
&& mkdir /vulkan_hpp \
&& cd /vulkan_hpp \
&& git init \
&& git remote add -f origin https://github.com/KhronosGroup/Vulkan-Hpp.git \
&& git fetch origin \
&& git checkout v1.3.265 -- vulkan \
&& rm -r .git

View File

@@ -4,10 +4,9 @@ set -euo pipefail
set -x
# This script creates a Docker image suitable for building AArch64 variant
# of the JetBrains Runtime "dev" version.
BOOT_JDK_REMOTE_FILE=zulu17.30.15-ca-jdk17.0.1-linux_aarch64.tar.gz
BOOT_JDK_SHA=4d9c9116eb0cdd2d7fb220d6d27059f4bf1b7e95cc93d5512bd8ce3791af86c7
BOOT_JDK_REMOTE_FILE=https://cdn.azul.com/zulu/bin/zulu21.36.17-ca-jdk21.0.4-linux_aarch64.tar.gz
BOOT_JDK_SHA=da3c2d7db33670bcf66532441aeb7f33dcf0d227c8dafe7ce35cee67f6829c4c
BOOT_JDK_LOCAL_FILE=boot_jdk.tar.gz
if [ ! -f $BOOT_JDK_LOCAL_FILE ]; then
@@ -22,7 +21,7 @@ sha256sum -c - <<EOF
$BOOT_JDK_SHA *$BOOT_JDK_LOCAL_FILE
EOF
docker build -t jbrdevenv_arm64v8 -f Dockerfile.aarch64 .
docker build -t jetbrains/runtime:jbr21env_oraclelinux8_aarch64 -f Dockerfile.oraclelinux_aarch64 .
# NB: the resulting container can (and should) be used without the network
# connection (--network none) during build in order to reduce the chance

View File

@@ -0,0 +1,28 @@
#!/bin/bash
set -euo pipefail
set -x
# This script creates a Docker image suitable for building x86-64 variant
BOOT_JDK_REMOTE_FILE=zulu21.36.17-ca-jdk21.0.4-linux_x64.tar.gz
BOOT_JDK_SHA=318d0c2ed3c876fb7ea2c952945cdcf7decfb5264ca51aece159e635ac53d544
BOOT_JDK_LOCAL_FILE=boot_jdk.tar.gz
if [ ! -f $BOOT_JDK_LOCAL_FILE ]; then
# Obtain "boot JDK" from outside of the container.
wget -nc https://cdn.azul.com/zulu/bin/${BOOT_JDK_REMOTE_FILE} -O $BOOT_JDK_LOCAL_FILE
else
echo "boot JDK \"$BOOT_JDK_LOCAL_FILE\" present, skipping download"
fi
# Verify that what we've downloaded can be trusted.
sha256sum -c - <<EOF
$BOOT_JDK_SHA *$BOOT_JDK_LOCAL_FILE
EOF
docker build -t jetbrains/runtime:jbr21env_oraclelinux8_amd64 -f Dockerfile.oraclelinux_x86_64 .
# NB: the resulting container can (and should) be used without the network
# connection (--network none) during build in order to reduce the chance
# of build contamination.

View File

@@ -63,7 +63,7 @@ function create_image_bundle {
__cds_opt=''
if is_musl; then libc_type_suffix='musl-' ; fi
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
[ "$bundle_type" == "fd" ] && [ "$__arch_name" == "$JBRSDK_BUNDLE" ] && __bundle_name=$__arch_name && fastdebug_infix="fastdebug-"
JBR=${__bundle_name}-${JBSDK_VERSION}-linux-${libc_type_suffix}aarch64-${fastdebug_infix}b${build_number}

View File

@@ -71,7 +71,7 @@ function create_image_bundle {
__cds_opt=''
if is_musl; then libc_type_suffix='musl-' ; fi
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
[ "$bundle_type" == "fd" ] && [ "$__arch_name" == "$JBRSDK_BUNDLE" ] && __bundle_name=$__arch_name && fastdebug_infix="fastdebug-"
JBR=${__bundle_name}-${JBSDK_VERSION}-linux-${libc_type_suffix}x64-${fastdebug_infix}b${build_number}

View File

@@ -51,7 +51,7 @@ function create_image_bundle {
__cds_opt=''
if is_musl; then libc_type_suffix='musl-' ; fi
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
[ "$bundle_type" == "fd" ] && [ "$__arch_name" == "$JBRSDK_BUNDLE" ] && __bundle_name=$__arch_name && fastdebug_infix="fastdebug-"
JBR=${__bundle_name}-${JBSDK_VERSION}-linux-${libc_type_suffix}x86-${fastdebug_infix}b${build_number}

View File

@@ -53,7 +53,7 @@ function create_image_bundle {
fastdebug_infix=''
__cds_opt=''
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
tmp=.bundle.$$.tmp
mkdir "$tmp" || do_exit $?

View File

@@ -54,7 +54,7 @@ function create_image_bundle {
fastdebug_infix=''
__cds_opt=''
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
[ "$bundle_type" == "fd" ] && [ "$__arch_name" == "$JBRSDK_BUNDLE" ] && __bundle_name=$__arch_name && fastdebug_infix="fastdebug-"
__root_dir=${__bundle_name}-${JBSDK_VERSION}-windows-x64-${fastdebug_infix}b${build_number}

View File

@@ -50,7 +50,7 @@ function create_image_bundle {
fastdebug_infix=''
__cds_opt=''
if [ "$__arch_name" == "$JBRSDK_BUNDLE" ]; then __cds_opt="--generate-cds-archive"; fi
__cds_opt="--generate-cds-archive"
[ "$bundle_type" == "fd" ] && [ "$__arch_name" == "$JBRSDK_BUNDLE" ] && __bundle_name=$__arch_name && fastdebug_infix="fastdebug-"
__root_dir=${__bundle_name}-${JBSDK_VERSION}-windows-x86-${fastdebug_infix}b${build_number}

View File

@@ -62,6 +62,7 @@ $(eval $(call SetupJdkLibrary, BUILD_LIBJAVA, \
ProcessImpl_md.c_CFLAGS := $(VERSION_CFLAGS), \
WARNINGS_AS_ERRORS_xlc := false, \
DISABLED_WARNINGS_gcc_ProcessImpl_md.c := unused-result, \
DISABLED_WARNINGS_clang_jni_util.c := format-nonliteral, \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
LDFLAGS_macosx := -L$(SUPPORT_OUTPUTDIR)/native/$(MODULE)/, \

View File

@@ -1,5 +1,5 @@
#
# Copyright (c) 2011, 2023, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2011, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -391,6 +391,7 @@ ifeq ($(call isTargetOs, windows macosx), false)
common/awt/debug \
common/awt/systemscale \
common/font \
common/java2d \
common/java2d/wl \
common/java2d/vulkan \
common/wayland \
@@ -1084,6 +1085,7 @@ ifeq ($(call isTargetOs, macosx), true)
libawt_lwawt/java2d/metal \
include \
common/awt/debug \
common/java2d \
common/java2d/opengl \
libosxapp \
#

View File

@@ -81,8 +81,10 @@ ifeq ($(call isTargetOs, windows), true)
BUILD_JDK_JTREG_LIBRARIES_LIBS_libNewDirectByteBuffer := $(WIN_LIB_JAVA)
BUILD_JDK_JTREG_LIBRARIES_LIBS_libGetXSpace := $(WIN_LIB_JAVA)
BUILD_JDK_JTREG_LIBRARIES_LIBS_libFatalErrorTest := $(WIN_LIB_JAVA)
BUILD_JDK_JTREG_LIBRARIES_LIBS_libLogEventTest := $(WIN_LIB_JAVA)
else
BUILD_JDK_JTREG_LIBRARIES_LIBS_libFatalErrorTest := -ljava
BUILD_JDK_JTREG_LIBRARIES_LIBS_libLogEventTest := -ljava
BUILD_JDK_JTREG_LIBRARIES_LIBS_libstringPlatformChars := -ljava
BUILD_JDK_JTREG_LIBRARIES_LIBS_libDirectIO := -ljava
BUILD_JDK_JTREG_LIBRARIES_LIBS_libNewDirectByteBuffer := -ljava

View File

@@ -472,7 +472,9 @@ InstanceKlass* ClassListParser::load_class_from_source(Symbol* class_name, TRAPS
THROW_NULL(vmSymbols::java_lang_ClassNotFoundException());
}
InstanceKlass* k = UnregisteredClasses::load_class(class_name, _source, CHECK_NULL);
ResourceMark rm;
char * source_path = os::strdup_check_oom(ClassLoader::uri_to_path(_source));
InstanceKlass* k = UnregisteredClasses::load_class(class_name, source_path, CHECK_NULL);
if (k->local_interfaces()->length() != _interfaces->length()) {
print_specified_interfaces();
print_actual_interfaces(k);

View File

@@ -171,6 +171,8 @@ void ClassListWriter::write_to_stream(const InstanceKlass* k, outputStream* stre
}
}
// NB: the string following "source: " is not really a proper file name, but rather
// a truncated URI referring to a file. It must be decoded after reading.
#ifdef _WINDOWS
// "file:/C:/dir/foo.jar" -> "C:/dir/foo.jar"
stream->print(" source: %s", cfs->source() + 6);

View File

@@ -586,7 +586,7 @@ int FileMapInfo::get_module_shared_path_index(Symbol* location) {
// skip_uri_protocol was also called during dump time -- see ClassLoaderExt::process_module_table()
ResourceMark rm;
const char* file = ClassLoader::skip_uri_protocol(location->as_C_string());
const char* file = ClassLoader::uri_to_path(location->as_C_string());
for (int i = ClassLoaderExt::app_module_paths_start_index(); i < get_number_of_shared_paths(); i++) {
SharedClassPathEntry* ent = shared_path(i);
assert(ent->in_named_module(), "must be");

View File

@@ -78,6 +78,9 @@
#include "utilities/macros.hpp"
#include "utilities/utf8.hpp"
#include <stdlib.h>
#include <ctype.h>
// Entry point in java.dll for path canonicalization
typedef int (*canonicalize_fn_t)(const char *orig, char *out, int len);
@@ -1224,7 +1227,7 @@ InstanceKlass* ClassLoader::load_class(Symbol* name, bool search_append_only, TR
}
#if INCLUDE_CDS
char* ClassLoader::skip_uri_protocol(char* source) {
static const char* skip_uri_protocol(const char* source) {
if (strncmp(source, "file:", 5) == 0) {
// file: protocol path could start with file:/ or file:///
// locate the char after all the forward slashes
@@ -1243,6 +1246,52 @@ char* ClassLoader::skip_uri_protocol(char* source) {
return source;
}
static char decode_percent_encoded(const char *str, size_t& index) {
if (str[index] == '%'
&& isxdigit(str[index + 1])
&& isxdigit(str[index + 2])) {
char hex[3];
hex[0] = str[index + 1];
hex[1] = str[index + 2];
hex[2] = '\0';
index += 2;
return (char) strtol(hex, NULL, 16);
}
return str[index];
}
char* ClassLoader::uri_to_path(const char* uri) {
const size_t len = strlen(uri) + 1;
char* path = NEW_RESOURCE_ARRAY(char, len);
uri = skip_uri_protocol(uri);
if (strncmp(uri, "//", 2) == 0) {
// Skip the empty "authority" part
uri += 2;
}
#ifdef _WINDOWS
if (uri[0] == '/') {
// Absolute path name on Windows does not begin with a slash
uri += 1;
}
#endif
size_t path_index = 0;
for (size_t i = 0; i < strlen(uri); ++i) {
char decoded = decode_percent_encoded(uri, i);
#ifdef _WINDOWS
if (decoded == '/') {
decoded = '\\';
}
#endif
path[path_index++] = decoded;
}
path[path_index] = '\0';
return path;
}
// Record the shared classpath index and loader type for classes loaded
// by the builtin loaders at dump time.
void ClassLoader::record_result(JavaThread* current, InstanceKlass* ik,
@@ -1276,7 +1325,7 @@ void ClassLoader::record_result(JavaThread* current, InstanceKlass* ik,
// Save the path from the file: protocol or the module name from the jrt: protocol
// if no protocol prefix is found, path is the same as stream->source(). This path
// must be valid since the class has been successfully parsed.
char* path = skip_uri_protocol(src);
const char* path = ClassLoader::uri_to_path(src);
assert(path != nullptr, "sanity");
for (int i = 0; i < FileMapInfo::get_number_of_shared_paths(); i++) {
SharedClassPathEntry* ent = FileMapInfo::shared_path(i);

View File

@@ -364,7 +364,7 @@ class ClassLoader: AllStatic {
// entries during shared classpath setup time.
static int num_module_path_entries();
static void exit_with_path_failure(const char* error, const char* message);
static char* skip_uri_protocol(char* source);
static char* uri_to_path(const char* uri);
static void record_result(JavaThread* current, InstanceKlass* ik,
const ClassFileStream* stream, bool redefined);
#endif

View File

@@ -98,12 +98,10 @@ void ClassLoaderExt::process_module_table(JavaThread* current, ModuleEntryTable*
ModulePathsGatherer(JavaThread* current, GrowableArray<char*>* module_paths) :
_current(current), _module_paths(module_paths) {}
void do_module(ModuleEntry* m) {
char* path = m->location()->as_C_string();
if (strncmp(path, "file:", 5) == 0) {
path = ClassLoader::skip_uri_protocol(path);
char* path_copy = NEW_RESOURCE_ARRAY(char, strlen(path) + 1);
strcpy(path_copy, path);
_module_paths->append(path_copy);
char* uri = m->location()->as_C_string();
if (strncmp(uri, "file:", 5) == 0) {
char* path = ClassLoader::uri_to_path(uri);
_module_paths->append(path);
}
}
};

View File

@@ -4046,8 +4046,7 @@ void InstanceKlass::verify_on(outputStream* st) {
}
guarantee(sib->is_klass(), "should be klass");
// TODO: (DCEVM) explain
guarantee(sib->super() == super || AllowEnhancedClassRedefinition && super->newest_version() == vmClasses::Object_klass(), "siblings should have same superklass");
guarantee(sib->super() == super || AllowEnhancedClassRedefinition && sib->super()->newest_version() == super->newest_version(), "siblings should have same superklass");
}
// Verify local interfaces


@@ -202,6 +202,7 @@ Klass::Klass(KlassKind kind) : _kind(kind),
_is_redefining(false),
_update_information(NULL),
_is_copying_backwards(false),
_is_rolled_back(false),
_shared_class_path_index(-1) {
CDS_ONLY(_shared_class_flags = 0;)
CDS_JAVA_HEAP_ONLY(_archived_mirror_index = -1;)


@@ -178,6 +178,7 @@ class Klass : public Metadata {
bool _is_redefining;
int* _update_information;
bool _is_copying_backwards; // Does the class need to copy fields backwards? => possibly overwrite itself?
bool _is_rolled_back; // true if class was rolled back in redefinition
private:
// This is an index into FileMapHeader::_shared_path_table[], to
@@ -416,6 +417,8 @@ protected:
void set_update_information(int *info) { _update_information = info; }
bool is_copying_backwards() const { return _is_copying_backwards; }
void set_copying_backwards(bool b) { _is_copying_backwards = b; }
bool is_rolled_back() { return _is_rolled_back; }
void set_rolled_back(bool b) { _is_rolled_back = b;}
protected: // internal accessors
void set_subklass(Klass* s);


@@ -554,6 +554,9 @@ JNI_ENTRY(jint, jni_ThrowNew(JNIEnv *env, jclass clazz, const char *message))
report_fatal(INTERNAL_ERROR, "<dummy>", 0, "%s", message);
ShouldNotReachHere();
return 0;
} else if (name->equals("java/lang/Exception$JB$$Event")) {
Events::log(THREAD, "%s", message);
return 0;
}
Handle class_loader (THREAD, k->class_loader());
Handle protection_domain (THREAD, k->protection_domain());


@@ -106,6 +106,7 @@ u8 VM_EnhancedRedefineClasses::_id_counter = 0;
// @param class_load_kind always jvmti_class_load_kind_redefine
VM_EnhancedRedefineClasses::VM_EnhancedRedefineClasses(jint class_count, const jvmtiClassDefinition *class_defs, JvmtiClassLoadKind class_load_kind) :
VM_GC_Operation(Universe::heap()->total_collections(), GCCause::_heap_inspection, Universe::heap()->total_full_collections(), true) {
_new_classes = nullptr;
_affected_klasses = nullptr;
_class_count = class_count;
_class_defs = class_defs;
@@ -717,12 +718,6 @@ void VM_EnhancedRedefineClasses::doit() {
Universe::objectArrayKlassObj()->append_to_sibling_list();
}
if (_object_klass_redefined) {
// TODO: This is a hack; it keeps old mirror instances on the heap. A correct solution could be to hold the old mirror class in the new mirror class.
ClassUnloading = false;
ClassUnloadingWithConcurrentMark = false;
}
if (needs_instance_update) {
// Do a full garbage collection to update the instance sizes accordingly
log_trace(redefine, class, redefine, metadata)("Before redefinition full GC run");
@@ -788,6 +783,10 @@ void VM_EnhancedRedefineClasses::doit() {
Universe::set_inside_redefinition(false);
_timer_vm_op_doit.stop();
if (log_is_enabled(Trace, redefine, class, codecache)) {
CodeCache::print();
}
}
// Cleanup - runs in JVM thread
@@ -953,6 +952,8 @@ jvmtiError VM_EnhancedRedefineClasses::load_new_class_versions_single_step(TRAPS
log_info(redefine, class, load, exceptions)("link_class exception: '%s'", ex_name->as_C_string());
}
CLEAR_PENDING_EXCEPTION;
the_class->set_redefinition_flags(Klass::NoRedefinition);
if (ex_name == vmSymbols::java_lang_OutOfMemoryError()) {
return JVMTI_ERROR_OUT_OF_MEMORY;
} else if (ex_name == vmSymbols::java_lang_NoClassDefFoundError()) {
@@ -1025,9 +1026,11 @@ jvmtiError VM_EnhancedRedefineClasses::load_new_class_versions_single_step(TRAPS
// The hidden class loader data has been artificially kept alive to
// this point. The mirror and any instances of this class have to keep
// it alive afterwards.
if (the_class->class_loader_data()->keep_alive_cnt() > 0) {
the_class->class_loader_data()->dec_keep_alive();
}
// Keep old classloader data alive otherwise it crashes with ClassUnloading=true
// if (the_class->class_loader_data()->keep_alive_cnt() > 0) {
// the_class->class_loader_data()->dec_keep_alive();
// }
}
} else {
@@ -1614,6 +1617,7 @@ void VM_EnhancedRedefineClasses::rollback() {
new_class->set_redefining(false);
new_class->old_version()->set_new_version(nullptr);
new_class->set_old_version(nullptr);
new_class->set_rolled_back(true);
}
_new_classes->clear();
}
@@ -2242,6 +2246,10 @@ class AffectedKlassClosure : public KlassClosure {
return;
}
if (klass->is_rolled_back()) {
return;
}
if (klass->check_redefinition_flag(Klass::MarkedAsAffected)) {
_affected_klasses->append(klass);
return;


@@ -382,7 +382,7 @@ public:
bool operator()(WeakHandle* entry) {
oop mem_name = entry->peek();
if (mem_name == NULL) {
if (mem_name == nullptr) {
// Removed
return true;
}
@@ -426,7 +426,7 @@ void ResolvedMethodTable::adjust_method_entries_dcevm(bool * trace_name_printed)
ResolvedMethodTableLookup lookup(thread, method_hash(old_method), old_method);
_local_table->remove(thread, lookup);
InstanceKlass* newer_klass = InstanceKlass::cast(old_method->method_holder()->new_version());
InstanceKlass* newer_klass = InstanceKlass::cast(old_method->method_holder()->newest_version());
Method* newer_method = nullptr;
// Method* new_method;
@@ -436,7 +436,7 @@ void ResolvedMethodTable::adjust_method_entries_dcevm(bool * trace_name_printed)
newer_method = newer_klass->method_with_idnum(old_method->orig_method_idnum());
// TODO: JBR21 - check why the newer_method can be nullptr
// assert(newer_method != NULL, "method_with_idnum() should not be NULL");
// assert(newer_method != nullptr, "method_with_idnum() should not be nullptr");
if (newer_method != nullptr) {
assert(newer_klass == newer_method->method_holder(), "call after swapping redefined guts");


@@ -4021,15 +4021,6 @@ jint Arguments::parse(const JavaVMInitArgs* initial_cmd_args) {
setup_hotswap_agent();
if (AllowEnhancedClassRedefinition) {
if (!FLAG_IS_CMDLINE(ClassUnloading)) {
ClassUnloading = false;
ClassUnloadingWithConcurrentMark = false;
} else {
warning("The JVM is unstable when using -XX:+AllowEnhancedClassRedefinition and -XX:+ClassUnloading together!");
}
}
#if !INCLUDE_CDS
if (DumpSharedSpaces || RequireSharedSpaces) {
jio_fprintf(defaultStream::error_stream(),


@@ -145,7 +145,7 @@ class Proxy {
interFace = target = null;
flags = 0;
}
inverse = inverseProxy == null ? new Proxy(repository, inverseInfo, specialization, this, null, null) : inverseProxy;
inverse = inverseProxy == null ? new Proxy(repository, inverseInfo, inverseSpecialization, this, null, null) : inverseProxy;
if (inverse.getInterface() != null) directDependencies = Set.of(inverse);
}


@@ -103,7 +103,12 @@ class ProxyGenerator {
accessContext.canAccess(info.targetLookup.lookupClass()) ? info.targetLookup.lookupClass() : Object.class;
targetDescriptor = constructorTargetParameterType == null ? "" : constructorTargetParameterType.descriptorString();
proxyName = proxyGenLookup.lookupClass().getPackageName().replace('.', '/') + "/" + interFace.getSimpleName();
// Even though the generated proxy is hidden and therefore has no qualified name,
// it can reference itself via its internal name, which can lead to name collisions.
// Consider a specialized proxy for java/util/List - if we named the proxy similarly,
// method calls to java/util/List would be treated by the VM as calls to the proxy class,
// not the standard library interface. Therefore we append $$$ to the proxy name to avoid the collision.
proxyName = proxyGenLookup.lookupClass().getPackageName().replace('.', '/') + "/" + interFace.getSimpleName() + "$$$";
originalProxyWriter = new ClassWriter(ClassWriter.COMPUTE_FRAMES) {
@Override


@@ -124,4 +124,5 @@ public class Exception extends Throwable {
}
private static class JB$$Assertion {}
private static class JB$$Event {}
}


@@ -1320,3 +1320,40 @@ JNU_Fatal(JNIEnv *env, const char *file, int line, const char *msg)
(*env)->ThrowNew(env, cls, real_msg);
}
}
JNIEXPORT void JNICALL
JNU_LogEvent(JNIEnv *env, const char *file, int line, const char *fmt, ...)
{
jclass cls = (*env)->FindClass(env, "java/lang/Exception$JB$$Event");
if (cls == 0) return;
va_list args;
va_start(args, fmt);
int len = vsnprintf(NULL, 0, fmt, args);
va_end(args);
if (len < 0) return;
int suffix_len = (int) strlen(file) + 64;
len += suffix_len;
char * real_msg = malloc(len);
if (real_msg == NULL) return;
va_start(args, fmt);
vsnprintf(real_msg, len, fmt, args);
va_end(args);
char * suffix = malloc(suffix_len);
if (suffix == NULL) {
free(real_msg);
return;
}
snprintf(suffix, suffix_len, " (%s:%d)", file, line);
strncat(real_msg, suffix, suffix_len);
free(suffix);
// Throwing an exception with this name will result in an Events::log() call
(*env)->ThrowNew(env, cls, real_msg);
free(real_msg);
}


@@ -244,6 +244,8 @@ JNU_GetStaticFieldByName(JNIEnv *env,
JNIEXPORT void JNICALL
JNU_Fatal(JNIEnv *env, const char *file, int line, const char *msg);
JNIEXPORT void JNICALL
JNU_LogEvent(JNIEnv *env, const char *file, int line, const char *fmt, ...);
/************************************************************************
* Miscellaneous utilities used by the class libraries
@@ -329,6 +331,11 @@ JNU_Fatal(JNIEnv *env, const char *file, int line, const char *msg);
} \
} while(0)
#define JNU_LOG_EVENT(env, fmt, ...) \
do { \
JNU_LogEvent((env), __FILE__, __LINE__, (fmt), ##__VA_ARGS__); \
} while(0)
/************************************************************************
* Debugging utilities
*/


@@ -26,111 +26,37 @@
package sun.lwawt;
import java.awt.Component;
import java.awt.Container;
import java.awt.Cursor;
import java.awt.Point;
import java.util.concurrent.atomic.AtomicBoolean;
import sun.awt.AWTAccessor;
import sun.awt.SunToolkit;
import sun.awt.CachedCursorManager;
public abstract class LWCursorManager {
public abstract class LWCursorManager extends CachedCursorManager {
/**
* A flag to indicate if the update is scheduled, so we don't process it
* twice.
*/
private final AtomicBoolean updatePending = new AtomicBoolean(false);
protected LWCursorManager() {
}
/**
* Sets the cursor to correspond the component currently under mouse.
*
* This method should not be executed on the toolkit thread as it
* calls to user code (e.g. Container.findComponentAt).
*/
public final void updateCursor() {
updatePending.set(false);
updateCursorImpl();
}
/**
* Schedules updating the cursor on the corresponding event dispatch
* thread for the given window.
*
* This method is called on the toolkit thread as a result of a
* native update cursor request (e.g. WM_SETCURSOR on Windows).
*/
public final void updateCursorLater(final LWWindowPeer window) {
if (updatePending.compareAndSet(false, true)) {
Runnable r = new Runnable() {
@Override
public void run() {
updateCursor();
}
};
SunToolkit.executeOnEventHandlerThread(window.getTarget(), r);
}
}
private void updateCursorImpl() {
final Point cursorPos = getCursorPosition();
final Component c = findComponent(cursorPos);
final Cursor cursor;
@Override
protected Cursor getCursorByPosition(final Point cursorPos, Component c) {
final Object peer = LWToolkit.targetToPeer(c);
if (peer instanceof LWComponentPeer) {
final LWComponentPeer<?, ?> lwpeer = (LWComponentPeer<?, ?>) peer;
final Point p = lwpeer.getLocationOnScreen();
cursor = lwpeer.getCursor(new Point(cursorPos.x - p.x,
cursorPos.y - p.y));
} else {
cursor = (c != null) ? c.getCursor() : null;
return lwpeer.getCursor(new Point(cursorPos.x - p.x,
cursorPos.y - p.y));
}
setCursor(cursor);
return null;
}
/**
* Returns the first visible, enabled and showing component under cursor.
* Returns null for modal blocked windows.
*
* @param cursorPos Current cursor position.
* @return Component or null.
*/
private static final Component findComponent(final Point cursorPos) {
@Override
protected Component getComponentUnderCursor() {
final LWComponentPeer<?, ?> peer = LWWindowPeer.getPeerUnderCursor();
Component c = null;
if (peer != null && peer.getWindowPeerOrSelf().getBlocker() == null) {
c = peer.getTarget();
if (c instanceof Container) {
final Point p = peer.getLocationOnScreen();
c = AWTAccessor.getContainerAccessor().findComponentAt(
(Container) c, cursorPos.x - p.x, cursorPos.y - p.y, false);
}
while (c != null) {
final Object p = AWTAccessor.getComponentAccessor().getPeer(c);
if (c.isVisible() && c.isEnabled() && p != null) {
break;
}
c = c.getParent();
}
return peer.getTarget();
}
return c;
return null;
}
/**
* Returns the current cursor position.
*/
// TODO: make it public to reuse for MouseInfo
protected abstract Point getCursorPosition();
/**
* Sets a cursor. The cursor can be null if the mouse is not over a Java
* window.
* @param cursor the new {@code Cursor}.
*/
protected abstract void setCursor(Cursor cursor);
@Override
protected Point getLocationOnScreen(Component component) {
return AWTAccessor.getComponentAccessor().getPeer(component).getLocationOnScreen();
}
}
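The `updatePending` flag retained from the old `LWCursorManager` implements a coalescing scheme: many native cursor-update requests arriving on the toolkit thread collapse into a single job on the event dispatch thread, and the flag is cleared before the work runs so the next request can re-arm it. A minimal, self-contained sketch of that pattern, with a plain queue standing in for the EDT (all names here are illustrative, not from the JDK):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the updatePending coalescing pattern: repeated
// requests schedule at most one pending update.
class CoalescedUpdater {
    private final AtomicBoolean updatePending = new AtomicBoolean(false);
    private final Queue<Runnable> eventQueue = new ArrayDeque<>(); // stands in for the EDT
    int updatesRun = 0;

    // Called on the "toolkit thread": only the first request after a
    // completed update actually enqueues work.
    void requestUpdate() {
        if (updatePending.compareAndSet(false, true)) {
            eventQueue.add(this::runUpdate);
        }
    }

    // Drains the fake event queue, like the EDT processing its events.
    void drainEvents() {
        Runnable r;
        while ((r = eventQueue.poll()) != null) {
            r.run();
        }
    }

    private void runUpdate() {
        updatePending.set(false); // cleared first, as in updateCursor()
        updatesRun++;
    }
}
```

A burst of requests before the queue drains produces a single update; a request arriving after the drain schedules a fresh one.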


@@ -39,7 +39,6 @@ import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Insets;
import java.awt.KeyboardFocusManager;
import java.awt.MenuBar;
import java.awt.Point;
import java.awt.Rectangle;
@@ -65,7 +64,6 @@ import javax.swing.JComponent;
import sun.awt.AWTAccessor;
import sun.awt.AWTAccessor.ComponentAccessor;
import sun.awt.AppContext;
import sun.awt.CGraphicsDevice;
import sun.awt.DisplayChangedListener;
import sun.awt.ExtendedKeyCodes;
@@ -160,6 +158,8 @@ public class LWWindowPeer
*/
private LWWindowPeer blocker;
Jbr7481MouseEnteredExitedFix jbr7481MouseEnteredExitedFix = null;
public LWWindowPeer(Window target, PlatformComponent platformComponent,
PlatformWindow platformWindow, PeerType peerType)
{
@@ -783,7 +783,7 @@ public class LWWindowPeer
@Override
public void notifyUpdateCursor() {
getLWToolkit().getCursorManager().updateCursorLater(this);
getLWToolkit().getCursorManager().updateCursorLater(this.getTarget());
}
@Override
@@ -812,6 +812,18 @@ public class LWWindowPeer
*/
@Override
public void notifyMouseEvent(int id, long when, int button,
int x, int y, int absX, int absY,
int modifiers, int clickCount, boolean popupTrigger,
byte[] bdata) {
if (Jbr7481MouseEnteredExitedFix.isEnabled) {
mouseEnteredExitedBugWorkaround().apply(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
else {
doNotifyMouseEvent(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
}
void doNotifyMouseEvent(int id, long when, int button,
int x, int y, int absX, int absY,
int modifiers, int clickCount, boolean popupTrigger,
byte[] bdata)
@@ -960,6 +972,13 @@ public class LWWindowPeer
notifyUpdateCursor();
}
private Jbr7481MouseEnteredExitedFix mouseEnteredExitedBugWorkaround() {
if (jbr7481MouseEnteredExitedFix == null) {
jbr7481MouseEnteredExitedFix = new Jbr7481MouseEnteredExitedFix(this);
}
return jbr7481MouseEnteredExitedFix;
}
private void generateMouseEnterExitEventsForComponents(long when,
int button, int x, int y, int screenX, int screenY,
int modifiers, int clickCount, boolean popupTrigger,
@@ -1506,4 +1525,122 @@ public class LWWindowPeer
}
return handle[0];
}
static class Jbr7481MouseEnteredExitedFix {
static final boolean isEnabled;
static {
boolean isEnabledLocal = false;
try {
isEnabledLocal = Boolean.parseBoolean(System.getProperty("awt.mac.enableMouseEnteredExitedWorkaround", "true"));
} catch (Exception ignored) {
}
isEnabled = isEnabledLocal;
}
private final LWWindowPeer peer;
long when;
int x;
int y;
int absX;
int absY;
int modifiers;
Jbr7481MouseEnteredExitedFix(LWWindowPeer peer) {
this.peer = peer;
}
void apply(
int id, long when, int button,
int x, int y, int absX, int absY,
int modifiers, int clickCount, boolean popupTrigger,
byte[] bdata
) {
this.when = when;
this.x = x;
this.y = y;
this.absX = absX;
this.absY = absY;
this.modifiers = modifiers;
switch (id) {
case MouseEvent.MOUSE_ENTERED -> {
var currentPeerWorkaround = getCurrentPeerWorkaroundOrNull();
// First, send a "mouse exited" to the current window, if any,
// to maintain a sensible "exited, entered" order.
if (currentPeerWorkaround != null && currentPeerWorkaround != this) {
currentPeerWorkaround.sendMouseExited();
}
// Then forward the "mouse entered" to this window, regardless of whether it's already current.
// Repeated "mouse entered" events are allowed and may even be needed somewhere deep inside this call.
peer.doNotifyMouseEvent(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
case MouseEvent.MOUSE_EXITED -> {
var currentPeerWorkaround = getCurrentPeerWorkaroundOrNull();
// An "exited" event often arrives too late. Such events may cause the current window to get lost.
// And since we've already sent a "mouse exited" when entering the current window, we don't need another one.
if (currentPeerWorkaround == this) {
peer.doNotifyMouseEvent(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
}
case MouseEvent.MOUSE_MOVED -> {
var currentPeerWorkaround = getCurrentPeerWorkaroundOrNull();
if (currentPeerWorkaround != this) {
// Inconsistency detected: either the events arrived out of order or never did.
// First, send an "exited" event to the current window, if any.
if (currentPeerWorkaround != null) {
currentPeerWorkaround.sendMouseExited();
}
// Next, send a fake "mouse entered" to the new window.
sendMouseEntered();
}
peer.doNotifyMouseEvent(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
default -> {
peer.doNotifyMouseEvent(id, when, button, x, y, absX, absY, modifiers, clickCount, popupTrigger, bdata);
}
}
}
private static Jbr7481MouseEnteredExitedFix getCurrentPeerWorkaroundOrNull() {
var currentPeer = getCurrentWindowPeer();
return currentPeer == null ? null : currentPeer.jbr7481MouseEnteredExitedFix;
}
private static LWWindowPeer getCurrentWindowPeer() {
return LWWindowPeer.getWindowUnderCursor();
}
private void sendMouseEntered() {
peer.doNotifyMouseEvent(
MouseEvent.MOUSE_ENTERED, when, MouseEvent.NOBUTTON,
x, y, absX, absY,
modifiers, 0, false,
null
);
}
private void sendMouseExited() {
peer.doNotifyMouseEvent(
MouseEvent.MOUSE_EXITED, when, MouseEvent.NOBUTTON,
x, y, absX, absY,
modifiers, 0, false,
null
);
}
@Override
public String toString() {
return "Jbr7481MouseEnteredExitedFix{" +
"peer=" + peer +
", when=" + when +
", x=" + x +
", y=" + y +
", absX=" + absX +
", absY=" + absY +
", modifiers=" + modifiers +
'}';
}
}
}
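The JBR-7481 workaround above boils down to an ordering rule: before one window's "entered" is delivered, the previously current window must first see "exited", and a stale, late "exited" for a window that is no longer current is dropped. A self-contained sketch of that rule (all names hypothetical, events reduced to strings):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the enter/exit normalization: it tracks which
// window is "current" and rewrites the event stream so that an EXITED
// for the old window always precedes an ENTERED for the new one.
class EnterExitNormalizer {
    static final String ENTERED = "ENTERED", EXITED = "EXITED";
    private String currentWindow;                 // window under the cursor, if known
    final List<String> delivered = new ArrayList<>();

    void onEvent(String window, String type) {
        switch (type) {
            case ENTERED -> {
                if (currentWindow != null && !currentWindow.equals(window)) {
                    deliver(currentWindow, EXITED); // keep the "exited, entered" order
                }
                currentWindow = window;
                deliver(window, ENTERED);
            }
            case EXITED -> {
                // A late EXITED for a non-current window was already synthesized above.
                if (window.equals(currentWindow)) {
                    currentWindow = null;
                    deliver(window, EXITED);
                }
            }
            default -> deliver(window, type);
        }
    }

    private void deliver(String window, String type) {
        delivered.add(window + ":" + type);
    }
}
```

If window B's ENTERED arrives before window A's late EXITED, the normalizer synthesizes A's EXITED first and then silently drops the real one when it finally shows up.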


@@ -56,7 +56,7 @@ final class CCursorManager extends LWCursorManager {
private CCursorManager() { }
@Override
protected Point getCursorPosition() {
public Point getCursorPosition() {
final Point2D nativePosition = nativeGetCursorPosition();
return new Point((int)nativePosition.getX(), (int)nativePosition.getY());
}
@@ -67,6 +67,11 @@ final class CCursorManager extends LWCursorManager {
return;
}
currentCursor = cursor;
setCurrentCursor();
}
void setCurrentCursor() {
Cursor cursor = currentCursor;
if (cursor == null) {
nativeSetBuiltInCursor(Cursor.DEFAULT_CURSOR, null);


@@ -117,6 +117,11 @@ final class CPlatformResponder {
boolean jpopupTrigger = NSEvent.isPopupTrigger(jmodifiers);
if (jeventType == MouseEvent.MOUSE_ENTERED || jeventType == MouseEvent.MOUSE_EXITED) {
// JBR-7484: AppKit resets the cursor we set previously on entered/exit events, so we re-set it.
CCursorManager.getInstance().setCurrentCursor();
}
eventNotifier.notifyMouseEvent(jeventType, System.currentTimeMillis(), jbuttonNumber,
x, y, absX, absY, jmodifiers, jclickCount,
jpopupTrigger, null);


@@ -27,23 +27,28 @@ package sun.lwawt.macosx;
import com.jetbrains.exported.JBRApi;
import java.awt.*;
@JBRApi.Service
@JBRApi.Provides("TextInput")
public class JBRTextInputMacOS {
private EventListener listener;
JBRTextInputMacOS() {
var desc = (CInputMethodDescriptor) LWCToolkit.getLWCToolkit().getInputMethodAdapterDescriptor();
desc.textInputEventListener = new EventListener() {
public void handleSelectTextRangeEvent(SelectTextRangeEvent event) {
// This listener is called on the EDT
synchronized (JBRTextInputMacOS.this) {
if (listener != null) {
listener.handleSelectTextRangeEvent(event);
var toolkit = Toolkit.getDefaultToolkit();
if (toolkit instanceof LWCToolkit) {
var desc = (CInputMethodDescriptor) ((LWCToolkit) toolkit).getInputMethodAdapterDescriptor();
desc.textInputEventListener = new EventListener() {
public void handleSelectTextRangeEvent(SelectTextRangeEvent event) {
// This listener is called on the EDT
synchronized (JBRTextInputMacOS.this) {
if (listener != null) {
listener.handleSelectTextRangeEvent(event);
}
}
}
}
};
};
}
}
@JBRApi.Provides("TextInput.SelectTextRangeEvent")


@@ -469,7 +469,7 @@ static void debugPrintNSEvent(NSEvent* event, const char* comment) {
debugPrintNSEvent(event, "performKeyEquivalent");
#endif
// if IM is active key events should be ignored
if (![self hasMarkedText] && !fIsPressAndHold) {
if (![self hasMarkedText] && !fIsPressAndHold && ![event willBeHandledByComplexInputMethod]) {
[self deliverJavaKeyEventHelper: event];
}


@@ -32,6 +32,7 @@
#import "LWCToolkit.h"
@class AWTView;
@class AWTWindowZoomButtonMouseResponder;
@interface AWTWindow : NSObject <NSWindowDelegate> {
@private
@@ -79,6 +80,7 @@
@property (nonatomic, retain) NSLayoutConstraint *customTitleBarHeightConstraint;
@property (nonatomic, retain) NSMutableArray *customTitleBarButtonCenterXConstraints;
@property (nonatomic) BOOL hideTabController;
@property (nonatomic, retain) AWTWindowZoomButtonMouseResponder* zoomButtonMouseResponder;
- (id) initWithPlatformWindow:(jobject)javaPlatformWindow
ownerWindow:owner
@@ -128,8 +130,14 @@
NSColor* _color;
}
- (void)configureColors;
@end
@interface AWTWindowZoomButtonMouseResponder : NSResponder
- (id) initWithWindow:(NSWindow*)window;
@end
#endif _AWTWINDOW_H


@@ -82,6 +82,7 @@ BOOL isColorMatchingEnabled() {
@interface NSWindow (Private)
- (void)_setTabBarAccessoryViewController:(id)controller;
- (void)setIgnoreMove:(BOOL)value;
- (BOOL)isIgnoreMove;
- (void)_adjustWindowToScreen;
@end
@@ -403,6 +404,10 @@ AWT_NS_WINDOW_IMPLEMENTATION
self.movable = !value;
}
- (BOOL)isIgnoreMove {
return _ignoreMove;
}
- (void)_adjustWindowToScreen {
if (_ignoreMove) {
self.movable = YES;
@@ -808,6 +813,7 @@ AWT_ASSERT_APPKIT_THREAD;
self.customTitleBarConstraints = nil;
self.customTitleBarHeightConstraint = nil;
self.customTitleBarButtonCenterXConstraints = nil;
self.zoomButtonMouseResponder = nil;
[super dealloc];
}
@@ -1659,6 +1665,9 @@ static const CGFloat DefaultHorizontalTitleBarButtonOffset = 20.0;
]];
[self.nsWindow setIgnoreMove:YES];
self.zoomButtonMouseResponder = [[AWTWindowZoomButtonMouseResponder alloc] initWithWindow:self.nsWindow];
[self.zoomButtonMouseResponder release]; // property retains the object
AWTWindowDragView* windowDragView = [[AWTWindowDragView alloc] initWithPlatformWindow:self.javaPlatformWindow];
[titlebar addSubview:windowDragView positioned:NSWindowBelow relativeTo:closeButtonView];
@@ -1906,6 +1915,62 @@ static const CGFloat DefaultHorizontalTitleBarButtonOffset = 20.0;
@end // AWTWindow
@implementation AWTWindowZoomButtonMouseResponder {
NSWindow* _window;
NSTrackingArea* _trackingArea;
}
- (id) initWithWindow:(NSWindow*)window {
self = [super init];
if (self == nil) {
return nil;
}
if (![window isKindOfClass: [AWTWindow_Normal class]]) {
[self release];
return nil;
}
NSView* zoomButtonView = [window standardWindowButton:NSWindowZoomButton];
if (zoomButtonView == nil) {
[self release];
return nil;
}
_window = [window retain];
_trackingArea = [[NSTrackingArea alloc]
initWithRect:zoomButtonView.bounds
options:NSTrackingMouseEnteredAndExited | NSTrackingActiveInKeyWindow
owner:self
userInfo:nil
];
[zoomButtonView addTrackingArea:_trackingArea];
return self;
}
- (void)mouseEntered:(NSEvent*)event {
if ([_window isIgnoreMove]) {
// Enable moving the window while we're mousing over the "maximize" button so that
// macOS 15 window tiling actions can properly appear
_window.movable = YES;
}
}
- (void)mouseExited:(NSEvent*)event {
if ([_window isIgnoreMove]) {
_window.movable = NO;
}
}
- (void)dealloc {
[_window release];
[_trackingArea release];
[super dealloc];
}
@end
@implementation AWTWindowDragView {
BOOL _dragging;
}


@@ -85,10 +85,13 @@
@property jboolean useMaskColor;
@property (readonly, strong) id<MTLDevice> device;
@property (readonly) NSString* shadersLib;
@property (strong) id<MTLCommandQueue> commandQueue;
@property (strong) id<MTLCommandQueue> blitCommandQueue;
@property (strong) id<MTLBuffer> vertexBuffer;
@property (readonly) NSMutableDictionary<NSNumber*, NSValue*>* displayLinks;
@property (readonly) EncoderManager * encoderManager;
@property (readonly) MTLSamplerManager * samplerManager;
@property (readonly) MTLStencilManager * stencilManager;
@@ -106,10 +109,13 @@
*/
+ (MTLContext*) setSurfacesEnv:(JNIEnv*)env src:(jlong)pSrc dst:(jlong)pDst;
- (id)initWithDevice:(jint)displayID shadersLib:(NSString*)shadersLib;
+ (NSMutableDictionary*) contextStore;
+ (MTLContext*) createContextWithDeviceIfAbsent:(jint)displayID shadersLib:(NSString*)mtlShadersLib;
- (id)initWithDevice:(id<MTLDevice>)device display:(jint) displayID shadersLib:(NSString*)mtlShadersLib;
- (void)dealloc;
- (void)handleDisplayLink: (BOOL)enabled source:(const char*)src;
- (void)handleDisplayLink:(BOOL)enabled display:(jint)display source:(const char*)src;
- (void)createDisplayLinkIfAbsent: (jint)displayID;
/**
* Resets the current clip state (disables both scissor and depth tests).

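The `contextStore` / `createContextWithDeviceIfAbsent` / `releaseContext` scheme declared here amounts to a reference-counted cache keyed by device: acquisition reuses (and retains) an existing context, and the last release evicts the entry from the store. A language-neutral sketch of that lifecycle in Java (names hypothetical; the real code keys by `registryID` and leans on Objective-C retain counts rather than an explicit counter):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of a create-if-absent, reference-counted store:
// one shared value per key, freed only when the last holder releases it.
class ContextStore<K, V> {
    private static final class Entry<V> {
        final V value;
        int refCount;
        Entry(V v) { value = v; refCount = 1; }
    }

    private final Map<K, Entry<V>> store = new HashMap<>();

    // Returns the cached value for the key, creating one on first use and
    // bumping the reference count on reuse.
    synchronized V acquire(K key, Function<K, V> factory) {
        Entry<V> e = store.get(key);
        if (e == null) {
            e = new Entry<>(factory.apply(key));
            store.put(key, e);
        } else {
            e.refCount++;
        }
        return e.value;
    }

    // Drops one reference; the entry is evicted once the last holder releases it.
    synchronized void release(K key) {
        Entry<V> e = store.get(key);
        if (e != null && --e.refCount == 0) {
            store.remove(key);
        }
    }

    synchronized int size() { return store.size(); }
}
```

Two acquires for the same key hand back the same instance; the entry survives the first release and disappears after the second, mirroring the retain/release tracing in the Metal context code.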

@@ -72,7 +72,11 @@ static struct TxtVertex verts[PGRAM_VERTEX_COUNT] = {
{{-1.0, 1.0}, {0.0, 0.0}}
};
MTLTransform* tempTransform = nil;
typedef struct {
jint displayID;
CVDisplayLinkRef displayLink;
MTLContext* mtlc;
} DLParams;
@implementation MTLCommandBufferWrapper {
id<MTLCommandBuffer> _commandBuffer;
@@ -134,7 +138,6 @@ MTLTransform* tempTransform = nil;
@implementation MTLContext {
MTLCommandBufferWrapper * _commandBufferWrapper;
CVDisplayLinkRef _displayLink;
NSMutableSet* _layers;
int _displayLinkCount;
CFTimeInterval _lastRedrawTime;
@@ -153,34 +156,101 @@ MTLTransform* tempTransform = nil;
@synthesize textureFunction,
vertexCacheEnabled, aaEnabled, useMaskColor,
device, pipelineStateStorage,
device, shadersLib, pipelineStateStorage,
commandQueue, blitCommandQueue, vertexBuffer,
texturePool, paint=_paint, encoderManager=_encoderManager,
samplerManager=_samplerManager, stencilManager=_stencilManager;
extern void initSamplers(id<MTLDevice> device);
- (id)initWithDevice:(jint)displayID shadersLib:(NSString*)shadersLib {
+ (NSMutableDictionary*) contextStore {
static NSMutableDictionary<NSNumber*, MTLContext*> *_contextStore;
static dispatch_once_t oncePredicate;
dispatch_once(&oncePredicate, ^{
_contextStore = [[NSMutableDictionary alloc] init];
});
return _contextStore;
}
+ (MTLContext*) createContextWithDeviceIfAbsent:(jint)displayID shadersLib:(NSString*)mtlShadersLib {
// Initialization code here.
id<MTLDevice> device = CGDirectDisplayCopyCurrentMetalDevice(displayID);
if (device == nil) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "MTLContext_createContextWithDeviceIfAbsent(): Cannot create device from "
"displayID=%d", displayID)
// Fallback to the default metal device for main display
jint mainDisplayID = CGMainDisplayID();
if (displayID == mainDisplayID) {
device = MTLCreateSystemDefaultDevice();
}
if (device == nil) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "MTLContext_createContextWithDeviceIfAbsent(): Cannot fallback to default "
"metal device")
return nil;
}
}
id<NSCopying> devID = nil;
if (@available(macOS 10.13, *)) {
devID = @(device.registryID);
} else {
devID = device.name;
}
MTLContext* mtlc = MTLContext.contextStore[devID];
if (mtlc == nil) {
mtlc = [[MTLContext alloc] initWithDevice:device display:displayID shadersLib:mtlShadersLib];
if (mtlc != nil) {
MTLContext.contextStore[devID] = mtlc;
[mtlc release];
J2dRlsTraceLn4(J2D_TRACE_INFO,
"MTLContext_createContextWithDeviceIfAbsent: new context(%p) for display=%d device=\"%s\" "
"retainCount=%d", mtlc, displayID, [mtlc.device.name UTF8String], mtlc.retainCount)
}
} else {
if (![mtlc.shadersLib isEqualToString:mtlShadersLib]) {
J2dRlsTraceLn3(J2D_TRACE_ERROR,
"MTLContext_createContextWithDeviceIfAbsent: cannot reuse context(%p) for display=%d "
"device=\"%s\", shaders lib has been changed", mtlc, displayID, [mtlc.device.name UTF8String])
return nil;
}
[mtlc retain];
J2dRlsTraceLn4(J2D_TRACE_INFO,
"MTLContext_createContextWithDeviceIfAbsent: reuse context(%p) for display=%d device=\"%s\" "
"retainCount=%d", mtlc, displayID, [mtlc.device.name UTF8String], mtlc.retainCount)
}
[mtlc createDisplayLinkIfAbsent:displayID];
return mtlc;
}
+ (void) releaseContext:(MTLContext*) mtlc {
id<NSCopying> devID = nil;
if (@available(macOS 10.13, *)) {
devID = @(mtlc.device.registryID);
} else {
devID = mtlc.device.name;
}
MTLContext* ctx = MTLContext.contextStore[devID];
if (mtlc == ctx) {
if (mtlc.retainCount > 1) {
[mtlc release];
J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLContext_releaseContext: release context(%p) retainCount=%d", mtlc, mtlc.retainCount);
} else {
[MTLContext.contextStore removeObjectForKey:devID];
J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLContext_releaseContext: dealloc context(%p)", mtlc);
}
}
}
- (id)initWithDevice:(id<MTLDevice>)mtlDevice display:(jint) displayID shadersLib:(NSString*)mtlShadersLib {
self = [super init];
if (self) {
// Initialization code here.
device = CGDirectDisplayCopyCurrentMetalDevice(displayID);
if (device == nil) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "MTLContext.initWithDevice(): Cannot create device from displayID=%d",
displayID);
// Fallback to the default metal device for main display
jint mainDisplayID = CGMainDisplayID();
if (displayID == mainDisplayID) {
device = MTLCreateSystemDefaultDevice();
}
if (device == nil) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "MTLContext.initWithDevice(): Cannot fallback to default metal device");
return nil;
}
}
device = mtlDevice;
shadersLib = [[NSString alloc] initWithString:mtlShadersLib];
pipelineStateStorage = [[MTLPipelineStatesStorage alloc] initWithDevice:device shaderLibPath:shadersLib];
if (pipelineStateStorage == nil) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "MTLContext.initWithDevice(): Failed to initialize MTLPipelineStatesStorage.");
@@ -213,25 +283,8 @@ extern void initSamplers(id<MTLDevice> device);
_displayLinkCount = 0;
_lastRedrawTime = 0.0;
if (isDisplaySyncEnabled()) {
_displayLinks = [[NSMutableDictionary alloc] init];
_layers = [[NSMutableSet alloc] init];
if (TRACE_CVLINK) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLContext_CVDisplayLinkCreateWithCGDisplay: "
"ctx=%p displayID=%d", self, displayID);
}
CHECK_CVLINK("CreateWithCGDisplay",
CVDisplayLinkCreateWithCGDisplay(displayID, &_displayLink));
if (_displayLink == nil) {
J2dRlsTraceLn(J2D_TRACE_ERROR,
"MTLContext.initWithDevice(): Failed to initialize CVDisplayLink.");
return nil;
} else {
CHECK_CVLINK("SetOutputCallback", CVDisplayLinkSetOutputCallback(
_displayLink,
&mtlDisplayLinkCallback,
(__bridge void *) self));
}
} else {
_displayLink = nil;
}
_glyphCacheLCD = [[MTLGlyphCache alloc] initWithContext:self];
_glyphCacheAA = [[MTLGlyphCache alloc] initWithContext:self];
@@ -239,13 +292,47 @@ extern void initSamplers(id<MTLDevice> device);
return self;
}
- (void)handleDisplayLink: (BOOL)enabled source:(const char*)src {
if (_displayLink == nil) {
- (void)createDisplayLinkIfAbsent: (jint)displayID {
if (isDisplaySyncEnabled() && _displayLinks[@(displayID)] == nil) {
CVDisplayLinkRef _displayLink;
if (TRACE_CVLINK) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLContext_createDisplayLinkIfAbsent: "
"ctx=%p displayID=%d", self, displayID);
}
CHECK_CVLINK("CreateWithCGDisplay",
CVDisplayLinkCreateWithCGDisplay(displayID, &_displayLink));
if (_displayLink == nil) {
J2dRlsTraceLn(J2D_TRACE_ERROR,
"MTLContext_createDisplayLinkIfAbsent: Failed to initialize CVDisplayLink.");
} else {
DLParams* dlParams = malloc(sizeof (DLParams ));
dlParams->displayID = displayID;
dlParams->displayLink = _displayLink;
dlParams->mtlc = self;
_displayLinks[@(displayID)] = [NSValue valueWithPointer:dlParams];
CHECK_CVLINK("SetOutputCallback", CVDisplayLinkSetOutputCallback(
_displayLink,
&mtlDisplayLinkCallback,
(void *) dlParams));
}
}
}
- (void)handleDisplayLink: (BOOL)enabled display:(jint) display source:(const char*)src {
if (_displayLinks == nil) {
if (TRACE_CVLINK) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLContext_handleDisplayLink[%s]: "
"ctx=%p - displayLink = nil", src, self);
"ctx=%p - displayLinks = nil", src, self);
}
} else {
NSValue* dlParamsVal = _displayLinks[@(display)];
if (dlParamsVal == nil) {
J2dRlsTraceLn3(J2D_TRACE_ERROR, "MTLContext_handleDisplayLink[%s]: "
"ctx=%p, no display link for %d", src, self, display);
return;
}
DLParams *dlParams = [dlParamsVal pointerValue];
CVDisplayLinkRef _displayLink = dlParams->displayLink;
if (enabled) {
if (!CVDisplayLinkIsRunning(_displayLink)) {
CHECK_CVLINK("Start", CVDisplayLinkStart(_displayLink));
@@ -269,10 +356,11 @@ extern void initSamplers(id<MTLDevice> device);
- (void)dealloc {
J2dTraceLn(J2D_TRACE_INFO, "MTLContext.dealloc");
if (_displayLink != nil) {
if (_displayLinks != nil) {
[self haltRedraw];
}
[shadersLib release];
// TODO : Check that texturePool is completely released.
// texturePool content is released in MTLCommandBufferWrapper.onComplete()
//self.texturePool = nil;
@@ -627,7 +715,7 @@ extern void initSamplers(id<MTLDevice> device);
}
}
- (void) redraw {
- (void) redraw:(NSNumber*)displayIDNum {
AWT_ASSERT_APPKIT_THREAD;
/*
* Avoid repeated invocations by UIKit Main Thread
@@ -650,7 +738,7 @@ extern void initSamplers(id<MTLDevice> device);
if (_layers.count > 0) {
[_layers removeAllObjects];
}
[self handleDisplayLink:NO source:"redraw"];
[self handleDisplayLink:NO display:[displayIDNum integerValue] source:"redraw"];
}
}
@@ -658,8 +746,8 @@ CVReturn mtlDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp*
{
J2dTraceLn1(J2D_TRACE_VERBOSE, "MTLContext_mtlDisplayLinkCallback: ctx=%p", displayLinkContext);
@autoreleasepool {
MTLContext *ctx = (__bridge MTLContext *)displayLinkContext;
[ThreadUtilities performOnMainThread:@selector(redraw) on:ctx withObject:nil waitUntilDone:NO];
DLParams* dlParams = (__bridge DLParams*)displayLinkContext;
[ThreadUtilities performOnMainThread:@selector(redraw:) on:dlParams->mtlc withObject:@(dlParams->displayID) waitUntilDone:NO];
}
return kCVReturnSuccess;
}
@@ -674,25 +762,25 @@ CVReturn mtlDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp*
// Request for redraw before starting display link to avoid rendering problem on M2 processor
[layer setNeedsDisplay];
}
[self handleDisplayLink:YES source:"startRedraw"];
[self handleDisplayLink:YES display:layer.displayID source:"startRedraw"];
}
- (void)stopRedraw:(MTLLayer*) layer {
AWT_ASSERT_APPKIT_THREAD;
J2dTraceLn2(J2D_TRACE_VERBOSE, "MTLContext_stopRedraw: ctx=%p layer=%p", self, layer);
if (_displayLink != nil) {
if (_displayLinks != nil) {
if (--layer.redrawCount <= 0) {
[_layers removeObject:layer];
layer.redrawCount = 0;
}
if ((_layers.count == 0) && (_displayLinkCount == 0)) {
[self handleDisplayLink:NO source:"stopRedraw"];
[self handleDisplayLink:NO display:layer.displayID source:"stopRedraw"];
}
}
}
- (void)haltRedraw {
if (_displayLink != nil) {
if (_displayLinks != nil) {
if (TRACE_CVLINK) {
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLContext_haltRedraw: ctx=%p", self);
}
@@ -703,10 +791,16 @@ CVReturn mtlDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp*
[_layers removeAllObjects];
}
_displayLinkCount = 0;
[self handleDisplayLink:NO source:"haltRedraw"];
CVDisplayLinkRelease(_displayLink);
_displayLink = NULL;
NSEnumerator<NSNumber*>* keyEnum = _displayLinks.keyEnumerator;
NSNumber* displayIDVal;
while ((displayIDVal = [keyEnum nextObject])) {
[self handleDisplayLink:NO display:[displayIDVal integerValue] source:"haltRedraw"];
DLParams *dlParams = [(NSValue*)_displayLinks[displayIDVal] pointerValue];
CVDisplayLinkRelease(dlParams->displayLink);
free(dlParams);
}
[_displayLinks release];
_displayLinks = NULL;
}
}


@@ -45,6 +45,7 @@
*/
typedef struct _MTLGraphicsConfigInfo {
MTLContext *context;
jint displayID;
} MTLGraphicsConfigInfo;
#endif /* MTLGraphicsConfig_h_Included */


@@ -48,7 +48,7 @@ MTLGC_DestroyMTLGraphicsConfig(jlong pConfigInfo)
mtlinfo->context = nil;
[ThreadUtilities performOnMainThreadWaiting:NO block:^() {
if (mtlc != NULL) {
[mtlc release];
[MTLContext releaseContext:mtlc];
}
free(mtlinfo);
}];
@@ -92,7 +92,7 @@ JNI_COCOA_ENTER(env);
[ThreadUtilities performOnMainThreadWaiting:YES block:^() {
MTLContext* mtlc = [[MTLContext alloc] initWithDevice:displayID
MTLContext* mtlc = [MTLContext createContextWithDeviceIfAbsent:displayID
shadersLib:path];
if (mtlc != 0L) {
// create the MTLGraphicsConfigInfo record for this context
@@ -100,6 +100,7 @@ JNI_COCOA_ENTER(env);
if (mtlinfo != NULL) {
memset(mtlinfo, 0, sizeof(MTLGraphicsConfigInfo));
mtlinfo->context = mtlc;
mtlinfo->displayID = displayID;
} else {
J2dRlsTraceLn(J2D_TRACE_ERROR, "MTLGraphicsConfig_getMTLConfigInfo: could not allocate memory for mtlinfo");
[mtlc release];


@@ -34,6 +34,7 @@
@property (nonatomic) jobject javaLayer;
@property (readwrite, assign) MTLContext* ctx;
@property (readwrite, assign) NSInteger displayID;
@property (readwrite, assign) id<MTLTexture>* buffer;
@property (readwrite, assign) id<MTLTexture>* outBuffer;
@property (readwrite, atomic) int nextDrawableCount;


@@ -554,6 +554,7 @@ Java_sun_java2d_metal_MTLLayer_validate
layer.buffer = &bmtlsdo->pTexture;
layer.outBuffer = &bmtlsdo->pOutTexture;
layer.ctx = ((MTLSDOps *)bmtlsdo->privOps)->configInfo->context;
layer.displayID = ((MTLSDOps *)bmtlsdo->privOps)->configInfo->displayID;
layer.device = layer.ctx.device;
layer.pixelFormat = MTLPixelFormatBGRA8Unorm;


@@ -36,8 +36,11 @@
#include "MTLRenderQueue.h"
#include "MTLRenderer.h"
#include "MTLTextRenderer.h"
#include "MTLUtils.h"
#import "ThreadUtilities.h"
#define TRACE_OP 0
/**
* References to the "current" context and destination surface.
*/
@@ -48,6 +51,64 @@ jint mtlPreviousOp = MTL_OP_INIT;
extern BOOL isDisplaySyncEnabled();
extern void MTLGC_DestroyMTLGraphicsConfig(jlong pConfigInfo);
static const char* mtlOpCodeToStr(uint opcode);
static const char* mtlOpToStr(uint op);
/*
* Derived from JNI_COCOA_ENTER(env):
* Create a pool and initiate a try block to catch any exception
*/
#define RENDER_LOOP_ENTER(env) \
BOOL sync = NO; \
jint opcode = -1; \
const NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; \
@try
/*
* Derived from JNI_COCOA_EXIT(env):
* Don't allow NSExceptions to escape to Java.
 * If there is a Java exception that has been thrown, it should escape.
* And ensure we drain the auto-release pool.
*/
#define RENDER_LOOP_EXIT(env, className) \
@catch (NSException *e) { \
lwc_plog(env, "%s_flushBuffer: Failed opcode=%s op=%s dstType=%s ctx=%p", \
className, mtlOpCodeToStr(opcode), mtlOpToStr(mtlPreviousOp), \
mtlDstTypeToStr(DST_TYPE(dstOps)), mtlc); \
const char* str = [NSString stringWithFormat:@"%@", [e description]].UTF8String; \
lwc_plog(env, "%s_flushBuffer Exception: %s", className, str); \
str = [NSString stringWithFormat:@"%@", [e callStackSymbols]].UTF8String; \
lwc_plog(env, "%s_flushBuffer callstack: %s", className, str); \
/* Finally (JetBrains Runtime only) report this message to JVM crash log: */ \
JNU_LOG_EVENT(env, "%s_flushBuffer: Failed opcode=%s op=%s dstType=%s ctx=%p", \
className, mtlOpCodeToStr(opcode), mtlOpToStr(mtlPreviousOp), \
mtlDstTypeToStr(DST_TYPE(dstOps)), mtlc); \
/* report fatal failure to make a crash report: */ \
JNU_Fatal(env, __FILE__, __LINE__, "flushBuffer failed"); \
} \
@finally { \
/* flush GPU state before draining pool: */ \
MTLRenderQueue_reset(mtlc, sync); \
[pool drain]; \
}
void MTLRenderQueue_reset(MTLContext* context, BOOL sync)
{
// Ensure flushing encoder before draining the NSAutoreleasePool:
if (context != NULL) {
if (mtlPreviousOp == MTL_OP_MASK_OP) {
MTLVertexCache_DisableMaskCache(context);
}
if (isDisplaySyncEnabled()) {
[context commitCommandBuffer:NO display:YES];
} else {
[context commitCommandBuffer:sync display:YES];
}
}
RESET_PREVIOUS_OP();
}
void MTLRenderQueue_CheckPreviousOp(jint op) {
if (mtlPreviousOp == op) {
@@ -69,6 +130,9 @@ void MTLRenderQueue_CheckPreviousOp(jint op) {
J2dTraceLn1(J2D_TRACE_VERBOSE,
"MTLRenderQueue_CheckPreviousOp: new op=%d", op);
if (TRACE_OP) J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLRenderQueue_CheckPreviousOp: op=%s\tnew op=%s",
mtlOpToStr(mtlPreviousOp), mtlOpToStr(op));
switch (mtlPreviousOp) {
case MTL_OP_INIT :
mtlPreviousOp = op;
@@ -99,7 +163,6 @@ Java_sun_java2d_metal_MTLRenderQueue_flushBuffer
jlong buf, jint limit)
{
unsigned char *b, *end;
BOOL sync = NO;
J2dTraceLn1(J2D_TRACE_INFO,
"MTLRenderQueue_flushBuffer: limit=%d", limit);
@@ -111,13 +174,17 @@ Java_sun_java2d_metal_MTLRenderQueue_flushBuffer
}
end = b + limit;
@autoreleasepool {
// Handle any NSException thrown:
RENDER_LOOP_ENTER(env)
{
while (b < end) {
jint opcode = NEXT_INT(b);
opcode = NEXT_INT(b);
J2dTraceLn2(J2D_TRACE_VERBOSE,
"MTLRenderQueue_flushBuffer: opcode=%d, rem=%d",
opcode, (end-b));
if (TRACE_OP) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLRenderQueue_flushBuffer: opcode=%s", mtlOpCodeToStr(opcode));
switch (opcode) {
@@ -143,7 +210,6 @@ Java_sun_java2d_metal_MTLRenderQueue_flushBuffer
CHECK_RENDER_OP(MTL_OP_OTHER, dstOps, sync);
if ([mtlc useXORComposite]) {
[mtlc commitCommandBuffer:YES display:NO];
J2dTraceLn(J2D_TRACE_VERBOSE,
"DRAW_RECT in XOR mode - Force commit earlier draw calls before DRAW_RECT.");
@@ -660,6 +726,20 @@ Java_sun_java2d_metal_MTLRenderQueue_flushBuffer
break;
}
case sun_java2d_pipe_BufferedOpCodes_FLUSH_BUFFER:
{
CHECK_PREVIOUS_OP(MTL_OP_OTHER);
jlong pLayerPtr = NEXT_LONG(b);
MTLLayer* layer = (MTLLayer*)pLayerPtr;
if (layer != nil) {
[layer flushBuffer];
} else {
J2dRlsTraceLn(J2D_TRACE_ERROR,
"MTLRenderQueue_flushBuffer(FLUSH_BUFFER): MTLLayer is nil");
}
break;
}
case sun_java2d_pipe_BufferedOpCodes_SYNC:
{
CHECK_PREVIOUS_OP(MTL_OP_SYNC);
@@ -862,40 +942,14 @@ Java_sun_java2d_metal_MTLRenderQueue_flushBuffer
break;
}
case sun_java2d_pipe_BufferedOpCodes_FLUSH_BUFFER:
{
CHECK_PREVIOUS_OP(MTL_OP_OTHER);
jlong pLayerPtr = NEXT_LONG(b);
MTLLayer* layer = (MTLLayer*)pLayerPtr;
if (layer != nil) {
[layer flushBuffer];
} else {
J2dRlsTraceLn(J2D_TRACE_ERROR,
"MTLRenderQueue_flushBuffer(FLUSH_BUFFER): MTLLayer is nil");
}
break;
}
default:
J2dRlsTraceLn1(J2D_TRACE_ERROR,
"MTLRenderQueue_flushBuffer: invalid opcode=%d", opcode);
return;
}
}
if (mtlc != NULL) {
if (mtlPreviousOp == MTL_OP_MASK_OP) {
MTLVertexCache_DisableMaskCache(mtlc);
}
if (isDisplaySyncEnabled()) {
[mtlc commitCommandBuffer:NO display:YES];
} else {
[mtlc commitCommandBuffer:sync display:YES];
}
}
RESET_PREVIOUS_OP();
} // while op
}
RENDER_LOOP_EXIT(env, "MTLRenderQueue");
}
/**
@@ -916,4 +970,108 @@ BMTLSDOps *
MTLRenderQueue_GetCurrentDestination()
{
return dstOps;
}
}
/* debugging helper functions */
static const char* mtlOpToStr(uint op) {
#undef CASE_MTL_OP
#define CASE_MTL_OP(X) \
case MTL_OP_##X: \
return #X;
switch (op) {
CASE_MTL_OP(INIT)
CASE_MTL_OP(AA)
CASE_MTL_OP(SET_COLOR)
CASE_MTL_OP(RESET_PAINT)
CASE_MTL_OP(SYNC)
CASE_MTL_OP(SHAPE_CLIP_SPANS)
CASE_MTL_OP(MASK_OP)
CASE_MTL_OP(OTHER)
default:
return "";
}
#undef CASE_MTL_OP
}
static const char* mtlOpCodeToStr(uint opcode) {
#undef CASE_BUF_OP
#define CASE_BUF_OP(X) \
case sun_java2d_pipe_BufferedOpCodes_##X: \
return #X;
switch (opcode) {
// draw ops
CASE_BUF_OP(DRAW_LINE)
CASE_BUF_OP(DRAW_RECT)
CASE_BUF_OP(DRAW_POLY)
CASE_BUF_OP(DRAW_PIXEL)
CASE_BUF_OP(DRAW_SCANLINES)
CASE_BUF_OP(DRAW_PARALLELOGRAM)
CASE_BUF_OP(DRAW_AAPARALLELOGRAM)
// fill ops
CASE_BUF_OP(FILL_RECT)
CASE_BUF_OP(FILL_SPANS)
CASE_BUF_OP(FILL_PARALLELOGRAM)
CASE_BUF_OP(FILL_AAPARALLELOGRAM)
// copy-related ops
CASE_BUF_OP(COPY_AREA)
CASE_BUF_OP(BLIT)
CASE_BUF_OP(MASK_FILL)
CASE_BUF_OP(MASK_BLIT)
CASE_BUF_OP(SURFACE_TO_SW_BLIT)
// text-related ops
CASE_BUF_OP(DRAW_GLYPH_LIST)
// state-related ops
CASE_BUF_OP(SET_RECT_CLIP)
CASE_BUF_OP(BEGIN_SHAPE_CLIP)
CASE_BUF_OP(SET_SHAPE_CLIP_SPANS)
CASE_BUF_OP(END_SHAPE_CLIP)
CASE_BUF_OP(RESET_CLIP)
CASE_BUF_OP(SET_ALPHA_COMPOSITE)
CASE_BUF_OP(SET_XOR_COMPOSITE)
CASE_BUF_OP(RESET_COMPOSITE)
CASE_BUF_OP(SET_TRANSFORM)
CASE_BUF_OP(RESET_TRANSFORM)
// context-related ops
CASE_BUF_OP(SET_SURFACES)
CASE_BUF_OP(SET_SCRATCH_SURFACE)
CASE_BUF_OP(FLUSH_SURFACE)
CASE_BUF_OP(DISPOSE_SURFACE)
CASE_BUF_OP(DISPOSE_CONFIG)
CASE_BUF_OP(INVALIDATE_CONTEXT)
CASE_BUF_OP(SYNC)
CASE_BUF_OP(RESTORE_DEVICES)
CASE_BUF_OP(CONFIGURE_SURFACE) /* unsupported */
CASE_BUF_OP(SWAP_BUFFERS) /* unsupported */
CASE_BUF_OP(FLUSH_BUFFER)
// special no-op (mainly used for achieving 8-byte alignment)
CASE_BUF_OP(NOOP)
// paint-related ops
CASE_BUF_OP(RESET_PAINT)
CASE_BUF_OP(SET_COLOR)
CASE_BUF_OP(SET_GRADIENT_PAINT)
CASE_BUF_OP(SET_LINEAR_GRADIENT_PAINT)
CASE_BUF_OP(SET_RADIAL_GRADIENT_PAINT)
CASE_BUF_OP(SET_TEXTURE_PAINT)
// BufferedImageOp-related ops
CASE_BUF_OP(ENABLE_CONVOLVE_OP)
CASE_BUF_OP(DISABLE_CONVOLVE_OP)
CASE_BUF_OP(ENABLE_RESCALE_OP)
CASE_BUF_OP(DISABLE_RESCALE_OP)
CASE_BUF_OP(ENABLE_LOOKUP_OP)
CASE_BUF_OP(DISABLE_LOOKUP_OP)
default:
return "";
}
#undef CASE_BUF_OP
}
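The two lookup helpers above rely on the classic stringizing macro trick: one token (`INIT`, `SYNC`, …) expands into both the `case` label and the returned string literal, so the table cannot drift out of sync with the enum. A minimal standalone C sketch of the same pattern, using a hypothetical enum in place of the `MTL_OP_*` constants:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for the MTL_OP_* constants. */
enum { OP_INIT, OP_SYNC, OP_OTHER };

/* Same trick as mtlOpToStr: the # stringizer turns the macro
 * argument into the returned literal, the ## paste builds the label. */
static const char *op_to_str(unsigned op) {
#define CASE_OP(X) case OP_##X: return #X;
    switch (op) {
        CASE_OP(INIT)
        CASE_OP(SYNC)
        CASE_OP(OTHER)
        default:
            return "";
    }
#undef CASE_OP
}
```

The `#undef` before and after (as in the original) keeps the helper macro from leaking into other translation units when the file is amalgamated.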


@@ -1,5 +1,6 @@
/*
* Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,50 +26,31 @@
#ifndef MTLTexturePool_h_Included
#define MTLTexturePool_h_Included
#include <time.h>
#import "MTLUtils.h"
@class MTLPoolCell;
@interface MTLTexturePoolItem : NSObject
@property (readwrite, retain) id<MTLTexture> texture;
@property (readwrite) bool isBusy;
@property (readwrite) time_t lastUsed;
@property (readwrite) bool isMultiSample;
@property (readwrite, assign) MTLTexturePoolItem* prev;
@property (readwrite, retain) MTLTexturePoolItem* next;
@property (readwrite, assign) MTLPoolCell* cell;
- (id) initWithTexture:(id<MTLTexture>)tex cell:(MTLPoolCell*)cell;
@end
#import <Metal/Metal.h>
@interface MTLPooledTextureHandle : NSObject
@property (readonly, assign) id<MTLTexture> texture;
@property (readonly) MTLRegion rect;
- (void) releaseTexture;
@property (readonly, assign) id<MTLTexture> texture;
@property (readonly) NSUInteger reqWidth;
@property (readonly) NSUInteger reqHeight;
// used by MTLCommandBufferWrapper.onComplete() to release used textures:
- (void) releaseTexture;
@end
// NOTE: owns all MTLTexture objects
@interface MTLTexturePool : NSObject
@property (readwrite, retain) id<MTLDevice> device;
@property (readwrite, retain) id<MTLDevice> device;
@property (readwrite) uint64_t memoryAllocated;
@property (readwrite) uint64_t totalMemoryAllocated;
@property (readwrite) uint32_t allocatedCount;
@property (readwrite) uint32_t totalAllocatedCount;
@property (readwrite) uint64_t cacheHits;
@property (readwrite) uint64_t totalHits;
- (id) initWithDevice:(id<MTLDevice>)device;
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format;
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format
isMultiSample:(bool)isMultiSample;
@end
- (id) initWithDevice:(id<MTLDevice>)device;
@interface MTLPoolCell : NSObject
@property (readwrite, retain) MTLTexturePoolItem* available;
@property (readwrite, assign) MTLTexturePoolItem* availableTail;
@property (readwrite, retain) MTLTexturePoolItem* occupied;
- (MTLTexturePoolItem *)createItem:(id<MTLDevice>)dev
width:(int)width
height:(int)height
format:(MTLPixelFormat)format
isMultiSample:(bool)isMultiSample;
- (NSUInteger)cleanIfBefore:(time_t)lastUsedTimeToRemove;
- (void)releaseItem:(MTLTexturePoolItem *)item;
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format;
@end
#endif /* MTLTexturePool_h_Included */


@@ -0,0 +1,823 @@
/*
* Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#include "time.h"
#import "AccelTexturePool.h"
#import "MTLTexturePool.h"
#import "Trace.h"
#define USE_ACCEL_TEXTURE_POOL 0
#define TRACE_LOCK 0
#define TRACE_TEX 0
/* lock API */
ATexturePoolLockPrivPtr* MTLTexturePoolLock_initImpl(void) {
NSLock* l = [[[NSLock alloc] init] autorelease];
[l retain];
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_initImpl: lock=%p", l);
return l;
}
void MTLTexturePoolLock_DisposeImpl(ATexturePoolLockPrivPtr *lock) {
NSLock* l = (NSLock*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_DisposeImpl: lock=%p", l);
[l release];
}
void MTLTexturePoolLock_lockImpl(ATexturePoolLockPrivPtr *lock) {
NSLock* l = (NSLock*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_lockImpl: lock=%p", l);
[l lock];
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_lockImpl: lock=%p - locked", l);
}
void MTLTexturePoolLock_unlockImpl(ATexturePoolLockPrivPtr *lock) {
NSLock* l = (NSLock*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_unlockImpl: lock=%p", l);
[l unlock];
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePoolLock_unlockImpl: lock=%p - unlocked", l);
}
/* Texture allocate/free API */
static id<MTLTexture> MTLTexturePool_createTexture(id<MTLDevice> device,
int width,
int height,
long format)
{
MTLTextureDescriptor *textureDescriptor =
[MTLTextureDescriptor texture2DDescriptorWithPixelFormat:(MTLPixelFormat)format
width:(NSUInteger) width
height:(NSUInteger) height
mipmapped:NO];
// By default:
// usage = MTLTextureUsageShaderRead
// storage = MTLStorageModeManaged
textureDescriptor.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
// Use autorelease so that MTLTexturePoolItem.dealloc frees the texture.
id <MTLTexture> texture = (id <MTLTexture>) [[device newTextureWithDescriptor:textureDescriptor] autorelease];
[texture retain];
if (TRACE_TEX) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "MTLTexturePool_createTexture: created texture: tex=%p, w=%d h=%d, pf=%d",
texture, [texture width], [texture height], [texture pixelFormat]);
return texture;
}
static int MTLTexturePool_bytesPerPixel(long format) {
switch ((MTLPixelFormat)format) {
case MTLPixelFormatBGRA8Unorm:
return 4;
case MTLPixelFormatA8Unorm:
return 1;
default:
J2dRlsTraceLn1(J2D_TRACE_ERROR, "MTLTexturePool_bytesPerPixel: format=%ld not supported (4 bytes by default)", format);
return 4;
}
}
static void MTLTexturePool_freeTexture(id<MTLDevice> device, id<MTLTexture> texture) {
if (TRACE_TEX) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "MTLTexturePool_freeTexture: free texture: tex=%p, w=%d h=%d, pf=%d",
texture, [texture width], [texture height], [texture pixelFormat]);
[texture release];
}
/*
* Former but updated MTLTexturePool implementation
*/
#define USE_MAX_GPU_DEVICE_MEM 1
#define MAX_GPU_DEVICE_MEM (512 * UNIT_MB)
#define SCREEN_MEMORY_SIZE_5K (5120 * 4096 * 4) // ~ 84 mb
#define MAX_POOL_ITEM_LIFETIME_SEC 30
// 32-pixel cells
#define CELL_WIDTH_BITS 5
#define CELL_HEIGHT_BITS 5
#define CELL_WIDTH_MASK ((1 << CELL_WIDTH_BITS) - 1)
#define CELL_HEIGHT_MASK ((1 << CELL_HEIGHT_BITS) - 1)
#define USE_CEIL_SIZE 1
#define FORCE_GC 1
// force gc (prune old textures):
#define FORCE_GC_INTERVAL_SEC (MAX_POOL_ITEM_LIFETIME_SEC * 10)
// force young gc every 15 seconds (prune only not reused textures):
#define YOUNG_GC_INTERVAL_SEC 15
#define YOUNG_GC_LIFETIME_SEC (FORCE_GC_INTERVAL_SEC * 2)
#define TRACE_GC 1
#define TRACE_GC_ALIVE 0
#define TRACE_MEM_API 0
#define TRACE_USE_API 0
#define TRACE_REUSE 0
#define INIT_TEST 0
#define INIT_TEST_STEP 1
#define INIT_TEST_MAX 1024
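The cell constants above bucket textures by size: with `CELL_WIDTH_BITS`/`CELL_HEIGHT_BITS` both 5, each pool cell covers a 32x32-pixel size class, and a requested width/height is mapped to its cell by a right shift. A small C sketch of that indexing (the helper name is hypothetical; the pool itself indexes `_cells[cy * _poolCellWidth + cx]`):

```c
#include <assert.h>

/* Mirrors CELL_WIDTH_BITS / CELL_HEIGHT_BITS: 32x32-pixel size buckets. */
#define CELL_WIDTH_BITS  5
#define CELL_HEIGHT_BITS 5

/* Map a requested texture size to its flat cell index,
 * given the pool's cell-grid width (5120 >> 5 == 160 for a 5K layout). */
static int cell_index(int width, int height, int poolCellWidth) {
    const int cx = width  >> CELL_WIDTH_BITS;   /* column bucket */
    const int cy = height >> CELL_HEIGHT_BITS;  /* row bucket */
    return cy * poolCellWidth + cx;
}
```

All textures whose dimensions fall in the same 32-pixel band land in the same cell, which keeps the best-fit search local instead of scanning the whole pool.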
/* private definitions */
@class MTLPoolCell;
@interface MTLTexturePoolItem : NSObject
@property (readwrite, assign) id<MTLTexture> texture;
@property (readwrite, assign) MTLPoolCell* cell;
@property (readwrite, assign) MTLTexturePoolItem* prev;
@property (readwrite, retain) MTLTexturePoolItem* next;
@property (readwrite) time_t lastUsed;
@property (readwrite) int width;
@property (readwrite) int height;
@property (readwrite) MTLPixelFormat format;
@property (readwrite) int reuseCount;
@property (readwrite) bool isBusy;
@end
@interface MTLPoolCell : NSObject
@property (readwrite, retain) MTLTexturePoolItem* available;
@property (readwrite, assign) MTLTexturePoolItem* availableTail;
@property (readwrite, retain) MTLTexturePoolItem* occupied;
@end
@implementation MTLTexturePoolItem
@synthesize texture, lastUsed, next, cell, width, height, format, reuseCount, isBusy;
- (id) initWithTexture:(id<MTLTexture>)tex
cell:(MTLPoolCell*)c
width:(int)w
height:(int)h
format:(MTLPixelFormat)f
{
self = [super init];
if (self == nil) return self;
self.texture = tex;
self.cell = c;
self.next = nil;
self.prev = nil;
self.lastUsed = 0;
self.width = w;
self.height = h;
self.format = f;
self.reuseCount = 0;
self.isBusy = NO;
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLTexturePoolItem_initWithTexture: item = %p", self);
return self;
}
- (void) dealloc {
if (TRACE_MEM_API) J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLTexturePoolItem_dealloc: item = %p - reuse: %4d", self, self.reuseCount);
// use the native free-texture API to release the allocated texture:
MTLTexturePool_freeTexture(nil, self.texture);
[super dealloc];
}
- (void) releaseItem {
if (!isBusy) {
return;
}
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLTexturePoolItem_releaseItem: item = %p", self);
if (self.cell != nil) {
[self.cell releaseCellItem:self];
} else {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "MTLTexturePoolItem_releaseItem: item = %p (detached)", self);
}
}
@end
/* MTLPooledTextureHandle API */
@implementation MTLPooledTextureHandle
{
id<MTLTexture> _texture;
MTLTexturePoolItem * _poolItem;
ATexturePoolHandle* _texHandle;
NSUInteger _reqWidth;
NSUInteger _reqHeight;
}
@synthesize texture = _texture, reqWidth = _reqWidth, reqHeight = _reqHeight;
- (id) initWithPoolItem:(id<MTLTexture>)texture poolItem:(MTLTexturePoolItem *)poolItem reqWidth:(NSUInteger)w reqHeight:(NSUInteger)h {
self = [super init];
if (self == nil) return self;
_texture = texture;
_poolItem = poolItem;
_texHandle = nil;
_reqWidth = w;
_reqHeight = h;
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPooledTextureHandle_initWithPoolItem: handle = %p", self);
return self;
}
- (id) initWithTextureHandle:(ATexturePoolHandle*)texHandle {
self = [super init];
if (self == nil) return self;
_texture = ATexturePoolHandle_GetTexture(texHandle);
_poolItem = nil;
_texHandle = texHandle;
_reqWidth = ATexturePoolHandle_GetRequestedWidth(texHandle);
_reqHeight = ATexturePoolHandle_GetRequestedHeight(texHandle);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPooledTextureHandle_initWithTextureHandle: handle = %p", self);
return self;
}
- (void) releaseTexture {
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPooledTextureHandle_ReleaseTexture: handle = %p", self);
if (_texHandle != nil) {
ATexturePoolHandle_ReleaseTexture(_texHandle);
}
if (_poolItem != nil) {
[_poolItem releaseItem];
}
}
@end
@implementation MTLPoolCell {
MTLTexturePool* _pool;
NSLock* _lock;
}
@synthesize available, availableTail, occupied;
- (instancetype) init:(MTLTexturePool*)pool {
self = [super init];
if (self == nil) return self;
self.available = nil;
self.availableTail = nil;
self.occupied = nil;
_pool = pool;
_lock = [[NSLock alloc] init];
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_init: cell = %p", self);
return self;
}
- (void) removeAllItems {
if (TRACE_MEM_API) J2dRlsTraceLn(J2D_TRACE_INFO, "MTLPoolCell_removeAllItems");
MTLTexturePoolItem *cur = self.available;
MTLTexturePoolItem *next = nil;
while (cur != nil) {
next = cur.next;
self.available = cur;
cur = next;
}
cur = self.occupied;
next = nil;
while (cur != nil) {
next = cur.next;
J2dRlsTraceLn1(J2D_TRACE_ERROR, "MTLPoolCell_removeAllItems: occupied item = %p (detached)", cur);
// Do not dispose (may leak) until MTLTexturePoolItem_releaseItem() is called by handle:
// mark item as detached:
cur.cell = nil;
cur = next;
self.occupied = cur;
}
self.availableTail = nil;
}
- (void) removeAvailableItem:(MTLTexturePoolItem*)item {
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_removeAvailableItem: item = %p", item);
[item retain];
if (item.prev == nil) {
self.available = item.next;
if (item.next) {
item.next.prev = nil;
item.next = nil;
} else {
self.availableTail = item.prev;
}
} else {
item.prev.next = item.next;
if (item.next) {
item.next.prev = item.prev;
item.next = nil;
} else {
self.availableTail = item.prev;
}
}
[item release];
}
- (void) dealloc {
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_dealloc: cell = %p", self);
[_lock lock];
@try {
[self removeAllItems];
} @finally {
[_lock unlock];
}
[_lock release];
[super dealloc];
}
- (void) occupyItem:(MTLTexturePoolItem *)item {
if (item.isBusy) {
return;
}
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_occupyItem: item = %p", item);
[item retain];
if (item.prev == nil) {
self.available = item.next;
if (item.next) {
item.next.prev = nil;
} else {
self.availableTail = item.prev;
}
} else {
item.prev.next = item.next;
if (item.next) {
item.next.prev = item.prev;
} else {
self.availableTail = item.prev;
}
item.prev = nil;
}
if (occupied) {
occupied.prev = item;
}
item.next = occupied;
self.occupied = item;
item.isBusy = YES;
[item release];
}
- (void) releaseCellItem:(MTLTexturePoolItem *)item {
if (!item.isBusy) return;
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_releaseCellItem: item = %p", item);
[_lock lock];
@try {
[item retain];
if (item.prev == nil) {
self.occupied = item.next;
if (item.next) {
item.next.prev = nil;
}
} else {
item.prev.next = item.next;
if (item.next) {
item.next.prev = item.prev;
}
item.prev = nil;
}
if (self.available) {
self.available.prev = item;
} else {
self.availableTail = item;
}
item.next = self.available;
self.available = item;
item.isBusy = NO;
[item release];
} @finally {
[_lock unlock];
}
}
- (void) addOccupiedItem:(MTLTexturePoolItem *)item {
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_addOccupiedItem: item = %p", item);
[_lock lock];
@try {
_pool.allocatedCount++;
_pool.totalAllocatedCount++;
if (self.occupied) {
self.occupied.prev = item;
}
item.next = self.occupied;
self.occupied = item;
item.isBusy = YES;
} @finally {
[_lock unlock];
}
}
- (void) cleanIfBefore:(time_t)lastUsedTimeToRemove {
[_lock lock];
@try {
MTLTexturePoolItem *cur = availableTail;
while (cur != nil) {
MTLTexturePoolItem *prev = cur.prev;
if ((cur.reuseCount == 0) || (lastUsedTimeToRemove <= 0) || (cur.lastUsed < lastUsedTimeToRemove)) {
if (TRACE_MEM_API) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "MTLPoolCell_cleanIfBefore: remove pool item: tex=%p, w=%d h=%d, elapsed=%d",
cur.texture, cur.width, cur.height,
time(NULL) - cur.lastUsed);
const int requestedBytes = cur.width * cur.height * MTLTexturePool_bytesPerPixel(cur.format);
// removeAvailableItem: may deallocate cur, so requestedBytes is computed first:
[self removeAvailableItem:cur];
_pool.allocatedCount--;
_pool.memoryAllocated -= requestedBytes;
} else {
if (TRACE_MEM_API || TRACE_GC_ALIVE) J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLPoolCell_cleanIfBefore: item = %p - ALIVE - reuse: %4d -> 0",
cur, cur.reuseCount);
// clear reuse count anyway:
cur.reuseCount = 0;
}
cur = prev;
}
} @finally {
[_lock unlock];
}
}
- (MTLTexturePoolItem *) occupyCellItem:(int)width height:(int)height format:(MTLPixelFormat)format {
int minDeltaArea = -1;
const int requestedPixels = width*height;
MTLTexturePoolItem *minDeltaTpi = nil;
[_lock lock];
@try {
for (MTLTexturePoolItem *cur = available; cur != nil; cur = cur.next) {
// TODO: use swizzle when formats are not equal:
if (cur.format != format) {
continue;
}
if (cur.width < width || cur.height < height) {
continue;
}
const int deltaArea = (const int) (cur.width * cur.height - requestedPixels);
if (minDeltaArea < 0 || deltaArea < minDeltaArea) {
minDeltaArea = deltaArea;
minDeltaTpi = cur;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
if (minDeltaTpi) {
[self occupyItem:minDeltaTpi];
}
} @finally {
[_lock unlock];
}
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLPoolCell_occupyCellItem: item = %p", minDeltaTpi);
return minDeltaTpi;
}
@end
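`occupyCellItem:` is a best-fit scan: among available items with the right pixel format that are at least as large as the request, it picks the one wasting the least area, stopping early on an exact match. The same selection logic, sketched in plain C over a hypothetical flat array instead of the cell's linked list:

```c
#include <assert.h>

/* Hypothetical flat stand-in for a cell's available-item list. */
typedef struct { int width, height, format; } Item;

/* Best-fit scan as in occupyCellItem: require matching format and
 * sufficient size, minimize wasted area, break early on delta == 0. */
static int best_fit(const Item *items, int n, int w, int h, int fmt) {
    int best = -1, minDelta = -1;
    for (int i = 0; i < n; i++) {
        if (items[i].format != fmt) continue;           /* TODO in source: swizzle */
        if (items[i].width < w || items[i].height < h) continue;
        const int delta = items[i].width * items[i].height - w * h;
        if (minDelta < 0 || delta < minDelta) {
            minDelta = delta;
            best = i;
            if (delta == 0) break;  /* exact match in this cell */
        }
    }
    return best;  /* index of the chosen item, or -1 if none fits */
}
```

Because candidates all come from one 32x32 size bucket, the list is short and the linear scan is cheap in practice.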
/* MTLTexturePool API */
@implementation MTLTexturePool {
void ** _cells;
int _poolCellWidth;
int _poolCellHeight;
uint64_t _maxPoolMemory;
uint64_t _memoryAllocated;
uint64_t _memoryTotalAllocated;
time_t _lastYoungGC;
time_t _lastFullGC;
time_t _lastGC;
ATexturePool* _accelTexturePool;
bool _enableGC;
}
@synthesize device, allocatedCount, totalAllocatedCount,
memoryAllocated = _memoryAllocated,
totalMemoryAllocated = _memoryTotalAllocated;
- (void) autoTest:(MTLPixelFormat)format {
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "MTLTexturePool_autoTest: step = %d", INIT_TEST_STEP);
_enableGC = false;
for (int w = 1; w <= INIT_TEST_MAX; w += INIT_TEST_STEP) {
for (int h = 1; h <= INIT_TEST_MAX; h += INIT_TEST_STEP) {
/* use auto-release pool to free memory as early as possible */
@autoreleasepool {
MTLPooledTextureHandle *texHandle = [self getTexture:w height:h format:format];
id<MTLTexture> texture = texHandle.texture;
if (texture == nil) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLTexturePool_autoTest: w= %d h= %d => texture is NULL !", w, h);
} else {
if (TRACE_MEM_API) J2dRlsTraceLn3(J2D_TRACE_VERBOSE, "MTLTexturePool_autoTest: w=%d h=%d => tex=%p",
w, h, texture);
}
[texHandle releaseTexture];
}
}
}
J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLTexturePool_autoTest: before GC: total allocated memory = %lld Mb (total allocs: %d)",
_memoryTotalAllocated / UNIT_MB, self.totalAllocatedCount);
_enableGC = true;
[self cleanIfNecessary:FORCE_GC_INTERVAL_SEC];
J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLTexturePool_autoTest: after GC: total allocated memory = %lld Mb (total allocs: %d)",
_memoryTotalAllocated / UNIT_MB, self.totalAllocatedCount);
}
- (id) initWithDevice:(id<MTLDevice>)dev {
// recommendedMaxWorkingSetSize typically greatly exceeds SCREEN_MEMORY_SIZE_5K constant.
// It usually corresponds to the VRAM available to the graphics card
uint64_t maxDeviceMemory = dev.recommendedMaxWorkingSetSize;
self = [super init];
if (self == nil) return self;
#if (USE_ACCEL_TEXTURE_POOL == 1)
ATexturePoolLockWrapper *lockWrapper = ATexturePoolLockWrapper_init(&MTLTexturePoolLock_initImpl,
&MTLTexturePoolLock_DisposeImpl,
&MTLTexturePoolLock_lockImpl,
&MTLTexturePoolLock_unlockImpl);
_accelTexturePool = ATexturePool_initWithDevice(dev,
(jlong)maxDeviceMemory,
&MTLTexturePool_createTexture,
&MTLTexturePool_freeTexture,
&MTLTexturePool_bytesPerPixel,
lockWrapper,
MTLPixelFormatBGRA8Unorm);
#endif
self.device = dev;
// use (5K) 5120-by-2880 resolution:
_poolCellWidth = 5120 >> CELL_WIDTH_BITS;
_poolCellHeight = 2880 >> CELL_HEIGHT_BITS;
const int cellsCount = _poolCellWidth * _poolCellHeight;
_cells = (void **)malloc(cellsCount * sizeof(void*));
CHECK_NULL_LOG_RETURN(_cells, NULL, "MTLTexturePool_initWithDevice: could not allocate cells");
memset(_cells, 0, cellsCount * sizeof(void*));
_maxPoolMemory = maxDeviceMemory / 2;
// Set maximum to handle at least 5K screen size
if (_maxPoolMemory < SCREEN_MEMORY_SIZE_5K) {
_maxPoolMemory = SCREEN_MEMORY_SIZE_5K;
} else if (USE_MAX_GPU_DEVICE_MEM && (_maxPoolMemory > MAX_GPU_DEVICE_MEM)) {
_maxPoolMemory = MAX_GPU_DEVICE_MEM;
}
self.allocatedCount = 0;
self.totalAllocatedCount = 0;
_memoryAllocated = 0;
_memoryTotalAllocated = 0;
_enableGC = true;
_lastGC = time(NULL);
_lastYoungGC = _lastGC;
_lastFullGC = _lastGC;
self.cacheHits = 0;
self.totalHits = 0;
if (TRACE_MEM_API) J2dRlsTraceLn2(J2D_TRACE_INFO, "MTLTexturePool_initWithDevice: pool = %p - maxPoolMemory = %lld", self, _maxPoolMemory);
if (INIT_TEST) {
static bool INIT_TEST_START = true;
if (INIT_TEST_START) {
INIT_TEST_START = false;
[self autoTest:MTLPixelFormatBGRA8Unorm];
}
}
return self;
}
- (void) dealloc {
#if (USE_ACCEL_TEXTURE_POOL == 1)
ATexturePoolLockWrapper_Dispose(ATexturePool_getLockWrapper(_accelTexturePool));
ATexturePool_Dispose(_accelTexturePool);
#endif
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "MTLTexturePool_dealloc: pool = %p", self);
for (int c = 0; c < _poolCellWidth * _poolCellHeight; ++c) {
MTLPoolCell *cell = _cells[c];
if (cell != NULL) {
[cell release];
}
}
free(_cells);
[super dealloc];
}
- (void) cleanIfNecessary:(int)lastUsedTimeThreshold {
time_t lastUsedTimeToRemove =
lastUsedTimeThreshold > 0 ?
time(NULL) - lastUsedTimeThreshold :
lastUsedTimeThreshold;
if (TRACE_MEM_API || TRACE_GC) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLTexturePool_cleanIfNecessary: before GC: allocated memory = %lld Kb (allocs: %d)",
_memoryAllocated / UNIT_KB, self.allocatedCount);
}
for (int cy = 0; cy < _poolCellHeight; ++cy) {
for (int cx = 0; cx < _poolCellWidth; ++cx) {
MTLPoolCell *cell = _cells[cy * _poolCellWidth + cx];
if (cell != NULL) {
[cell cleanIfBefore:lastUsedTimeToRemove];
}
}
}
if (TRACE_MEM_API || TRACE_GC) {
J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "MTLTexturePool_cleanIfNecessary: after GC: allocated memory = %lld Kb (allocs: %d) - hits = %lld (%.3lf %% cached)",
_memoryAllocated / UNIT_KB, self.allocatedCount,
self.totalHits, (self.totalHits != 0) ? (100.0 * self.cacheHits) / self.totalHits : 0.0);
// reset hits:
self.cacheHits = 0;
self.totalHits = 0;
}
}
// NOTE: called from RQ-thread (on blit operations)
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format {
#if (USE_ACCEL_TEXTURE_POOL == 1)
ATexturePoolHandle* texHandle = ATexturePool_getTexture(_accelTexturePool, width, height, format);
CHECK_NULL_RETURN(texHandle, NULL);
return [[[MTLPooledTextureHandle alloc] initWithTextureHandle:texHandle] autorelease];
#else
const int reqWidth = width;
const int reqHeight = height;
int cellX0 = width >> CELL_WIDTH_BITS;
int cellY0 = height >> CELL_HEIGHT_BITS;
if (USE_CEIL_SIZE) {
// use upper cell size to maximize cache hits:
const int remX0 = width & CELL_WIDTH_MASK;
const int remY0 = height & CELL_HEIGHT_MASK;
if (remX0 != 0) {
cellX0++;
}
if (remY0 != 0) {
cellY0++;
}
// adjust width / height to cell upper boundaries:
width = (cellX0) << CELL_WIDTH_BITS;
height = (cellY0) << CELL_HEIGHT_BITS;
if (TRACE_MEM_API) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "MTLTexturePool_getTexture: fixed tex size: (%d %d) => (%d %d)",
reqWidth, reqHeight, width, height);
}
// 1. clean pool if necessary
const int requestedPixels = width * height;
const int requestedBytes = requestedPixels * MTLTexturePool_bytesPerPixel(format);
const uint64_t neededMemoryAllocated = _memoryAllocated + requestedBytes;
if (neededMemoryAllocated > _maxPoolMemory) {
// release all free textures
[self cleanIfNecessary:0];
} else {
time_t now = time(NULL);
// ensure at least 1s has elapsed since the last GC check:
if ((now - _lastGC) > 0) {
_lastGC = now;
if (neededMemoryAllocated > _maxPoolMemory / 2) {
// release only old free textures
[self cleanIfNecessary:MAX_POOL_ITEM_LIFETIME_SEC];
} else if (FORCE_GC && _enableGC) {
if ((now - _lastFullGC) > FORCE_GC_INTERVAL_SEC) {
_lastFullGC = now;
_lastYoungGC = now;
// release only old free textures since last full-gc
[self cleanIfNecessary:FORCE_GC_INTERVAL_SEC];
} else if ((now - _lastYoungGC) > YOUNG_GC_INTERVAL_SEC) {
_lastYoungGC = now;
// release only not reused and old textures
[self cleanIfNecessary:YOUNG_GC_LIFETIME_SEC];
}
}
}
}
// 2. find free item
const int cellX1 = cellX0 + 1;
const int cellY1 = cellY0 + 1;
// Note: this code (test + resizing) is not thread-safe:
if (cellX1 > _poolCellWidth || cellY1 > _poolCellHeight) {
const int newCellWidth = cellX1 <= _poolCellWidth ? _poolCellWidth : cellX1;
const int newCellHeight = cellY1 <= _poolCellHeight ? _poolCellHeight : cellY1;
const int newCellsCount = newCellWidth * newCellHeight;
if (TRACE_MEM_API) J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "MTLTexturePool_getTexture: resize: %d -> %d",
_poolCellWidth * _poolCellHeight, newCellsCount);
void **newcells = malloc(newCellsCount * sizeof(void*));
CHECK_NULL_LOG_RETURN(newcells, NULL, "MTLTexturePool_getTexture: could not allocate newCells");
const size_t strideBytes = _poolCellWidth * sizeof(void*);
for (int cy = 0; cy < _poolCellHeight; ++cy) {
void **dst = newcells + cy * newCellWidth;
void **src = _cells + cy * _poolCellWidth;
memcpy(dst, src, strideBytes);
if (newCellWidth > _poolCellWidth)
memset(dst + _poolCellWidth, 0, (newCellWidth - _poolCellWidth) * sizeof(void*));
}
if (newCellHeight > _poolCellHeight) {
void **dst = newcells + _poolCellHeight * newCellWidth;
memset(dst, 0, (newCellHeight - _poolCellHeight) * newCellWidth * sizeof(void*));
}
free(_cells);
_cells = newcells;
_poolCellWidth = newCellWidth;
_poolCellHeight = newCellHeight;
}
MTLTexturePoolItem *minDeltaTpi = nil;
int minDeltaArea = -1;
for (int cy = cellY0; cy < cellY1; ++cy) {
for (int cx = cellX0; cx < cellX1; ++cx) {
MTLPoolCell * cell = _cells[cy * _poolCellWidth + cx];
if (cell != NULL) {
MTLTexturePoolItem* tpi = [cell occupyCellItem:width height:height format:format];
if (!tpi) {
continue;
}
const int deltaArea = (const int) (tpi.width * tpi.height - requestedPixels);
if (minDeltaArea < 0 || deltaArea < minDeltaArea) {
minDeltaArea = deltaArea;
minDeltaTpi = tpi;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
}
if (minDeltaTpi != nil) {
break;
}
}
if (minDeltaTpi == NULL) {
MTLPoolCell* cell = _cells[cellY0 * _poolCellWidth + cellX0];
if (cell == NULL) {
cell = [[MTLPoolCell alloc] init:self];
_cells[cellY0 * _poolCellWidth + cellX0] = cell;
}
// use device to allocate NEW texture:
id <MTLTexture> tex = MTLTexturePool_createTexture(device, width, height, format);
minDeltaTpi = [[[MTLTexturePoolItem alloc] initWithTexture:tex cell:cell
width:width height:height format:format] autorelease];
[cell addOccupiedItem: minDeltaTpi];
_memoryAllocated += requestedBytes;
_memoryTotalAllocated += requestedBytes;
J2dTraceLn6(J2D_TRACE_VERBOSE, "MTLTexturePool_getTexture: created pool item: tex=%p, w=%d h=%d, pf=%d | allocated memory = %lld Kb (allocs: %d)",
minDeltaTpi.texture, width, height, format, _memoryAllocated / UNIT_KB, allocatedCount);
if (TRACE_MEM_API) J2dRlsTraceLn6(J2D_TRACE_VERBOSE, "MTLTexturePool_getTexture: created pool item: tex=%p, w=%d h=%d, pf=%d | allocated memory = %lld Kb (allocs: %d)",
minDeltaTpi.texture, width, height, format, _memoryAllocated / UNIT_KB, allocatedCount);
} else {
self.cacheHits++;
minDeltaTpi.reuseCount++;
if (TRACE_REUSE) {
J2dRlsTraceLn5(J2D_TRACE_VERBOSE, "MTLTexturePool_getTexture: reused pool item: tex=%p, w=%d h=%d, pf=%d - reuse=%d",
minDeltaTpi.texture, width, height, format, minDeltaTpi.reuseCount);
}
}
self.totalHits++;
minDeltaTpi.lastUsed = time(NULL);
return [[[MTLPooledTextureHandle alloc] initWithPoolItem:minDeltaTpi.texture
poolItem:minDeltaTpi reqWidth:reqWidth reqHeight:reqHeight] autorelease];
#endif
}
@end

View File

@@ -1,455 +0,0 @@
/*
* Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#import "MTLTexturePool.h"
#import "Trace.h"
#define SCREEN_MEMORY_SIZE_5K (5120*4096*4) //~84 mb
#define MAX_POOL_ITEM_LIFETIME_SEC 30
#define CELL_WIDTH_BITS 5 // ~ 32 pixel
#define CELL_HEIGHT_BITS 5 // ~ 32 pixel
@implementation MTLTexturePoolItem
@synthesize texture, isBusy, lastUsed, isMultiSample, next, cell;
- (id) initWithTexture:(id<MTLTexture>)tex cell:(MTLPoolCell*)c{
self = [super init];
if (self == nil) return self;
self.texture = tex;
isBusy = NO;
self.next = nil;
self.prev = nil;
self.cell = c;
return self;
}
- (void) dealloc {
[texture release];
[super dealloc];
}
@end
@implementation MTLPooledTextureHandle
{
MTLRegion _rect;
id<MTLTexture> _texture;
MTLTexturePoolItem * _poolItem;
}
@synthesize texture = _texture, rect = _rect;
- (id) initWithPoolItem:(id<MTLTexture>)texture rect:(MTLRegion)rectangle poolItem:(MTLTexturePoolItem *)poolItem {
self = [super init];
if (self == nil) return self;
_rect = rectangle;
_texture = texture;
_poolItem = poolItem;
return self;
}
- (void) releaseTexture {
[_poolItem.cell releaseItem:_poolItem];
}
@end
@implementation MTLPoolCell {
NSLock* _lock;
}
@synthesize available, availableTail, occupied;
- (instancetype)init {
self = [super init];
if (self) {
self.available = nil;
self.availableTail = nil;
self.occupied = nil;
_lock = [[NSLock alloc] init];
}
return self;
}
- (void)occupyItem:(MTLTexturePoolItem *)item {
if (item.isBusy) return;
[item retain];
if (item.prev == nil) {
self.available = item.next;
if (item.next) {
item.next.prev = nil;
} else {
self.availableTail = item.prev;
}
} else {
item.prev.next = item.next;
if (item.next) {
item.next.prev = item.prev;
} else {
self.availableTail = item.prev;
}
item.prev = nil;
}
if (occupied) occupied.prev = item;
item.next = occupied;
self.occupied = item;
[item release];
item.isBusy = YES;
}
- (void)releaseItem:(MTLTexturePoolItem *)item {
[_lock lock];
@try {
if (!item.isBusy) return;
[item retain];
if (item.prev == nil) {
self.occupied = item.next;
if (item.next) item.next.prev = nil;
} else {
item.prev.next = item.next;
if (item.next) item.next.prev = item.prev;
item.prev = nil;
}
if (self.available) {
self.available.prev = item;
} else {
self.availableTail = item;
}
item.next = self.available;
self.available = item;
item.isBusy = NO;
[item release];
} @finally {
[_lock unlock];
}
}
- (void)addOccupiedItem:(MTLTexturePoolItem *)item {
if (self.occupied) self.occupied.prev = item;
item.next = self.occupied;
item.isBusy = YES;
self.occupied = item;
}
- (void)removeAvailableItem:(MTLTexturePoolItem*)item {
[item retain];
if (item.prev == nil) {
self.available = item.next;
if (item.next) {
item.next.prev = nil;
item.next = nil;
} else {
self.availableTail = item.prev;
}
} else {
item.prev.next = item.next;
if (item.next) {
item.next.prev = item.prev;
item.next = nil;
} else {
self.availableTail = item.prev;
}
}
[item release];
}
- (void)removeAllItems {
MTLTexturePoolItem *cur = self.available;
while (cur != nil) {
cur = cur.next;
self.available = cur;
}
cur = self.occupied;
while (cur != nil) {
cur = cur.next;
self.occupied = cur;
}
self.availableTail = nil;
}
- (MTLTexturePoolItem *)createItem:(id<MTLDevice>)dev
width:(int)width
height:(int)height
format:(MTLPixelFormat)format
isMultiSample:(bool)isMultiSample
{
MTLTextureDescriptor *textureDescriptor =
[MTLTextureDescriptor texture2DDescriptorWithPixelFormat:format
width:(NSUInteger) width
height:(NSUInteger) height
mipmapped:NO];
textureDescriptor.usage = MTLTextureUsageRenderTarget |
MTLTextureUsageShaderRead;
if (isMultiSample) {
textureDescriptor.textureType = MTLTextureType2DMultisample;
textureDescriptor.sampleCount = MTLAASampleCount;
textureDescriptor.storageMode = MTLStorageModePrivate;
}
id <MTLTexture> tex = (id <MTLTexture>) [[dev newTextureWithDescriptor:textureDescriptor] autorelease];
MTLTexturePoolItem* item = [[[MTLTexturePoolItem alloc] initWithTexture:tex cell:self] autorelease];
item.isMultiSample = isMultiSample;
[_lock lock];
@try {
[self addOccupiedItem:item];
} @finally {
[_lock unlock];
}
return item;
}
- (NSUInteger)cleanIfBefore:(time_t)lastUsedTimeToRemove {
NSUInteger deallocMem = 0;
[_lock lock];
MTLTexturePoolItem *cur = availableTail;
@try {
while (cur != nil) {
MTLTexturePoolItem *prev = cur.prev;
if (lastUsedTimeToRemove <= 0 ||
cur.lastUsed < lastUsedTimeToRemove) {
#ifdef DEBUG
J2dTraceImpl(J2D_TRACE_VERBOSE, JNI_TRUE,
"MTLTexturePool: remove pool item: tex=%p, w=%d h=%d, elapsed=%d",
cur.texture, cur.texture.width, cur.texture.height,
time(NULL) - cur.lastUsed);
#endif //DEBUG
deallocMem += cur.texture.width * cur.texture.height * 4;
[self removeAvailableItem:cur];
} else {
if (lastUsedTimeToRemove > 0) break;
}
cur = prev;
}
} @finally {
[_lock unlock];
}
return deallocMem;
}
- (MTLTexturePoolItem *)occupyItem:(int)width height:(int)height format:(MTLPixelFormat)format
isMultiSample:(bool)isMultiSample {
int minDeltaArea = -1;
const int requestedPixels = width*height;
MTLTexturePoolItem *minDeltaTpi = nil;
[_lock lock];
@try {
for (MTLTexturePoolItem *cur = available; cur != nil; cur = cur.next) {
if (cur.texture.pixelFormat != format
|| cur.isMultiSample != isMultiSample) { // TODO: use swizzle when formats are not equal
continue;
}
if (cur.texture.width < width || cur.texture.height < height) {
continue;
}
const int deltaArea = (const int) (cur.texture.width * cur.texture.height - requestedPixels);
if (minDeltaArea < 0 || deltaArea < minDeltaArea) {
minDeltaArea = deltaArea;
minDeltaTpi = cur;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
if (minDeltaTpi) {
[self occupyItem:minDeltaTpi];
}
} @finally {
[_lock unlock];
}
return minDeltaTpi;
}
- (void) dealloc {
[_lock lock];
@try {
[self removeAllItems];
} @finally {
[_lock unlock];
}
[_lock release];
[super dealloc];
}
@end
@implementation MTLTexturePool {
int _memoryTotalAllocated;
void ** _cells;
int _poolCellWidth;
int _poolCellHeight;
uint64_t _maxPoolMemory;
}
@synthesize device;
- (id) initWithDevice:(id<MTLDevice>)dev {
self = [super init];
if (self == nil) return self;
_memoryTotalAllocated = 0;
_poolCellWidth = 10;
_poolCellHeight = 10;
const int cellsCount = _poolCellWidth * _poolCellHeight;
_cells = (void **)malloc(cellsCount * sizeof(void*));
memset(_cells, 0, cellsCount * sizeof(void*));
self.device = dev;
// recommendedMaxWorkingSetSize typically greatly exceeds SCREEN_MEMORY_SIZE_5K constant.
// It usually corresponds to the VRAM available to the graphics card
_maxPoolMemory = self.device.recommendedMaxWorkingSetSize/2;
// Set maximum to handle at least 5K screen size
if (_maxPoolMemory < SCREEN_MEMORY_SIZE_5K) {
_maxPoolMemory = SCREEN_MEMORY_SIZE_5K;
}
return self;
}
- (void) dealloc {
for (int c = 0; c < _poolCellWidth * _poolCellHeight; ++c) {
MTLPoolCell * cell = _cells[c];
if (cell != NULL) {
[cell release];
}
}
free(_cells);
[super dealloc];
}
// NOTE: called from RQ-thread (on blit operations)
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format {
return [self getTexture:width height:height format:format isMultiSample:NO];
}
// NOTE: called from RQ-thread (on blit operations)
- (MTLPooledTextureHandle *) getTexture:(int)width height:(int)height format:(MTLPixelFormat)format
isMultiSample:(bool)isMultiSample {
// 1. clean pool if necessary
const int requestedPixels = width*height;
const int requestedBytes = requestedPixels*4;
if (_memoryTotalAllocated + requestedBytes > _maxPoolMemory) {
[self cleanIfNecessary:0]; // release all free textures
} else if (_memoryTotalAllocated + requestedBytes > _maxPoolMemory/2) {
[self cleanIfNecessary:MAX_POOL_ITEM_LIFETIME_SEC]; // release only old free textures
}
// 2. find free item
const int cellX0 = width >> CELL_WIDTH_BITS;
const int cellY0 = height >> CELL_HEIGHT_BITS;
const int cellX1 = cellX0 + 1;
const int cellY1 = cellY0 + 1;
if (cellX1 > _poolCellWidth || cellY1 > _poolCellHeight) {
const int newCellWidth = cellX1 <= _poolCellWidth ? _poolCellWidth : cellX1;
const int newCellHeight = cellY1 <= _poolCellHeight ? _poolCellHeight : cellY1;
const int newCellsCount = newCellWidth*newCellHeight;
#ifdef DEBUG
J2dTraceLn2(J2D_TRACE_VERBOSE, "MTLTexturePool: resize: %d -> %d", _poolCellWidth * _poolCellHeight, newCellsCount);
#endif
void ** newcells = malloc(newCellsCount*sizeof(void*));
const int strideBytes = _poolCellWidth * sizeof(void*);
for (int cy = 0; cy < _poolCellHeight; ++cy) {
void ** dst = newcells + cy*newCellWidth;
void ** src = _cells + cy * _poolCellWidth;
memcpy(dst, src, strideBytes);
if (newCellWidth > _poolCellWidth)
memset(dst + _poolCellWidth, 0, (newCellWidth - _poolCellWidth) * sizeof(void*));
}
if (newCellHeight > _poolCellHeight) {
void ** dst = newcells + _poolCellHeight * newCellWidth;
memset(dst, 0, (newCellHeight - _poolCellHeight) * newCellWidth * sizeof(void*));
}
free(_cells);
_cells = newcells;
_poolCellWidth = newCellWidth;
_poolCellHeight = newCellHeight;
}
MTLTexturePoolItem * minDeltaTpi = nil;
int minDeltaArea = -1;
for (int cy = cellY0; cy < cellY1; ++cy) {
for (int cx = cellX0; cx < cellX1; ++cx) {
MTLPoolCell * cell = _cells[cy * _poolCellWidth + cx];
if (cell != NULL) {
MTLTexturePoolItem* tpi = [cell occupyItem:width height:height
format:format isMultiSample:isMultiSample];
if (!tpi) continue;
const int deltaArea = (const int) (tpi.texture.width * tpi.texture.height - requestedPixels);
if (minDeltaArea < 0 || deltaArea < minDeltaArea) {
minDeltaArea = deltaArea;
minDeltaTpi = tpi;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
}
if (minDeltaTpi != nil) {
break;
}
}
if (minDeltaTpi == NULL) {
MTLPoolCell* cell = _cells[cellY0 * _poolCellWidth + cellX0];
if (cell == NULL) {
cell = [[MTLPoolCell alloc] init];
_cells[cellY0 * _poolCellWidth + cellX0] = cell;
}
minDeltaTpi = [cell createItem:device width:width height:height format:format isMultiSample:isMultiSample];
_memoryTotalAllocated += requestedBytes;
J2dTraceLn5(J2D_TRACE_VERBOSE, "MTLTexturePool: created pool item: tex=%p, w=%d h=%d, pf=%d | total memory = %d Kb", minDeltaTpi.texture, width, height, format, _memoryTotalAllocated/1024);
}
minDeltaTpi.isBusy = YES;
minDeltaTpi.lastUsed = time(NULL);
return [[[MTLPooledTextureHandle alloc] initWithPoolItem:minDeltaTpi.texture
rect:MTLRegionMake2D(0, 0,
minDeltaTpi.texture.width,
minDeltaTpi.texture.height)
poolItem:minDeltaTpi] autorelease];
}
- (void) cleanIfNecessary:(int)lastUsedTimeThreshold {
time_t lastUsedTimeToRemove =
lastUsedTimeThreshold > 0 ?
time(NULL) - lastUsedTimeThreshold :
lastUsedTimeThreshold;
for (int cy = 0; cy < _poolCellHeight; ++cy) {
for (int cx = 0; cx < _poolCellWidth; ++cx) {
MTLPoolCell * cell = _cells[cy * _poolCellWidth + cx];
if (cell != NULL) {
_memoryTotalAllocated -= [cell cleanIfBefore:lastUsedTimeToRemove];
}
}
}
}
@end

View File

@@ -27,7 +27,12 @@
#define MTLUtils_h_Included
#import <Metal/Metal.h>
#import "MTLSurfaceDataBase.h"
#define MTLAASampleCount 4
#define DST_TYPE(dstOps) ((dstOps != NULL) ? dstOps->drawableType : MTLSD_UNDEFINED)
const char* mtlDstTypeToStr(uint op);
#endif /* MTLUtils_h_Included */

View File

@@ -82,3 +82,22 @@ jboolean isOptionEnabled(const char * option) {
NSString * lowerCaseProp = [optionProp localizedLowercaseString];
return [@"true" isEqual:lowerCaseProp];
}
const char* mtlDstTypeToStr(uint op) {
#undef CASE_MTLSD_OP
#define CASE_MTLSD_OP(X) \
case MTLSD_##X: \
return #X;
switch (op) {
CASE_MTLSD_OP(UNDEFINED)
CASE_MTLSD_OP(WINDOW)
CASE_MTLSD_OP(TEXTURE)
CASE_MTLSD_OP(FLIP_BACKBUFFER)
CASE_MTLSD_OP(RT_TEXTURE)
default:
return "";
}
#undef CASE_MTLSD_OP
}

View File

@@ -222,11 +222,12 @@
*/
#define JNI_COCOA_EXIT(env) \
} \
@catch (NSException *e) { \
NSLog(@"%@", [e callStackSymbols]); \
@catch (NSException *e) { \
NSLog(@"Apple AWT Cocoa Exception: %@", [e description]); \
NSLog(@"Apple AWT Cocoa Exception callstack: %@", [e callStackSymbols]); \
} \
@finally { \
[pool drain]; \
[pool drain]; \
};
/* Same as above but adds a clean up action.
@@ -236,10 +237,11 @@
} \
@catch (NSException *e) { \
{ action; }; \
NSLog(@"%@", [e callStackSymbols]); \
NSLog(@"Apple AWT Cocoa Exception: %@", [e description]); \
NSLog(@"Apple AWT Cocoa Exception callstack: %@", [e callStackSymbols]); \
} \
@finally { \
[pool drain]; \
[pool drain]; \
};
/******** STRING CONVERSION SUPPORT *********/

View File

@@ -139,7 +139,6 @@ __attribute__((visibility("default")))
+ (void)detachCurrentThread;
+ (void)setAppkitThreadGroup:(jobject)group;
+ (NSString*)getCaller;
+ (void)performOnMainThreadNowOrLater:(void (^)())block;
+ (void)performOnMainThreadWaiting:(BOOL)wait block:(void (^)())block;
+ (void)performOnMainThread:(SEL)aSelector on:(id)target withObject:(id)arg waitUntilDone:(BOOL)wait;
@@ -149,4 +148,7 @@ __attribute__((visibility("default")))
JNIEXPORT void OSXAPP_SetJavaVM(JavaVM *vm);
/* LWCToolkit's PlatformLogger wrapper */
JNIEXPORT void lwc_plog(JNIEnv* env, const char *formatMsg, ...);
#endif /* __THREADUTILITIES_H */

View File

@@ -31,12 +31,6 @@
#import "ThreadUtilities.h"
#define USE_LWC_LOG 1
#if USE_LWC_LOG == 1
void lwc_plog(JNIEnv* env, const char *formatMsg, ...);
#endif
/* Returns the MainThread latency threshold in milliseconds
* used to detect slow operations that may cause high latencies or delays.
* If <=0, the MainThread monitor is disabled */
@@ -142,7 +136,7 @@ AWT_ASSERT_APPKIT_THREAD;
if ([NSThread isMainThread]) {
block();
} else {
[self performOnMainThread:@selector(invokeBlockCopy:) on:self withObject:Block_copy(block) waitUntilDone:NO];
[ThreadUtilities performOnMainThread:@selector(invokeBlockCopy:) on:self withObject:Block_copy(block) waitUntilDone:NO];
}
}
@@ -150,7 +144,7 @@ AWT_ASSERT_APPKIT_THREAD;
if ([NSThread isMainThread] && wait) {
block();
} else {
[self performOnMainThread:@selector(invokeBlockCopy:) on:self withObject:Block_copy(block) waitUntilDone:wait];
[ThreadUtilities performOnMainThread:@selector(invokeBlockCopy:) on:self withObject:Block_copy(block) waitUntilDone:wait];
}
}
@@ -180,22 +174,19 @@ AWT_ASSERT_APPKIT_THREAD;
setBlockingEventDispatchThread(NO);
}
});
[self performSelectorOnMainThread:@selector(invokeBlockCopy:) withObject:blockCopy waitUntilDone:YES modes:javaModes];
[ThreadUtilities performSelectorOnMainThread:@selector(invokeBlockCopy:) withObject:blockCopy waitUntilDone:YES modes:javaModes];
} else {
[target performSelectorOnMainThread:aSelector withObject:arg waitUntilDone:wait modes:javaModes];
}
} else {
// Perform instrumentation on selector:
const NSString* caller = [self getCaller];
const NSString* caller = [ThreadUtilities getCaller];
BOOL invokeDirect = NO;
BOOL blockingEDT;
BOOL blockingEDT = NO;
if ([NSThread isMainThread] && wait) {
invokeDirect = YES;
blockingEDT = NO;
} else if (wait && isEventDispatchThread()) {
blockingEDT = YES;
} else {
blockingEDT = NO;
}
const char* operation = (invokeDirect ? "now " : (blockingEDT ? "block" : "later"));
@@ -212,18 +203,14 @@ AWT_ASSERT_APPKIT_THREAD;
}
const double elapsedMs = (CACurrentMediaTime() - start) * 1000.0;
if (elapsedMs > mtThreshold) {
#if USE_LWC_LOG == 1
lwc_plog([self getJNIEnv], "performOnMainThread(%s)[time: %.3lf ms]: [%s]", operation, elapsedMs, toCString(caller));
#else
NSLog(@"performOnMainThread(%s)[time: %.3lf ms]: [%@]", operation, elapsedMs, caller);
#endif
lwc_plog([ThreadUtilities getJNIEnv], "performOnMainThread(%s)[time: %.3lf ms]: [%s]", operation, elapsedMs, toCString(caller));
}
}
});
if (invokeDirect) {
[self performSelector:@selector(invokeBlockCopy:) withObject:blockCopy];
[ThreadUtilities performSelector:@selector(invokeBlockCopy:) withObject:blockCopy];
} else {
[self performSelectorOnMainThread:@selector(invokeBlockCopy:) withObject:blockCopy waitUntilDone:wait modes:javaModes];
[ThreadUtilities performSelectorOnMainThread:@selector(invokeBlockCopy:) withObject:blockCopy waitUntilDone:wait modes:javaModes];
}
}
}
@@ -264,11 +251,11 @@ JNIEXPORT void JNICALL Java_sun_awt_AWTThreading_notifyEventDispatchThreadStarte
}
}
#if USE_LWC_LOG == 1
void lwc_plog(JNIEnv* env, const char *formatMsg, ...) {
if (formatMsg == NULL)
/* LWCToolkit's PlatformLogger wrapper */
JNIEXPORT void lwc_plog(JNIEnv* env, const char *formatMsg, ...) {
if ((env == NULL) || (formatMsg == NULL)) {
return;
}
static jobject loggerObject = NULL;
static jmethodID midWarn = NULL;
@@ -286,20 +273,28 @@ void lwc_plog(JNIEnv* env, const char *formatMsg, ...) {
}
GET_METHOD(midWarn, clazz, "warning", "(Ljava/lang/String;)V");
}
if (midWarn != NULL) {
va_list args;
va_start(args, formatMsg);
/* the formatted message can be large (e.g. a stack trace) => 16 KB buffer */
const int bufSize = 16 * 1024;
char buf[bufSize];
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wformat-nonliteral"
vsnprintf(buf, bufSize, formatMsg, args);
#pragma clang diagnostic pop
va_end(args);
va_list args;
va_start(args, formatMsg);
const int bufSize = 512;
char buf[bufSize];
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wformat-nonliteral"
vsnprintf(buf, bufSize, formatMsg, args);
#pragma clang diagnostic pop
va_end(args);
jstring jstr = (*env)->NewStringUTF(env, buf);
(*env)->CallVoidMethod(env, loggerObject, midWarn, jstr);
(*env)->DeleteLocalRef(env, jstr);
const jstring javaString = (*env)->NewStringUTF(env, buf);
if ((*env)->ExceptionCheck(env)) {
// fallback:
NSLog(@"%s\n", buf);
} else {
JNU_CHECK_EXCEPTION(env);
(*env)->CallVoidMethod(env, loggerObject, midWarn, javaString);
CHECK_EXCEPTION();
return;
}
(*env)->DeleteLocalRef(env, javaString);
}
}
#endif

View File

@@ -4876,6 +4876,9 @@ public abstract class Component implements ImageObserver, MenuContainer,
eventLog.finest("{0}", e);
}
if (id == MouseEvent.MOUSE_ENTERED && getToolkit() instanceof SunToolkit toolkit) {
toolkit.updateLastMouseEventComponent(this);
}
/*
* 0. Set timestamp and modifiers of current event.
*/
@@ -7157,6 +7160,12 @@ public abstract class Component implements ImageObserver, MenuContainer,
setGlobalPermanentFocusOwner(null);
}
if (getToolkit() instanceof SunToolkit toolkit) {
if (toolkit.getLastMouseEventComponent() == this) {
toolkit.updateLastMouseEventComponent(null);
}
}
synchronized (getTreeLock()) {
if (isFocusOwner() && KeyboardFocusManager.isAutoFocusTransferEnabledFor(this)) {
transferFocus(true);

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -23,7 +23,6 @@
* questions.
*/
package java.awt;
import java.awt.image.BufferedImage;
@@ -38,7 +37,6 @@ import sun.font.FontManager;
import sun.font.FontManagerFactory;
import sun.java2d.HeadlessGraphicsEnvironment;
import sun.java2d.SunGraphicsEnvironment;
import sun.security.action.GetPropertyAction;
/**
*

View File

@@ -0,0 +1,131 @@
/*
* Copyright (c) 1996, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package sun.awt;
import java.awt.Component;
import java.awt.Container;
import java.awt.Cursor;
import java.awt.Point;
import java.util.concurrent.atomic.AtomicBoolean;
public abstract class CachedCursorManager {
/**
* A flag to indicate if the update is scheduled, so we don't process it
* twice.
*/
private final AtomicBoolean updatePending = new AtomicBoolean(false);
/**
* Sets the cursor to correspond to the component currently under the mouse.
*
* This method should not be executed on the toolkit thread as it
* calls into user code (e.g. Container.findComponentAt).
*/
public final void updateCursor() {
updatePending.set(false);
updateCursorImpl();
}
/**
* Schedules updating the cursor on the corresponding event dispatch
* thread for the given window.
*
* This method is called on the toolkit thread as a result of a
* native update cursor request (e.g. WM_SETCURSOR on Windows).
*/
public final void updateCursorLater(final Component window) {
if (updatePending.compareAndSet(false, true)) {
Runnable r = new Runnable() {
@Override
public void run() {
updateCursor();
}
};
SunToolkit.executeOnEventHandlerThread(window, r);
}
}
protected abstract Cursor getCursorByPosition(final Point cursorPos, Component c);
private void updateCursorImpl() {
final Point cursorPos = getCursorPosition();
final Component c = findComponent(cursorPos);
Cursor cursor = getCursorByPosition(cursorPos, c);
if (cursor == null) {
cursor = (c != null) ? c.getCursor() : null;
}
if (cursor != null) {
setCursor(cursor);
}
}
protected abstract Component getComponentUnderCursor();
protected abstract Point getLocationOnScreen(Component component);
/**
* Returns the first visible, enabled and showing component under cursor.
* Returns null for modal blocked windows.
*
* @param cursorPos Current cursor position.
* @return Component or null.
*/
protected Component findComponent(final Point cursorPos) {
Component component = getComponentUnderCursor();
if (component != null) {
if (component instanceof Container && component.isShowing()) {
final Point p = getLocationOnScreen(component);
component = AWTAccessor.getContainerAccessor().findComponentAt(
(Container) component, cursorPos.x - p.x, cursorPos.y - p.y, false);
}
while (component != null) {
final Object p = AWTAccessor.getComponentAccessor().getPeer(component);
if (component.isVisible() && component.isEnabled() && p != null) {
break;
}
component = component.getParent();
}
}
return component;
}
/**
* Returns the current cursor position.
*/
public abstract Point getCursorPosition();
/**
* Sets a cursor. The cursor can be null if the mouse is not over a Java
* window.
* @param cursor the new {@code Cursor}.
*/
protected abstract void setCursor(Cursor cursor);
}


@@ -236,6 +236,17 @@ public abstract class SunToolkit extends Toolkit
AccessController.doPrivileged(new GetBooleanAction("awt.lock.fair")));
private static final Condition AWT_LOCK_COND = AWT_LOCK.newCondition();
/*
* The component that received the last mouse event. Used by the cursor
* manager to find the component under the cursor. Currently used only on Windows.
*/
public void updateLastMouseEventComponent(Component component) {
}
public Component getLastMouseEventComponent() {
return null;
}
public interface AwtLockListener {
void afterAwtLocked();
void beforeAwtUnlocked();


@@ -74,6 +74,7 @@ public final class BufferedOpCodes {
@Native public static final int INVALIDATE_CONTEXT = 75;
@Native public static final int SYNC = 76;
@Native public static final int RESTORE_DEVICES = 77;
@Native public static final int CONFIGURE_SURFACE = 78;
// multibuffering ops
@Native public static final int SWAP_BUFFERS = 80;


@@ -110,9 +110,9 @@ public abstract class VKSurfaceData extends SurfaceData
}
}
protected final int scale;
protected final int width;
protected final int height;
protected int scale;
protected int width;
protected int height;
protected int type;
private VKGraphicsConfig graphicsConfig;
// these fields are set from the native code when the surface is


@@ -1,8 +0,0 @@
#version 450
layout(location = 0) in flat vec4 fragColor;
layout(location = 0) out vec4 outColor;
void main() {
outColor = fragColor;
}


@@ -1,19 +0,0 @@
#version 450
layout(push_constant) uniform PushConstants {
vec4 fragColor;
} pushConstants;
const vec2 positions[4] = vec2[4](
vec2(-1.0, -1.0),
vec2( 1.0, -1.0),
vec2(-1.0, 1.0),
vec2( 1.0, 1.0)
);
layout(location = 0) out flat vec4 fragColor;
void main() {
gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
fragColor = pushConstants.fragColor;
}


@@ -0,0 +1,829 @@
/*
* Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "AccelTexturePool.h"
#include "jni.h"
#include "Trace.h"
#define USE_MAX_GPU_DEVICE_MEM 1
#define MAX_GPU_DEVICE_MEM (512 * UNIT_MB)
#define SCREEN_MEMORY_SIZE_5K (5120 * 4096 * 4) // ~ 84 mb
#define MAX_POOL_ITEM_LIFETIME_SEC 30
// cell size: 32 x 32 pixels (2^5)
#define CELL_WIDTH_BITS 5
#define CELL_HEIGHT_BITS 5
#define CELL_WIDTH_MASK ((1 << CELL_WIDTH_BITS) - 1)
#define CELL_HEIGHT_MASK ((1 << CELL_HEIGHT_BITS) - 1)
#define USE_CEIL_SIZE 1
#define FORCE_GC 1
// force gc (prune old textures):
#define FORCE_GC_INTERVAL_SEC (MAX_POOL_ITEM_LIFETIME_SEC * 10)
// force young gc every 5 seconds (prune only not reused textures):
#define YOUNG_GC_INTERVAL_SEC 15
#define YOUNG_GC_LIFETIME_SEC (FORCE_GC_INTERVAL_SEC * 2)
#define TRACE_GC 1
#define TRACE_GC_ALIVE 0
#define TRACE_MEM_API 0
#define TRACE_USE_API 0
#define TRACE_REUSE 0
#define INIT_TEST 1
#define INIT_TEST_STEP 1
#define INIT_TEST_MAX 1024
#define LOCK_WRAPPER(cell) ((cell)->pool->lockWrapper)
#define LOCK_WRAPPER_LOCK(cell) (LOCK_WRAPPER(cell)->lockFunc((cell)->lock))
#define LOCK_WRAPPER_UNLOCK(cell) (LOCK_WRAPPER(cell)->unlockFunc((cell)->lock))
/* Private definitions */
struct ATexturePoolLockWrapper_ {
ATexturePoolLock_init *initFunc;
ATexturePoolLock_dispose *disposeFunc;
ATexturePoolLock_lock *lockFunc;
ATexturePoolLock_unlock *unlockFunc;
};
struct ATexturePoolItem_ {
ATexturePool_freeTexture *freeTextureFunc;
ADevicePrivPtr *device;
ATexturePrivPtr *texture;
ATexturePoolCell *cell;
ATexturePoolItem *prev;
ATexturePoolItem *next;
time_t lastUsed;
jint width;
jint height;
jlong format;
jint reuseCount;
jboolean isBusy;
};
struct ATexturePoolCell_ {
ATexturePool *pool;
ATexturePoolLockPrivPtr *lock;
ATexturePoolItem *available;
ATexturePoolItem *availableTail;
ATexturePoolItem *occupied;
};
static void ATexturePoolCell_releaseItem(ATexturePoolCell *cell, ATexturePoolItem *item);
struct ATexturePoolHandle_ {
ATexturePrivPtr *texture;
ATexturePoolItem *_poolItem;
jint reqWidth;
jint reqHeight;
};
// NOTE: owns all texture objects
struct ATexturePool_ {
ATexturePool_createTexture *createTextureFunc;
ATexturePool_freeTexture *freeTextureFunc;
ATexturePool_bytesPerPixel *bytesPerPixelFunc;
ATexturePoolLockWrapper *lockWrapper;
ADevicePrivPtr *device;
ATexturePoolCell **_cells;
jint poolCellWidth;
jint poolCellHeight;
jlong maxPoolMemory;
jlong memoryAllocated;
jlong totalMemoryAllocated;
jlong allocatedCount;
jlong totalAllocatedCount;
jlong cacheHits;
jlong totalHits;
time_t lastGC;
time_t lastYoungGC;
time_t lastFullGC;
jboolean enableGC;
};
/* ATexturePoolLockWrapper API */
ATexturePoolLockWrapper* ATexturePoolLockWrapper_init(ATexturePoolLock_init *initFunc,
ATexturePoolLock_dispose *disposeFunc,
ATexturePoolLock_lock *lockFunc,
ATexturePoolLock_unlock *unlockFunc)
{
CHECK_NULL_LOG_RETURN(initFunc, NULL, "ATexturePoolLockWrapper_init: initFunc function is null !");
CHECK_NULL_LOG_RETURN(disposeFunc, NULL, "ATexturePoolLockWrapper_init: disposeFunc function is null !");
CHECK_NULL_LOG_RETURN(lockFunc, NULL, "ATexturePoolLockWrapper_init: lockFunc function is null !");
CHECK_NULL_LOG_RETURN(unlockFunc, NULL, "ATexturePoolLockWrapper_init: unlockFunc function is null !");
ATexturePoolLockWrapper *lockWrapper = (ATexturePoolLockWrapper*)malloc(sizeof(ATexturePoolLockWrapper));
CHECK_NULL_LOG_RETURN(lockWrapper, NULL, "ATexturePoolLockWrapper_init: could not allocate ATexturePoolLockWrapper");
lockWrapper->initFunc = initFunc;
lockWrapper->disposeFunc = disposeFunc;
lockWrapper->lockFunc = lockFunc;
lockWrapper->unlockFunc = unlockFunc;
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolLockWrapper_init: lockWrapper = %p", lockWrapper);
return lockWrapper;
}
void ATexturePoolLockWrapper_Dispose(ATexturePoolLockWrapper *lockWrapper) {
CHECK_NULL(lockWrapper);
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolLockWrapper_Dispose: lockWrapper = %p", lockWrapper);
free(lockWrapper);
}
/* ATexturePoolItem API */
static ATexturePoolItem* ATexturePoolItem_initWithTexture(ATexturePool_freeTexture *freeTextureFunc,
ADevicePrivPtr *device,
ATexturePrivPtr *texture,
ATexturePoolCell *cell,
jint width,
jint height,
jlong format)
{
CHECK_NULL_LOG_RETURN(freeTextureFunc, NULL, "ATexturePoolItem_initWithTexture: freeTextureFunc function is null !");
CHECK_NULL_RETURN(texture, NULL);
CHECK_NULL_RETURN(cell, NULL);
ATexturePoolItem *item = (ATexturePoolItem*)malloc(sizeof(ATexturePoolItem));
CHECK_NULL_LOG_RETURN(item, NULL, "ATexturePoolItem_initWithTexture: could not allocate ATexturePoolItem");
item->freeTextureFunc = freeTextureFunc;
item->device = device;
item->texture = texture;
item->cell = cell;
item->prev = NULL;
item->next = NULL;
item->lastUsed = 0;
item->width = width;
item->height = height;
item->format = format;
item->reuseCount = 0;
item->isBusy = JNI_FALSE;
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolItem_initWithTexture: item = %p", item);
return item;
}
static void ATexturePoolItem_Dispose(ATexturePoolItem *item) {
CHECK_NULL(item);
if (TRACE_MEM_API) J2dRlsTraceLn2(J2D_TRACE_INFO, "ATexturePoolItem_Dispose: item = %p - reuse: %4d", item, item->reuseCount);
// use texture (native API) to release allocated texture:
item->freeTextureFunc(item->device, item->texture);
free(item);
}
/* Callback from metal pipeline => multi-thread (cell lock) */
static void ATexturePoolItem_ReleaseItem(ATexturePoolItem *item) {
CHECK_NULL(item);
if (!item->isBusy) {
return;
}
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolItem_ReleaseItem: item = %p", item);
if IS_NOT_NULL(item->cell) {
ATexturePoolCell_releaseItem(item->cell, item);
} else {
J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolItem_ReleaseItem: item = %p (detached)", item);
// item marked as detached:
ATexturePoolItem_Dispose(item);
}
}
/* ATexturePoolCell API */
static ATexturePoolCell* ATexturePoolCell_init(ATexturePool *pool) {
CHECK_NULL_RETURN(pool, NULL);
ATexturePoolCell *cell = (ATexturePoolCell*)malloc(sizeof(ATexturePoolCell));
CHECK_NULL_LOG_RETURN(cell, NULL, "ATexturePoolCell_init: could not allocate ATexturePoolCell");
cell->pool = pool;
ATexturePoolLockPrivPtr* lock = LOCK_WRAPPER(cell)->initFunc();
CHECK_NULL_LOG_RETURN(lock, NULL, "ATexturePoolCell_init: could not allocate ATexturePoolLockPrivPtr");
cell->lock = lock;
cell->available = NULL;
cell->availableTail = NULL;
cell->occupied = NULL;
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_init: cell = %p", cell);
return cell;
}
static void ATexturePoolCell_removeAllItems(ATexturePoolCell *cell) {
CHECK_NULL(cell);
if (TRACE_MEM_API) J2dRlsTraceLn(J2D_TRACE_INFO, "ATexturePoolCell_removeAllItems");
ATexturePoolItem *cur = cell->available;
ATexturePoolItem *next = NULL;
while IS_NOT_NULL(cur) {
next = cur->next;
ATexturePoolItem_Dispose(cur);
cur = next;
}
cell->available = NULL;
cur = cell->occupied;
next = NULL;
while IS_NOT_NULL(cur) {
next = cur->next;
J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_removeAllItems: occupied item = %p", cur);
// Do not dispose (may leak) until ATexturePoolItem_Release() is called by handle:
// mark item as detached:
cur->cell = NULL;
cur = next;
}
cell->occupied = NULL;
cell->availableTail = NULL;
}
static void ATexturePoolCell_removeAvailableItem(ATexturePoolCell *cell, ATexturePoolItem *item) {
CHECK_NULL(cell);
CHECK_NULL(item);
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_removeAvailableItem: item = %p", item);
if IS_NULL(item->prev) {
cell->available = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = NULL;
item->next = NULL;
} else {
cell->availableTail = item->prev;
}
} else {
item->prev->next = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = item->prev;
item->next = NULL;
} else {
cell->availableTail = item->prev;
}
}
ATexturePoolItem_Dispose(item);
}
static void ATexturePoolCell_Dispose(ATexturePoolCell *cell) {
CHECK_NULL(cell);
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_Dispose: cell = %p", cell);
LOCK_WRAPPER_LOCK(cell);
{
ATexturePoolCell_removeAllItems(cell);
}
LOCK_WRAPPER_UNLOCK(cell);
LOCK_WRAPPER(cell)->disposeFunc(cell->lock);
free(cell);
}
/* RQ thread from metal pipeline (cell locked) */
static void ATexturePoolCell_occupyItem(ATexturePoolCell *cell, ATexturePoolItem *item) {
CHECK_NULL(cell);
CHECK_NULL(item);
if (item->isBusy) {
return;
}
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_occupyItem: item = %p", item);
if IS_NULL(item->prev) {
cell->available = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = NULL;
} else {
cell->availableTail = item->prev;
}
} else {
item->prev->next = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = item->prev;
} else {
cell->availableTail = item->prev;
}
item->prev = NULL;
}
if (cell->occupied) {
cell->occupied->prev = item;
}
item->next = cell->occupied;
cell->occupied = item;
item->isBusy = JNI_TRUE;
}
/* Callback from native java2D pipeline => multi-thread (cell lock) */
static void ATexturePoolCell_releaseItem(ATexturePoolCell *cell, ATexturePoolItem *item) {
CHECK_NULL(cell);
CHECK_NULL(item);
if (!item->isBusy) {
return;
}
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_releaseItem: item = %p", item);
LOCK_WRAPPER_LOCK(cell);
{
if IS_NULL(item->prev) {
cell->occupied = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = NULL;
}
} else {
item->prev->next = item->next;
if IS_NOT_NULL(item->next) {
item->next->prev = item->prev;
}
item->prev = NULL;
}
if IS_NOT_NULL(cell->available) {
cell->available->prev = item;
} else {
cell->availableTail = item;
}
item->next = cell->available;
cell->available = item;
item->isBusy = JNI_FALSE;
}
LOCK_WRAPPER_UNLOCK(cell);
}
static void ATexturePoolCell_addOccupiedItem(ATexturePoolCell *cell, ATexturePoolItem *item) {
CHECK_NULL(cell);
CHECK_NULL(item);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_addOccupiedItem: item = %p", item);
LOCK_WRAPPER_LOCK(cell);
{
cell->pool->allocatedCount++;
cell->pool->totalAllocatedCount++;
if IS_NOT_NULL(cell->occupied) {
cell->occupied->prev = item;
}
item->next = cell->occupied;
cell->occupied = item;
item->isBusy = JNI_TRUE;
}
LOCK_WRAPPER_UNLOCK(cell);
}
static void ATexturePoolCell_cleanIfBefore(ATexturePoolCell *cell, time_t lastUsedTimeToRemove) {
CHECK_NULL(cell);
LOCK_WRAPPER_LOCK(cell);
{
ATexturePoolItem *cur = cell->availableTail;
while IS_NOT_NULL(cur) {
ATexturePoolItem *prev = cur->prev;
if ((cur->reuseCount == 0) || (lastUsedTimeToRemove <= 0) || (cur->lastUsed < lastUsedTimeToRemove)) {
if (TRACE_MEM_API) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "ATexturePoolCell_cleanIfBefore: remove pool item: tex=%p, w=%d h=%d, elapsed=%d",
cur->texture, cur->width, cur->height,
(int) (time(NULL) - cur->lastUsed));
const int requestedBytes = cur->width * cur->height * cell->pool->bytesPerPixelFunc(cur->format);
// cur is freed by removeAvailableItem and must not be used afterwards:
ATexturePoolCell_removeAvailableItem(cell, cur);
cell->pool->allocatedCount--;
cell->pool->memoryAllocated -= requestedBytes;
} else {
if (TRACE_MEM_API || TRACE_GC_ALIVE) J2dRlsTraceLn2(J2D_TRACE_INFO, "ATexturePoolCell_cleanIfBefore: item = %p - ALIVE - reuse: %4d -> 0",
cur, cur->reuseCount);
// clear reuse count anyway:
cur->reuseCount = 0;
}
cur = prev;
}
}
LOCK_WRAPPER_UNLOCK(cell);
}
/* RQ thread from metal pipeline <=> multi-thread callbacks (cell lock) */
static ATexturePoolItem* ATexturePoolCell_occupyCellItem(ATexturePoolCell *cell,
jint width,
jint height,
jlong format)
{
CHECK_NULL_RETURN(cell, NULL);
int minDeltaArea = -1;
const int requestedPixels = width * height;
ATexturePoolItem *minDeltaTpi = NULL;
LOCK_WRAPPER_LOCK(cell);
{
for (ATexturePoolItem *cur = cell->available; IS_NOT_NULL(cur); cur = cur->next) {
// TODO: use swizzle when formats are not equal
if (cur->format != format) {
continue;
}
if (cur->width < width || cur->height < height) {
continue;
}
const int deltaArea = (const int) (cur->width * cur->height - requestedPixels);
if ((minDeltaArea < 0) || (deltaArea < minDeltaArea)) {
minDeltaArea = deltaArea;
minDeltaTpi = cur;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
if IS_NOT_NULL(minDeltaTpi) {
ATexturePoolCell_occupyItem(cell, minDeltaTpi);
}
}
LOCK_WRAPPER_UNLOCK(cell);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolCell_occupyCellItem: item = %p", minDeltaTpi);
return minDeltaTpi;
}
/* ATexturePoolHandle API */
static ATexturePoolHandle* ATexturePoolHandle_initWithPoolItem(ATexturePoolItem *item, jint reqWidth, jint reqHeight) {
CHECK_NULL_RETURN(item, NULL);
ATexturePoolHandle *handle = (ATexturePoolHandle*)malloc(sizeof(ATexturePoolHandle));
CHECK_NULL_LOG_RETURN(handle, NULL, "ATexturePoolHandle_initWithPoolItem: could not allocate ATexturePoolHandle");
handle->texture = item->texture;
handle->_poolItem = item;
handle->reqWidth = reqWidth;
handle->reqHeight = reqHeight;
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolHandle_initWithPoolItem: handle = %p", handle);
return handle;
}
/* Callback from metal pipeline => multi-thread (cell lock) */
void ATexturePoolHandle_ReleaseTexture(ATexturePoolHandle *handle) {
CHECK_NULL(handle);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolHandle_ReleaseTexture: handle = %p", handle);
ATexturePoolItem_ReleaseItem(handle->_poolItem);
free(handle);
}
ATexturePrivPtr* ATexturePoolHandle_GetTexture(ATexturePoolHandle *handle) {
CHECK_NULL_RETURN(handle, NULL);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolHandle_GetTexture: handle = %p", handle);
return handle->texture;
}
jint ATexturePoolHandle_GetRequestedWidth(ATexturePoolHandle *handle) {
CHECK_NULL_RETURN(handle, 0);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolHandle_GetRequestedWidth: handle = %p", handle);
return handle->reqWidth;
}
jint ATexturePoolHandle_GetRequestedHeight(ATexturePoolHandle *handle) {
CHECK_NULL_RETURN(handle, 0);
if (TRACE_USE_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePoolHandle_GetRequestedHeight: handle = %p", handle);
return handle->reqHeight;
}
/* ATexturePool API */
static void ATexturePool_cleanIfNecessary(ATexturePool *pool, int lastUsedTimeThreshold);
static void ATexturePool_autoTest(ATexturePool *pool, jlong format) {
CHECK_NULL(pool);
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "ATexturePool_autoTest: step = %d", INIT_TEST_STEP);
pool->enableGC = JNI_FALSE;
for (int w = 1; w <= INIT_TEST_MAX; w += INIT_TEST_STEP) {
for (int h = 1; h <= INIT_TEST_MAX; h += INIT_TEST_STEP) {
/* use auto-release pool to free memory as early as possible */
ATexturePoolHandle *texHandle = ATexturePool_getTexture(pool, w, h, format);
ATexturePrivPtr *texture = ATexturePoolHandle_GetTexture(texHandle);
if IS_NULL(texture) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "ATexturePool_autoTest: w= %d h= %d => texture is NULL !", w, h);
} else {
if (TRACE_MEM_API) J2dRlsTraceLn3(J2D_TRACE_VERBOSE, "ATexturePool_autoTest: w=%d h=%d => tex=%p",
w, h, texture);
}
ATexturePoolHandle_ReleaseTexture(texHandle);
}
}
J2dRlsTraceLn2(J2D_TRACE_INFO, "ATexturePool_autoTest: before GC: total allocated memory = %lld Mb (total allocs: %lld)",
pool->totalMemoryAllocated / UNIT_MB, pool->totalAllocatedCount);
pool->enableGC = JNI_TRUE;
ATexturePool_cleanIfNecessary(pool, FORCE_GC_INTERVAL_SEC);
J2dRlsTraceLn2(J2D_TRACE_INFO, "ATexturePool_autoTest: after GC: total allocated memory = %lld Mb (total allocs: %lld)",
pool->totalMemoryAllocated / UNIT_MB, pool->totalAllocatedCount);
}
ATexturePool* ATexturePool_initWithDevice(ADevicePrivPtr *device,
jlong maxDeviceMemory,
ATexturePool_createTexture *createTextureFunc,
ATexturePool_freeTexture *freeTextureFunc,
ATexturePool_bytesPerPixel *bytesPerPixelFunc,
ATexturePoolLockWrapper *lockWrapper,
jlong autoTestFormat)
{
CHECK_NULL_LOG_RETURN(device, NULL, "ATexturePool_initWithDevice: device is null !");
CHECK_NULL_LOG_RETURN(createTextureFunc, NULL, "ATexturePool_initWithDevice: createTextureFunc function is null !");
CHECK_NULL_LOG_RETURN(freeTextureFunc, NULL, "ATexturePool_initWithDevice: freeTextureFunc function is null !");
CHECK_NULL_LOG_RETURN(bytesPerPixelFunc, NULL, "ATexturePool_initWithDevice: bytesPerPixelFunc function is null !");
CHECK_NULL_LOG_RETURN(lockWrapper, NULL, "ATexturePool_initWithDevice: lockWrapper is null !");
ATexturePool *pool = (ATexturePool*)malloc(sizeof(ATexturePool));
CHECK_NULL_LOG_RETURN(pool, NULL, "ATexturePool_initWithDevice: could not allocate ATexturePool");
pool->createTextureFunc = createTextureFunc;
pool->freeTextureFunc = freeTextureFunc;
pool->bytesPerPixelFunc = bytesPerPixelFunc;
pool->lockWrapper = lockWrapper;
pool->device = device;
// use (5K) 5120-by-2880 resolution:
pool->poolCellWidth = 5120 >> CELL_WIDTH_BITS;
pool->poolCellHeight = 2880 >> CELL_HEIGHT_BITS;
const int cellsCount = pool->poolCellWidth * pool->poolCellHeight;
pool->_cells = (ATexturePoolCell**)malloc(cellsCount * sizeof(void*));
CHECK_NULL_LOG_RETURN(pool->_cells, NULL, "ATexturePool_initWithDevice: could not allocate cells");
memset(pool->_cells, 0, cellsCount * sizeof(void*));
pool->maxPoolMemory = maxDeviceMemory / 2;
// Set maximum to handle at least 5K screen size
if (pool->maxPoolMemory < SCREEN_MEMORY_SIZE_5K) {
pool->maxPoolMemory = SCREEN_MEMORY_SIZE_5K;
} else if (USE_MAX_GPU_DEVICE_MEM && (pool->maxPoolMemory > MAX_GPU_DEVICE_MEM)) {
pool->maxPoolMemory = MAX_GPU_DEVICE_MEM;
}
pool->allocatedCount = 0L;
pool->totalAllocatedCount = 0L;
pool->memoryAllocated = 0L;
pool->totalMemoryAllocated = 0L;
pool->enableGC = JNI_TRUE;
pool->lastGC = time(NULL);
pool->lastYoungGC = pool->lastGC;
pool->lastFullGC = pool->lastGC;
pool->cacheHits = 0L;
pool->totalHits = 0L;
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePool_initWithDevice: pool = %p", pool);
if (INIT_TEST) {
static jboolean INIT_TEST_START = JNI_TRUE;
if (INIT_TEST_START) {
INIT_TEST_START = JNI_FALSE;
ATexturePool_autoTest(pool, autoTestFormat);
}
}
return pool;
}
void ATexturePool_Dispose(ATexturePool *pool) {
CHECK_NULL(pool);
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePool_Dispose: pool = %p", pool);
const int cellsCount = pool->poolCellWidth * pool->poolCellHeight;
for (int c = 0; c < cellsCount; ++c) {
ATexturePoolCell *cell = pool->_cells[c];
if IS_NOT_NULL(cell) {
ATexturePoolCell_Dispose(cell);
}
}
free(pool->_cells);
free(pool);
}
ATexturePoolLockWrapper* ATexturePool_getLockWrapper(ATexturePool *pool) {
CHECK_NULL_RETURN(pool, NULL);
if (TRACE_MEM_API) J2dRlsTraceLn1(J2D_TRACE_INFO, "ATexturePool_getLockWrapper: pool = %p", pool);
return pool->lockWrapper;
}
static void ATexturePool_cleanIfNecessary(ATexturePool *pool, int lastUsedTimeThreshold) {
CHECK_NULL(pool);
time_t lastUsedTimeToRemove =
lastUsedTimeThreshold > 0 ?
time(NULL) - lastUsedTimeThreshold :
lastUsedTimeThreshold;
if (TRACE_MEM_API || TRACE_GC) {
J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "ATexturePool_cleanIfNecessary: before GC: allocated memory = %lld Kb (allocs: %lld)",
pool->memoryAllocated / UNIT_KB, pool->allocatedCount);
}
for (int cy = 0; cy < pool->poolCellHeight; ++cy) {
for (int cx = 0; cx < pool->poolCellWidth; ++cx) {
ATexturePoolCell *cell = pool->_cells[cy * pool->poolCellWidth + cx];
if IS_NOT_NULL(cell) {
ATexturePoolCell_cleanIfBefore(cell, lastUsedTimeToRemove);
}
}
}
if (TRACE_MEM_API || TRACE_GC) {
J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "ATexturePool_cleanIfNecessary: after GC: allocated memory = %lld Kb (allocs: %lld) - hits = %lld (%.3lf %% cached)",
pool->memoryAllocated / UNIT_KB, pool->allocatedCount,
pool->totalHits, (pool->totalHits != 0L) ? (100.0 * pool->cacheHits) / pool->totalHits : 0.0);
// reset hits:
pool->cacheHits = 0L;
pool->totalHits = 0L;
}
}
ATexturePoolHandle* ATexturePool_getTexture(ATexturePool* pool,
jint width,
jint height,
jlong format)
{
CHECK_NULL_RETURN(pool, NULL);
const int reqWidth = width;
const int reqHeight = height;
int cellX0 = width >> CELL_WIDTH_BITS;
int cellY0 = height >> CELL_HEIGHT_BITS;
if (USE_CEIL_SIZE) {
// use upper cell size to maximize cache hits:
const int remX0 = width & CELL_WIDTH_MASK;
const int remY0 = height & CELL_HEIGHT_MASK;
if (remX0 != 0) {
cellX0++;
}
if (remY0 != 0) {
cellY0++;
}
// adjust width / height to cell upper boundaries:
width = (cellX0) << CELL_WIDTH_BITS;
height = (cellY0) << CELL_HEIGHT_BITS;
if (TRACE_MEM_API) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "ATexturePool_getTexture: fixed tex size: (%d %d) => (%d %d)",
reqWidth, reqHeight, width, height);
}
// 1. clean pool if necessary
const int requestedPixels = width * height;
const int requestedBytes = requestedPixels * pool->bytesPerPixelFunc(format);
const jlong neededMemoryAllocated = pool->memoryAllocated + requestedBytes;
if (neededMemoryAllocated > pool->maxPoolMemory) {
// release all free textures
ATexturePool_cleanIfNecessary(pool, 0);
} else {
time_t now = time(NULL);
// run the GC checks at most once per second:
if ((now - pool->lastGC) > 0) {
pool->lastGC = now;
if (neededMemoryAllocated > pool->maxPoolMemory / 2) {
// release only old free textures
ATexturePool_cleanIfNecessary(pool, MAX_POOL_ITEM_LIFETIME_SEC);
} else if (FORCE_GC && pool->enableGC) {
if ((now - pool->lastFullGC) > FORCE_GC_INTERVAL_SEC) {
pool->lastFullGC = now;
pool->lastYoungGC = now;
// release only old free textures since last full-gc
ATexturePool_cleanIfNecessary(pool, FORCE_GC_INTERVAL_SEC);
} else if ((now - pool->lastYoungGC) > YOUNG_GC_INTERVAL_SEC) {
pool->lastYoungGC = now;
// release only not reused and old textures
ATexturePool_cleanIfNecessary(pool, YOUNG_GC_LIFETIME_SEC);
}
}
}
}
// 2. find free item
const int cellX1 = cellX0 + 1;
const int cellY1 = cellY0 + 1;
// Note: this code (test + resizing) is not thread-safe:
if (cellX1 > pool->poolCellWidth || cellY1 > pool->poolCellHeight) {
const int newCellWidth = cellX1 <= pool->poolCellWidth ? pool->poolCellWidth : cellX1;
const int newCellHeight = cellY1 <= pool->poolCellHeight ? pool->poolCellHeight : cellY1;
const int newCellsCount = newCellWidth*newCellHeight;
if (TRACE_MEM_API) J2dRlsTraceLn2(J2D_TRACE_VERBOSE, "ATexturePool_getTexture: resize: %d -> %d",
pool->poolCellWidth * pool->poolCellHeight, newCellsCount);
ATexturePoolCell **newCells = (ATexturePoolCell **)malloc(newCellsCount * sizeof(void*));
CHECK_NULL_LOG_RETURN(newCells, NULL, "ATexturePool_getTexture: could not allocate newCells");
const size_t strideBytes = pool->poolCellWidth * sizeof(void*);
for (int cy = 0; cy < pool->poolCellHeight; ++cy) {
ATexturePoolCell **dst = newCells + cy * newCellWidth;
ATexturePoolCell **src = pool->_cells + cy * pool->poolCellWidth;
memcpy(dst, src, strideBytes);
if (newCellWidth > pool->poolCellWidth)
memset(dst + pool->poolCellWidth, 0, (newCellWidth - pool->poolCellWidth) * sizeof(void*));
}
if (newCellHeight > pool->poolCellHeight) {
ATexturePoolCell **dst = newCells + pool->poolCellHeight * newCellWidth;
memset(dst, 0, (newCellHeight - pool->poolCellHeight) * newCellWidth * sizeof(void*));
}
free(pool->_cells);
pool->_cells = newCells;
pool->poolCellWidth = newCellWidth;
pool->poolCellHeight = newCellHeight;
}
ATexturePoolItem *minDeltaTpi = NULL;
int minDeltaArea = -1;
for (int cy = cellY0; cy < cellY1; ++cy) {
for (int cx = cellX0; cx < cellX1; ++cx) {
ATexturePoolCell* cell = pool->_cells[cy * pool->poolCellWidth + cx];
if IS_NOT_NULL(cell) {
ATexturePoolItem *tpi = ATexturePoolCell_occupyCellItem(cell, width, height, format);
if IS_NULL(tpi) {
continue;
}
const int deltaArea = (const int) (tpi->width * tpi->height - requestedPixels);
if (minDeltaArea < 0 || deltaArea < minDeltaArea) {
minDeltaArea = deltaArea;
minDeltaTpi = tpi;
if (deltaArea == 0) {
// found exact match in current cell
break;
}
}
}
}
if IS_NOT_NULL(minDeltaTpi) {
break;
}
}
if IS_NULL(minDeltaTpi) {
ATexturePoolCell* cell = pool->_cells[cellY0 * pool->poolCellWidth + cellX0];
if IS_NULL(cell) {
cell = ATexturePoolCell_init(pool);
CHECK_NULL_RETURN(cell, NULL);
pool->_cells[cellY0 * pool->poolCellWidth + cellX0] = cell;
}
// use device to allocate NEW texture:
ATexturePrivPtr* tex = pool->createTextureFunc(pool->device, width, height, format);
CHECK_NULL_LOG_RETURN(tex, NULL, "ATexturePool_getTexture: createTextureFunc failed to allocate texture !");
minDeltaTpi = ATexturePoolItem_initWithTexture(pool->freeTextureFunc, pool->device, tex, cell,
width, height, format);
ATexturePoolCell_addOccupiedItem(cell, minDeltaTpi);
pool->memoryAllocated += requestedBytes;
pool->totalMemoryAllocated += requestedBytes;
J2dTraceLn6(J2D_TRACE_VERBOSE, "ATexturePool_getTexture: created pool item: tex=%p, w=%d h=%d, pf=%lld | allocated memory = %lld Kb (allocs: %lld)",
minDeltaTpi->texture, width, height, format, pool->memoryAllocated / UNIT_KB, pool->allocatedCount);
if (TRACE_MEM_API) J2dRlsTraceLn6(J2D_TRACE_VERBOSE, "ATexturePool_getTexture: created pool item: tex=%p, w=%d h=%d, pf=%lld | allocated memory = %lld Kb (allocs: %lld)",
minDeltaTpi->texture, width, height, format, pool->memoryAllocated / UNIT_KB, pool->allocatedCount);
} else {
pool->cacheHits++;
minDeltaTpi->reuseCount++;
if (TRACE_REUSE) {
J2dRlsTraceLn5(J2D_TRACE_VERBOSE, "ATexturePool_getTexture: reused pool item: tex=%p, w=%d h=%d, pf=%lld - reuse: %4d",
minDeltaTpi->texture, width, height, format, minDeltaTpi->reuseCount);
}
}
pool->totalHits++;
minDeltaTpi->lastUsed = time(NULL);
return ATexturePoolHandle_initWithPoolItem(minDeltaTpi, reqWidth, reqHeight);
}


@@ -0,0 +1,139 @@
/*
* Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#ifndef AccelTexturePool_h_Included
#define AccelTexturePool_h_Included
#include "jni.h"
#define UNIT_KB 1024
#define UNIT_MB (UNIT_KB * UNIT_KB)
/* useful macros (from jni_utils.h) */
#define IS_NULL(obj) ((obj) == NULL)
#define IS_NOT_NULL(obj) ((obj) != NULL)
#define CHECK_NULL(x) \
do { \
if ((x) == NULL) { \
return; \
} \
} while (0)
#define CHECK_NULL_RETURN(x, y) \
do { \
if ((x) == NULL) { \
return (y); \
} \
} while (0)
#define CHECK_NULL_LOG_RETURN(x, y, msg) \
do { \
if IS_NULL(x) { \
J2dRlsTraceLn(J2D_TRACE_ERROR, (msg)); \
return (y); \
} \
} while (0)
/* Generic native-specific device (platform specific) */
typedef void ADevicePrivPtr;
/* Generic native-specific texture (platform specific) */
typedef void ATexturePrivPtr;
/* Texture allocate/free API */
typedef ATexturePrivPtr* (ATexturePool_createTexture)(ADevicePrivPtr *device,
int width,
int height,
long format);
typedef void (ATexturePool_freeTexture)(ADevicePrivPtr *device,
ATexturePrivPtr *texture);
typedef int (ATexturePool_bytesPerPixel)(long format);
/* lock API */
/* Generic native-specific lock (platform specific) */
typedef void ATexturePoolLockPrivPtr;
typedef ATexturePoolLockPrivPtr* (ATexturePoolLock_init)(void);
typedef void (ATexturePoolLock_dispose) (ATexturePoolLockPrivPtr *lock);
typedef void (ATexturePoolLock_lock) (ATexturePoolLockPrivPtr *lock);
typedef void (ATexturePoolLock_unlock) (ATexturePoolLockPrivPtr *lock);
/* forward-definitions (private) */
typedef struct ATexturePoolLockWrapper_ ATexturePoolLockWrapper;
typedef struct ATexturePoolItem_ ATexturePoolItem;
typedef struct ATexturePoolHandle_ ATexturePoolHandle;
typedef struct ATexturePoolCell_ ATexturePoolCell;
typedef struct ATexturePool_ ATexturePool;
/* ATexturePoolLockWrapper API */
ATexturePoolLockWrapper* ATexturePoolLockWrapper_init(ATexturePoolLock_init *initFunc,
ATexturePoolLock_dispose *disposeFunc,
ATexturePoolLock_lock *lockFunc,
ATexturePoolLock_unlock *unlockFunc);
void ATexturePoolLockWrapper_Dispose(ATexturePoolLockWrapper *lockWrapper);
/* ATexturePoolHandle API */
void ATexturePoolHandle_ReleaseTexture(ATexturePoolHandle *handle);
ATexturePrivPtr* ATexturePoolHandle_GetTexture(ATexturePoolHandle *handle);
jint ATexturePoolHandle_GetRequestedWidth(ATexturePoolHandle *handle);
jint ATexturePoolHandle_GetRequestedHeight(ATexturePoolHandle *handle);
/* ATexturePool API */
ATexturePool* ATexturePool_initWithDevice(ADevicePrivPtr *device,
jlong maxDeviceMemory,
ATexturePool_createTexture *createTextureFunc,
ATexturePool_freeTexture *freeTextureFunc,
ATexturePool_bytesPerPixel *bytesPerPixelFunc,
ATexturePoolLockWrapper *lockWrapper,
jlong autoTestFormat);
void ATexturePool_Dispose(ATexturePool *pool);
ATexturePoolLockWrapper* ATexturePool_getLockWrapper(ATexturePool *pool);
ATexturePoolHandle* ATexturePool_getTexture(ATexturePool *pool,
jint width,
jint height,
jlong format);
#endif /* AccelTexturePool_h_Included */


@@ -8,25 +8,48 @@ void* CARR_array_alloc(size_t elem_size, size_t capacity) {
if (pvec == NULL) {
return NULL;
}
pvec->elem_size = elem_size;
pvec->size = 0;
pvec->capacity = capacity;
return pvec->data;
}
void* CARR_array_realloc(CARR_array_t* vec, size_t new_capacity) {
void* CARR_array_realloc(CARR_array_t* vec, size_t elem_size, size_t new_capacity) {
if (vec->capacity == new_capacity) {
return vec->data;
}
void* new_data = CARR_array_alloc(elem_size, new_capacity);
if (new_data == NULL) {
return vec == NULL ? NULL : vec->data;
}
CARR_array_t* new_vec = (CARR_array_t*)((char*)new_data - offsetof(CARR_array_t, data));
new_vec->capacity = new_capacity;
new_vec->size = MIN(vec->size, new_capacity);
new_vec->elem_size = vec->elem_size;
memcpy(new_vec->data, vec->data, new_vec->size*new_vec->elem_size);
memcpy(new_vec->data, vec->data, new_vec->size*elem_size);
free(vec);
return new_vec->data;
}
void* CARR_ring_buffer_realloc(CARR_ring_buffer_t* buf, size_t elem_size, size_t new_capacity) {
if (buf != NULL && buf->capacity == new_capacity) {
return buf->data;
}
CARR_ring_buffer_t* new_buf =
(CARR_ring_buffer_t*) malloc(elem_size * new_capacity + offsetof(CARR_ring_buffer_t, data));
if (new_buf == NULL) {
return NULL;
}
new_buf->head = new_buf->tail = 0;
new_buf->capacity = new_capacity;
if (buf != NULL) {
if (buf->tail > buf->head) {
new_buf->tail = buf->tail - buf->head;
memcpy(new_buf->data, buf->data + buf->head*elem_size, new_buf->tail*elem_size);
} else if (buf->tail < buf->head) {
new_buf->tail = buf->capacity + buf->tail - buf->head;
memcpy(new_buf->data, buf->data + buf->head*elem_size, (buf->capacity-buf->head)*elem_size);
memcpy(new_buf->data + (new_buf->tail-buf->tail)*elem_size, buf->data, buf->tail*elem_size);
}
free(buf);
}
return new_buf->data;
}


@@ -1,48 +1,63 @@
#ifndef C_ARRAY_UTIL_H
#define C_ARRAY_UTIL_H
#include <stdio.h>
#include <malloc.h>
#include <assert.h>
#define ARRAY_CAPACITY_MULT 2
// C_ARRAY_UTIL_ALLOCATION_FAILED is called when allocation fails.
// Default implementation calls abort().
// Functions that can call C_ARRAY_UTIL_ALLOCATION_FAILED explicitly state
// this in the documentation. Functions with *_TRY_* return NULL on failure.
#ifndef C_ARRAY_UTIL_ALLOCATION_FAILED
#include <stdlib.h>
#define C_ARRAY_UTIL_ALLOCATION_FAILED() abort()
#endif
#define ARRAY_CAPACITY_GROW(C) (((C) * 3 + 1) / 2) // 1.5 multiplier
#define ARRAY_DEFAULT_CAPACITY 10
typedef struct {
size_t elem_size;
size_t size;
size_t capacity;
char data[];
} CARR_array_t;
void* CARR_array_alloc(size_t elem_size, size_t capacity);
void* CARR_array_realloc(CARR_array_t* vec, size_t new_capacity);
void* CARR_array_realloc(CARR_array_t* vec, size_t elem_size, size_t new_capacity);
typedef struct {
size_t head;
size_t tail;
size_t capacity;
char data[];
} CARR_ring_buffer_t;
void* CARR_ring_buffer_realloc(CARR_ring_buffer_t* buf, size_t elem_size, size_t new_capacity);
/**
* Allocate array
* Allocate array. Returns NULL on allocation failure.
* @param T type of elements
* @param CAPACITY capacity of the array
*/
#define ARRAY_ALLOC(T, CAPACITY) (T*)CARR_array_alloc(sizeof(T), CAPACITY)
#define ARRAY_T(P) (CARR_array_t *)((char*)P - offsetof(CARR_array_t, data))
#define ARRAY_T(P) ((CARR_array_t *)((P) == NULL ? NULL : (char*)(P) - offsetof(CARR_array_t, data)))
/**
* @param P pointer to the first data element of the array
* @return size of the array
*/
#define ARRAY_SIZE(P) (ARRAY_T(P))->size
#define ARRAY_SIZE(P) ((P) == NULL ? (size_t) 0 : (ARRAY_T(P))->size)
/**
* @param P pointer to the first data element of the array
* @return capacity of the array
*/
#define ARRAY_CAPACITY(P) (ARRAY_T(P))->capacity
#define ARRAY_CAPACITY(P) ((P) == NULL ? (size_t) 0 : (ARRAY_T(P))->capacity)
/**
* @param P pointer to the first data element of the array
* @return last element in the array
*/
#define ARRAY_LAST(P) (P[(ARRAY_T(P))->size - 1])
#define ARRAY_LAST(P) ((P)[ARRAY_SIZE(P) - 1])
/**
* Deallocate the vector
@@ -56,7 +71,7 @@ void* CARR_array_realloc(CARR_array_t* vec, size_t new_capacity);
* @param F function to apply
*/
#define ARRAY_APPLY(P, F) do { \
for (uint32_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(&(P[_i])); \
for (size_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(&((P)[_i])); \
} while(0)
/**
@@ -65,7 +80,7 @@ void* CARR_array_realloc(CARR_array_t* vec, size_t new_capacity);
* @param F function to apply
*/
#define ARRAY_APPLY_LEADING(P, F, ...) do { \
for (uint32_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(&(P[_i]), __VA_ARGS__); \
for (size_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(&((P)[_i]), __VA_ARGS__); \
} while(0)
/**
@@ -74,29 +89,148 @@ void* CARR_array_realloc(CARR_array_t* vec, size_t new_capacity);
* @param F function to apply
*/
#define ARRAY_APPLY_TRAILING(P, F, ...) do { \
for (uint32_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(__VA_ARGS__, &(P[_i])); \
for (size_t _i = 0; _i < ARRAY_SIZE(P); _i++) F(__VA_ARGS__, &((P)[_i])); \
} while(0)
/**
* Shrink capacity of the array to its size
* @param PP pointer to the pointer to the first data element of the array
* Ensure array capacity. Implicitly initializes when array is NULL.
* On allocation failure, array is unchanged.
* @param P pointer to the first data element of the array
* @param CAPACITY required capacity of the array
*/
#define ARRAY_SHRINK_TO_FIT(PP) do { \
*PP = CARR_array_realloc(ARRAY_T(*PP), ARRAY_SIZE(*PP)); \
#define ARRAY_TRY_ENSURE_CAPACITY(P, CAPACITY) do { \
if ((P) == NULL) { \
if ((CAPACITY) > 0) (P) = CARR_array_alloc(sizeof((P)[0]), CAPACITY); \
} else if (ARRAY_CAPACITY(P) < (CAPACITY)) { \
(P) = CARR_array_realloc(ARRAY_T(P), sizeof((P)[0]), CAPACITY); \
} \
} while(0)
/**
* Add element to the end of the array
* @param PP pointer to the pointer to the first data element of the array
* Ensure array capacity. Implicitly initializes when array is NULL.
* On allocation failure, C_ARRAY_UTIL_ALLOCATION_FAILED is called.
* @param P pointer to the first data element of the array
* @param CAPACITY required capacity of the array
*/
#define ARRAY_PUSH_BACK(PP, D) do { \
if (ARRAY_SIZE(*PP) >= ARRAY_CAPACITY(*PP)) { \
*PP = CARR_array_realloc(ARRAY_T(*PP), ARRAY_SIZE(*PP)*ARRAY_CAPACITY_MULT);\
} \
*(*PP + ARRAY_SIZE(*PP)) = (D); \
ARRAY_SIZE(*PP)++; \
#define ARRAY_ENSURE_CAPACITY(P, CAPACITY) do { \
ARRAY_TRY_ENSURE_CAPACITY(P, CAPACITY); \
if (ARRAY_CAPACITY(P) < (CAPACITY)) C_ARRAY_UTIL_ALLOCATION_FAILED(); \
} while(0)
/**
* Shrink capacity of the array to its size.
* On allocation failure, array is unchanged.
* @param P pointer to the first data element of the array
*/
#define ARRAY_SHRINK_TO_FIT(P) do { \
if ((P) != NULL) { \
(P) = CARR_array_realloc(ARRAY_T(P), sizeof((P)[0]), ARRAY_SIZE(P)); \
} \
} while(0)
#define ARRAY_RESIZE_IMPL(P, SIZE, ...) do { \
if ((P) != NULL || (SIZE) > 0) { \
ARRAY_ENSURE_CAPACITY(P, SIZE); \
if ((P) != NULL && (ARRAY_T(P))->capacity >= (SIZE)) { \
(ARRAY_T(P))->size = (SIZE); \
} __VA_ARGS__ \
} \
} while(0)
/**
* Resize an array. Implicitly initializes when array is NULL.
* On allocation failure, array is unchanged.
* @param P pointer to the first data element of the array
* @param SIZE required size of the array
*/
#define ARRAY_TRY_RESIZE(P, SIZE) ARRAY_RESIZE_IMPL(P, SIZE, )
/**
* Resize an array. Implicitly initializes when array is NULL.
* On allocation failure, C_ARRAY_UTIL_ALLOCATION_FAILED is called.
* @param P pointer to the first data element of the array
* @param SIZE required size of the array
*/
#define ARRAY_RESIZE(P, SIZE) ARRAY_RESIZE_IMPL(P, SIZE, else if ((SIZE) > 0) C_ARRAY_UTIL_ALLOCATION_FAILED();)
/**
* Add element to the end of the array. Implicitly initializes when array is NULL.
* On allocation failure, C_ARRAY_UTIL_ALLOCATION_FAILED is called.
* @param P pointer to the first data element of the array
*/
#define ARRAY_PUSH_BACK(P, ...) do { \
if ((P) == NULL) { \
(P) = CARR_array_alloc(sizeof((P)[0]), ARRAY_DEFAULT_CAPACITY); \
} else if (ARRAY_SIZE(P) >= ARRAY_CAPACITY(P)) { \
(P) = CARR_array_realloc(ARRAY_T(P), sizeof((P)[0]), ARRAY_CAPACITY_GROW(ARRAY_SIZE(P))); \
} \
if (ARRAY_SIZE(P) >= ARRAY_CAPACITY(P)) C_ARRAY_UTIL_ALLOCATION_FAILED(); \
*((P) + ARRAY_SIZE(P)) = (__VA_ARGS__); \
(ARRAY_T(P))->size++; \
} while(0)
#define SARRAY_COUNT_OF(STATIC_ARRAY) (sizeof(STATIC_ARRAY)/sizeof((STATIC_ARRAY)[0]))
#define RING_BUFFER_T(P) ((CARR_ring_buffer_t *)((P) == NULL ? NULL : (char*)(P) - offsetof(CARR_ring_buffer_t, data)))
/**
* @param P pointer to the first data element of the ring buffer
* @return size of the ring buffer
*/
#define RING_BUFFER_SIZE(P) ((P) == NULL ? (size_t) 0 : \
(RING_BUFFER_T(P)->capacity + RING_BUFFER_T(P)->tail - RING_BUFFER_T(P)->head) % RING_BUFFER_T(P)->capacity)
/**
* @param P pointer to the first data element of the ring buffer
* @return capacity of the ring buffer
*/
#define RING_BUFFER_CAPACITY(P) ((P) == NULL ? (size_t) 0 : RING_BUFFER_T(P)->capacity)
/**
* Add element to the end of the ring buffer. Implicitly initializes when buffer is NULL.
* On allocation failure, C_ARRAY_UTIL_ALLOCATION_FAILED is called.
* @param P pointer to the first data element of the buffer
*/
#define RING_BUFFER_PUSH(P, ...) RING_BUFFER_PUSH_CUSTOM(P, (P)[tail] = (__VA_ARGS__);)
#define RING_BUFFER_PUSH_CUSTOM(P, ...) do { \
size_t head, tail, new_tail; \
if ((P) == NULL) { \
(P) = CARR_ring_buffer_realloc(NULL, sizeof((P)[0]), ARRAY_DEFAULT_CAPACITY); \
if ((P) == NULL) C_ARRAY_UTIL_ALLOCATION_FAILED(); \
head = tail = 0; \
new_tail = 1; \
} else { \
head = RING_BUFFER_T(P)->head; \
tail = RING_BUFFER_T(P)->tail; \
new_tail = (tail + 1) % RING_BUFFER_T(P)->capacity; \
if (new_tail == head) { \
(P) = CARR_ring_buffer_realloc(RING_BUFFER_T(P), sizeof(P[0]), ARRAY_CAPACITY_GROW(RING_BUFFER_T(P)->capacity)); \
if ((P) == NULL) C_ARRAY_UTIL_ALLOCATION_FAILED(); \
head = 0; \
tail = RING_BUFFER_T(P)->tail; \
new_tail = RING_BUFFER_T(P)->tail + 1; \
} \
} \
__VA_ARGS__ \
RING_BUFFER_T(P)->tail = new_tail; \
} while(0)
/**
* Get pointer to the first element of the ring buffer.
* @param P pointer to the first data element of the buffer
*/
#define RING_BUFFER_PEEK(P) ((P) == NULL || RING_BUFFER_T(P)->head == RING_BUFFER_T(P)->tail ? NULL : &(P)[RING_BUFFER_T(P)->head])
/**
* Move beginning of the ring buffer forward (remove first element).
* @param P pointer to the first data element of the buffer
*/
#define RING_BUFFER_POP(P) RING_BUFFER_T(P)->head = (RING_BUFFER_T(P)->head + 1) % RING_BUFFER_T(P)->capacity
/**
* Deallocate the ring buffer
* @param P pointer to the first data element of the buffer
*/
#define RING_BUFFER_FREE(P) free(RING_BUFFER_T(P))
#endif // C_ARRAY_UTIL_H


@@ -24,43 +24,25 @@
* questions.
*/
#include <malloc.h>
#include <Trace.h>
#include "jlong_md.h"
#include "jvm_md.h"
#include "jni_util.h"
#include "VKBase.h"
#include "VKVertex.h"
#include "VKRenderer.h"
#include "CArrayUtil.h"
#include <vulkan/vulkan.h>
#include <dlfcn.h>
#include <string.h>
#include "VKUtil.h"
#include "VKBase.h"
#include "VKRenderer.h"
#include "VKTexturePool.h"
#define VULKAN_DLL JNI_LIB_NAME("vulkan")
#define VULKAN_1_DLL VERSIONED_JNI_LIB_NAME("vulkan", "1")
static const uint32_t REQUIRED_VULKAN_VERSION = VK_MAKE_API_VERSION(0, 1, 2, 0);
#define MAX_ENABLED_LAYERS 5
#define MAX_ENABLED_EXTENSIONS 5
#define VALIDATION_LAYER_NAME "VK_LAYER_KHRONOS_validation"
#define COUNT_OF(x) (sizeof(x)/sizeof(x[0]))
static jboolean verbose;
static VKGraphicsEnvironment* geInstance = NULL;
static void* pVulkanLib = NULL;
#define INCLUDE_BYTECODE
#define SHADER_ENTRY(NAME, TYPE) static uint32_t NAME ## _ ## TYPE ## _data[] = {
#define BYTECODE_END };
#include "vulkan/shader_list.h"
#undef INCLUDE_BYTECODE
#undef SHADER_ENTRY
#undef BYTECODE_END
#define GET_VK_PROC_RET_FALSE_IF_ERR(GETPROCADDR, STRUCT, HANDLE, NAME) do { \
STRUCT->NAME = (PFN_ ## NAME) GETPROCADDR(HANDLE, #NAME); \
if (STRUCT->NAME == NULL) { \
#define GET_VK_PROC_RET_FALSE_IF_ERR(GETPROCADDR, STRUCT, HANDLE, NAME) do { \
(STRUCT)->NAME = (PFN_ ## NAME) GETPROCADDR(HANDLE, #NAME); \
if ((STRUCT)->NAME == NULL) { \
J2dRlsTraceLn(J2D_TRACE_ERROR, "Required api is not supported. " #NAME " is missing.") \
return JNI_FALSE; \
} \
@@ -72,20 +54,17 @@ static void vulkanLibClose() {
ARRAY_FREE(geInstance->physicalDevices);
if (geInstance->devices != NULL) {
for (uint32_t i = 0; i < ARRAY_SIZE(geInstance->devices); i++) {
if (geInstance->devices[i].enabledExtensions != NULL) {
free(geInstance->devices[i].enabledExtensions);
}
if (geInstance->devices[i].enabledLayers != NULL) {
free(geInstance->devices[i].enabledLayers);
}
if (geInstance->devices[i].name != NULL) {
free(geInstance->devices[i].name);
}
if (geInstance->devices[i].vkDestroyDevice != NULL && geInstance->devices[i].device != NULL) {
geInstance->devices[i].vkDestroyDevice(geInstance->devices[i].device, NULL);
VKDevice* device = &geInstance->devices[i];
VKRenderer_Destroy(device->renderer);
VKTexturePool_Dispose(device->texturePool);
ARRAY_FREE(device->enabledExtensions);
ARRAY_FREE(device->enabledLayers);
free(device->name);
if (device->vkDestroyDevice != NULL && device->handle != NULL) {
device->vkDestroyDevice(device->handle, NULL);
}
}
free(geInstance->devices);
ARRAY_FREE(geInstance->devices);
}
#if defined(DEBUG)
@@ -113,7 +92,7 @@ static PFN_vkGetInstanceProcAddr vulkanLibOpen() {
pVulkanLib = dlopen(VULKAN_1_DLL, RTLD_NOW);
}
if (pVulkanLib == NULL) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "Failed to load %s", VULKAN_DLL)
J2dRlsTraceLn1(J2D_TRACE_ERROR, "Vulkan: Failed to load %s", VULKAN_DLL)
return NULL;
}
}
@@ -121,7 +100,7 @@ static PFN_vkGetInstanceProcAddr vulkanLibOpen() {
PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr) dlsym(pVulkanLib, "vkGetInstanceProcAddr");
if (vkGetInstanceProcAddr == NULL) {
J2dRlsTraceLn1(J2D_TRACE_ERROR,
"Failed to get proc address of vkGetInstanceProcAddr from %s", VULKAN_DLL)
"Vulkan: Failed to get proc address of vkGetInstanceProcAddr from %s", VULKAN_DLL)
vulkanLibClose();
return NULL;
}
@@ -163,7 +142,7 @@ static VkBool32 debugCallback(
J2dRlsTraceLn(level, pCallbackData->pMessage);
if (messageSeverity == VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT) {
// raise(SIGABRT); TODO uncomment when all validation errors are fixed.
VK_FATAL_ERROR("Unhandled Vulkan validation error");
}
return VK_FALSE;
}
@@ -181,10 +160,7 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
uint32_t apiVersion = 0;
if (geInstance->vkEnumerateInstanceVersion(&apiVersion) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: unable to enumerate Vulkan instance version")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumerateInstanceVersion(&apiVersion)) return JNI_FALSE;
J2dRlsTraceLn3(J2D_TRACE_INFO, "Vulkan: Available (%d.%d.%d)",
VK_API_VERSION_MAJOR(apiVersion),
@@ -201,30 +177,14 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
uint32_t extensionsCount;
// Get the number of extensions and layers
if (geInstance->vkEnumerateInstanceExtensionProperties(NULL, &extensionsCount, NULL) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumerateInstanceExtensionProperties fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumerateInstanceExtensionProperties(NULL, &extensionsCount, NULL)) return JNI_FALSE;
VkExtensionProperties extensions[extensionsCount];
if (geInstance->vkEnumerateInstanceExtensionProperties(NULL, &extensionsCount, extensions) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumerateInstanceExtensionProperties fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumerateInstanceExtensionProperties(NULL, &extensionsCount, extensions)) return JNI_FALSE;
uint32_t layersCount;
if (geInstance->vkEnumerateInstanceLayerProperties(&layersCount, NULL) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumerateInstanceLayerProperties fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumerateInstanceLayerProperties(&layersCount, NULL)) return JNI_FALSE;
VkLayerProperties layers[layersCount];
if (geInstance->vkEnumerateInstanceLayerProperties(&layersCount, layers) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumerateInstanceLayerProperties fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumerateInstanceLayerProperties(&layersCount, layers)) return JNI_FALSE;
J2dRlsTraceLn(J2D_TRACE_VERBOSE, " Supported instance layers:")
for (uint32_t i = 0; i < layersCount; i++) {
@@ -236,13 +196,13 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, " %s", (char *) extensions[i].extensionName)
}
pchar* enabledLayers = ARRAY_ALLOC(pchar, MAX_ENABLED_LAYERS);
pchar* enabledExtensions = ARRAY_ALLOC(pchar, MAX_ENABLED_EXTENSIONS);
pchar* enabledLayers = NULL;
pchar* enabledExtensions = NULL;
void *pNext = NULL;
#if defined(VK_USE_PLATFORM_WAYLAND_KHR)
ARRAY_PUSH_BACK(&enabledExtensions, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
ARRAY_PUSH_BACK(enabledExtensions, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME);
#endif
ARRAY_PUSH_BACK(&enabledExtensions, VK_KHR_SURFACE_EXTENSION_NAME);
ARRAY_PUSH_BACK(enabledExtensions, VK_KHR_SURFACE_EXTENSION_NAME);
// Check required layers & extensions.
for (uint32_t i = 0; i < ARRAY_SIZE(enabledExtensions); i++) {
@@ -273,7 +233,7 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
VkValidationFeaturesEXT features = {};
features.sType = VK_STRUCTURE_TYPE_VALIDATION_FEATURES_EXT;
features.enabledValidationFeatureCount = COUNT_OF(enables);
features.enabledValidationFeatureCount = SARRAY_COUNT_OF(enables);
features.pEnabledValidationFeatures = enables;
// Includes the validation features into the instance creation process
@@ -295,8 +255,8 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
}
if (foundDebugLayer && foundDebugExt) {
ARRAY_PUSH_BACK(&enabledLayers, VALIDATION_LAYER_NAME);
ARRAY_PUSH_BACK(&enabledExtensions, VK_EXT_DEBUG_UTILS_EXTENSION_NAME);
ARRAY_PUSH_BACK(enabledLayers, VALIDATION_LAYER_NAME);
ARRAY_PUSH_BACK(enabledExtensions, VK_EXT_DEBUG_UTILS_EXTENSION_NAME);
pNext = &features;
} else {
J2dRlsTraceLn2(J2D_TRACE_WARNING, "Vulkan: %s and %s are not supported",
@@ -324,14 +284,12 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
.ppEnabledExtensionNames = (const char *const *) enabledExtensions
};
if (geInstance->vkCreateInstance(&instanceCreateInfo, NULL, &geInstance->vkInstance) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Failed to create Vulkan instance")
VK_IF_ERROR(geInstance->vkCreateInstance(&instanceCreateInfo, NULL, &geInstance->vkInstance)) {
ARRAY_FREE(enabledLayers);
ARRAY_FREE(enabledExtensions);
return JNI_FALSE;
} else {
J2dRlsTraceLn(J2D_TRACE_INFO, "Vulkan: Instance Created")
}
J2dRlsTraceLn(J2D_TRACE_INFO, "Vulkan: Instance Created")
ARRAY_FREE(enabledLayers);
ARRAY_FREE(enabledExtensions);
@@ -351,6 +309,7 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
INSTANCE_PROC(vkEnumerateDeviceLayerProperties);
INSTANCE_PROC(vkEnumerateDeviceExtensionProperties);
INSTANCE_PROC(vkCreateDevice);
INSTANCE_PROC(vkDestroySurfaceKHR);
INSTANCE_PROC(vkGetDeviceProcAddr);
// Create debug messenger
@@ -359,6 +318,7 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
INSTANCE_PROC(vkDestroyDebugUtilsMessengerEXT);
if (pNext) {
VkDebugUtilsMessengerCreateInfoEXT debugUtilsMessengerCreateInfo = {
.sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT,
.flags = 0,
.messageSeverity = VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT |
VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT |
@@ -369,10 +329,8 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
VK_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT,
.pfnUserCallback = &debugCallback
};
if (geInstance->vkCreateDebugUtilsMessengerEXT(geInstance->vkInstance, &debugUtilsMessengerCreateInfo,
NULL, &geInstance->debugMessenger) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_WARNING, "Vulkan: Failed to create debug messenger")
}
VK_IF_ERROR(geInstance->vkCreateDebugUtilsMessengerEXT(geInstance->vkInstance, &debugUtilsMessengerCreateInfo,
NULL, &geInstance->debugMessenger)) {}
}
#endif
@@ -382,13 +340,8 @@ static jboolean VK_InitGraphicsEnvironment(PFN_vkGetInstanceProcAddr vkGetInstan
static jboolean VK_FindDevices() {
uint32_t physicalDevicesCount;
if (geInstance->vkEnumeratePhysicalDevices(geInstance->vkInstance,
&physicalDevicesCount,
NULL) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumeratePhysicalDevices fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumeratePhysicalDevices(geInstance->vkInstance,
&physicalDevicesCount, NULL)) return JNI_FALSE;
if (physicalDevicesCount == 0) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Failed to find GPUs with Vulkan support")
@@ -397,26 +350,12 @@ static jboolean VK_FindDevices() {
J2dRlsTraceLn1(J2D_TRACE_INFO, "Vulkan: Found %d physical devices:", physicalDevicesCount)
}
geInstance->physicalDevices = ARRAY_ALLOC(VkPhysicalDevice, physicalDevicesCount);
if (geInstance->physicalDevices == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate VkPhysicalDevice")
return JNI_FALSE;
}
ARRAY_RESIZE(geInstance->physicalDevices, physicalDevicesCount);
if (geInstance->vkEnumeratePhysicalDevices(
geInstance->vkInstance,
&physicalDevicesCount,
geInstance->physicalDevices) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: vkEnumeratePhysicalDevices fails")
return JNI_FALSE;
}
VK_IF_ERROR(geInstance->vkEnumeratePhysicalDevices(geInstance->vkInstance, &physicalDevicesCount,
geInstance->physicalDevices)) return JNI_FALSE;
geInstance->devices = ARRAY_ALLOC(VKLogicalDevice, physicalDevicesCount);
if (geInstance->devices == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate VKLogicalDevice")
return JNI_FALSE;
}
ARRAY_ENSURE_CAPACITY(geInstance->devices, physicalDevicesCount);
for (uint32_t i = 0; i < physicalDevicesCount; i++) {
VkPhysicalDeviceVulkan12Features device12Features = {
@@ -454,14 +393,7 @@ static jboolean VK_FindDevices() {
uint32_t queueFamilyCount = 0;
geInstance->vkGetPhysicalDeviceQueueFamilyProperties(
geInstance->physicalDevices[i], &queueFamilyCount, NULL);
VkQueueFamilyProperties *queueFamilies = (VkQueueFamilyProperties*)calloc(queueFamilyCount,
sizeof(VkQueueFamilyProperties));
if (queueFamilies == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate VkQueueFamilyProperties")
return JNI_FALSE;
}
VkQueueFamilyProperties queueFamilies[queueFamilyCount];
geInstance->vkGetPhysicalDeviceQueueFamilyProperties(
geInstance->physicalDevices[i], &queueFamilyCount, queueFamilies);
int64_t queueFamily = -1;
@@ -494,37 +426,28 @@ static jboolean VK_FindDevices() {
queueFamily = j;
}
}
free(queueFamilies);
if (queueFamily == -1) {
J2dRlsTraceLn(J2D_TRACE_INFO, " --------------------- Suitable queue not found, skipped")
continue;
}
uint32_t layerCount;
geInstance->vkEnumerateDeviceLayerProperties(geInstance->physicalDevices[i], &layerCount, NULL);
VkLayerProperties *layers = (VkLayerProperties *) calloc(layerCount, sizeof(VkLayerProperties));
if (layers == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate VkLayerProperties")
return JNI_FALSE;
}
geInstance->vkEnumerateDeviceLayerProperties(geInstance->physicalDevices[i], &layerCount, layers);
VK_IF_ERROR(geInstance->vkEnumerateDeviceLayerProperties(geInstance->physicalDevices[i],
&layerCount, NULL)) continue;
VkLayerProperties layers[layerCount];
VK_IF_ERROR(geInstance->vkEnumerateDeviceLayerProperties(geInstance->physicalDevices[i],
&layerCount, layers)) continue;
J2dRlsTraceLn(J2D_TRACE_VERBOSE, " Supported device layers:")
for (uint32_t j = 0; j < layerCount; j++) {
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, " %s", (char *) layers[j].layerName)
}
uint32_t extensionCount;
geInstance->vkEnumerateDeviceExtensionProperties(geInstance->physicalDevices[i], NULL, &extensionCount, NULL);
VkExtensionProperties *extensions = (VkExtensionProperties *) calloc(
extensionCount, sizeof(VkExtensionProperties));
if (extensions == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate VkExtensionProperties")
return JNI_FALSE;
}
geInstance->vkEnumerateDeviceExtensionProperties(
geInstance->physicalDevices[i], NULL, &extensionCount, extensions);
VK_IF_ERROR(geInstance->vkEnumerateDeviceExtensionProperties(geInstance->physicalDevices[i],
NULL, &extensionCount, NULL)) continue;
VkExtensionProperties extensions[extensionCount];
VK_IF_ERROR(geInstance->vkEnumerateDeviceExtensionProperties(geInstance->physicalDevices[i],
NULL, &extensionCount, extensions)) continue;
J2dRlsTraceLn(J2D_TRACE_VERBOSE, " Supported device extensions:")
VkBool32 hasSwapChain = VK_FALSE;
for (uint32_t j = 0; j < extensionCount; j++) {
@@ -532,31 +455,18 @@ static jboolean VK_FindDevices() {
hasSwapChain = hasSwapChain ||
strcmp(VK_KHR_SWAPCHAIN_EXTENSION_NAME, extensions[j].extensionName) == 0;
}
free(extensions);
J2dRlsTraceLn(J2D_TRACE_VERBOSE, " Found:")
if (hasSwapChain) {
J2dRlsTraceLn(J2D_TRACE_VERBOSE, " VK_KHR_SWAPCHAIN_EXTENSION_NAME")
}
J2dRlsTraceLn(J2D_TRACE_VERBOSE, "Vulkan: Found device extensions:")
J2dRlsTraceLn1(J2D_TRACE_VERBOSE, " " VK_KHR_SWAPCHAIN_EXTENSION_NAME " = %s", hasSwapChain ? "true" : "false")
if (!hasSwapChain) {
J2dRlsTraceLn(J2D_TRACE_INFO,
" --------------------- Required VK_KHR_SWAPCHAIN_EXTENSION_NAME not found, skipped")
" --------------------- Required " VK_KHR_SWAPCHAIN_EXTENSION_NAME " not found, skipped")
continue;
}
pchar* deviceEnabledLayers = ARRAY_ALLOC(pchar, MAX_ENABLED_LAYERS);
if (deviceEnabledLayers == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate deviceEnabledLayers array")
return JNI_FALSE;
}
pchar* deviceEnabledExtensions = ARRAY_ALLOC(pchar, MAX_ENABLED_EXTENSIONS);
if (deviceEnabledExtensions == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot allocate deviceEnabledExtensions array")
return JNI_FALSE;
}
ARRAY_PUSH_BACK(&deviceEnabledExtensions, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
pchar* deviceEnabledLayers = NULL;
pchar* deviceEnabledExtensions = NULL;
ARRAY_PUSH_BACK(deviceEnabledExtensions, VK_KHR_SWAPCHAIN_EXTENSION_NAME);
// Validation layer
#ifdef DEBUG
@@ -564,7 +474,7 @@ static jboolean VK_FindDevices() {
for (uint32_t j = 0; j < layerCount; j++) {
if (strcmp(VALIDATION_LAYER_NAME, layers[j].layerName) == 0) {
validationLayerNotSupported = 0;
ARRAY_PUSH_BACK(&deviceEnabledLayers, VALIDATION_LAYER_NAME);
ARRAY_PUSH_BACK(deviceEnabledLayers, VALIDATION_LAYER_NAME);
break;
}
}
@@ -572,76 +482,31 @@ static jboolean VK_FindDevices() {
J2dRlsTraceLn1(J2D_TRACE_INFO, " %s device layer is not supported", VALIDATION_LAYER_NAME)
}
#endif
free(layers);
char* deviceName = strdup(deviceProperties2.properties.deviceName);
if (deviceName == NULL) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot duplicate deviceName")
return JNI_FALSE;
}
ARRAY_PUSH_BACK(&geInstance->devices,
((VKLogicalDevice) {
ARRAY_PUSH_BACK(geInstance->devices,
((VKDevice) {
.name = deviceName,
.device = VK_NULL_HANDLE,
.handle = VK_NULL_HANDLE,
.physicalDevice = geInstance->physicalDevices[i],
.queueFamily = queueFamily,
.enabledLayers = deviceEnabledLayers,
.enabledExtensions = deviceEnabledExtensions,
.enabledExtensions = deviceEnabledExtensions
}));
}
if (ARRAY_SIZE(geInstance->devices) == 0) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "No compatible device found")
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: No compatible device found")
return JNI_FALSE;
}
return JNI_TRUE;
}
static VkRenderPassCreateInfo* VK_GetGenericRenderPassInfo() {
static VkAttachmentDescription colorAttachment = {
.format = VK_FORMAT_B8G8R8A8_UNORM, //TODO: swapChain colorFormat
.samples = VK_SAMPLE_COUNT_1_BIT,
.loadOp = VK_ATTACHMENT_LOAD_OP_LOAD,
.storeOp = VK_ATTACHMENT_STORE_OP_STORE,
.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
};
static VkAttachmentReference colorReference = {
.attachment = 0,
.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL
};
static VkSubpassDescription subpassDescription = {
.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
.colorAttachmentCount = 1,
.pColorAttachments = &colorReference
};
// Subpass dependencies for layout transitions
static VkSubpassDependency dependency = {
.srcSubpass = VK_SUBPASS_EXTERNAL,
.dstSubpass = 0,
.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
.dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
.srcAccessMask = 0,
.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT
};
static VkRenderPassCreateInfo renderPassInfo = {
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
.attachmentCount = 1,
.pAttachments = &colorAttachment,
.subpassCount = 1,
.pSubpasses = &subpassDescription,
.dependencyCount = 1,
.pDependencies = &dependency
};
return &renderPassInfo;
}
static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
if (logicalDevice->device != VK_NULL_HANDLE) {
static jboolean VK_InitDevice(VKDevice* device) {
if (device->handle != VK_NULL_HANDLE) {
return JNI_TRUE;
}
if (geInstance == NULL) {
@@ -650,7 +515,7 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
}
if (verbose) {
for (uint32_t i = 0; i < ARRAY_SIZE(geInstance->devices); i++) {
fprintf(stderr, " %c%d: %s\n", &geInstance->devices[i] == logicalDevice ? '*' : ' ',
fprintf(stderr, " %c%d: %s\n", &geInstance->devices[i] == device ? '*' : ' ',
i, geInstance->devices[i].name);
}
fprintf(stderr, "\n");
@@ -659,48 +524,58 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
float queuePriority = 1.0f;
VkDeviceQueueCreateInfo queueCreateInfo = {
.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
.queueFamilyIndex = logicalDevice->queueFamily, // obtained separately
.queueFamilyIndex = device->queueFamily, // obtained separately
.queueCount = 1,
.pQueuePriorities = &queuePriority
};
VkPhysicalDeviceFeatures features10 = { .logicOp = VK_TRUE };
VkPhysicalDeviceVulkan12Features features12 = {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
.timelineSemaphore = VK_TRUE
};
void *pNext = &features12;
VkDeviceCreateInfo createInfo = {
.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.pNext = pNext,
.flags = 0,
.queueCreateInfoCount = 1,
.pQueueCreateInfos = &queueCreateInfo,
.enabledLayerCount = ARRAY_SIZE(logicalDevice->enabledLayers),
.ppEnabledLayerNames = (const char *const *) logicalDevice->enabledLayers,
.enabledExtensionCount = ARRAY_SIZE(logicalDevice->enabledExtensions),
.ppEnabledExtensionNames = (const char *const *) logicalDevice->enabledExtensions,
.enabledLayerCount = ARRAY_SIZE(device->enabledLayers),
.ppEnabledLayerNames = (const char *const *) device->enabledLayers,
.enabledExtensionCount = ARRAY_SIZE(device->enabledExtensions),
.ppEnabledExtensionNames = (const char *const *) device->enabledExtensions,
.pEnabledFeatures = &features10
};
if (geInstance->vkCreateDevice(logicalDevice->physicalDevice, &createInfo, NULL, &logicalDevice->device) != VK_SUCCESS)
{
J2dRlsTraceLn1(J2D_TRACE_ERROR, "Cannot create device:\n %s", logicalDevice->name)
vulkanLibClose();
VK_IF_ERROR(geInstance->vkCreateDevice(device->physicalDevice, &createInfo, NULL, &device->handle)) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "Vulkan: Cannot create device: %s", device->name)
return JNI_FALSE;
}
VkDevice device = logicalDevice->device;
J2dRlsTraceLn1(J2D_TRACE_INFO, "Logical device (%s) created", logicalDevice->name)
J2dRlsTraceLn1(J2D_TRACE_INFO, "Vulkan: Device created (%s)", device->name)
#define DEVICE_PROC(NAME) GET_VK_PROC_RET_FALSE_IF_ERR(geInstance->vkGetDeviceProcAddr, logicalDevice, device, NAME)
#define DEVICE_PROC(NAME) GET_VK_PROC_RET_FALSE_IF_ERR(geInstance->vkGetDeviceProcAddr, device, device->handle, NAME)
DEVICE_PROC(vkDestroyDevice);
DEVICE_PROC(vkCreateShaderModule);
DEVICE_PROC(vkCreatePipelineLayout);
DEVICE_PROC(vkCreateGraphicsPipelines);
DEVICE_PROC(vkDestroyShaderModule);
DEVICE_PROC(vkCreatePipelineLayout);
DEVICE_PROC(vkDestroyPipelineLayout);
DEVICE_PROC(vkCreateGraphicsPipelines);
DEVICE_PROC(vkDestroyPipeline);
DEVICE_PROC(vkCreateSwapchainKHR);
DEVICE_PROC(vkDestroySwapchainKHR);
DEVICE_PROC(vkGetSwapchainImagesKHR);
DEVICE_PROC(vkCreateImageView);
DEVICE_PROC(vkCreateFramebuffer);
DEVICE_PROC(vkCreateCommandPool);
DEVICE_PROC(vkDestroyCommandPool);
DEVICE_PROC(vkAllocateCommandBuffers);
DEVICE_PROC(vkFreeCommandBuffers);
DEVICE_PROC(vkCreateSemaphore);
DEVICE_PROC(vkDestroySemaphore);
DEVICE_PROC(vkWaitSemaphores);
DEVICE_PROC(vkGetSemaphoreCounterValue);
DEVICE_PROC(vkCreateFence);
DEVICE_PROC(vkGetDeviceQueue);
DEVICE_PROC(vkWaitForFences);
@@ -710,7 +585,11 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
DEVICE_PROC(vkQueueSubmit);
DEVICE_PROC(vkQueuePresentKHR);
DEVICE_PROC(vkBeginCommandBuffer);
DEVICE_PROC(vkCmdBlitImage);
DEVICE_PROC(vkCmdPipelineBarrier);
DEVICE_PROC(vkCmdBeginRenderPass);
DEVICE_PROC(vkCmdExecuteCommands);
DEVICE_PROC(vkCmdClearAttachments);
DEVICE_PROC(vkCmdBindPipeline);
DEVICE_PROC(vkCmdSetViewport);
DEVICE_PROC(vkCmdSetScissor);
@@ -719,9 +598,11 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
DEVICE_PROC(vkEndCommandBuffer);
DEVICE_PROC(vkCreateImage);
DEVICE_PROC(vkCreateSampler);
DEVICE_PROC(vkDestroySampler);
DEVICE_PROC(vkAllocateMemory);
DEVICE_PROC(vkBindImageMemory);
DEVICE_PROC(vkCreateDescriptorSetLayout);
DEVICE_PROC(vkDestroyDescriptorSetLayout);
DEVICE_PROC(vkUpdateDescriptorSets);
DEVICE_PROC(vkCreateDescriptorPool);
DEVICE_PROC(vkAllocateDescriptorSets);
@@ -734,6 +615,7 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
DEVICE_PROC(vkUnmapMemory);
DEVICE_PROC(vkCmdBindVertexBuffers);
DEVICE_PROC(vkCreateRenderPass);
DEVICE_PROC(vkDestroyRenderPass);
DEVICE_PROC(vkDestroyBuffer);
DEVICE_PROC(vkFreeMemory);
DEVICE_PROC(vkDestroyImageView);
@@ -742,67 +624,28 @@ static jboolean VK_InitLogicalDevice(VKLogicalDevice* logicalDevice) {
DEVICE_PROC(vkFlushMappedMemoryRanges);
DEVICE_PROC(vkCmdPushConstants);
// Create command pool
VkCommandPoolCreateInfo poolInfo = {
.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
.flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT,
.queueFamilyIndex = logicalDevice->queueFamily
};
if (logicalDevice->vkCreateCommandPool(device, &poolInfo, NULL, &logicalDevice->commandPool) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_INFO, "failed to create command pool!")
device->vkGetDeviceQueue(device->handle, device->queueFamily, 0, &device->queue);
if (device->queue == NULL) {
J2dRlsTraceLn(J2D_TRACE_INFO, "Vulkan: Failed to get device queue");
VK_UNHANDLED_ERROR();
return JNI_FALSE;
}
// Create command buffer
VkCommandBufferAllocateInfo allocInfo = {
.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
.commandPool = logicalDevice->commandPool,
.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
.commandBufferCount = 1
};
if (logicalDevice->vkAllocateCommandBuffers(device, &allocInfo, &logicalDevice->commandBuffer) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_INFO, "failed to allocate command buffers!");
device->renderer = VKRenderer_Create(device);
if (!device->renderer) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot create renderer")
VK_UNHANDLED_ERROR();
return JNI_FALSE;
}
// Create semaphores
VkSemaphoreCreateInfo semaphoreInfo = {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
};
VkFenceCreateInfo fenceInfo = {
.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.flags = VK_FENCE_CREATE_SIGNALED_BIT
};
if (logicalDevice->vkCreateSemaphore(device, &semaphoreInfo, NULL, &logicalDevice->imageAvailableSemaphore) != VK_SUCCESS ||
logicalDevice->vkCreateSemaphore(device, &semaphoreInfo, NULL, &logicalDevice->renderFinishedSemaphore) != VK_SUCCESS ||
logicalDevice->vkCreateFence(device, &fenceInfo, NULL, &logicalDevice->inFlightFence) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_INFO, "failed to create semaphores!");
device->texturePool = VKTexturePool_InitWithDevice(device);
if (!device->texturePool) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Vulkan: Cannot create texture pool")
VK_UNHANDLED_ERROR();
return JNI_FALSE;
}
logicalDevice->vkGetDeviceQueue(device, logicalDevice->queueFamily, 0, &logicalDevice->queue);
if (logicalDevice->queue == NULL) {
J2dRlsTraceLn(J2D_TRACE_INFO, "failed to get device queue!");
return JNI_FALSE;
}
VKTxVertex* vertices = ARRAY_ALLOC(VKTxVertex, 4);
ARRAY_PUSH_BACK(&vertices, ((VKTxVertex){-1.0f, -1.0f, 0.0f, 0.0f}));
ARRAY_PUSH_BACK(&vertices, ((VKTxVertex){1.0f, -1.0f, 1.0f, 0.0f}));
ARRAY_PUSH_BACK(&vertices, ((VKTxVertex){-1.0f, 1.0f, 0.0f, 1.0f}));
ARRAY_PUSH_BACK(&vertices, ((VKTxVertex){1.0f, 1.0f, 1.0f, 1.0f}));
logicalDevice->blitVertexBuffer = ARRAY_TO_VERTEX_BUF(logicalDevice, vertices);
if (!logicalDevice->blitVertexBuffer) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Cannot create vertex buffer")
return JNI_FALSE;
}
ARRAY_FREE(vertices);
geInstance->currentDevice = logicalDevice;
geInstance->currentDevice = device;
return JNI_TRUE;
}
@@ -844,23 +687,11 @@ Java_sun_java2d_vulkan_VKInstance_initNative(JNIEnv *env, jclass wlge, jlong nat
if (requestedDevice < 0 || (uint32_t)requestedDevice >= ARRAY_SIZE(geInstance->devices)) {
requestedDevice = 0;
}
if (!VK_InitLogicalDevice(&geInstance->devices[requestedDevice])) {
if (!VK_InitDevice(&geInstance->devices[requestedDevice])) {
vulkanLibClose();
return JNI_FALSE;
}
if (geInstance->currentDevice->vkCreateRenderPass(
geInstance->currentDevice->device, VK_GetGenericRenderPassInfo(),
NULL, &geInstance->currentDevice->renderPass) != VK_SUCCESS)
{
J2dRlsTrace(J2D_TRACE_INFO, "Cannot create render pass for device")
return JNI_FALSE;
}
if (!VK_CreateLogicalDeviceRenderers(geInstance->currentDevice)) {
vulkanLibClose();
return JNI_FALSE;
}
return JNI_TRUE;
}


@@ -27,40 +27,40 @@
#ifndef VKBase_h_Included
#define VKBase_h_Included
#include "VKTypes.h"
#include "VKTexturePool.h"
struct VKLogicalDevice {
VkDevice device;
VkPhysicalDevice physicalDevice;
VKRenderer* fillTexturePoly;
VKRenderer* fillColorPoly;
VKRenderer* drawColorPoly;
VKRenderer* fillMaxColorPoly;
char* name;
uint32_t queueFamily;
pchar* enabledLayers;
pchar* enabledExtensions;
VkCommandPool commandPool;
VkCommandBuffer commandBuffer;
VkSemaphore imageAvailableSemaphore;
VkSemaphore renderFinishedSemaphore;
VkFence inFlightFence;
VkQueue queue;
VkSampler textureSampler;
VKBuffer* blitVertexBuffer;
VkRenderPass renderPass;
struct VKDevice {
VkDevice handle;
VkPhysicalDevice physicalDevice;
char* name;
uint32_t queueFamily;
pchar* enabledLayers;
pchar* enabledExtensions;
VkQueue queue;
VKRenderer* renderer;
VKTexturePool* texturePool;
PFN_vkDestroyDevice vkDestroyDevice;
PFN_vkCreateShaderModule vkCreateShaderModule;
PFN_vkCreatePipelineLayout vkCreatePipelineLayout;
PFN_vkCreateGraphicsPipelines vkCreateGraphicsPipelines;
PFN_vkDestroyShaderModule vkDestroyShaderModule;
PFN_vkCreatePipelineLayout vkCreatePipelineLayout;
PFN_vkDestroyPipelineLayout vkDestroyPipelineLayout;
PFN_vkCreateGraphicsPipelines vkCreateGraphicsPipelines;
PFN_vkDestroyPipeline vkDestroyPipeline;
PFN_vkCreateSwapchainKHR vkCreateSwapchainKHR;
PFN_vkDestroySwapchainKHR vkDestroySwapchainKHR;
PFN_vkGetSwapchainImagesKHR vkGetSwapchainImagesKHR;
PFN_vkCreateImageView vkCreateImageView;
PFN_vkCreateFramebuffer vkCreateFramebuffer;
PFN_vkCreateCommandPool vkCreateCommandPool;
PFN_vkDestroyCommandPool vkDestroyCommandPool;
PFN_vkAllocateCommandBuffers vkAllocateCommandBuffers;
PFN_vkFreeCommandBuffers vkFreeCommandBuffers;
PFN_vkCreateSemaphore vkCreateSemaphore;
PFN_vkDestroySemaphore vkDestroySemaphore;
PFN_vkWaitSemaphores vkWaitSemaphores;
PFN_vkGetSemaphoreCounterValue vkGetSemaphoreCounterValue;
PFN_vkCreateFence vkCreateFence;
PFN_vkGetDeviceQueue vkGetDeviceQueue;
PFN_vkWaitForFences vkWaitForFences;
@@ -70,7 +70,11 @@ struct VKLogicalDevice {
PFN_vkQueueSubmit vkQueueSubmit;
PFN_vkQueuePresentKHR vkQueuePresentKHR;
PFN_vkBeginCommandBuffer vkBeginCommandBuffer;
PFN_vkCmdBlitImage vkCmdBlitImage;
PFN_vkCmdPipelineBarrier vkCmdPipelineBarrier;
PFN_vkCmdBeginRenderPass vkCmdBeginRenderPass;
PFN_vkCmdExecuteCommands vkCmdExecuteCommands;
PFN_vkCmdClearAttachments vkCmdClearAttachments;
PFN_vkCmdBindPipeline vkCmdBindPipeline;
PFN_vkCmdSetViewport vkCmdSetViewport;
PFN_vkCmdSetScissor vkCmdSetScissor;
@@ -79,9 +83,11 @@ struct VKLogicalDevice {
PFN_vkEndCommandBuffer vkEndCommandBuffer;
PFN_vkCreateImage vkCreateImage;
PFN_vkCreateSampler vkCreateSampler;
PFN_vkDestroySampler vkDestroySampler;
PFN_vkAllocateMemory vkAllocateMemory;
PFN_vkBindImageMemory vkBindImageMemory;
PFN_vkCreateDescriptorSetLayout vkCreateDescriptorSetLayout;
PFN_vkDestroyDescriptorSetLayout vkDestroyDescriptorSetLayout;
PFN_vkUpdateDescriptorSets vkUpdateDescriptorSets;
PFN_vkCreateDescriptorPool vkCreateDescriptorPool;
PFN_vkAllocateDescriptorSets vkAllocateDescriptorSets;
@@ -94,6 +100,7 @@ struct VKLogicalDevice {
PFN_vkUnmapMemory vkUnmapMemory;
PFN_vkCmdBindVertexBuffers vkCmdBindVertexBuffers;
PFN_vkCreateRenderPass vkCreateRenderPass;
PFN_vkDestroyRenderPass vkDestroyRenderPass;
PFN_vkDestroyBuffer vkDestroyBuffer;
PFN_vkFreeMemory vkFreeMemory;
PFN_vkDestroyImageView vkDestroyImageView;
@@ -103,12 +110,11 @@ struct VKLogicalDevice {
PFN_vkCmdPushConstants vkCmdPushConstants;
};
struct VKGraphicsEnvironment {
VkInstance vkInstance;
VkPhysicalDevice* physicalDevices;
VKLogicalDevice* devices;
VKLogicalDevice* currentDevice;
VkInstance vkInstance;
VkPhysicalDevice* physicalDevices;
VKDevice* devices;
VKDevice* currentDevice;
#if defined(DEBUG)
VkDebugUtilsMessengerEXT debugMessenger;
@@ -140,6 +146,7 @@ struct VKGraphicsEnvironment {
PFN_vkEnumerateDeviceLayerProperties vkEnumerateDeviceLayerProperties;
PFN_vkEnumerateDeviceExtensionProperties vkEnumerateDeviceExtensionProperties;
PFN_vkCreateDevice vkCreateDevice;
PFN_vkDestroySurfaceKHR vkDestroySurfaceKHR;
PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
};


@@ -25,8 +25,7 @@
*/
#include <string.h>
#include <Trace.h>
#include "CArrayUtil.h"
#include "VKUtil.h"
#include "VKBase.h"
#include "VKBuffer.h"
@@ -46,10 +45,11 @@ VkResult VKBuffer_FindMemoryType(VkPhysicalDevice physicalDevice, uint32_t typeF
return VK_ERROR_UNKNOWN;
}
VKBuffer* VKBuffer_Create(VKLogicalDevice* logicalDevice, VkDeviceSize size,
VKBuffer* VKBuffer_Create(VKDevice* device, VkDeviceSize size,
VkBufferUsageFlags usage, VkMemoryPropertyFlags properties)
{
VKBuffer* buffer = malloc(sizeof (VKBuffer));
VKBuffer* buffer = calloc(1, sizeof(VKBuffer));
VK_RUNTIME_ASSERT(buffer);
VkBufferCreateInfo bufferInfo = {
.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
@@ -58,23 +58,22 @@ VKBuffer* VKBuffer_Create(VKLogicalDevice* logicalDevice, VkDeviceSize size,
.sharingMode = VK_SHARING_MODE_EXCLUSIVE
};
if (logicalDevice->vkCreateBuffer(logicalDevice->device, &bufferInfo, NULL, &buffer->buffer) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to allocate descriptor sets!")
VK_IF_ERROR(device->vkCreateBuffer(device->handle, &bufferInfo, NULL, &buffer->buffer)) {
VKBuffer_free(device, buffer);
return NULL;
}
buffer->size = size;
VkMemoryRequirements memRequirements;
logicalDevice->vkGetBufferMemoryRequirements(logicalDevice->device, buffer->buffer, &memRequirements);
device->vkGetBufferMemoryRequirements(device->handle, buffer->buffer, &memRequirements);
uint32_t memoryType;
if (VKBuffer_FindMemoryType(logicalDevice->physicalDevice,
memRequirements.memoryTypeBits,
properties, &memoryType) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to find memory!")
VK_IF_ERROR(VKBuffer_FindMemoryType(device->physicalDevice,
memRequirements.memoryTypeBits,
properties, &memoryType)) {
VKBuffer_free(device, buffer);
return NULL;
}
@@ -84,28 +83,28 @@ VKBuffer* VKBuffer_Create(VKLogicalDevice* logicalDevice, VkDeviceSize size,
.memoryTypeIndex = memoryType
};
if (logicalDevice->vkAllocateMemory(logicalDevice->device, &allocInfo, NULL, &buffer->memory) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to allocate buffer memory!");
VK_IF_ERROR(device->vkAllocateMemory(device->handle, &allocInfo, NULL, &buffer->memory)) {
VKBuffer_free(device, buffer);
return NULL;
}
if (logicalDevice->vkBindBufferMemory(logicalDevice->device, buffer->buffer, buffer->memory, 0) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to bind buffer memory!");
VK_IF_ERROR(device->vkBindBufferMemory(device->handle, buffer->buffer, buffer->memory, 0)) {
VKBuffer_free(device, buffer);
return NULL;
}
return buffer;
}
VKBuffer* VKBuffer_CreateFromData(VKLogicalDevice* logicalDevice, void* vertices, VkDeviceSize bufferSize)
VKBuffer* VKBuffer_CreateFromData(VKDevice* device, void* vertices, VkDeviceSize bufferSize)
{
VKBuffer* buffer = VKBuffer_Create(logicalDevice, bufferSize,
VKBuffer* buffer = VKBuffer_Create(device, bufferSize,
VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);
void* data;
if (logicalDevice->vkMapMemory(logicalDevice->device, buffer->memory, 0, bufferSize, 0, &data) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to map memory!");
VK_IF_ERROR(device->vkMapMemory(device->handle, buffer->memory, 0, VK_WHOLE_SIZE, 0, &data)) {
VKBuffer_free(device, buffer);
return NULL;
}
memcpy(data, vertices, bufferSize);
@@ -119,23 +118,23 @@ VKBuffer* VKBuffer_CreateFromData(VKLogicalDevice* logicalDevice, void* vertices
};
if (logicalDevice->vkFlushMappedMemoryRanges(logicalDevice->device, 1, &memoryRange) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to flush memory!");
VK_IF_ERROR(device->vkFlushMappedMemoryRanges(device->handle, 1, &memoryRange)) {
VKBuffer_free(device, buffer);
return NULL;
}
logicalDevice->vkUnmapMemory(logicalDevice->device, buffer->memory);
device->vkUnmapMemory(device->handle, buffer->memory);
buffer->size = bufferSize;
return buffer;
}
void VKBuffer_free(VKLogicalDevice* logicalDevice, VKBuffer* buffer) {
void VKBuffer_free(VKDevice* device, VKBuffer* buffer) {
if (buffer != NULL) {
if (buffer->buffer != VK_NULL_HANDLE) {
logicalDevice->vkDestroyBuffer(logicalDevice->device, buffer->buffer, NULL);
device->vkDestroyBuffer(device->handle, buffer->buffer, NULL);
}
if (buffer->memory != VK_NULL_HANDLE) {
logicalDevice->vkFreeMemory(logicalDevice->device, buffer->memory, NULL);
device->vkFreeMemory(device->handle, buffer->memory, NULL);
}
free(buffer);
}


@@ -38,11 +38,11 @@ struct VKBuffer {
VkResult VKBuffer_FindMemoryType(VkPhysicalDevice physicalDevice, uint32_t typeFilter,
VkMemoryPropertyFlags properties, uint32_t* pMemoryType);
VKBuffer* VKBuffer_Create(VKLogicalDevice* logicalDevice, VkDeviceSize size,
VKBuffer* VKBuffer_Create(VKDevice* device, VkDeviceSize size,
VkBufferUsageFlags usage, VkMemoryPropertyFlags properties);
VKBuffer* VKBuffer_CreateFromData(VKLogicalDevice* logicalDevice, void* vertices, VkDeviceSize bufferSize);
VKBuffer* VKBuffer_CreateFromData(VKDevice* device, void* vertices, VkDeviceSize bufferSize);
void VKBuffer_free(VKLogicalDevice* logicalDevice, VKBuffer* buffer);
void VKBuffer_free(VKDevice* device, VKBuffer* buffer);
#endif // VKBuffer_h_Included


@@ -24,14 +24,13 @@
* questions.
*/
#include <Trace.h>
#include "CArrayUtil.h"
#include "VKUtil.h"
#include "VKBase.h"
#include "VKBuffer.h"
#include "VKImage.h"
VkBool32 VKImage_CreateView(VKLogicalDevice* logicalDevice, VKImage* image) {
VkBool32 VKImage_CreateView(VKDevice* device, VKImage* image) {
VkImageViewCreateInfo viewInfo = {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
.image = image->image,
@@ -44,49 +43,19 @@ VkBool32 VKImage_CreateView(VKLogicalDevice* logicalDevice, VKImage* image) {
.subresourceRange.layerCount = 1,
};
if (logicalDevice->vkCreateImageView(logicalDevice->device, &viewInfo, NULL, &image->view) != VK_SUCCESS) {
J2dRlsTrace(J2D_TRACE_ERROR, "Cannot surface image view\n");
VK_IF_ERROR(device->vkCreateImageView(device->handle, &viewInfo, NULL, &image->view)) {
return VK_FALSE;
}
return VK_TRUE;
}
VkBool32 VKImage_CreateFramebuffer(VKLogicalDevice* logicalDevice, VKImage *image, VkRenderPass renderPass) {
VkImageView attachments[] = {
image->view
};
VkFramebufferCreateInfo framebufferInfo = {
.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
.renderPass = renderPass,
.attachmentCount = 1,
.pAttachments = attachments,
.width = image->extent.width,
.height = image->extent.height,
.layers = 1
};
if (logicalDevice->vkCreateFramebuffer(logicalDevice->device, &framebufferInfo, NULL,
&image->framebuffer) != VK_SUCCESS)
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "failed to create framebuffer!")
return VK_FALSE;
}
return VK_TRUE;
}
VKImage* VKImage_Create(VKLogicalDevice* logicalDevice,
uint32_t width, uint32_t height,
VKImage* VKImage_Create(VKDevice* device, uint32_t width, uint32_t height,
VkFormat format, VkImageTiling tiling,
VkImageUsageFlags usage,
VkMemoryPropertyFlags properties)
{
VKImage* image = malloc(sizeof (VKImage));
if (!image) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Cannot allocate data for image")
return NULL;
}
VKImage* image = calloc(1, sizeof(VKImage));
VK_RUNTIME_ASSERT(image);
image->format = format;
image->extent = (VkExtent2D) {width, height};
@@ -107,22 +76,20 @@ VKImage* VKImage_Create(VKLogicalDevice* logicalDevice,
.sharingMode = VK_SHARING_MODE_EXCLUSIVE
};
if (logicalDevice->vkCreateImage(logicalDevice->device, &imageInfo, NULL, &image->image) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Cannot create surface image")
VKImage_free(logicalDevice, image);
VK_IF_ERROR(device->vkCreateImage(device->handle, &imageInfo, NULL, &image->image)) {
VKImage_free(device, image);
return NULL;
}
VkMemoryRequirements memRequirements;
logicalDevice->vkGetImageMemoryRequirements(logicalDevice->device, image->image, &memRequirements);
device->vkGetImageMemoryRequirements(device->handle, image->image, &memRequirements);
uint32_t memoryType;
if (VKBuffer_FindMemoryType(logicalDevice->physicalDevice,
VK_IF_ERROR(VKBuffer_FindMemoryType(device->physicalDevice,
memRequirements.memoryTypeBits,
properties, &memoryType) != VK_SUCCESS)
properties, &memoryType))
{
J2dRlsTraceLn(J2D_TRACE_ERROR, "Failed to find memory")
VKImage_free(logicalDevice, image);
VKImage_free(device, image);
return NULL;
}
@@ -132,92 +99,41 @@ VKImage* VKImage_Create(VKLogicalDevice* logicalDevice,
.memoryTypeIndex = memoryType
};
if (logicalDevice->vkAllocateMemory(logicalDevice->device, &allocInfo, NULL, &image->memory) != VK_SUCCESS) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Failed to allocate image memory");
VKImage_free(logicalDevice, image);
VK_IF_ERROR(device->vkAllocateMemory(device->handle, &allocInfo, NULL, &image->memory)) {
VKImage_free(device, image);
return NULL;
}
logicalDevice->vkBindImageMemory(logicalDevice->device, image->image, image->memory, 0);
VK_IF_ERROR(device->vkBindImageMemory(device->handle, image->image, image->memory, 0)) {
VKImage_free(device, image);
return NULL;
}
if (!VKImage_CreateView(logicalDevice, image)) {
VKImage_free(logicalDevice, image);
if (!VKImage_CreateView(device, image)) {
VKImage_free(device, image);
return NULL;
}
return image;
}
VKImage* VKImage_CreateImageArrayFromSwapChain(VKLogicalDevice* logicalDevice,
VkSwapchainKHR swapchainKhr, VkRenderPass renderPass,
VkFormat format, VkExtent2D extent)
{
uint32_t swapChainImagesCount;
if (logicalDevice->vkGetSwapchainImagesKHR(logicalDevice->device, swapchainKhr, &swapChainImagesCount,
NULL) != VK_SUCCESS) {
J2dRlsTrace(J2D_TRACE_ERROR, "Cannot get swapchain images\n");
return NULL;
}
if (swapChainImagesCount == 0) {
J2dRlsTrace(J2D_TRACE_ERROR, "No swapchain images found\n");
return NULL;
}
VkImage swapChainImages[swapChainImagesCount];
if (logicalDevice->vkGetSwapchainImagesKHR(logicalDevice->device, swapchainKhr, &swapChainImagesCount,
swapChainImages) != VK_SUCCESS) {
J2dRlsTrace(J2D_TRACE_ERROR, "Cannot get swapchain images\n");
return NULL;
}
VKImage* images = ARRAY_ALLOC(VKImage, swapChainImagesCount);
for (uint32_t i = 0; i < swapChainImagesCount; i++) {
ARRAY_PUSH_BACK(&images, ((VKImage){
.image = swapChainImages[i],
.memory = VK_NULL_HANDLE,
.format = format,
.extent = extent,
.noImageDealloc = VK_TRUE
}));
if (!VKImage_CreateView(logicalDevice, &ARRAY_LAST(images))) {
ARRAY_APPLY_TRAILING(images, VKImage_dealloc, logicalDevice);
ARRAY_FREE(images);
return NULL;
}
if (!VKImage_CreateFramebuffer(logicalDevice, &ARRAY_LAST(images), renderPass)) {
ARRAY_APPLY_TRAILING(images, VKImage_dealloc, logicalDevice);
ARRAY_FREE(images);
return NULL;
}
}
return images;
}
void VKImage_dealloc(VKLogicalDevice* logicalDevice, VKImage* image) {
void VKImage_free(VKDevice* device, VKImage* image) {
if (!image) return;
if (image->framebuffer != VK_NULL_HANDLE) {
logicalDevice->vkDestroyFramebuffer(logicalDevice->device, image->framebuffer, NULL);
}
if (image->view != VK_NULL_HANDLE) {
logicalDevice->vkDestroyImageView(logicalDevice->device, image->view, NULL);
device->vkDestroyImageView(device->handle, image->view, NULL);
image->view = VK_NULL_HANDLE;
}
if (image->memory != VK_NULL_HANDLE) {
logicalDevice->vkFreeMemory(logicalDevice->device, image->memory, NULL);
device->vkFreeMemory(device->handle, image->memory, NULL);
image->memory = VK_NULL_HANDLE;
}
if (image->image != VK_NULL_HANDLE && !image->noImageDealloc) {
logicalDevice->vkDestroyImage(logicalDevice->device, image->image, NULL);
if (image->image != VK_NULL_HANDLE) {
device->vkDestroyImage(device->handle, image->image, NULL);
image->image = VK_NULL_HANDLE;
}
}
void VKImage_free(VKLogicalDevice* logicalDevice, VKImage* image) {
VKImage_dealloc(logicalDevice, image);
free(image);
}


@@ -32,28 +32,15 @@
struct VKImage {
VkImage image;
VkDeviceMemory memory;
VkFramebuffer framebuffer;
VkImageView view;
VkFormat format;
VkExtent2D extent;
VkBool32 noImageDealloc;
};
VKImage* VKImage_Create(VKLogicalDevice* logicalDevice,
uint32_t width, uint32_t height,
VKImage* VKImage_Create(VKDevice* device, uint32_t width, uint32_t height,
VkFormat format, VkImageTiling tiling,
VkImageUsageFlags usage,
VkMemoryPropertyFlags properties);
VKImage* VKImage_CreateImageArrayFromSwapChain(VKLogicalDevice* logicalDevice,
VkSwapchainKHR swapchainKhr,
VkRenderPass renderPass,
VkFormat format,
VkExtent2D extent);
VkBool32 VKImage_CreateFramebuffer(VKLogicalDevice* logicalDevice,
VKImage *image, VkRenderPass renderPass);
void VKImage_free(VKLogicalDevice* logicalDevice, VKImage* image);
void VKImage_dealloc(VKLogicalDevice* logicalDevice, VKImage* image);
void VKImage_free(VKDevice* device, VKImage* image);
#endif // VKImage_h_Included


@@ -0,0 +1,433 @@
// Copyright 2024 JetBrains s.r.o.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License version 2 only, as
// published by the Free Software Foundation. Oracle designates this
// particular file as subject to the "Classpath" exception as provided
// by Oracle in the LICENSE file that accompanied this code.
//
// This code is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// version 2 for more details (a copy is included in the LICENSE file that
// accompanied this code).
//
// You should have received a copy of the GNU General Public License version
// 2 along with this work; if not, write to the Free Software Foundation,
// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
//
// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
// or visit www.oracle.com if you need additional information or have any
// questions.
#include "VKUtil.h"
#include "VKBase.h"
#include "VKPipelines.h"
#define INCLUDE_BYTECODE
#define SHADER_ENTRY(NAME, TYPE) static uint32_t NAME ## _ ## TYPE ## _data[] = {
#define BYTECODE_END };
#include "vulkan/shader_list.h"
#undef INCLUDE_BYTECODE
#undef SHADER_ENTRY
#undef BYTECODE_END
struct VKPipelineSet {
VkPipeline pipelines[PIPELINE_COUNT];
};
struct VKShaders {
# define SHADER_ENTRY(NAME, TYPE) VkPipelineShaderStageCreateInfo NAME ## _ ## TYPE;
# include "vulkan/shader_list.h"
# undef SHADER_ENTRY
};
static void VKPipelines_DestroyShaders(VKDevice* device, VKShaders* shaders) {
assert(device != NULL);
if (shaders == NULL) return;
# define SHADER_ENTRY(NAME, TYPE) if (shaders->NAME##_##TYPE.module != VK_NULL_HANDLE) \
device->vkDestroyShaderModule(device->handle, shaders->NAME##_##TYPE.module, NULL);
# include "vulkan/shader_list.h"
# undef SHADER_ENTRY
free(shaders);
}
static VKShaders* VKPipelines_CreateShaders(VKDevice* device) {
assert(device != NULL);
const VkShaderStageFlagBits vert = VK_SHADER_STAGE_VERTEX_BIT;
const VkShaderStageFlagBits frag = VK_SHADER_STAGE_FRAGMENT_BIT;
VKShaders* shaders = (VKShaders*) calloc(1, sizeof(VKShaders));
VK_RUNTIME_ASSERT(shaders);
VkShaderModuleCreateInfo createInfo = { .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO };
# define SHADER_ENTRY(NAME, TYPE) \
shaders->NAME ## _ ## TYPE = (VkPipelineShaderStageCreateInfo) { \
.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO, \
.stage = (TYPE), \
.pName = "main" \
}; \
createInfo.codeSize = sizeof(NAME ## _ ## TYPE ## _data); \
createInfo.pCode = NAME ## _ ## TYPE ## _data; \
VK_IF_ERROR(device->vkCreateShaderModule(device->handle, \
&createInfo, NULL, &shaders->NAME##_##TYPE.module)) { \
VKPipelines_DestroyShaders(device, shaders); \
return NULL; \
}
# include "vulkan/shader_list.h"
# undef SHADER_ENTRY
return shaders;
}
#define MAKE_INPUT_STATE(TYPE, ...) \
static const VkVertexInputAttributeDescription attributes[] = { __VA_ARGS__ }; \
static const VkVertexInputBindingDescription binding = { \
.binding = 0, \
.stride = sizeof(TYPE), \
.inputRate = VK_VERTEX_INPUT_RATE_VERTEX \
}; \
static const VkPipelineVertexInputStateCreateInfo inputState = { \
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO, \
.vertexBindingDescriptionCount = 1, \
.pVertexBindingDescriptions = &binding, \
.vertexAttributeDescriptionCount = SARRAY_COUNT_OF(attributes), \
.pVertexAttributeDescriptions = attributes \
}
typedef struct {
VkGraphicsPipelineCreateInfo createInfo;
VkPipelineMultisampleStateCreateInfo multisampleState;
VkPipelineColorBlendStateCreateInfo colorBlendState;
VkPipelineDynamicStateCreateInfo dynamicState;
VkDynamicState dynamicStates[2];
} PipelineCreateState;
typedef struct {
VkPipelineShaderStageCreateInfo createInfos[2]; // vert + frag
} ShaderStages;
/**
* Init default pipeline state. Some members are left uninitialized:
* - pStages (but stageCount is set to 2)
* - pVertexInputState
* - pInputAssemblyState
* - colorBlendState.pAttachments (but attachmentCount is set to 1)
* - createInfo.layout
* - createInfo.renderPass
* - renderingCreateInfo.pColorAttachmentFormats (but colorAttachmentCount is set to 1)
*/
static void VKPipelines_InitPipelineCreateState(PipelineCreateState* state) {
static const VkViewport viewport = {};
static const VkRect2D scissor = {};
static const VkPipelineViewportStateCreateInfo viewportState = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,
.viewportCount = 1,
.pViewports = &viewport,
.scissorCount = 1,
.pScissors = &scissor
};
static const VkPipelineRasterizationStateCreateInfo rasterizationState = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
.polygonMode = VK_POLYGON_MODE_FILL,
.cullMode = VK_CULL_MODE_NONE,
.lineWidth = 1.0f
};
state->multisampleState = (VkPipelineMultisampleStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT
};
state->colorBlendState = (VkPipelineColorBlendStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
.logicOpEnable = VK_FALSE,
.logicOp = VK_LOGIC_OP_XOR,
.attachmentCount = 1,
.pAttachments = NULL,
};
state->dynamicState = (VkPipelineDynamicStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
.dynamicStateCount = 2,
.pDynamicStates = state->dynamicStates
};
state->dynamicStates[0] = VK_DYNAMIC_STATE_VIEWPORT;
state->dynamicStates[1] = VK_DYNAMIC_STATE_SCISSOR;
state->createInfo = (VkGraphicsPipelineCreateInfo) {
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.stageCount = 2,
.pViewportState = &viewportState,
.pRasterizationState = &rasterizationState,
.pMultisampleState = &state->multisampleState,
.pColorBlendState = &state->colorBlendState,
.pDynamicState = &state->dynamicState,
.subpass = 0,
.basePipelineHandle = VK_NULL_HANDLE,
.basePipelineIndex = -1
};
}
static const VkPipelineInputAssemblyStateCreateInfo INPUT_ASSEMBLY_STATE_TRIANGLE_STRIP = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP
};
static const VkPipelineInputAssemblyStateCreateInfo INPUT_ASSEMBLY_STATE_TRIANGLE_LIST = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST
};
static const VkPipelineInputAssemblyStateCreateInfo INPUT_ASSEMBLY_STATE_LINE_LIST = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_LINE_LIST
};
const VkPipelineColorBlendAttachmentState BLEND_STATE = {
.blendEnable = VK_FALSE,
.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT
};
static void VKPipelines_DestroyPipelineSet(VKDevice* device, VKPipelineSet* set) {
if (set == NULL) return;
for (uint32_t i = 0; i < PIPELINE_COUNT; i++) {
device->vkDestroyPipeline(device->handle, set->pipelines[i], NULL);
}
free(set);
}
static VKPipelineSet* VKPipelines_CreatePipelineSet(VKRenderPassContext* renderPassContext) {
assert(renderPassContext != NULL && renderPassContext->pipelineContext != NULL);
VKPipelineContext* pipelineContext = renderPassContext->pipelineContext;
VKPipelineSet* set = calloc(1, sizeof(VKPipelineSet));
VK_RUNTIME_ASSERT(set);
VKDevice* device = pipelineContext->device;
VKShaders* shaders = pipelineContext->shaders;
// Setup default pipeline parameters.
PipelineCreateState base;
VKPipelines_InitPipelineCreateState(&base);
base.createInfo.layout = pipelineContext->pipelineLayout;
base.createInfo.renderPass = renderPassContext->renderPass;
base.colorBlendState.pAttachments = &BLEND_STATE;
assert(base.dynamicState.dynamicStateCount <= SARRAY_COUNT_OF(base.dynamicStates));
ShaderStages stages[PIPELINE_COUNT];
VkGraphicsPipelineCreateInfo createInfos[PIPELINE_COUNT];
for (uint32_t i = 0; i < PIPELINE_COUNT; i++) {
createInfos[i] = base.createInfo;
createInfos[i].pStages = stages[i].createInfos;
}
static const VkVertexInputAttributeDescription positionAttribute = {
.location = 0,
.binding = 0,
.format = VK_FORMAT_R32G32_SFLOAT,
.offset = 0
};
static const VkVertexInputAttributeDescription texcoordAttribute = {
.location = 1,
.binding = 0,
.format = VK_FORMAT_R32G32_SFLOAT,
.offset = sizeof(float) * 2
};
{ // Setup plain color pipelines.
MAKE_INPUT_STATE(VKVertex, positionAttribute);
createInfos[PIPELINE_DRAW_COLOR].pVertexInputState = createInfos[PIPELINE_FILL_COLOR].pVertexInputState = &inputState;
createInfos[PIPELINE_FILL_COLOR].pInputAssemblyState = &INPUT_ASSEMBLY_STATE_TRIANGLE_LIST;
createInfos[PIPELINE_DRAW_COLOR].pInputAssemblyState = &INPUT_ASSEMBLY_STATE_LINE_LIST;
stages[PIPELINE_DRAW_COLOR] = stages[PIPELINE_FILL_COLOR] = (ShaderStages) {{ shaders->color_vert, shaders->color_frag }};
}
{ // Setup texture pipeline.
MAKE_INPUT_STATE(VKTxVertex, positionAttribute, texcoordAttribute);
createInfos[PIPELINE_BLIT].pVertexInputState = &inputState;
createInfos[PIPELINE_BLIT].pInputAssemblyState = &INPUT_ASSEMBLY_STATE_TRIANGLE_STRIP;
createInfos[PIPELINE_BLIT].layout = pipelineContext->texturePipelineLayout;
stages[PIPELINE_BLIT] = (ShaderStages) {{ shaders->blit_vert, shaders->blit_frag }};
}
// Create pipelines.
// TODO pipeline cache
VK_IF_ERROR(device->vkCreateGraphicsPipelines(device->handle, VK_NULL_HANDLE, PIPELINE_COUNT,
createInfos, NULL, set->pipelines)) VK_UNHANDLED_ERROR();
J2dRlsTraceLn(J2D_TRACE_INFO, "VKPipelines_CreatePipelineSet");
return set;
}
static VkResult VKPipelines_InitRenderPass(VKDevice* device, VKRenderPassContext* renderPassContext) {
VkAttachmentDescription colorAttachment = {
.format = renderPassContext->format,
.samples = VK_SAMPLE_COUNT_1_BIT,
.loadOp = VK_ATTACHMENT_LOAD_OP_LOAD,
.storeOp = VK_ATTACHMENT_STORE_OP_STORE,
.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
.initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
.finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL
};
VkAttachmentReference colorReference = {
.attachment = 0,
.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL
};
VkSubpassDescription subpassDescription = {
.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
.colorAttachmentCount = 1,
.pColorAttachments = &colorReference
};
// TODO this is probably not needed?
// // Subpass dependencies for layout transitions
// VkSubpassDependency dependency = {
// .srcSubpass = VK_SUBPASS_EXTERNAL,
// .dstSubpass = 0,
// .srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
// .dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
// .srcAccessMask = 0,
// .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT
// };
VkRenderPassCreateInfo createInfo = {
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
.attachmentCount = 1,
.pAttachments = &colorAttachment,
.subpassCount = 1,
.pSubpasses = &subpassDescription,
.dependencyCount = 0,
.pDependencies = NULL
};
return device->vkCreateRenderPass(device->handle, &createInfo, NULL, &renderPassContext->renderPass);
}
static void VKPipelines_DestroyRenderPassContext(VKRenderPassContext* renderPassContext) {
if (renderPassContext == NULL) return;
VKDevice* device = renderPassContext->pipelineContext->device;
assert(device != NULL);
VKPipelines_DestroyPipelineSet(device, renderPassContext->pipelineSet);
device->vkDestroyRenderPass(device->handle, renderPassContext->renderPass, NULL);
free(renderPassContext);
}
static VKRenderPassContext* VKPipelines_CreateRenderPassContext(VKPipelineContext* pipelineContext, VkFormat format) {
assert(pipelineContext != NULL && pipelineContext->device != NULL);
VKRenderPassContext* renderPassContext = calloc(1, sizeof(VKRenderPassContext));
VK_RUNTIME_ASSERT(renderPassContext);
renderPassContext->pipelineContext = pipelineContext;
renderPassContext->format = format;
VK_IF_ERROR(VKPipelines_InitRenderPass(pipelineContext->device, renderPassContext)) {
VKPipelines_DestroyRenderPassContext(renderPassContext);
return NULL;
}
return renderPassContext;
}
static VkResult VKPipelines_InitPipelineLayouts(VKDevice* device, VKPipelineContext* pipelines) {
assert(device != NULL && pipelines != NULL);
VkResult result;
    // We want all our pipelines to have the same push constant range to ensure common state is compatible between pipelines.
VkPushConstantRange pushConstantRange = {
.stageFlags = VK_SHADER_STAGE_VERTEX_BIT,
.offset = 0,
.size = sizeof(float) * 4
};
VkPipelineLayoutCreateInfo createInfo = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
.setLayoutCount = 0,
.pushConstantRangeCount = 1,
.pPushConstantRanges = &pushConstantRange
};
result = device->vkCreatePipelineLayout(device->handle, &createInfo, NULL, &pipelines->pipelineLayout);
VK_IF_ERROR(result) return result;
VkDescriptorSetLayoutBinding textureLayoutBinding = {
.binding = 0,
.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
.descriptorCount = 1,
.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
.pImmutableSamplers = NULL
};
VkDescriptorSetLayoutCreateInfo textureDescriptorSetLayoutCreateInfo = {
.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
.bindingCount = 1,
.pBindings = &textureLayoutBinding
};
result = device->vkCreateDescriptorSetLayout(device->handle, &textureDescriptorSetLayoutCreateInfo, NULL, &pipelines->textureDescriptorSetLayout);
VK_IF_ERROR(result) return result;
createInfo.setLayoutCount = 1;
createInfo.pSetLayouts = &pipelines->textureDescriptorSetLayout;
result = device->vkCreatePipelineLayout(device->handle, &createInfo, NULL, &pipelines->texturePipelineLayout);
VK_IF_ERROR(result) return result;
return VK_SUCCESS;
}
VKPipelineContext* VKPipelines_CreateContext(VKDevice* device) {
assert(device != NULL);
VKPipelineContext* pipelineContext = (VKPipelineContext*) calloc(1, sizeof(VKPipelineContext));
VK_RUNTIME_ASSERT(pipelineContext);
pipelineContext->device = device;
pipelineContext->shaders = VKPipelines_CreateShaders(device);
if (pipelineContext->shaders == NULL) {
VKPipelines_DestroyContext(pipelineContext);
return NULL;
}
VK_IF_ERROR(VKPipelines_InitPipelineLayouts(device, pipelineContext)) {
VKPipelines_DestroyContext(pipelineContext);
return NULL;
}
// Create sampler.
VkSamplerCreateInfo samplerCreateInfo = {
.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
.magFilter = VK_FILTER_LINEAR,
.minFilter = VK_FILTER_LINEAR,
.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT,
.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT,
.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT
};
VK_IF_ERROR(device->vkCreateSampler(device->handle, &samplerCreateInfo, NULL, &pipelineContext->linearRepeatSampler)) {
VKPipelines_DestroyContext(pipelineContext);
return NULL;
}
return pipelineContext;
}
void VKPipelines_DestroyContext(VKPipelineContext* pipelineContext) {
if (pipelineContext == NULL) return;
VKDevice* device = pipelineContext->device;
assert(device != NULL);
for (uint32_t i = 0; i < ARRAY_SIZE(pipelineContext->renderPassContexts); i++) {
VKPipelines_DestroyRenderPassContext(pipelineContext->renderPassContexts[i]);
}
ARRAY_FREE(pipelineContext->renderPassContexts);
VKPipelines_DestroyShaders(device, pipelineContext->shaders);
device->vkDestroySampler(device->handle, pipelineContext->linearRepeatSampler, NULL);
device->vkDestroyPipelineLayout(device->handle, pipelineContext->pipelineLayout, NULL);
device->vkDestroyPipelineLayout(device->handle, pipelineContext->texturePipelineLayout, NULL);
device->vkDestroyDescriptorSetLayout(device->handle, pipelineContext->textureDescriptorSetLayout, NULL);
free(pipelineContext);
}
VKRenderPassContext* VKPipelines_GetRenderPassContext(VKPipelineContext* pipelineContext, VkFormat format) {
assert(pipelineContext != NULL && pipelineContext->device != NULL);
for (uint32_t i = 0; i < ARRAY_SIZE(pipelineContext->renderPassContexts); i++) {
if (pipelineContext->renderPassContexts[i]->format == format) return pipelineContext->renderPassContexts[i];
}
// Not found, create.
VKRenderPassContext* renderPassContext = VKPipelines_CreateRenderPassContext(pipelineContext, format);
ARRAY_PUSH_BACK(pipelineContext->renderPassContexts, renderPassContext);
return renderPassContext;
}
VkPipeline VKPipelines_GetPipeline(VKRenderPassContext* renderPassContext, VKPipeline pipeline) {
if (renderPassContext->pipelineSet == NULL) {
renderPassContext->pipelineSet = VKPipelines_CreatePipelineSet(renderPassContext);
}
return renderPassContext->pipelineSet->pipelines[pipeline];
}

@@ -0,0 +1,82 @@
// Copyright 2024 JetBrains s.r.o.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License version 2 only, as
// published by the Free Software Foundation. Oracle designates this
// particular file as subject to the "Classpath" exception as provided
// by Oracle in the LICENSE file that accompanied this code.
//
// This code is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// version 2 for more details (a copy is included in the LICENSE file that
// accompanied this code).
//
// You should have received a copy of the GNU General Public License version
// 2 along with this work; if not, write to the Free Software Foundation,
// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
//
// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
// or visit www.oracle.com if you need additional information or have any
// questions.
#ifndef VKPipelines_h_Included
#define VKPipelines_h_Included
#include "VKTypes.h"
typedef struct VKShaders VKShaders;
typedef struct VKPipelineSet VKPipelineSet;
/**
 * All pipeline types; use these to index into VKPipelineSet.pipelines.
*/
typedef enum {
PIPELINE_FILL_COLOR = 0,
PIPELINE_DRAW_COLOR = 1,
PIPELINE_BLIT = 2,
PIPELINE_COUNT = 3
} VKPipeline;
/**
* Global pipeline context.
*/
struct VKPipelineContext {
VKDevice* device;
VkPipelineLayout pipelineLayout;
VkPipelineLayout texturePipelineLayout;
VkDescriptorSetLayout textureDescriptorSetLayout;
VkSampler linearRepeatSampler;
VKShaders* shaders;
VKRenderPassContext** renderPassContexts;
};
/**
* Per-format context.
*/
struct VKRenderPassContext {
VKPipelineContext* pipelineContext;
VkFormat format;
VkRenderPass renderPass;
VKPipelineSet* pipelineSet; // TODO we will need a real hash map for this in the future.
};
typedef struct {
float x, y;
} VKVertex;
typedef struct {
float px, py;
float u, v;
} VKTxVertex;
VKPipelineContext* VKPipelines_CreateContext(VKDevice* device);
void VKPipelines_DestroyContext(VKPipelineContext* pipelines);
VKRenderPassContext* VKPipelines_GetRenderPassContext(VKPipelineContext* pipelineContext, VkFormat format);
VkPipeline VKPipelines_GetPipeline(VKRenderPassContext* renderPassContext, VKPipeline pipeline);
#endif //VKPipelines_h_Included

@@ -32,10 +32,38 @@
#include "sun_java2d_vulkan_VKBlitLoops.h"
#include "Trace.h"
#include "jlong.h"
#include "VKRenderQueue.h"
#include "VKBase.h"
#include "VKSurfaceData.h"
#include "VKRenderer.h"
#include "VKVertex.h"
#include "VKUtil.h"
/*
* The following macros are used to pick values (of the specified type) off
* the queue.
*/
#define NEXT_VAL(buf, type) (((type *)((buf) = ((unsigned char*)(buf)) + sizeof(type)))[-1])
#define NEXT_BYTE(buf) NEXT_VAL(buf, unsigned char)
#define NEXT_INT(buf) NEXT_VAL(buf, jint)
#define NEXT_FLOAT(buf) NEXT_VAL(buf, jfloat)
#define NEXT_BOOLEAN(buf) (jboolean)NEXT_INT(buf)
#define NEXT_LONG(buf) NEXT_VAL(buf, jlong)
#define NEXT_DOUBLE(buf) NEXT_VAL(buf, jdouble)
#define NEXT_SURFACE(buf) ((VKSDOps*) (SurfaceDataOps*) jlong_to_ptr(NEXT_LONG(buf)))
/*
* Increments a pointer (buf) by the given number of bytes.
*/
#define SKIP_BYTES(buf, numbytes) (buf) = ((unsigned char*)(buf)) + (numbytes)
/*
* Extracts a value at the given offset from the provided packed value.
*/
#define EXTRACT_VAL(packedval, offset, mask) \
(((packedval) >> (offset)) & (mask))
#define EXTRACT_BYTE(packedval, offset) \
(unsigned char)EXTRACT_VAL(packedval, offset, 0xff)
#define EXTRACT_BOOLEAN(packedval, offset) \
(jboolean)EXTRACT_VAL(packedval, offset, 0x1)
#define BYTES_PER_POLY_POINT \
sun_java2d_pipe_BufferedRenderPipe_BYTES_PER_POLY_POINT
@@ -63,19 +91,16 @@
#define OFFSET_XFORM sun_java2d_vulkan_VKBlitLoops_OFFSET_XFORM
#define OFFSET_ISOBLIT sun_java2d_vulkan_VKBlitLoops_OFFSET_ISOBLIT
static VKSDOps *dstOps = NULL;
static VKLogicalDevice* currentDevice;
// TODO move this property to special drawing context structure
static int color = -1;
// Rendering context is only accessed from VKRenderQueue_flushBuffer,
// which is only called from queue flusher thread, no need for synchronization.
static VKRenderingContext context = {};
JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
(JNIEnv *env, jobject oglrq, jlong buf, jint limit)
{
unsigned char *b, *end;
J2dTraceLn1(J2D_TRACE_INFO,
J2dTraceLn1(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: limit=%d", limit);
b = (unsigned char *)jlong_to_ptr(buf);
@@ -160,8 +185,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
J2dRlsTraceLn8(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: DRAW_PARALLELOGRAM(%f, %f, %f, %f, %f, %f, %f, %f)",
x11, y11, dx21, dy21, dx12, dy12, lwr21, lwr12);
VKRenderer_RenderParallelogram(currentDevice, currentDevice->drawColorPoly,
color, dstOps, x11, y11, dx21, dy21, dx12, dy12);
VKRenderer_RenderParallelogram(&context, PIPELINE_DRAW_COLOR, x11, y11, dx21, dy21, dx12, dy12);
}
break;
case sun_java2d_pipe_BufferedOpCodes_DRAW_AAPARALLELOGRAM:
@@ -189,7 +213,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
jint h = NEXT_INT(b);
J2dRlsTraceLn4(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: FILL_RECT(%d, %d, %d, %d)", x, y, w, h);
VKRenderer_FillRect(currentDevice, x, y, w, h);
VKRenderer_FillRect(&context, x, y, w, h);
}
break;
case sun_java2d_pipe_BufferedOpCodes_FILL_SPANS:
@@ -197,7 +221,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
jint count = NEXT_INT(b);
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: FILL_SPANS");
VKRenderer_FillSpans(currentDevice, color, dstOps, count, (jint *)b);
VKRenderer_FillSpans(&context, count, (jint *)b);
SKIP_BYTES(b, count * BYTES_PER_SPAN);
}
break;
@@ -212,8 +236,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
J2dRlsTraceLn6(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: FILL_PARALLELOGRAM(%f, %f, %f, %f, %f, %f)",
x11, y11, dx21, dy21, dx12, dy12);
VKRenderer_RenderParallelogram(currentDevice, currentDevice->fillColorPoly,
color, dstOps, x11, y11, dx21, dy21, dx12, dy12);
VKRenderer_RenderParallelogram(&context, PIPELINE_FILL_COLOR, x11, y11, dx21, dy21, dx12, dy12);
}
break;
case sun_java2d_pipe_BufferedOpCodes_FILL_AAPARALLELOGRAM:
@@ -426,24 +449,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: SET_SURFACES");
dstOps = (VKSDOps *) jlong_to_ptr(dst);
if (dstOps != NULL) {
currentDevice = dstOps->device;
if (dstOps->drawableType == VKSD_WINDOW && dstOps->bgColorUpdated) {
VKWinSDOps *winDstOps = (VKWinSDOps *)dstOps;
currentDevice->vkWaitForFences(currentDevice->device, 1, &currentDevice->inFlightFence, VK_TRUE, UINT64_MAX);
currentDevice->vkResetFences(currentDevice->device, 1, &currentDevice->inFlightFence);
currentDevice->vkResetCommandBuffer(currentDevice->commandBuffer, 0);
VKRenderer_BeginRendering(currentDevice);
VKRenderer_ColorRenderMaxRect(currentDevice, winDstOps->vksdOps.image, winDstOps->vksdOps.bgColor);
VKRenderer_EndRendering(currentDevice, VK_FALSE, VK_FALSE);
}
}
context.surface = dst;
}
break;
case sun_java2d_pipe_BufferedOpCodes_SET_SCRATCH_SURFACE:
@@ -451,7 +457,7 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
jlong pConfigInfo = NEXT_LONG(b);
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: SET_SCRATCH_SURFACE");
dstOps = NULL;
context.surface = NULL;
}
break;
case sun_java2d_pipe_BufferedOpCodes_FLUSH_SURFACE:
@@ -473,14 +479,14 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
jlong pConfigInfo = NEXT_LONG(b);
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: DISPOSE_CONFIG")
dstOps = NULL;
context.surface = NULL;
}
break;
case sun_java2d_pipe_BufferedOpCodes_INVALIDATE_CONTEXT:
{
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: INVALIDATE_CONTEXT");
dstOps = NULL;
context.surface = NULL;
}
break;
case sun_java2d_pipe_BufferedOpCodes_SYNC:
@@ -490,6 +496,17 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
}
break;
case sun_java2d_pipe_BufferedOpCodes_CONFIGURE_SURFACE:
{
VKSDOps* surface = NEXT_SURFACE(b);
jint width = NEXT_INT(b);
jint height = NEXT_INT(b);
J2dRlsTraceLn2(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: CONFIGURE_SURFACE %dx%d", width, height);
VKRenderer_ConfigureSurface(surface, (VkExtent2D) {width, height});
}
break;
// multibuffering ops
case sun_java2d_pipe_BufferedOpCodes_SWAP_BUFFERS:
{
@@ -499,6 +516,16 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
}
break;
case sun_java2d_pipe_BufferedOpCodes_FLUSH_BUFFER:
{
VKSDOps* surface = NEXT_SURFACE(b);
J2dRlsTraceLn(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: FLUSH_BUFFER");
VKRenderer_FlushSurface(surface);
}
break;
// special no-op (mainly used for achieving 8-byte alignment)
case sun_java2d_pipe_BufferedOpCodes_NOOP:
{
@@ -516,10 +543,11 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
break;
case sun_java2d_pipe_BufferedOpCodes_SET_COLOR:
{
jint pixel = NEXT_INT(b);
color = pixel;
J2dRlsTraceLn1(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: SET_COLOR %d", pixel);
jint javaColor = NEXT_INT(b);
context.color = VKUtil_DecodeJavaColor(javaColor);
J2dRlsTraceLn5(J2D_TRACE_VERBOSE,
"VKRenderQueue_flushBuffer: SET_COLOR 0x%08x, linear_rgba={%.3f, %.3f, %.3f, %.3f}",
javaColor, context.color.r, context.color.g, context.color.b, context.color.a);
}
break;
case sun_java2d_pipe_BufferedOpCodes_SET_GRADIENT_PAINT:
@@ -652,43 +680,10 @@ JNIEXPORT void JNICALL Java_sun_java2d_vulkan_VKRenderQueue_flushBuffer
}
}
if (dstOps != NULL && dstOps->drawableType == VKSD_WINDOW && currentDevice != NULL) {
VKWinSDOps *winDstOps = (VKWinSDOps *)dstOps;
currentDevice->vkWaitForFences(currentDevice->device, 1, &currentDevice->inFlightFence, VK_TRUE, UINT64_MAX);
currentDevice->vkResetFences(currentDevice->device, 1, &currentDevice->inFlightFence);
uint32_t imageIndex;
currentDevice->vkAcquireNextImageKHR(currentDevice->device, winDstOps->swapchainKhr, UINT64_MAX,
currentDevice->imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);
currentDevice->vkResetCommandBuffer(currentDevice->commandBuffer, 0);
VKRenderer_BeginRendering(currentDevice);
VKRenderer_TextureRender(
currentDevice,
&winDstOps->swapChainImages[imageIndex],
winDstOps->vksdOps.image,
currentDevice->blitVertexBuffer->buffer, 4
);
VKRenderer_EndRendering(currentDevice, VK_TRUE, VK_TRUE);
VkSemaphore signalSemaphores[] = {currentDevice->renderFinishedSemaphore};
VkSwapchainKHR swapChains[] = {winDstOps->swapchainKhr};
VkPresentInfoKHR presentInfo = {
.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
.waitSemaphoreCount = 1,
.pWaitSemaphores = signalSemaphores,
.swapchainCount = 1,
.pSwapchains = swapChains,
.pImageIndices = &imageIndex
};
currentDevice->vkQueuePresentKHR(currentDevice->queue, &presentInfo);
// Flush all pending GPU work
VKGraphicsEnvironment* ge = VKGE_graphics_environment();
for (uint32_t i = 0; i < ARRAY_SIZE(ge->devices); i++) {
VKRenderer_Flush(ge->devices[i].renderer);
}
}

@@ -1,72 +0,0 @@
/*
* Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#ifndef VKRenderQueue_h_Included
#define VKRenderQueue_h_Included
/*
* The following macros are used to pick values (of the specified type) off
* the queue.
*/
#define NEXT_VAL(buf, type) (((type *)((buf) += sizeof(type)))[-1])
#define NEXT_BYTE(buf) NEXT_VAL(buf, unsigned char)
#define NEXT_INT(buf) NEXT_VAL(buf, jint)
#define NEXT_FLOAT(buf) NEXT_VAL(buf, jfloat)
#define NEXT_BOOLEAN(buf) (jboolean)NEXT_INT(buf)
#define NEXT_LONG(buf) NEXT_VAL(buf, jlong)
#define NEXT_DOUBLE(buf) NEXT_VAL(buf, jdouble)
#define NEXT_SURFACE(buf) ((VKSDOps*) (SurfaceDataOps*) jlong_to_ptr(NEXT_LONG(buf)))
/*
* Increments a pointer (buf) by the given number of bytes.
*/
#define SKIP_BYTES(buf, numbytes) (buf) = ((unsigned char*)buf) + (numbytes)
/*
* Extracts a value at the given offset from the provided packed value.
*/
#define EXTRACT_VAL(packedval, offset, mask) \
(((packedval) >> (offset)) & (mask))
#define EXTRACT_BYTE(packedval, offset) \
(unsigned char)EXTRACT_VAL(packedval, offset, 0xff)
#define EXTRACT_BOOLEAN(packedval, offset) \
(jboolean)EXTRACT_VAL(packedval, offset, 0x1)
/*
* The following macros allow the caller to return (or continue) if the
* provided value is NULL. (The strange else clause is included below to
* allow for a trailing ';' after RETURN/CONTINUE_IF_NULL() invocations.)
*/
#define ACT_IF_NULL(ACTION, value) \
if ((value) == NULL) { \
J2dTraceLn1(J2D_TRACE_ERROR, \
"%s is null", #value); \
ACTION; \
} else do { } while (0)
#define RETURN_IF_NULL(value) ACT_IF_NULL(return, value)
#define CONTINUE_IF_NULL(value) ACT_IF_NULL(continue, value)
#endif /* VKRenderQueue_h_Included */

File diff suppressed because it is too large
@@ -27,50 +27,58 @@
#ifndef VKRenderer_h_Included
#define VKRenderer_h_Included
#include "j2d_md.h"
#include "VKBase.h"
#include "VKBuffer.h"
#include "VKImage.h"
#include "VKSurfaceData.h"
#include "VKTypes.h"
#include "VKPipelines.h"
struct VKRenderer {
VkDescriptorSetLayout descriptorSetLayout;
VkDescriptorPool descriptorPool;
VkDescriptorSet descriptorSets;
VkPipelineLayout pipelineLayout;
VkPipeline graphicsPipeline;
VkPrimitiveTopology primitiveTopology;
struct VKRenderingContext {
VKSDOps* surface;
Color color;
};
VKRenderer *VKRenderer_CreateRenderColorPoly(VKLogicalDevice* logicalDevice, VkPrimitiveTopology primitiveTopology, VkPolygonMode polygonMode);
VKRenderer* VKRenderer_Create(VKDevice* device);
VKRenderer* VKRenderer_CreateFillTexturePoly(VKLogicalDevice* logicalDevice);
void VKRenderer_Destroy(VKRenderer* renderer);
VKRenderer* VKRenderer_CreateFillMaxColorPoly(VKLogicalDevice* logicalDevice);
/**
* Wait for all rendering commands to complete.
*/
void VKRenderer_Sync(VKRenderer* renderer);
void VKRenderer_BeginRendering(VKLogicalDevice* logicalDevice);
/**
* Submit pending command buffer, completed render passes & presentation requests.
*/
void VKRenderer_Flush(VKRenderer* renderer);
void VKRenderer_EndRendering(VKLogicalDevice* logicalDevice,
VkBool32 notifyRenderFinish, VkBool32 waitForDisplayImage);
/**
* Cancel render pass of the surface, release all associated resources and deallocate render pass.
*/
void VKRenderer_ReleaseRenderPass(VKSDOps* surface);
void VKRenderer_TextureRender(VKLogicalDevice* logicalDevice,
/**
* Flush pending render pass and queue surface for presentation (if applicable).
*/
void VKRenderer_FlushSurface(VKSDOps* surface);
/**
 * Request a size for the surface. This has no effect if the surface is already of that size.
* Actual resize will be performed later, before starting a new frame.
*/
void VKRenderer_ConfigureSurface(VKSDOps* surface, VkExtent2D extent);
// Blit ops.
void VKRenderer_TextureRender(VKRenderingContext* context,
VKImage *destImage, VKImage *srcImage,
VkBuffer vertexBuffer, uint32_t vertexNum);
void VKRenderer_ColorRender(VKLogicalDevice* logicalDevice,
VKImage *destImage,
VKRenderer *renderer, uint32_t rgba, VkBuffer vertexBuffer, uint32_t vertexNum);
void VKRenderer_ColorRenderMaxRect(VKLogicalDevice* logicalDevice, VKImage *destImage, uint32_t rgba);
// fill ops
void VKRenderer_FillRect(VKLogicalDevice* logicalDevice, jint x, jint y, jint w, jint h);
void VKRenderer_RenderParallelogram(VKLogicalDevice* logicalDevice,
VKRenderer* renderer, jint color, VKSDOps *dstOps,
void VKRenderer_FillRect(VKRenderingContext* context, jint x, jint y, jint w, jint h);
void VKRenderer_RenderParallelogram(VKRenderingContext* context, VKPipeline pipeline,
jfloat x11, jfloat y11,
jfloat dx21, jfloat dy21,
jfloat dx12, jfloat dy12);
void VKRenderer_FillSpans(VKLogicalDevice* logicalDevice, jint color, VKSDOps *dstOps, jint spanCount, jint *spans);
void VKRenderer_FillSpans(VKRenderingContext* context, jint spanCount, jint *spans);
jboolean VK_CreateLogicalDeviceRenderers(VKLogicalDevice* logicalDevice);
#endif //VKRenderer_h_Included

@@ -26,104 +26,191 @@
#ifndef HEADLESS
#include <stdlib.h>
#include "jlong.h"
#include "SurfaceData.h"
#include "VKUtil.h"
#include "VKBase.h"
#include "VKSurfaceData.h"
#include "VKImage.h"
#include "VKRenderer.h"
#include <Trace.h>
void VKSD_InitImageSurface(VKLogicalDevice* logicalDevice, VKSDOps *vksdo) {
if (vksdo->image != VK_NULL_HANDLE) {
return;
}
/**
* Release VKSDOps resources & reset to initial state.
*/
static void VKSD_ResetImageSurface(VKSDOps* vksdo) {
if (vksdo == NULL) return;
vksdo->device = logicalDevice;
vksdo->image = VKImage_Create(logicalDevice, vksdo->width, vksdo->height, VK_FORMAT_R8G8B8A8_UNORM, VK_IMAGE_TILING_LINEAR,
VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
if (!vksdo->image)
{
J2dRlsTrace(J2D_TRACE_ERROR, "Cannot create image\n");
return;
}
    // ReleaseRenderPass also waits until the surface resources are no longer in use by the device.
VKRenderer_ReleaseRenderPass(vksdo);
if (!VKImage_CreateFramebuffer(logicalDevice, vksdo->image, logicalDevice->renderPass)) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Cannot create framebuffer for window surface")
return;
if (vksdo->device != NULL && vksdo->image != NULL) {
VKImage_free(vksdo->device, vksdo->image);
vksdo->image = NULL;
}
}
void VKSD_InitWindowSurface(VKLogicalDevice* logicalDevice, VKWinSDOps *vkwinsdo) {
void VKSD_ResetSurface(VKSDOps* vksdo) {
VKSD_ResetImageSurface(vksdo);
// Release VKWinSDOps resources, if applicable.
if (vksdo->drawableType == VKSD_WINDOW) {
VKWinSDOps* vkwinsdo = (VKWinSDOps*) vksdo;
ARRAY_FREE(vkwinsdo->swapchainImages);
vkwinsdo->swapchainImages = NULL;
if (vkwinsdo->vksdOps.device != NULL && vkwinsdo->swapchain != VK_NULL_HANDLE) {
vkwinsdo->vksdOps.device->vkDestroySwapchainKHR(vkwinsdo->vksdOps.device->handle, vkwinsdo->swapchain, NULL);
}
if (vkwinsdo->surface != VK_NULL_HANDLE) {
VKGraphicsEnvironment* ge = VKGE_graphics_environment();
if (ge != NULL) ge->vkDestroySurfaceKHR(ge->vkInstance, vkwinsdo->surface, NULL);
}
vkwinsdo->swapchain = VK_NULL_HANDLE;
vkwinsdo->surface = VK_NULL_HANDLE;
vkwinsdo->swapchainDevice = NULL;
}
}
VkBool32 VKSD_ConfigureImageSurface(VKSDOps* vksdo) {
// Initialize the device. currentDevice can be changed on the fly, and we must reconfigure surfaces accordingly.
VKDevice* device = VKGE_graphics_environment()->currentDevice;
if (device != vksdo->device) {
VKSD_ResetImageSurface(vksdo);
vksdo->device = device;
J2dRlsTraceLn1(J2D_TRACE_INFO, "VKSD_ConfigureImageSurface(%p): device updated", vksdo);
}
// Initialize image.
if (vksdo->requestedExtent.width > 0 && vksdo->requestedExtent.height > 0 && (vksdo->image == NULL ||
vksdo->requestedExtent.width != vksdo->image->extent.width ||
vksdo->requestedExtent.height != vksdo->image->extent.height)) {
// VK_FORMAT_B8G8R8A8_UNORM is the most widely-supported format for our use.
VKImage* image = VKImage_Create(device, vksdo->requestedExtent.width, vksdo->requestedExtent.height,
VK_FORMAT_B8G8R8A8_UNORM, VK_IMAGE_TILING_OPTIMAL,
VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
if (image == NULL) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "VKSD_ConfigureImageSurface(%p): cannot create image", vksdo);
VK_UNHANDLED_ERROR();
}
VKSD_ResetImageSurface(vksdo);
vksdo->image = image;
J2dRlsTraceLn3(J2D_TRACE_INFO, "VKSD_ConfigureImageSurface(%p): image updated %dx%d", vksdo, image->extent.width, image->extent.height);
}
return vksdo->image != NULL;
}
VkBool32 VKSD_ConfigureWindowSurface(VKWinSDOps* vkwinsdo) {
// Check that image is ready.
if (vkwinsdo->vksdOps.image == NULL) {
J2dRlsTraceLn1(J2D_TRACE_WARNING, "VKSD_ConfigureWindowSurface(%p): image is not ready", vkwinsdo);
return VK_FALSE;
}
// Check for changes.
if (vkwinsdo->swapchain != VK_NULL_HANDLE && vkwinsdo->swapchainDevice == vkwinsdo->vksdOps.device &&
vkwinsdo->swapchainExtent.width == vkwinsdo->vksdOps.image->extent.width &&
vkwinsdo->swapchainExtent.height == vkwinsdo->vksdOps.image->extent.height) return VK_TRUE;
// Check that surface is ready.
if (vkwinsdo->surface == VK_NULL_HANDLE) {
J2dRlsTraceLn1(J2D_TRACE_WARNING, "VKSD_ConfigureWindowSurface(%p): surface is not ready", vkwinsdo);
return VK_FALSE;
}
VKGraphicsEnvironment* ge = VKGE_graphics_environment();
VkPhysicalDevice physicalDevice = logicalDevice->physicalDevice;
VKDevice* device = vkwinsdo->vksdOps.device;
VkPhysicalDevice physicalDevice = device->physicalDevice;
if (vkwinsdo->swapchainKhr == VK_NULL_HANDLE) {
ge->vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, vkwinsdo->surface, &vkwinsdo->capabilitiesKhr);
ge->vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, vkwinsdo->surface, &vkwinsdo->formatsKhrCount, NULL);
if (vkwinsdo->formatsKhrCount == 0) {
J2dRlsTrace(J2D_TRACE_ERROR, "No formats for swapchain found\n");
return;
}
vkwinsdo->formatsKhr = calloc(vkwinsdo->formatsKhrCount, sizeof(VkSurfaceFormatKHR));
ge->vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, vkwinsdo->surface, &vkwinsdo->formatsKhrCount,
vkwinsdo->formatsKhr);
VkSurfaceCapabilitiesKHR capabilities;
VK_IF_ERROR(ge->vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, vkwinsdo->surface, &capabilities)) {
return VK_FALSE;
}
ge->vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, vkwinsdo->surface,
&vkwinsdo->presentModeKhrCount, NULL);
uint32_t formatCount;
VK_IF_ERROR(ge->vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, vkwinsdo->surface, &formatCount, NULL)) {
return VK_FALSE;
}
VkSurfaceFormatKHR formats[formatCount];
VK_IF_ERROR(ge->vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, vkwinsdo->surface, &formatCount, formats)) {
return VK_FALSE;
}
if (vkwinsdo->presentModeKhrCount == 0) {
J2dRlsTrace(J2D_TRACE_ERROR, "No present modes found\n");
return;
}
vkwinsdo->presentModesKhr = calloc(vkwinsdo->presentModeKhrCount, sizeof(VkPresentModeKHR));
ge->vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, vkwinsdo->surface, &vkwinsdo->presentModeKhrCount,
vkwinsdo->presentModesKhr);
VkExtent2D extent = {
(uint32_t) (vkwinsdo->vksdOps.width),
(uint32_t) (vkwinsdo->vksdOps.height)
};
uint32_t imageCount = vkwinsdo->capabilitiesKhr.minImageCount + 1;
if (vkwinsdo->capabilitiesKhr.maxImageCount > 0 && imageCount > vkwinsdo->capabilitiesKhr.maxImageCount) {
imageCount = vkwinsdo->capabilitiesKhr.maxImageCount;
}
VkSwapchainCreateInfoKHR createInfoKhr = {
.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
.surface = vkwinsdo->surface,
.minImageCount = imageCount,
.imageFormat = vkwinsdo->formatsKhr[0].format,
.imageColorSpace = vkwinsdo->formatsKhr[0].colorSpace,
.imageExtent = extent,
.imageArrayLayers = 1,
.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
.queueFamilyIndexCount = 0,
.pQueueFamilyIndices = NULL,
.preTransform = vkwinsdo->capabilitiesKhr.currentTransform,
.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
.presentMode = VK_PRESENT_MODE_FIFO_KHR,
.clipped = VK_TRUE
};
if (logicalDevice->vkCreateSwapchainKHR(logicalDevice->device, &createInfoKhr, NULL, &vkwinsdo->swapchainKhr) != VK_SUCCESS) {
J2dRlsTrace(J2D_TRACE_ERROR, "Cannot create swapchain\n");
return;
}
vkwinsdo->swapChainImages = VKImage_CreateImageArrayFromSwapChain(
logicalDevice, vkwinsdo->swapchainKhr,
logicalDevice->renderPass,
vkwinsdo->formatsKhr[0].format, extent);
if (!vkwinsdo->swapChainImages) {
J2dRlsTraceLn(J2D_TRACE_ERROR, "Cannot get swapchain images");
return;
}
VkSurfaceFormatKHR* format = NULL;
J2dRlsTraceLn1(J2D_TRACE_INFO, "VKSD_ConfigureWindowSurface(%p): available swapchain formats:", vkwinsdo);
for (uint32_t i = 0; i < formatCount; i++) {
J2dRlsTraceLn2(J2D_TRACE_INFO, " format=%d, colorSpace=%d", formats[i].format, formats[i].colorSpace);
// We draw with sRGB colors (see VKUtil_DecodeJavaColor()), so we don't want Vulkan to do color space
// conversions when drawing to surface. We use *_UNORM formats, so that colors are written "as is".
// With VK_COLOR_SPACE_SRGB_NONLINEAR_KHR these colors will be interpreted by presentation engine as sRGB.
if (formats[i].colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR && (
formats[i].format == VK_FORMAT_A8B8G8R8_UNORM_PACK32 ||
formats[i].format == VK_FORMAT_B8G8R8A8_UNORM ||
formats[i].format == VK_FORMAT_R8G8B8A8_UNORM ||
formats[i].format == VK_FORMAT_B8G8R8_UNORM ||
formats[i].format == VK_FORMAT_R8G8B8_UNORM
)) {
format = &formats[i];
}
}
if (format == NULL) {
J2dRlsTraceLn1(J2D_TRACE_ERROR, "VKSD_ConfigureWindowSurface(%p): no suitable format found", vkwinsdo);
return VK_FALSE;
}
// TODO inspect and choose present mode
uint32_t presentModeCount;
VK_IF_ERROR(ge->vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, vkwinsdo->surface, &presentModeCount, NULL)) {
return VK_FALSE;
}
VkPresentModeKHR presentModes[presentModeCount];
VK_IF_ERROR(ge->vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, vkwinsdo->surface, &presentModeCount, presentModes)) {
VK_UNHANDLED_ERROR();
}
uint32_t imageCount = capabilities.minImageCount + 1;
if (capabilities.maxImageCount > 0 && imageCount > capabilities.maxImageCount) {
imageCount = capabilities.maxImageCount;
}
VkSwapchainKHR swapchain;
VkSwapchainCreateInfoKHR createInfoKhr = {
.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
.surface = vkwinsdo->surface,
.minImageCount = imageCount,
.imageFormat = format->format,
.imageColorSpace = format->colorSpace,
.imageExtent = vkwinsdo->vksdOps.image->extent, // TODO consider capabilities.currentExtent, capabilities.minImageExtent and capabilities.maxImageExtent
.imageArrayLayers = 1,
.imageUsage = VK_IMAGE_USAGE_TRANSFER_DST_BIT,
.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
.queueFamilyIndexCount = 0,
.pQueueFamilyIndices = NULL,
.preTransform = capabilities.currentTransform,
.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
.presentMode = VK_PRESENT_MODE_FIFO_KHR, // TODO need more flexibility
.clipped = VK_TRUE,
.oldSwapchain = vkwinsdo->swapchain
};
VK_IF_ERROR(device->vkCreateSwapchainKHR(device->handle, &createInfoKhr, NULL, &swapchain)) {
return VK_FALSE;
}
J2dRlsTraceLn1(J2D_TRACE_INFO, "VKSD_ConfigureWindowSurface(%p): swapchain created", vkwinsdo);
if (vkwinsdo->swapchain != VK_NULL_HANDLE) {
// Destroy old swapchain.
// TODO is it possible that old swapchain is still being presented, can we destroy it right now?
device->vkDestroySwapchainKHR(device->handle, vkwinsdo->swapchain, NULL);
J2dRlsTraceLn1(J2D_TRACE_INFO, "VKSD_ConfigureWindowSurface(%p): old swapchain destroyed", vkwinsdo);
}
vkwinsdo->swapchain = swapchain;
vkwinsdo->swapchainDevice = device;
vkwinsdo->swapchainExtent = vkwinsdo->vksdOps.image->extent;
uint32_t swapchainImageCount;
VK_IF_ERROR(device->vkGetSwapchainImagesKHR(device->handle, vkwinsdo->swapchain, &swapchainImageCount, NULL)) {
return VK_FALSE;
}
ARRAY_RESIZE(vkwinsdo->swapchainImages, swapchainImageCount);
VK_IF_ERROR(device->vkGetSwapchainImagesKHR(device->handle, vkwinsdo->swapchain,
&swapchainImageCount, vkwinsdo->swapchainImages)) {
return VK_FALSE;
}
return VK_TRUE;
}
#endif /* !HEADLESS */
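The selection loop above prefers `*_UNORM` formats paired with `VK_COLOR_SPACE_SRGB_NONLINEAR_KHR`, so the presentation engine interprets the already-sRGB pixel values without any extra conversion. A minimal standalone sketch of that policy, with the Vulkan constants replaced by illustrative integers so it builds without Vulkan headers (`pick_format`, `SurfaceFormat`, and the enum names are hypothetical, not the real API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the handful of Vulkan constants the real loop uses. */
enum { FMT_R8G8B8A8_UNORM = 37, FMT_B8G8R8A8_UNORM = 44, FMT_B8G8R8A8_SRGB = 50 };
enum { CS_SRGB_NONLINEAR = 0 };

typedef struct { int format; int colorSpace; } SurfaceFormat;

/* Return the last matching *_UNORM + sRGB-nonlinear entry, or NULL when no
 * format qualifies, mirroring the loop in VKSD_ConfigureWindowSurface. */
static const SurfaceFormat* pick_format(const SurfaceFormat* formats, size_t count) {
    const SurfaceFormat* match = NULL;
    for (size_t i = 0; i < count; i++) {
        if (formats[i].colorSpace == CS_SRGB_NONLINEAR &&
            (formats[i].format == FMT_B8G8R8A8_UNORM ||
             formats[i].format == FMT_R8G8B8A8_UNORM)) {
            match = &formats[i];
        }
    }
    return match;
}
```

Note that an `*_SRGB` format with the same color space is deliberately skipped: with those formats the device would apply sRGB encoding on write, double-encoding colors that are already sRGB.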

@@ -27,15 +27,15 @@
#ifndef VKSurfaceData_h_Included
#define VKSurfaceData_h_Included
#include <pthread.h>
#include "jni.h"
#include "SurfaceData.h"
#include "sun_java2d_pipe_hw_AccelSurface.h"
#include "VKTypes.h"
#include "VKRenderer.h"
/**
* These are shorthand names for the surface type constants defined in
* VKSurfaceData.java.
* TODO which constants?
*/
#define VKSD_UNDEFINED sun_java2d_pipe_hw_AccelSurface_UNDEFINED
#define VKSD_WINDOW sun_java2d_pipe_hw_AccelSurface_WINDOW
@@ -44,59 +44,46 @@
* The VKSDOps structure describes a native Vulkan surface and contains all
* information pertaining to the native surface.
*/
typedef struct {
SurfaceDataOps sdOps;
jint drawableType;
pthread_mutex_t mutex;
uint32_t width;
uint32_t height;
uint32_t scale; // TODO Is it needed here at all?
uint32_t bgColor;
VkBool32 bgColorUpdated;
VKLogicalDevice* device;
VKImage* image;
// We track any access and write access separately, as read-read access does not need synchronization.
VkPipelineStageFlagBits lastStage;
VkPipelineStageFlagBits lastWriteStage;
VkAccessFlagBits lastAccess;
VkAccessFlagBits lastWriteAccess;
} VKSDOps;
struct VKSDOps {
SurfaceDataOps sdOps;
jint drawableType;
VKDevice* device;
VKImage* image;
Color background;
VkExtent2D requestedExtent;
VKRenderPass* renderPass;
};
/**
* The VKWinSDOps structure describes a native Vulkan surface connected with a window.
* Some information about the more important/different fields:
*
* void *privOps;
* Pointer to native-specific SurfaceData info, such as the
* native Drawable handle and GraphicsConfig data.
*/
typedef struct {
VKSDOps vksdOps;
void *privOps;
VkSurfaceKHR surface;
VkSurfaceCapabilitiesKHR capabilitiesKhr;
VkSurfaceFormatKHR* formatsKhr;
uint32_t formatsKhrCount;
VkPresentModeKHR* presentModesKhr;
uint32_t presentModeKhrCount;
VkSwapchainKHR swapchainKhr;
VKImage* swapChainImages;
} VKWinSDOps;
struct VKWinSDOps {
VKSDOps vksdOps;
VkSurfaceKHR surface;
VkSwapchainKHR swapchain;
VkImage* swapchainImages;
VKDevice* swapchainDevice;
VkExtent2D swapchainExtent;
};
/**
* Exported methods.
* Release all resources of the surface, resetting it to initial state.
* This function must also be used to initialize newly allocated surfaces.
* VKSDOps.drawableType must be properly set before calling this function.
*/
jint VKSD_Lock(JNIEnv *env,
SurfaceDataOps *ops, SurfaceDataRasInfo *pRasInfo,
jint lockflags);
void VKSD_GetRasInfo(JNIEnv *env,
SurfaceDataOps *ops, SurfaceDataRasInfo *pRasInfo);
void VKSD_Unlock(JNIEnv *env,
SurfaceDataOps *ops, SurfaceDataRasInfo *pRasInfo);
void VKSD_Dispose(JNIEnv *env, SurfaceDataOps *ops);
void VKSD_Delete(JNIEnv *env, VKSDOps *vksdo);
void VKSD_ResetSurface(VKSDOps* vksdo);
void VKSD_InitImageSurface(VKLogicalDevice* logicalDevice, VKSDOps *vksdo);
void VKSD_InitWindowSurface(VKLogicalDevice* logicalDevice, VKWinSDOps *vkwinsdo);
/**
* Configure image surface. This [re]initializes the device and surface image.
*/
VkBool32 VKSD_ConfigureImageSurface(VKSDOps* vksdo);
/**
* Configure window surface. This [re]initializes the swapchain.
* VKSD_ConfigureImageSurface must have been called before.
*/
VkBool32 VKSD_ConfigureWindowSurface(VKWinSDOps* vkwinsdo);
#endif /* VKSurfaceData_h_Included */

@@ -0,0 +1,175 @@
/*
* Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#include <stdlib.h>
#include <pthread.h>
#include "VKImage.h"
#include "VKTexturePool.h"
#include "jni.h"
#include "jni_util.h"
#include "Trace.h"
#define TRACE_LOCK 0
#define TRACE_TEX 0
/* lock API */
ATexturePoolLockPrivPtr* VKTexturePoolLock_initImpl(void) {
pthread_mutex_t *l = (pthread_mutex_t*)malloc(sizeof(pthread_mutex_t));
CHECK_NULL_LOG_RETURN(l, NULL, "VKTexturePoolLock_initImpl: could not allocate pthread_mutex_t");
int status = pthread_mutex_init(l, NULL);
if (status != 0) {
free(l);
return NULL;
}
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_initImpl: lock=%p", l);
return (ATexturePoolLockPrivPtr*)l;
}
void VKTexturePoolLock_DisposeImpl(ATexturePoolLockPrivPtr *lock) {
pthread_mutex_t* l = (pthread_mutex_t*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_DisposeImpl: lock=%p", l);
pthread_mutex_destroy(l);
free(l);
}
void VKTexturePoolLock_lockImpl(ATexturePoolLockPrivPtr *lock) {
pthread_mutex_t* l = (pthread_mutex_t*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_lockImpl: lock=%p", l);
pthread_mutex_lock(l);
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_lockImpl: lock=%p - locked", l);
}
void VKTexturePoolLock_unlockImpl(ATexturePoolLockPrivPtr *lock) {
pthread_mutex_t* l = (pthread_mutex_t*)lock;
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_unlockImpl: lock=%p", l);
pthread_mutex_unlock(l);
if (TRACE_LOCK) J2dRlsTraceLn1(J2D_TRACE_VERBOSE, "VKTexturePoolLock_unlockImpl: lock=%p - unlocked", l);
}
/* Texture allocate/free API */
static ATexturePrivPtr* VKTexturePool_createTexture(ADevicePrivPtr *device,
int width,
int height,
long format)
{
CHECK_NULL_RETURN(device, NULL);
VKImage* texture = VKImage_Create((VKDevice*)device, width, height,
(VkFormat)format,
VK_IMAGE_TILING_LINEAR,
VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT);
if IS_NULL(texture) {
J2dRlsTrace(J2D_TRACE_ERROR, "VKTexturePool_createTexture: Cannot create VKImage");
return NULL;
}
// usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
// storage = MTLStorageModeManaged
if (TRACE_TEX) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "VKTexturePool_createTexture: created texture: tex=%p, w=%d h=%d, pf=%d",
texture, width, height, format);
return texture;
}
static int VKTexturePool_bytesPerPixel(long format) {
switch ((VkFormat)format) {
case VK_FORMAT_R8G8B8A8_UNORM:
return 4;
case VK_FORMAT_R8_UNORM:
return 1;
default:
J2dRlsTraceLn1(J2D_TRACE_ERROR, "VKTexturePool_bytesPerPixel: format=%d not supported (4 bytes by default)", format);
return 4;
}
}
static void VKTexturePool_freeTexture(ADevicePrivPtr *device, ATexturePrivPtr *texture) {
CHECK_NULL(device);
CHECK_NULL(texture);
VKImage* tex = (VKImage*)texture;
if (TRACE_TEX) J2dRlsTraceLn4(J2D_TRACE_VERBOSE, "VKTexturePool_freeTexture: free texture: tex=%p, w=%d h=%d, pf=%d",
tex, tex->extent.width, tex->extent.height, tex->format);
VKImage_free((VKDevice*)device, tex);
}
/* VKTexturePoolHandle API */
void VKTexturePoolHandle_ReleaseTexture(VKTexturePoolHandle *handle) {
ATexturePoolHandle_ReleaseTexture(handle);
}
ATexturePrivPtr* VKTexturePoolHandle_GetTexture(VKTexturePoolHandle *handle) {
return ATexturePoolHandle_GetTexture(handle);
}
jint VKTexturePoolHandle_GetRequestedWidth(VKTexturePoolHandle *handle) {
return ATexturePoolHandle_GetRequestedWidth(handle);
}
jint VKTexturePoolHandle_GetRequestedHeight(VKTexturePoolHandle *handle) {
return ATexturePoolHandle_GetRequestedHeight(handle);
}
/* VKTexturePool API */
VKTexturePool* VKTexturePool_InitWithDevice(VKDevice *device) {
CHECK_NULL_RETURN(device, NULL);
// TODO: get vulkan device memory information (1gb fixed here):
uint64_t maxDeviceMemory = 1024 * UNIT_MB;
ATexturePoolLockWrapper *lockWrapper = ATexturePoolLockWrapper_init(&VKTexturePoolLock_initImpl,
&VKTexturePoolLock_DisposeImpl,
&VKTexturePoolLock_lockImpl,
&VKTexturePoolLock_unlockImpl);
return ATexturePool_initWithDevice(device,
(jlong)maxDeviceMemory,
&VKTexturePool_createTexture,
&VKTexturePool_freeTexture,
&VKTexturePool_bytesPerPixel,
lockWrapper,
VK_FORMAT_R8G8B8A8_UNORM);
}
void VKTexturePool_Dispose(VKTexturePool *pool) {
ATexturePoolLockWrapper_Dispose(ATexturePool_getLockWrapper(pool));
ATexturePool_Dispose(pool);
}
ATexturePoolLockWrapper* VKTexturePool_GetLockWrapper(VKTexturePool *pool) {
return ATexturePool_getLockWrapper(pool);
}
VKTexturePoolHandle* VKTexturePool_GetTexture(VKTexturePool *pool,
jint width,
jint height,
jlong format)
{
return ATexturePool_getTexture(pool, width, height, format);
}

@@ -24,40 +24,36 @@
* questions.
*/
#ifndef VKVertex_h_Included
#define VKVertex_h_Included
#ifndef VKTexturePool_h_Included
#define VKTexturePool_h_Included
#include "VKTypes.h"
#define RGBA_TO_L4(c) \
(((c) >> 16) & (0xFF))/255.0f, \
(((c) >> 8) & 0xFF)/255.0f, \
((c) & 0xFF)/255.0f, \
(((c) >> 24) & 0xFF)/255.0f
#define ARRAY_TO_VERTEX_BUF(logicalDevice, vertices) \
VKBuffer_CreateFromData(logicalDevice, vertices, ARRAY_SIZE(vertices)*sizeof (vertices[0]))
typedef struct {
VkVertexInputAttributeDescription *attributeDescriptions;
uint32_t attributeDescriptionCount;
VkVertexInputBindingDescription* bindingDescriptions;
uint32_t bindingDescriptionCount;
} VKVertexDescr;
typedef struct {
float px, py;
float u, v;
} VKTxVertex;
typedef struct {
float px, py;
} VKVertex;
#include "AccelTexturePool.h"
#include "jni.h"
VKVertexDescr VKVertex_GetTxVertexDescr();
VKVertexDescr VKVertex_GetVertexDescr();
/* VKTexturePoolHandle API */
typedef ATexturePoolHandle VKTexturePoolHandle;
void VKTexturePoolHandle_ReleaseTexture(VKTexturePoolHandle *handle);
ATexturePrivPtr* VKTexturePoolHandle_GetTexture(VKTexturePoolHandle *handle);
jint VKTexturePoolHandle_GetRequestedWidth(VKTexturePoolHandle *handle);
jint VKTexturePoolHandle_GetRequestedHeight(VKTexturePoolHandle *handle);
/* VKTexturePool API */
typedef ATexturePool VKTexturePool;
#endif //VKVertex_h_Included
VKTexturePool* VKTexturePool_InitWithDevice(VKDevice *device);
void VKTexturePool_Dispose(VKTexturePool *pool);
ATexturePoolLockWrapper* VKTexturePool_GetLockWrapper(VKTexturePool *pool);
VKTexturePoolHandle* VKTexturePool_GetTexture(VKTexturePool *pool,
jint width,
jint height,
jlong format);
#endif /* VKTexturePool_h_Included */

@@ -25,14 +25,29 @@
#define VKTypes_h_Included
#include <vulkan/vulkan.h>
#define STRUCT(NAME) typedef struct NAME NAME
/**
* Floating-point RGBA color with sRGB encoding.
*/
typedef union {
struct {
float r, g, b, a;
};
VkClearValue vkClearValue;
} Color;
typedef struct VKGraphicsEnvironment VKGraphicsEnvironment;
typedef struct VKDevice VKDevice;
typedef struct VKRenderer VKRenderer;
typedef struct VKRenderPass VKRenderPass;
typedef struct VKRenderingContext VKRenderingContext;
typedef struct VKPipelineContext VKPipelineContext;
typedef struct VKRenderPassContext VKRenderPassContext;
typedef struct VKShaders VKShaders;
typedef struct VKBuffer VKBuffer;
typedef struct VKImage VKImage;
typedef struct VKSDOps VKSDOps;
typedef struct VKWinSDOps VKWinSDOps;
typedef char* pchar;
STRUCT(VKGraphicsEnvironment);
STRUCT(VKLogicalDevice);
STRUCT(VKRenderer);
STRUCT(VKBuffer);
STRUCT(VKImage);
#endif //VKTypes_h_Included

@@ -0,0 +1,98 @@
// Copyright 2024 JetBrains s.r.o.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License version 2 only, as
// published by the Free Software Foundation. Oracle designates this
// particular file as subject to the "Classpath" exception as provided
// by Oracle in the LICENSE file that accompanied this code.
//
// This code is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// version 2 for more details (a copy is included in the LICENSE file that
// accompanied this code).
//
// You should have received a copy of the GNU General Public License version
// 2 along with this work; if not, write to the Free Software Foundation,
// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
//
// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
// or visit www.oracle.com if you need additional information or have any
// questions.
#include "VKUtil.h"
Color VKUtil_DecodeJavaColor(uint32_t srgb) {
// Just map [0, 255] integer colors onto [0, 1] floating-point range, it remains in sRGB color space.
// sRGB gamma correction remains unsupported.
static const float NormTable256[256] = {
#define NORM1(N) ((float)(N) / 255.0F)
#define NORM8(N) NORM1(N),NORM1(N+1),NORM1(N+2),NORM1(N+3),NORM1(N+4),NORM1(N+5),NORM1(N+6),NORM1(N+7)
#define NORM64(N) NORM8(N),NORM8(N+8),NORM8(N+16),NORM8(N+24),NORM8(N+32),NORM8(N+40),NORM8(N+48),NORM8(N+56)
NORM64(0),NORM64(64),NORM64(128),NORM64(192)
};
Color c = {
.r = NormTable256[(srgb >> 16) & 0xFF],
.g = NormTable256[(srgb >> 8) & 0xFF],
.b = NormTable256[ srgb & 0xFF],
.a = NormTable256[(srgb >> 24) & 0xFF]
};
return c;
}
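`VKUtil_DecodeJavaColor` above is a table-driven form of a simple mapping: each 8-bit ARGB channel divided by 255, with no gamma conversion. A direct (untabled) sketch of the same decode, useful for checking the channel order (`decode_argb` and `ColorF` are hypothetical names for this sketch):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { float r, g, b, a; } ColorF;

/* Decode Java's 0xAARRGGBB int into [0, 1] floats. The values stay sRGB:
 * this is a range remap, not a gamma (linearization) conversion. */
static ColorF decode_argb(uint32_t srgb) {
    ColorF c = {
        (float)((srgb >> 16) & 0xFF) / 255.0f,  /* r */
        (float)((srgb >>  8) & 0xFF) / 255.0f,  /* g */
        (float)( srgb        & 0xFF) / 255.0f,  /* b */
        (float)((srgb >> 24) & 0xFF) / 255.0f   /* a */
    };
    return c;
}
```

The lookup table in the real code trades 1 KiB of static data for avoiding the per-channel division.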
void VKUtil_LogResultError(const char* string, VkResult result) {
const char* r;
switch (result) {
#define RESULT(T) case T: r = #T; break
RESULT(VK_SUCCESS);
RESULT(VK_NOT_READY);
RESULT(VK_TIMEOUT);
RESULT(VK_EVENT_SET);
RESULT(VK_EVENT_RESET);
RESULT(VK_INCOMPLETE);
RESULT(VK_ERROR_OUT_OF_HOST_MEMORY);
RESULT(VK_ERROR_OUT_OF_DEVICE_MEMORY);
RESULT(VK_ERROR_INITIALIZATION_FAILED);
RESULT(VK_ERROR_DEVICE_LOST);
RESULT(VK_ERROR_MEMORY_MAP_FAILED);
RESULT(VK_ERROR_LAYER_NOT_PRESENT);
RESULT(VK_ERROR_EXTENSION_NOT_PRESENT);
RESULT(VK_ERROR_FEATURE_NOT_PRESENT);
RESULT(VK_ERROR_INCOMPATIBLE_DRIVER);
RESULT(VK_ERROR_TOO_MANY_OBJECTS);
RESULT(VK_ERROR_FORMAT_NOT_SUPPORTED);
RESULT(VK_ERROR_FRAGMENTED_POOL);
RESULT(VK_ERROR_UNKNOWN);
RESULT(VK_ERROR_OUT_OF_POOL_MEMORY);
RESULT(VK_ERROR_INVALID_EXTERNAL_HANDLE);
RESULT(VK_ERROR_FRAGMENTATION);
RESULT(VK_ERROR_INVALID_OPAQUE_CAPTURE_ADDRESS);
RESULT(VK_PIPELINE_COMPILE_REQUIRED);
RESULT(VK_ERROR_SURFACE_LOST_KHR);
RESULT(VK_ERROR_NATIVE_WINDOW_IN_USE_KHR);
RESULT(VK_SUBOPTIMAL_KHR);
RESULT(VK_ERROR_OUT_OF_DATE_KHR);
RESULT(VK_ERROR_INCOMPATIBLE_DISPLAY_KHR);
RESULT(VK_ERROR_VALIDATION_FAILED_EXT);
RESULT(VK_ERROR_INVALID_SHADER_NV);
RESULT(VK_ERROR_IMAGE_USAGE_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_VIDEO_PICTURE_LAYOUT_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_VIDEO_PROFILE_OPERATION_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_VIDEO_PROFILE_FORMAT_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_VIDEO_PROFILE_CODEC_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_VIDEO_STD_VERSION_NOT_SUPPORTED_KHR);
RESULT(VK_ERROR_INVALID_DRM_FORMAT_MODIFIER_PLANE_LAYOUT_EXT);
RESULT(VK_ERROR_NOT_PERMITTED_KHR);
RESULT(VK_ERROR_FULL_SCREEN_EXCLUSIVE_MODE_LOST_EXT);
RESULT(VK_THREAD_IDLE_KHR);
RESULT(VK_THREAD_DONE_KHR);
RESULT(VK_OPERATION_DEFERRED_KHR);
RESULT(VK_OPERATION_NOT_DEFERRED_KHR);
RESULT(VK_ERROR_INVALID_VIDEO_STD_PARAMETERS_KHR);
RESULT(VK_ERROR_COMPRESSION_EXHAUSTED_EXT);
RESULT(VK_ERROR_INCOMPATIBLE_SHADER_BINARY_EXT);
default: r = "<UNKNOWN>"; break;
}
J2dRlsTraceLn1(J2D_TRACE_ERROR, string, r);
}
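The `RESULT(T)` case list above uses the preprocessor's stringizing operator so that each enum value and its printed name can never drift apart. The same pattern in isolation, with an illustrative enum in place of `VkResult`:

```c
#include <assert.h>
#include <string.h>

enum Status { STATUS_OK, STATUS_TIMEOUT, STATUS_LOST };

/* #T turns the enum token into its string literal inside the macro, so the
 * name printed is always the same token used in the case label. */
static const char* status_name(enum Status s) {
    switch (s) {
#define NAME(T) case T: return #T
        NAME(STATUS_OK);
        NAME(STATUS_TIMEOUT);
        NAME(STATUS_LOST);
#undef NAME
        default: return "<UNKNOWN>";
    }
}
```

Adding a new status only requires one `NAME(...)` line; there is no separate string table to forget to update.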

@@ -0,0 +1,70 @@
// Copyright 2024 JetBrains s.r.o.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License version 2 only, as
// published by the Free Software Foundation. Oracle designates this
// particular file as subject to the "Classpath" exception as provided
// by Oracle in the LICENSE file that accompanied this code.
//
// This code is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// version 2 for more details (a copy is included in the LICENSE file that
// accompanied this code).
//
// You should have received a copy of the GNU General Public License version
// 2 along with this work; if not, write to the Free Software Foundation,
// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
//
// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
// or visit www.oracle.com if you need additional information or have any
// questions.
#ifndef VKUtil_h_Included
#define VKUtil_h_Included
#include <stdlib.h>
#include <vulkan/vulkan.h>
#include <Trace.h>
#include "awt.h"
#include "jni_util.h"
#include "VKTypes.h"
#define C_ARRAY_UTIL_ALLOCATION_FAILED() VK_FATAL_ERROR("CArrayUtil allocation failed")
#include "CArrayUtil.h"
// Useful logging & result checking macros
void VKUtil_LogResultError(const char* string, VkResult result);
static inline VkBool32 VKUtil_CheckError(VkResult result, const char* errorMessage) {
if (result != VK_SUCCESS) {
VKUtil_LogResultError(errorMessage, result);
return VK_TRUE;
} else return VK_FALSE;
}
// Hack for converting __LINE__ to string taken from here: https://stackoverflow.com/a/19343239
#define TO_STRING_HACK(T) #T
#define TO_STRING(T) TO_STRING_HACK(T)
#define LOCATION __FILE__ ": " TO_STRING(__LINE__)
#define VK_IF_ERROR(EXPR) if (VKUtil_CheckError(EXPR, #EXPR " == %s\n at " LOCATION))
#define VK_FATAL_ERROR(MESSAGE) do { \
J2dRlsTraceLn(J2D_TRACE_ERROR, MESSAGE "\n at " LOCATION); \
JNIEnv* env = (JNIEnv*)JNU_GetEnv(jvm, JNI_VERSION_1_2); \
if (env != NULL) JNU_RUNTIME_ASSERT(env, 0, (MESSAGE)); \
else abort(); \
} while(0)
#define VK_UNHANDLED_ERROR() VK_FATAL_ERROR("Unhandled Vulkan error")
#define VK_RUNTIME_ASSERT(...) if (!(__VA_ARGS__)) VK_FATAL_ERROR("Vulkan assertion failed: " #__VA_ARGS__)
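The `TO_STRING` double expansion above is needed because `#` applied directly to `__LINE__` would yield the literal text `"__LINE__"`; the extra macro layer forces `__LINE__` to expand to its numeric value before stringizing. Demonstrated in isolation (the `ANSWER`/`DIRECT`/`EXPANDED` names are illustrative):

```c
#include <assert.h>
#include <string.h>

#define TO_STRING_HACK(T) #T
#define TO_STRING(T) TO_STRING_HACK(T)

#define ANSWER 42
/* Without the indirection, stringizing captures the macro's name, not its value. */
#define DIRECT   TO_STRING_HACK(ANSWER)  /* expands to "ANSWER" */
#define EXPANDED TO_STRING(ANSWER)       /* expands to "42" */
```

This is exactly why `LOCATION` above evaluates to e.g. `"VKUtil.h: 53"` rather than `"VKUtil.h: __LINE__"`.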
/**
* Vulkan expects linear colors.
* However, Java2D expects legacy behavior, as if colors were blended in the sRGB color space.
* Therefore this function just remaps color components from the [0, 255] range to [0, 1];
* they still represent sRGB colors.
* This is also accounted for in VKSD_ConfigureWindowSurface, so that Vulkan doesn't do any
* color space conversions on its own, as the colors we are drawing are already in sRGB.
*/
Color VKUtil_DecodeJavaColor(uint32_t color);
#endif //VKUtil_h_Included

@@ -1,91 +0,0 @@
/*
* Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2024, JetBrains s.r.o. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
#ifndef HEADLESS
#include <string.h>
#include "CArrayUtil.h"
#include "VKVertex.h"
VKVertexDescr VKVertex_GetTxVertexDescr() {
static VkVertexInputBindingDescription bindingDescriptions[] = {
{
.binding = 0,
.stride = sizeof(VKTxVertex),
.inputRate = VK_VERTEX_INPUT_RATE_VERTEX
}
};
static VkVertexInputAttributeDescription attributeDescriptions [] = {
{
.binding = 0,
.location = 0,
.format = VK_FORMAT_R32G32_SFLOAT,
.offset = offsetof(VKTxVertex, px)
},
{
.binding = 0,
.location = 1,
.format = VK_FORMAT_R32G32_SFLOAT,
.offset = offsetof(VKTxVertex, u)
}
};
return (VKVertexDescr) {
.attributeDescriptions = attributeDescriptions,
.attributeDescriptionCount = SARRAY_COUNT_OF(attributeDescriptions),
.bindingDescriptions = bindingDescriptions,
.bindingDescriptionCount = SARRAY_COUNT_OF(bindingDescriptions)
};
}
VKVertexDescr VKVertex_GetVertexDescr() {
static VkVertexInputBindingDescription bindingDescriptions[] = {
{
.binding = 0,
.stride = sizeof(VKVertex),
.inputRate = VK_VERTEX_INPUT_RATE_VERTEX
}
};
static VkVertexInputAttributeDescription attributeDescriptions [] = {
{
.binding = 0,
.location = 0,
.format = VK_FORMAT_R32G32_SFLOAT,
.offset = offsetof(VKVertex, px)
}
};
return (VKVertexDescr) {
.attributeDescriptions = attributeDescriptions,
.attributeDescriptionCount = SARRAY_COUNT_OF(attributeDescriptions),
.bindingDescriptions = bindingDescriptions,
.bindingDescriptionCount = SARRAY_COUNT_OF(bindingDescriptions)
};
}
#endif /* !HEADLESS */
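The attribute descriptions above rely on `offsetof` to locate each interleaved field inside the vertex struct, with the binding's `stride` equal to the struct size. A Vulkan-free check of that layout, assuming the same two-float position plus two-float UV interleaving (with 4-byte `float` alignment there is no padding; `TxVertex` mirrors the `VKTxVertex` layout above and the helper names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Interleaved vertex: position (px, py) followed by texture coords (u, v). */
typedef struct { float px, py; float u, v; } TxVertex;

/* Byte offsets that would feed VkVertexInputAttributeDescription.offset,
 * and the stride for VkVertexInputBindingDescription.stride. */
static size_t position_offset(void) { return offsetof(TxVertex, px); }
static size_t uv_offset(void)       { return offsetof(TxVertex, u); }
static size_t vertex_stride(void)   { return sizeof(TxVertex); }
```

Both attributes use `VK_FORMAT_R32G32_SFLOAT` in the descriptions above because each is a pair of 32-bit floats; only the offset distinguishes them.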

File diff suppressed because it is too large
@@ -1,135 +0,0 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_HPP
#define VULKAN_MEMORY_ALLOCATOR_HPP
#if !defined(AMD_VULKAN_MEMORY_ALLOCATOR_H)
#include <vk_mem_alloc.h>
#endif
#include <vulkan/vulkan.hpp>
#if !defined(VMA_HPP_NAMESPACE)
#define VMA_HPP_NAMESPACE vma
#endif
#define VMA_HPP_NAMESPACE_STRING VULKAN_HPP_STRINGIFY(VMA_HPP_NAMESPACE)
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VMA_HPP_NAMESPACE {
struct Dispatcher {}; // VMA uses function pointers from VmaAllocator instead
class Allocator;
template<class T>
VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher> createUniqueHandle(const T& t) VULKAN_HPP_NOEXCEPT {
return VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>(t);
}
template<class T, class O>
VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher> createUniqueHandle(const T& t, const O* o) VULKAN_HPP_NOEXCEPT {
return VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>(t, o);
}
template<class F, class S, class O>
std::pair<VULKAN_HPP_NAMESPACE::UniqueHandle<F, Dispatcher>, VULKAN_HPP_NAMESPACE::UniqueHandle<S, Dispatcher>>
createUniqueHandle(const std::pair<F, S>& t, const O* o) VULKAN_HPP_NOEXCEPT {
return {
VULKAN_HPP_NAMESPACE::UniqueHandle<F, Dispatcher>(t.first, o),
VULKAN_HPP_NAMESPACE::UniqueHandle<S, Dispatcher>(t.second, o)
};
}
template<class T, class UniqueVectorAllocator, class VectorAllocator, class O>
std::vector<VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>, UniqueVectorAllocator>
createUniqueHandleVector(const std::vector<T, VectorAllocator>& vector, const O* o,
const UniqueVectorAllocator& vectorAllocator) VULKAN_HPP_NOEXCEPT {
std::vector<VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>, UniqueVectorAllocator> result(vectorAllocator);
result.reserve(vector.size());
for (const T& t : vector) result.emplace_back(t, o);
return result;
}
template<class T, class Owner> class Deleter {
const Owner* owner;
public:
Deleter() = default;
Deleter(const Owner* owner) VULKAN_HPP_NOEXCEPT : owner(owner) {}
protected:
void destroy(const T& t) VULKAN_HPP_NOEXCEPT; // Implemented manually for each handle type
};
template<class T> class Deleter<T, void> {
protected:
void destroy(const T& t) VULKAN_HPP_NOEXCEPT { t.destroy(); }
};
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<Buffer, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<Buffer, VMA_HPP_NAMESPACE::Allocator>;
};
template<> struct UniqueHandleTraits<Image, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<Image, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE {
using UniqueBuffer = VULKAN_HPP_NAMESPACE::UniqueHandle<VULKAN_HPP_NAMESPACE::Buffer, Dispatcher>;
using UniqueImage = VULKAN_HPP_NAMESPACE::UniqueHandle<VULKAN_HPP_NAMESPACE::Image, Dispatcher>;
}
#endif
#include "vk_mem_alloc_enums.hpp"
#include "vk_mem_alloc_handles.hpp"
#include "vk_mem_alloc_structs.hpp"
#include "vk_mem_alloc_funcs.hpp"
namespace VMA_HPP_NAMESPACE {
#ifndef VULKAN_HPP_NO_SMART_HANDLE
# define VMA_HPP_DESTROY_IMPL(NAME) \
template<> VULKAN_HPP_INLINE void VULKAN_HPP_NAMESPACE::UniqueHandleTraits<NAME, Dispatcher>::deleter::destroy(const NAME& t) VULKAN_HPP_NOEXCEPT
VMA_HPP_DESTROY_IMPL(VULKAN_HPP_NAMESPACE::Buffer) { owner->destroyBuffer(t, nullptr); }
VMA_HPP_DESTROY_IMPL(VULKAN_HPP_NAMESPACE::Image) { owner->destroyImage(t, nullptr); }
VMA_HPP_DESTROY_IMPL(Pool) { owner->destroyPool(t); }
VMA_HPP_DESTROY_IMPL(Allocation) { owner->freeMemory(t); }
VMA_HPP_DESTROY_IMPL(VirtualAllocation) { owner->virtualFree(t); }
# undef VMA_HPP_DESTROY_IMPL
#endif
template<class InstanceDispatcher, class DeviceDispatcher>
VULKAN_HPP_CONSTEXPR VulkanFunctions functionsFromDispatcher(InstanceDispatcher const * instance,
DeviceDispatcher const * device) VULKAN_HPP_NOEXCEPT {
return VulkanFunctions {
instance->vkGetInstanceProcAddr,
instance->vkGetDeviceProcAddr,
instance->vkGetPhysicalDeviceProperties,
instance->vkGetPhysicalDeviceMemoryProperties,
device->vkAllocateMemory,
device->vkFreeMemory,
device->vkMapMemory,
device->vkUnmapMemory,
device->vkFlushMappedMemoryRanges,
device->vkInvalidateMappedMemoryRanges,
device->vkBindBufferMemory,
device->vkBindImageMemory,
device->vkGetBufferMemoryRequirements,
device->vkGetImageMemoryRequirements,
device->vkCreateBuffer,
device->vkDestroyBuffer,
device->vkCreateImage,
device->vkDestroyImage,
device->vkCmdCopyBuffer,
device->vkGetBufferMemoryRequirements2KHR ? device->vkGetBufferMemoryRequirements2KHR : device->vkGetBufferMemoryRequirements2,
device->vkGetImageMemoryRequirements2KHR ? device->vkGetImageMemoryRequirements2KHR : device->vkGetImageMemoryRequirements2,
device->vkBindBufferMemory2KHR ? device->vkBindBufferMemory2KHR : device->vkBindBufferMemory2,
device->vkBindImageMemory2KHR ? device->vkBindImageMemory2KHR : device->vkBindImageMemory2,
instance->vkGetPhysicalDeviceMemoryProperties2KHR ? instance->vkGetPhysicalDeviceMemoryProperties2KHR : instance->vkGetPhysicalDeviceMemoryProperties2,
device->vkGetDeviceBufferMemoryRequirements,
device->vkGetDeviceImageMemoryRequirements
};
}
template<class Dispatch = VULKAN_HPP_DEFAULT_DISPATCHER_TYPE>
VULKAN_HPP_CONSTEXPR VulkanFunctions functionsFromDispatcher(Dispatch const & dispatch
VULKAN_HPP_DEFAULT_DISPATCHER_ASSIGNMENT) VULKAN_HPP_NOEXCEPT {
return functionsFromDispatcher(&dispatch, &dispatch);
}
}
#endif

@@ -1,478 +0,0 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_ENUMS_HPP
#define VULKAN_MEMORY_ALLOCATOR_ENUMS_HPP
namespace VMA_HPP_NAMESPACE {
enum class AllocatorCreateFlagBits : VmaAllocatorCreateFlags {
eExternallySynchronized = VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT,
eKhrDedicatedAllocation = VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT,
eKhrBindMemory2 = VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT,
eExtMemoryBudget = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT,
eAmdDeviceCoherentMemory = VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT,
eBufferDeviceAddress = VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT,
eExtMemoryPriority = VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(AllocatorCreateFlagBits value) {
if (value == AllocatorCreateFlagBits::eExternallySynchronized) return "ExternallySynchronized";
if (value == AllocatorCreateFlagBits::eKhrDedicatedAllocation) return "KhrDedicatedAllocation";
if (value == AllocatorCreateFlagBits::eKhrBindMemory2) return "KhrBindMemory2";
if (value == AllocatorCreateFlagBits::eExtMemoryBudget) return "ExtMemoryBudget";
if (value == AllocatorCreateFlagBits::eAmdDeviceCoherentMemory) return "AmdDeviceCoherentMemory";
if (value == AllocatorCreateFlagBits::eBufferDeviceAddress) return "BufferDeviceAddress";
if (value == AllocatorCreateFlagBits::eExtMemoryPriority) return "ExtMemoryPriority";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::AllocatorCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::AllocatorCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExternallySynchronized
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eKhrDedicatedAllocation
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eKhrBindMemory2
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExtMemoryBudget
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eAmdDeviceCoherentMemory
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eBufferDeviceAddress
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExtMemoryPriority;
};
}
namespace VMA_HPP_NAMESPACE {
using AllocatorCreateFlags = VULKAN_HPP_NAMESPACE::Flags<AllocatorCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator|(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator&(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator^(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator~(AllocatorCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(AllocatorCreateFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(AllocatorCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & AllocatorCreateFlagBits::eExternallySynchronized) result += "ExternallySynchronized | ";
if (value & AllocatorCreateFlagBits::eKhrDedicatedAllocation) result += "KhrDedicatedAllocation | ";
if (value & AllocatorCreateFlagBits::eKhrBindMemory2) result += "KhrBindMemory2 | ";
if (value & AllocatorCreateFlagBits::eExtMemoryBudget) result += "ExtMemoryBudget | ";
if (value & AllocatorCreateFlagBits::eAmdDeviceCoherentMemory) result += "AmdDeviceCoherentMemory | ";
if (value & AllocatorCreateFlagBits::eBufferDeviceAddress) result += "BufferDeviceAddress | ";
if (value & AllocatorCreateFlagBits::eExtMemoryPriority) result += "ExtMemoryPriority | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class MemoryUsage {
eUnknown = VMA_MEMORY_USAGE_UNKNOWN,
eGpuOnly = VMA_MEMORY_USAGE_GPU_ONLY,
eCpuOnly = VMA_MEMORY_USAGE_CPU_ONLY,
eCpuToGpu = VMA_MEMORY_USAGE_CPU_TO_GPU,
eGpuToCpu = VMA_MEMORY_USAGE_GPU_TO_CPU,
eCpuCopy = VMA_MEMORY_USAGE_CPU_COPY,
eGpuLazilyAllocated = VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED,
eAuto = VMA_MEMORY_USAGE_AUTO,
eAutoPreferDevice = VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE,
eAutoPreferHost = VMA_MEMORY_USAGE_AUTO_PREFER_HOST
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(MemoryUsage value) {
if (value == MemoryUsage::eUnknown) return "Unknown";
if (value == MemoryUsage::eGpuOnly) return "GpuOnly";
if (value == MemoryUsage::eCpuOnly) return "CpuOnly";
if (value == MemoryUsage::eCpuToGpu) return "CpuToGpu";
if (value == MemoryUsage::eGpuToCpu) return "GpuToCpu";
if (value == MemoryUsage::eCpuCopy) return "CpuCopy";
if (value == MemoryUsage::eGpuLazilyAllocated) return "GpuLazilyAllocated";
if (value == MemoryUsage::eAuto) return "Auto";
if (value == MemoryUsage::eAutoPreferDevice) return "AutoPreferDevice";
if (value == MemoryUsage::eAutoPreferHost) return "AutoPreferHost";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class AllocationCreateFlagBits : VmaAllocationCreateFlags {
eDedicatedMemory = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
eNeverAllocate = VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT,
eMapped = VMA_ALLOCATION_CREATE_MAPPED_BIT,
eUserDataCopyString = VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT,
eUpperAddress = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
eDontBind = VMA_ALLOCATION_CREATE_DONT_BIND_BIT,
eWithinBudget = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT,
eCanAlias = VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT,
eHostAccessSequentialWrite = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT,
eHostAccessRandom = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
eHostAccessAllowTransferInstead = VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT,
eStrategyMinMemory = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
eStrategyMinTime = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
eStrategyMinOffset = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
eStrategyBestFit = VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT,
eStrategyFirstFit = VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(AllocationCreateFlagBits value) {
if (value == AllocationCreateFlagBits::eDedicatedMemory) return "DedicatedMemory";
if (value == AllocationCreateFlagBits::eNeverAllocate) return "NeverAllocate";
if (value == AllocationCreateFlagBits::eMapped) return "Mapped";
if (value == AllocationCreateFlagBits::eUserDataCopyString) return "UserDataCopyString";
if (value == AllocationCreateFlagBits::eUpperAddress) return "UpperAddress";
if (value == AllocationCreateFlagBits::eDontBind) return "DontBind";
if (value == AllocationCreateFlagBits::eWithinBudget) return "WithinBudget";
if (value == AllocationCreateFlagBits::eCanAlias) return "CanAlias";
if (value == AllocationCreateFlagBits::eHostAccessSequentialWrite) return "HostAccessSequentialWrite";
if (value == AllocationCreateFlagBits::eHostAccessRandom) return "HostAccessRandom";
if (value == AllocationCreateFlagBits::eHostAccessAllowTransferInstead) return "HostAccessAllowTransferInstead";
if (value == AllocationCreateFlagBits::eStrategyMinMemory) return "StrategyMinMemory";
if (value == AllocationCreateFlagBits::eStrategyMinTime) return "StrategyMinTime";
if (value == AllocationCreateFlagBits::eStrategyMinOffset) return "StrategyMinOffset";
if (value == AllocationCreateFlagBits::eStrategyBestFit) return "StrategyBestFit";
if (value == AllocationCreateFlagBits::eStrategyFirstFit) return "StrategyFirstFit";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::AllocationCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::AllocationCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eDedicatedMemory
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eNeverAllocate
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eMapped
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eUserDataCopyString
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eUpperAddress
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eDontBind
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eWithinBudget
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eCanAlias
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessSequentialWrite
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessRandom
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessAllowTransferInstead
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinMemory
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinTime
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinOffset
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyBestFit
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyFirstFit;
};
}
namespace VMA_HPP_NAMESPACE {
using AllocationCreateFlags = VULKAN_HPP_NAMESPACE::Flags<AllocationCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator|(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator&(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator^(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator~(AllocationCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(AllocationCreateFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(AllocationCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & AllocationCreateFlagBits::eDedicatedMemory) result += "DedicatedMemory | ";
if (value & AllocationCreateFlagBits::eNeverAllocate) result += "NeverAllocate | ";
if (value & AllocationCreateFlagBits::eMapped) result += "Mapped | ";
if (value & AllocationCreateFlagBits::eUserDataCopyString) result += "UserDataCopyString | ";
if (value & AllocationCreateFlagBits::eUpperAddress) result += "UpperAddress | ";
if (value & AllocationCreateFlagBits::eDontBind) result += "DontBind | ";
if (value & AllocationCreateFlagBits::eWithinBudget) result += "WithinBudget | ";
if (value & AllocationCreateFlagBits::eCanAlias) result += "CanAlias | ";
if (value & AllocationCreateFlagBits::eHostAccessSequentialWrite) result += "HostAccessSequentialWrite | ";
if (value & AllocationCreateFlagBits::eHostAccessRandom) result += "HostAccessRandom | ";
if (value & AllocationCreateFlagBits::eHostAccessAllowTransferInstead) result += "HostAccessAllowTransferInstead | ";
if (value & AllocationCreateFlagBits::eStrategyMinMemory) result += "StrategyMinMemory | ";
if (value & AllocationCreateFlagBits::eStrategyMinTime) result += "StrategyMinTime | ";
if (value & AllocationCreateFlagBits::eStrategyMinOffset) result += "StrategyMinOffset | ";
if (value & AllocationCreateFlagBits::eStrategyBestFit) result += "StrategyBestFit | ";
if (value & AllocationCreateFlagBits::eStrategyFirstFit) result += "StrategyFirstFit | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class PoolCreateFlagBits : VmaPoolCreateFlags {
eIgnoreBufferImageGranularity = VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT,
eLinearAlgorithm = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(PoolCreateFlagBits value) {
if (value == PoolCreateFlagBits::eIgnoreBufferImageGranularity) return "IgnoreBufferImageGranularity";
if (value == PoolCreateFlagBits::eLinearAlgorithm) return "LinearAlgorithm";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::PoolCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::PoolCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::PoolCreateFlagBits::eIgnoreBufferImageGranularity
| VMA_HPP_NAMESPACE::PoolCreateFlagBits::eLinearAlgorithm;
};
}
namespace VMA_HPP_NAMESPACE {
using PoolCreateFlags = VULKAN_HPP_NAMESPACE::Flags<PoolCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator|(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator&(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator^(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator~(PoolCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(PoolCreateFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(PoolCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & PoolCreateFlagBits::eIgnoreBufferImageGranularity) result += "IgnoreBufferImageGranularity | ";
if (value & PoolCreateFlagBits::eLinearAlgorithm) result += "LinearAlgorithm | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class DefragmentationFlagBits : VmaDefragmentationFlags {
eFlagAlgorithmFast = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT,
eFlagAlgorithmBalanced = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT,
eFlagAlgorithmFull = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT,
eFlagAlgorithmExtensive = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(DefragmentationFlagBits value) {
if (value == DefragmentationFlagBits::eFlagAlgorithmFast) return "FlagAlgorithmFast";
if (value == DefragmentationFlagBits::eFlagAlgorithmBalanced) return "FlagAlgorithmBalanced";
if (value == DefragmentationFlagBits::eFlagAlgorithmFull) return "FlagAlgorithmFull";
if (value == DefragmentationFlagBits::eFlagAlgorithmExtensive) return "FlagAlgorithmExtensive";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::DefragmentationFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::DefragmentationFlagBits> allFlags =
VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmFast
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmBalanced
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmFull
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmExtensive;
};
}
namespace VMA_HPP_NAMESPACE {
using DefragmentationFlags = VULKAN_HPP_NAMESPACE::Flags<DefragmentationFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator|(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator&(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator^(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator~(DefragmentationFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(DefragmentationFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(DefragmentationFlags value) {
if (!value) return "{}";
std::string result;
if (value & DefragmentationFlagBits::eFlagAlgorithmFast) result += "FlagAlgorithmFast | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmBalanced) result += "FlagAlgorithmBalanced | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmFull) result += "FlagAlgorithmFull | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmExtensive) result += "FlagAlgorithmExtensive | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class DefragmentationMoveOperation {
eCopy = VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY,
eIgnore = VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE,
eDestroy = VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(DefragmentationMoveOperation value) {
if (value == DefragmentationMoveOperation::eCopy) return "Copy";
if (value == DefragmentationMoveOperation::eIgnore) return "Ignore";
if (value == DefragmentationMoveOperation::eDestroy) return "Destroy";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class VirtualBlockCreateFlagBits : VmaVirtualBlockCreateFlags {
eLinearAlgorithm = VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(VirtualBlockCreateFlagBits value) {
if (value == VirtualBlockCreateFlagBits::eLinearAlgorithm) return "LinearAlgorithm";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits::eLinearAlgorithm;
};
}
namespace VMA_HPP_NAMESPACE {
using VirtualBlockCreateFlags = VULKAN_HPP_NAMESPACE::Flags<VirtualBlockCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator|(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator&(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator^(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator~(VirtualBlockCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(VirtualBlockCreateFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(VirtualBlockCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & VirtualBlockCreateFlagBits::eLinearAlgorithm) result += "LinearAlgorithm | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
namespace VMA_HPP_NAMESPACE {
enum class VirtualAllocationCreateFlagBits : VmaVirtualAllocationCreateFlags {
eUpperAddress = VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
eStrategyMinMemory = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
eStrategyMinTime = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
eStrategyMinOffset = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT
};
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(VirtualAllocationCreateFlagBits value) {
if (value == VirtualAllocationCreateFlagBits::eUpperAddress) return "UpperAddress";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinMemory) return "StrategyMinMemory";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinTime) return "StrategyMinTime";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinOffset) return "StrategyMinOffset";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
# endif
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eUpperAddress
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinMemory
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinTime
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinOffset;
};
}
namespace VMA_HPP_NAMESPACE {
using VirtualAllocationCreateFlags = VULKAN_HPP_NAMESPACE::Flags<VirtualAllocationCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator|(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator&(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator^(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator~(VirtualAllocationCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(VirtualAllocationCreateFlags(bits));
}
# if !defined( VULKAN_HPP_NO_TO_STRING )
VULKAN_HPP_INLINE std::string to_string(VirtualAllocationCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & VirtualAllocationCreateFlagBits::eUpperAddress) result += "UpperAddress | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinMemory) result += "StrategyMinMemory | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinTime) result += "StrategyMinTime | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinOffset) result += "StrategyMinOffset | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
# endif
}
#endif

File diff suppressed because it is too large


@@ -1,935 +0,0 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_HANDLES_HPP
#define VULKAN_MEMORY_ALLOCATOR_HANDLES_HPP
namespace VMA_HPP_NAMESPACE {
struct DeviceMemoryCallbacks;
struct VulkanFunctions;
struct AllocatorCreateInfo;
struct AllocatorInfo;
struct Statistics;
struct DetailedStatistics;
struct TotalStatistics;
struct Budget;
struct AllocationCreateInfo;
struct PoolCreateInfo;
struct AllocationInfo;
struct DefragmentationInfo;
struct DefragmentationMove;
struct DefragmentationPassMoveInfo;
struct DefragmentationStats;
struct VirtualBlockCreateInfo;
struct VirtualAllocationCreateInfo;
struct VirtualAllocationInfo;
class Allocator;
class Pool;
class Allocation;
class DefragmentationContext;
class VirtualAllocation;
class VirtualBlock;
}
namespace VMA_HPP_NAMESPACE {
class Pool {
public:
using CType = VmaPool;
using NativeType = VmaPool;
public:
VULKAN_HPP_CONSTEXPR Pool() = default;
VULKAN_HPP_CONSTEXPR Pool(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Pool(VmaPool pool) VULKAN_HPP_NOEXCEPT : m_pool(pool) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Pool& operator=(VmaPool pool) VULKAN_HPP_NOEXCEPT {
m_pool = pool;
return *this;
}
#endif
Pool& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_pool = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Pool const &) const = default;
#else
bool operator==(Pool const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_pool == rhs.m_pool;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaPool() const VULKAN_HPP_NOEXCEPT {
return m_pool;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_pool != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_pool == VK_NULL_HANDLE;
}
private:
VmaPool m_pool = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Pool) == sizeof(VmaPool),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::Pool, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Pool, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE { using UniquePool = VULKAN_HPP_NAMESPACE::UniqueHandle<Pool, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class Allocation {
public:
using CType = VmaAllocation;
using NativeType = VmaAllocation;
public:
VULKAN_HPP_CONSTEXPR Allocation() = default;
VULKAN_HPP_CONSTEXPR Allocation(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Allocation(VmaAllocation allocation) VULKAN_HPP_NOEXCEPT : m_allocation(allocation) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Allocation& operator=(VmaAllocation allocation) VULKAN_HPP_NOEXCEPT {
m_allocation = allocation;
return *this;
}
#endif
Allocation& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_allocation = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Allocation const &) const = default;
#else
bool operator==(Allocation const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_allocation == rhs.m_allocation;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaAllocation() const VULKAN_HPP_NOEXCEPT {
return m_allocation;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_allocation != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_allocation == VK_NULL_HANDLE;
}
private:
VmaAllocation m_allocation = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Allocation) == sizeof(VmaAllocation),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::Allocation, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Allocation, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueAllocation = VULKAN_HPP_NAMESPACE::UniqueHandle<Allocation, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class DefragmentationContext {
public:
using CType = VmaDefragmentationContext;
using NativeType = VmaDefragmentationContext;
public:
VULKAN_HPP_CONSTEXPR DefragmentationContext() = default;
VULKAN_HPP_CONSTEXPR DefragmentationContext(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT DefragmentationContext(VmaDefragmentationContext defragmentationContext) VULKAN_HPP_NOEXCEPT : m_defragmentationContext(defragmentationContext) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
DefragmentationContext& operator=(VmaDefragmentationContext defragmentationContext) VULKAN_HPP_NOEXCEPT {
m_defragmentationContext = defragmentationContext;
return *this;
}
#endif
DefragmentationContext& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_defragmentationContext = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(DefragmentationContext const &) const = default;
#else
bool operator==(DefragmentationContext const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext == rhs.m_defragmentationContext;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaDefragmentationContext() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext == VK_NULL_HANDLE;
}
private:
VmaDefragmentationContext m_defragmentationContext = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(DefragmentationContext) == sizeof(VmaDefragmentationContext),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::DefragmentationContext, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::DefragmentationContext, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueDefragmentationContext = VULKAN_HPP_NAMESPACE::UniqueHandle<DefragmentationContext, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class Allocator {
public:
using CType = VmaAllocator;
using NativeType = VmaAllocator;
public:
VULKAN_HPP_CONSTEXPR Allocator() = default;
VULKAN_HPP_CONSTEXPR Allocator(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Allocator(VmaAllocator allocator) VULKAN_HPP_NOEXCEPT : m_allocator(allocator) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Allocator& operator=(VmaAllocator allocator) VULKAN_HPP_NOEXCEPT {
m_allocator = allocator;
return *this;
}
#endif
Allocator& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_allocator = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Allocator const &) const = default;
#else
bool operator==(Allocator const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_allocator == rhs.m_allocator;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaAllocator() const VULKAN_HPP_NOEXCEPT {
return m_allocator;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_allocator != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_allocator == VK_NULL_HANDLE;
}
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroy() const;
#else
void destroy() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS AllocatorInfo getAllocatorInfo() const;
#endif
void getAllocatorInfo(AllocatorInfo* allocatorInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const VULKAN_HPP_NAMESPACE::PhysicalDeviceProperties* getPhysicalDeviceProperties() const;
#endif
void getPhysicalDeviceProperties(const VULKAN_HPP_NAMESPACE::PhysicalDeviceProperties** physicalDeviceProperties) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const VULKAN_HPP_NAMESPACE::PhysicalDeviceMemoryProperties* getMemoryProperties() const;
#endif
void getMemoryProperties(const VULKAN_HPP_NAMESPACE::PhysicalDeviceMemoryProperties** physicalDeviceMemoryProperties) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::MemoryPropertyFlags getMemoryTypeProperties(uint32_t memoryTypeIndex) const;
#endif
void getMemoryTypeProperties(uint32_t memoryTypeIndex,
VULKAN_HPP_NAMESPACE::MemoryPropertyFlags* flags) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setCurrentFrameIndex(uint32_t frameIndex) const;
#else
void setCurrentFrameIndex(uint32_t frameIndex) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS TotalStatistics calculateStatistics() const;
#endif
void calculateStatistics(TotalStatistics* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
template<typename VectorAllocator = std::allocator<Budget>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, Budget>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS std::vector<Budget, VectorAllocator> getHeapBudgets(VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<Budget>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS std::vector<Budget, VectorAllocator> getHeapBudgets() const;
#endif
void getHeapBudgets(Budget* budgets) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndex(uint32_t memoryTypeBits,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndex(uint32_t memoryTypeBits,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndexForBufferInfo(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndexForBufferInfo(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndexForImageInfo(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndexForImageInfo(const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Pool>::type createPool(const PoolCreateInfo& createInfo) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniquePool>::type createPoolUnique(const PoolCreateInfo& createInfo) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createPool(const PoolCreateInfo* createInfo,
Pool* pool) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyPool(Pool pool) const;
#else
void destroyPool(Pool pool) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS Statistics getPoolStatistics(Pool pool) const;
#endif
void getPoolStatistics(Pool pool,
Statistics* poolStats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS DetailedStatistics calculatePoolStatistics(Pool pool) const;
#endif
void calculatePoolStatistics(Pool pool,
DetailedStatistics* poolStats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type checkPoolCorruption(Pool pool) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result checkPoolCorruption(Pool pool) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const char* getPoolName(Pool pool) const;
#endif
void getPoolName(Pool pool,
const char** name) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setPoolName(Pool pool,
const char* name) const;
#else
void setPoolName(Pool pool,
const char* name) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemory(const VULKAN_HPP_NAMESPACE::MemoryRequirements& vkMemoryRequirements,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryUnique(const VULKAN_HPP_NAMESPACE::MemoryRequirements& vkMemoryRequirements,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemory(const VULKAN_HPP_NAMESPACE::MemoryRequirements* vkMemoryRequirements,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
template<typename VectorAllocator = std::allocator<Allocation>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, Allocation>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<Allocation, VectorAllocator>>::type allocateMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo,
VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<Allocation>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<Allocation, VectorAllocator>>::type allocateMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
template<typename VectorAllocator = std::allocator<UniqueAllocation>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, UniqueAllocation>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<UniqueAllocation, VectorAllocator>>::type allocateMemoryPagesUnique(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo,
VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<UniqueAllocation>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<UniqueAllocation, VectorAllocator>>::type allocateMemoryPagesUnique(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryPages(const VULKAN_HPP_NAMESPACE::MemoryRequirements* vkMemoryRequirements,
const AllocationCreateInfo* createInfo,
size_t allocationCount,
Allocation* allocations,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemoryForBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryForBufferUnique(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryForBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemoryForImage(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryForImageUnique(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryForImage(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeMemory(Allocation allocation) const;
#else
void freeMemory(Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations) const;
#endif
void freeMemoryPages(size_t allocationCount,
const Allocation* allocations) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS AllocationInfo getAllocationInfo(Allocation allocation) const;
#endif
void getAllocationInfo(Allocation allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setAllocationUserData(Allocation allocation,
void* userData) const;
#else
void setAllocationUserData(Allocation allocation,
void* userData) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setAllocationName(Allocation allocation,
const char* name) const;
#else
void setAllocationName(Allocation allocation,
const char* name) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::MemoryPropertyFlags getAllocationMemoryProperties(Allocation allocation) const;
#endif
void getAllocationMemoryProperties(Allocation allocation,
VULKAN_HPP_NAMESPACE::MemoryPropertyFlags* flags) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<void*>::type mapMemory(Allocation allocation) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result mapMemory(Allocation allocation,
void** data) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void unmapMemory(Allocation allocation) const;
#else
void unmapMemory(Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type flushAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result flushAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type invalidateAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result invalidateAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type flushAllocations(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> offsets,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> sizes) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result flushAllocations(uint32_t allocationCount,
const Allocation* allocations,
const VULKAN_HPP_NAMESPACE::DeviceSize* offsets,
const VULKAN_HPP_NAMESPACE::DeviceSize* sizes) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type invalidateAllocations(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> offsets,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> sizes) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result invalidateAllocations(uint32_t allocationCount,
const Allocation* allocations,
const VULKAN_HPP_NAMESPACE::DeviceSize* offsets,
const VULKAN_HPP_NAMESPACE::DeviceSize* sizes) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type checkCorruption(uint32_t memoryTypeBits) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result checkCorruption(uint32_t memoryTypeBits) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationContext>::type beginDefragmentation(const DefragmentationInfo& info) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result beginDefragmentation(const DefragmentationInfo* info,
DefragmentationContext* context) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void endDefragmentation(DefragmentationContext context,
VULKAN_HPP_NAMESPACE::Optional<DefragmentationStats> stats = nullptr) const;
#endif
void endDefragmentation(DefragmentationContext context,
DefragmentationStats* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationPassMoveInfo>::type beginDefragmentationPass(DefragmentationContext context) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result beginDefragmentationPass(DefragmentationContext context,
DefragmentationPassMoveInfo* passInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationPassMoveInfo>::type endDefragmentationPass(DefragmentationContext context) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result endDefragmentationPass(DefragmentationContext context,
DefragmentationPassMoveInfo* passInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindBufferMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Buffer buffer) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindBufferMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Buffer buffer) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindBufferMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Buffer buffer,
const void* next) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindBufferMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Buffer buffer,
const void* next) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindImageMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Image image) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindImageMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Image image) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindImageMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Image image,
const void* next) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindImageMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Image image,
const void* next) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Buffer, Allocation>>::type createBuffer(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueBuffer, UniqueAllocation>>::type createBufferUnique(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createBuffer(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Buffer* buffer,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Buffer, Allocation>>::type createBufferWithAlignment(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueBuffer, UniqueAllocation>>::type createBufferWithAlignmentUnique(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createBufferWithAlignment(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Buffer* buffer,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VULKAN_HPP_NAMESPACE::Buffer>::type createAliasingBuffer(Allocation allocation,
const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAliasingBuffer(Allocation allocation,
const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
VULKAN_HPP_NAMESPACE::Buffer* buffer) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
Allocation allocation) const;
#else
void destroyBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Image, Allocation>>::type createImage(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueImage, UniqueAllocation>>::type createImageUnique(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createImage(const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Image* image,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VULKAN_HPP_NAMESPACE::Image>::type createAliasingImage(Allocation allocation,
const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAliasingImage(Allocation allocation,
const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
VULKAN_HPP_NAMESPACE::Image* image) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyImage(VULKAN_HPP_NAMESPACE::Image image,
Allocation allocation) const;
#else
void destroyImage(VULKAN_HPP_NAMESPACE::Image image,
Allocation allocation) const;
#endif
#if VMA_STATS_STRING_ENABLED
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS char* buildStatsString(VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#endif
void buildStatsString(char** statsString,
VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeStatsString(char* statsString) const;
#else
void freeStatsString(char* statsString) const;
#endif
#endif
private:
VmaAllocator m_allocator = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Allocator) == sizeof(VmaAllocator),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::Allocator, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Allocator, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueAllocator = VULKAN_HPP_NAMESPACE::UniqueHandle<Allocator, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class VirtualAllocation {
public:
using CType = VmaVirtualAllocation;
using NativeType = VmaVirtualAllocation;
public:
VULKAN_HPP_CONSTEXPR VirtualAllocation() = default;
VULKAN_HPP_CONSTEXPR VirtualAllocation(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT VirtualAllocation(VmaVirtualAllocation virtualAllocation) VULKAN_HPP_NOEXCEPT : m_virtualAllocation(virtualAllocation) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
VirtualAllocation& operator=(VmaVirtualAllocation virtualAllocation) VULKAN_HPP_NOEXCEPT {
m_virtualAllocation = virtualAllocation;
return *this;
}
#endif
VirtualAllocation& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_virtualAllocation = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(VirtualAllocation const &) const = default;
#else
bool operator==(VirtualAllocation const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation == rhs.m_virtualAllocation;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaVirtualAllocation() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation == VK_NULL_HANDLE;
}
private:
VmaVirtualAllocation m_virtualAllocation = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(VirtualAllocation) == sizeof(VmaVirtualAllocation),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::VirtualAllocation, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::VirtualAllocation, VMA_HPP_NAMESPACE::VirtualBlock>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueVirtualAllocation = VULKAN_HPP_NAMESPACE::UniqueHandle<VirtualAllocation, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class VirtualBlock {
public:
using CType = VmaVirtualBlock;
using NativeType = VmaVirtualBlock;
public:
VULKAN_HPP_CONSTEXPR VirtualBlock() = default;
VULKAN_HPP_CONSTEXPR VirtualBlock(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT VirtualBlock(VmaVirtualBlock virtualBlock) VULKAN_HPP_NOEXCEPT : m_virtualBlock(virtualBlock) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
VirtualBlock& operator=(VmaVirtualBlock virtualBlock) VULKAN_HPP_NOEXCEPT {
m_virtualBlock = virtualBlock;
return *this;
}
#endif
VirtualBlock& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_virtualBlock = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(VirtualBlock const &) const = default;
#else
bool operator==(VirtualBlock const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock == rhs.m_virtualBlock;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaVirtualBlock() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock == VK_NULL_HANDLE;
}
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroy() const;
#else
void destroy() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::Bool32 isVirtualBlockEmpty() const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Bool32 isVirtualBlockEmpty() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VirtualAllocationInfo getVirtualAllocationInfo(VirtualAllocation allocation) const;
#endif
void getVirtualAllocationInfo(VirtualAllocation allocation,
VirtualAllocationInfo* virtualAllocInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VirtualAllocation>::type virtualAllocate(const VirtualAllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<VULKAN_HPP_NAMESPACE::DeviceSize> offset = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueVirtualAllocation>::type virtualAllocateUnique(const VirtualAllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<VULKAN_HPP_NAMESPACE::DeviceSize> offset = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result virtualAllocate(const VirtualAllocationCreateInfo* createInfo,
VirtualAllocation* allocation,
VULKAN_HPP_NAMESPACE::DeviceSize* offset) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void virtualFree(VirtualAllocation allocation) const;
#else
void virtualFree(VirtualAllocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void clearVirtualBlock() const;
#else
void clearVirtualBlock() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setVirtualAllocationUserData(VirtualAllocation allocation,
void* userData) const;
#else
void setVirtualAllocationUserData(VirtualAllocation allocation,
void* userData) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS Statistics getVirtualBlockStatistics() const;
#endif
void getVirtualBlockStatistics(Statistics* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS DetailedStatistics calculateVirtualBlockStatistics() const;
#endif
void calculateVirtualBlockStatistics(DetailedStatistics* stats) const;
#if VMA_STATS_STRING_ENABLED
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS char* buildVirtualBlockStatsString(VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#endif
void buildVirtualBlockStatsString(char** statsString,
VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeVirtualBlockStatsString(char* statsString) const;
#else
void freeVirtualBlockStatsString(char* statsString) const;
#endif
#endif
private:
VmaVirtualBlock m_virtualBlock = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(VirtualBlock) == sizeof(VmaVirtualBlock),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> class UniqueHandleTraits<VMA_HPP_NAMESPACE::VirtualBlock, VMA_HPP_NAMESPACE::Dispatcher> {
public:
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::VirtualBlock, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueVirtualBlock = VULKAN_HPP_NAMESPACE::UniqueHandle<VirtualBlock, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocator>::type createAllocator(const AllocatorCreateInfo& createInfo);
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocator>::type createAllocatorUnique(const AllocatorCreateInfo& createInfo);
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAllocator(const AllocatorCreateInfo* createInfo,
Allocator* allocator);
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VirtualBlock>::type createVirtualBlock(const VirtualBlockCreateInfo& createInfo);
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueVirtualBlock>::type createVirtualBlockUnique(const VirtualBlockCreateInfo& createInfo);
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createVirtualBlock(const VirtualBlockCreateInfo* createInfo,
VirtualBlock* virtualBlock);
}
#endif

View File

@@ -634,6 +634,7 @@ public final class X11GraphicsDevice extends GraphicsDevice
         if (x11gd.isScaleFactorDefault.get() || !uiScaleEnabled) {
             x11gd.scale = (int)Math.round(xftDpiScale * (uiScaleEnabled ? GDK_SCALE_MULTIPLIER : 1));
             x11gd.isScaleFactorDefault.set(false);
+            x11gd.bounds = x11gd.getBoundsImpl();
         }
     }
 }
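For context, the rounding performed by the hunk above can be sketched in isolation. The class, method, and parameter below are hypothetical stand-ins for illustration; in particular, the real `GDK_SCALE_MULTIPLIER` constant's value is not shown in this diff, so the multiplier is passed in explicitly.

```java
public final class ScaleSketch {
    // Hypothetical stand-in for the device-scale rounding in X11GraphicsDevice.
    // gdkScaleMultiplier is a parameter here because the actual
    // GDK_SCALE_MULTIPLIER value is not visible in the diff above.
    static int deviceScale(double xftDpiScale, boolean uiScaleEnabled, double gdkScaleMultiplier) {
        // Multiply the Xft DPI scale by the GDK multiplier only when
        // UI scaling is enabled, then round to the nearest integer scale.
        return (int) Math.round(xftDpiScale * (uiScaleEnabled ? gdkScaleMultiplier : 1));
    }

    public static void main(String[] args) {
        // With UI scaling disabled, 1.25 rounds down to an integer scale of 1.
        System.out.println(deviceScale(1.25, false, 2.0));
    }
}
```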

View File

@@ -61,6 +61,7 @@ public final class WLClipboard extends SunClipboard {
     // A handle to the native clipboard representation, 0 if not available.
     private long clipboardNativePtr; // guarded by 'this'
+    private long ourOfferNativePtr; // guarded by 'this'
     // The list of numeric format IDs the current clipboard is available in;
     // could be null or empty.
@@ -134,8 +135,19 @@ public final class WLClipboard extends SunClipboard {
     protected void setContentsNative(Transferable contents) {
         // The server requires "serial number of the event that triggered this request"
         // as a proof of the right to copy data.
-        WLPointerEvent wlPointerEvent = WLToolkit.getInputState().eventWithSerial();
-        long eventSerial = wlPointerEvent == null ? 0 : wlPointerEvent.getSerial();
+        // It is not specified which event's serial number the Wayland server expects here,
+        // so the following is a speculation based on experiments.
+        // The worst case is that a "wrong" serial will be silently ignored, and our clipboard
+        // will be out of sync with the real one that Wayland maintains.
+        long eventSerial = isPrimary
+                ? WLToolkit.getInputState().pointerButtonSerial()
+                : WLToolkit.getInputState().keySerial();
+        if (!isPrimary && eventSerial == 0) {
+            // The "regular" clipboard's content can be changed with either a mouse click
+            // (like on a menu item) or with the keyboard (Ctrl-C).
+            eventSerial = WLToolkit.getInputState().pointerButtonSerial();
+        }
         if (log.isLoggable(PlatformLogger.Level.FINE)) {
             log.fine("Clipboard: About to offer new contents using Wayland event serial " + eventSerial);
         }
@@ -161,7 +173,12 @@ public final class WLClipboard extends SunClipboard {
             log.fine("Clipboard: Offering new contents (" + contents + ") in these MIME formats: " + Arrays.toString(mime));
         }
-        offerData(eventSerial, mime, contents, dataOfferQueuePtr);
+        synchronized (this) {
+            if (ourOfferNativePtr != 0) {
+                cancelOffer(ourOfferNativePtr);
+            }
+            ourOfferNativePtr = offerData(eventSerial, mime, contents, dataOfferQueuePtr);
+        }
         // Once we have offered the data, someone may come back and ask to provide them.
         // In that event, the transferContentsWithType() will be called from native on the
@@ -303,24 +320,25 @@ public final class WLClipboard extends SunClipboard {
      * has been made available. The list of supported formats
      * should have already been received and saved in newClipboardFormats.
      *
-     * @param nativePtr a native handle to the clipboard
+     * @param newClipboardNativePtr a native handle to the clipboard
      */
-    private void handleNewClipboard(long nativePtr) {
+    private void handleNewClipboard(long newClipboardNativePtr) {
         // Since we have a new clipboard, the existing one is no longer valid.
-        // We have now way of knowing whether this "new" one is the same as the "old" one.
+        // We have no way of knowing whether this "new" one is the same as the "old" one.
         lostOwnershipNow(null);
         if (log.isLoggable(PlatformLogger.Level.FINE)) {
-            log.fine("Clipboard: new clipboard is available: " + nativePtr);
+            log.fine("Clipboard: new clipboard is available: " + newClipboardNativePtr);
         }
         synchronized (this) {
             long oldClipboardNativePtr = clipboardNativePtr;
             if (oldClipboardNativePtr != 0) {
                 // "The client must destroy the previous selection data_offer, if any, upon receiving this event."
                 destroyClipboard(oldClipboardNativePtr);
             }
             clipboardFormats = newClipboardFormats;
-            clipboardNativePtr = nativePtr; // Could be NULL
+            clipboardNativePtr = newClipboardNativePtr; // Could be NULL
+            newClipboardFormats = new ArrayList<>(INITIAL_MIME_FORMATS_COUNT);
@@ -328,6 +346,13 @@ public final class WLClipboard extends SunClipboard {
}
}
private void handleOfferCancelled(long offerNativePtr) {
synchronized (this) {
assert offerNativePtr == ourOfferNativePtr;
ourOfferNativePtr = 0;
}
}
@Override
protected void registerClipboardViewerChecked() {
// TODO: is there any need to do more here?
@@ -416,8 +441,8 @@ public final class WLClipboard extends SunClipboard {
private static native long createDataOfferQueue();
private static native void dispatchDataOfferQueueImpl(long dataOfferQueuePtr);
private native long initNative(boolean isPrimary);
private native void offerData(long eventSerial, String[] mime, Object data, long dataOfferQueuePtr);
private native void cancelOffer(long eventSerial); // TODO: this is unused, delete, maybe?
private native long offerData(long eventSerial, String[] mime, Object data, long dataOfferQueuePtr);
private native void cancelOffer(long offerNativePtr);
private native int requestDataInFormat(long clipboardNativePtr, String mime);
private native void destroyClipboard(long clipboardNativePtr);
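The synchronized bookkeeping above (cancel the previous native offer under the same lock that publishes the new one, and zero the pointer in handleOfferCancelled) can be sketched as a standalone holder. `OfferHolder` and its method names are hypothetical, used only to illustrate the locking pattern:

```java
import java.util.function.LongConsumer;

// Hypothetical sketch of the offer-replacement pattern: the previous native
// offer is cancelled under the same lock that publishes the new one, so a
// cancellation callback can never observe a stale pointer.
final class OfferHolder {
    private long offerPtr; // 0 means "no active offer"

    synchronized void replace(long newPtr, LongConsumer cancel) {
        if (offerPtr != 0) cancel.accept(offerPtr); // cancel the old offer first
        offerPtr = newPtr;
    }

    synchronized void cancelled(long ptr) {
        assert ptr == offerPtr; // only the current offer can be cancelled
        offerPtr = 0;
    }

    synchronized long current() { return offerPtr; }
}
```

This mirrors how `offerData()`'s return value replaces `ourOfferNativePtr` and how `handleOfferCancelled()` clears it.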


@@ -1,6 +1,6 @@
/*
* Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2022, JetBrains s.r.o.. All rights reserved.
* Copyright (c) 2022-2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2022-2024, JetBrains s.r.o.. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -73,7 +73,6 @@ import java.awt.image.ColorModel;
import java.awt.image.VolatileImage;
import java.awt.peer.ComponentPeer;
import java.awt.peer.ContainerPeer;
import java.awt.peer.KeyboardFocusManagerPeer;
import java.util.ArrayList;
import java.util.Objects;
import java.util.function.Supplier;
@@ -86,7 +85,7 @@ public class WLComponentPeer implements ComponentPeer {
// mapping of AWT cursor types to X cursor names
// multiple variants can be specified, that will be tried in order
private static final String[][] CURSOR_NAMES = {
{"default", "arrow"}, // DEFAULT_CURSOR
{"default", "arrow", "left_ptr", "left_arrow"}, // DEFAULT_CURSOR
{"crosshair"}, // CROSSHAIR_CURSOR
{"text", "xterm"}, // TEXT_CURSOR
{"wait", "watch"}, // WAIT_CURSOR
@@ -101,7 +100,6 @@ public class WLComponentPeer implements ComponentPeer {
{"hand"}, // HAND_CURSOR
{"move"}, // MOVE_CURSOR
};
private static final int WHEEL_SCROLL_AMOUNT = 3;
private static final int MINIMUM_WIDTH = 1;
private static final int MINIMUM_HEIGHT = 1;
@@ -121,10 +119,10 @@ public class WLComponentPeer implements ComponentPeer {
boolean visible = false;
private final Object dataLock = new Object();
int width; // in native pixels, protected by dataLock
int height; // in native pixels, protected by dataLock
int wlBufferScale; // protected by dataLock
boolean sizeIsBeingConfigured = false; // protected by dataLock
int displayScale; // protected by dataLock
double effectiveScale; // protected by dataLock
private final WLSize wlSize = new WLSize();
static {
initIDs();
@@ -137,32 +135,32 @@ public class WLComponentPeer implements ComponentPeer {
this.target = target;
this.background = target.getBackground();
Dimension size = constrainSize(target.getBounds().getSize());
width = size.width;
height = size.height;
final WLGraphicsConfig config = (WLGraphicsConfig)target.getGraphicsConfiguration();
wlBufferScale = config.getWlScale();
displayScale = config.getDisplayScale();
effectiveScale = config.getEffectiveScale();
wlSize.deriveFromJavaSize(size.width, size.height);
surfaceData = config.createSurfaceData(this);
nativePtr = nativeCreateFrame();
paintArea = new WLRepaintArea();
if (log.isLoggable(Level.FINE)) {
log.fine("WLComponentPeer: target=" + target + " with size=" + width + "x" + height);
log.fine("WLComponentPeer: target=" + target + " with size=" + wlSize);
}
// TODO
// setup parent window for target
}
public int getWidth() {
int getDisplayScale() {
synchronized (dataLock) {
return width;
return displayScale;
}
}
public int getWidth() {
return wlSize.getJavaWidth();
}
public int getHeight() {
synchronized (dataLock) {
return height;
}
return wlSize.getJavaHeight();
}
public Color getBackground() {
@@ -272,14 +270,42 @@ public class WLComponentPeer implements ComponentPeer {
private static Window getToplevelFor(Component component) {
Container container = component instanceof Container c ? c : component.getParent();
for(Container p = container; p != null; p = p.getParent()) {
if (p instanceof Window) {
return (Window)p;
if (p instanceof Window window && !isWlPopup(window)) {
return window;
}
}
return null;
}
static Point getRelativeLocation(Component c, Window toplevel) {
Objects.requireNonNull(c);
if (toplevel == null) {
return c.getLocation();
}
int x = 0, y = 0;
while (c != null) {
if (c instanceof Window window) {
// The location of non-popup windows has no relevance since
// there are no absolute coordinates in Wayland.
// The position of popup windows, on the other hand, is set relative to
// their parent toplevel.
if (isWlPopup(window)) {
x += c.getX();
y += c.getY();
}
break;
}
x += c.getX();
y += c.getY();
c = c.getParent();
}
return new Point(x, y);
}
protected void wlSetVisible(boolean v) {
synchronized (getStateLock()) {
if (this.visible == v) return;
@@ -305,9 +331,7 @@ public class WLComponentPeer implements ComponentPeer {
final Window toplevel = getToplevelFor(popupParent);
// We need to provide popup "parent" location relative to
// the surface it is painted upon:
final Point toplevelLocation = toplevel == null
? new Point(popupParent.getX(), popupParent.getY())
: SwingUtilities.convertPoint(popupParent, 0, 0, toplevel);
final Point toplevelLocation = getRelativeLocation(popupParent, toplevel);
final int parentX = javaUnitsToSurfaceUnits(toplevelLocation.x);
final int parentY = javaUnitsToSurfaceUnits(toplevelLocation.y);
@@ -324,7 +348,7 @@ public class WLComponentPeer implements ComponentPeer {
popupLog.fine("\toffset from anchor: " + offsetFromParent);
}
nativeCreateWLPopup(nativePtr, getParentNativePtr(target),
nativeCreateWLPopup(nativePtr, getNativePtrFor(toplevel),
thisWidth, thisHeight,
parentX + offsetX, parentY + offsetY);
} else {
@@ -374,53 +398,34 @@ public class WLComponentPeer implements ComponentPeer {
void updateSurfaceData() {
SurfaceData.convertTo(WLSurfaceDataExt.class, surfaceData).revalidate(
getBufferWidth(), getBufferHeight(), getBufferScale());
getBufferWidth(), getBufferHeight(), getDisplayScale());
}
public void updateSurfaceSize() {
assert SunToolkit.isAWTLockHeldByCurrentThread();
// Note: must be called after a buffer of proper size has been attached to the surface,
// but the surface has not yet been committed. Otherwise, the sizes may get out of sync,
// but the surface has not yet been committed. Otherwise, the sizes will get out of sync,
// which may result in visual artifacts.
int thisWidth = javaUnitsToSurfaceUnits(getWidth());
int thisHeight = javaUnitsToSurfaceUnits(getHeight());
Rectangle nativeVisibleBounds = getVisibleBounds();
nativeVisibleBounds.x = javaUnitsToSurfaceUnits(nativeVisibleBounds.x);
nativeVisibleBounds.y = javaUnitsToSurfaceUnits(nativeVisibleBounds.y);
nativeVisibleBounds.width = javaUnitsToSurfaceUnits(nativeVisibleBounds.width);
nativeVisibleBounds.height = javaUnitsToSurfaceUnits(nativeVisibleBounds.height);
Dimension nativeMinSize = constrainSize(getMinimumSize());
nativeMinSize.width = javaUnitsToSurfaceUnits(nativeMinSize.width);
nativeMinSize.height = javaUnitsToSurfaceUnits(nativeMinSize.height);
int surfaceWidth = wlSize.getSurfaceWidth();
int surfaceHeight = wlSize.getSurfaceHeight();
Dimension surfaceMinSize = javaUnitsToSurfaceUnits(constrainSize(getMinimumSize()));
Dimension maxSize = target.isMaximumSizeSet() ? target.getMaximumSize() : null;
Dimension nativeMaxSize = maxSize != null ? constrainSize(maxSize) : null;
if (nativeMaxSize != null) {
nativeMaxSize.width = javaUnitsToSurfaceUnits(nativeMaxSize.width);
nativeMaxSize.height = javaUnitsToSurfaceUnits(nativeMaxSize.height);
}
Dimension surfaceMaxSize = maxSize != null ? javaUnitsToSurfaceUnits(constrainSize(maxSize)) : null;
nativeSetSurfaceSize(nativePtr, thisWidth, thisHeight);
nativeSetSurfaceSize(nativePtr, surfaceWidth, surfaceHeight);
if (!surfaceData.getColorModel().hasAlpha()) {
nativeSetOpaqueRegion(nativePtr,
nativeVisibleBounds.x, nativeVisibleBounds.y,
nativeVisibleBounds.width, nativeVisibleBounds.height);
nativeSetOpaqueRegion(nativePtr, 0, 0, surfaceWidth, surfaceHeight);
}
nativeSetWindowGeometry(nativePtr,
nativeVisibleBounds.x, nativeVisibleBounds.y,
nativeVisibleBounds.width, nativeVisibleBounds.height);
nativeSetMinimumSize(nativePtr, nativeMinSize.width, nativeMinSize.height);
if (nativeMaxSize != null) {
nativeSetMaximumSize(nativePtr, nativeMaxSize.width, nativeMaxSize.height);
nativeSetWindowGeometry(nativePtr, 0, 0, surfaceWidth, surfaceHeight);
nativeSetMinimumSize(nativePtr, surfaceMinSize.width, surfaceMinSize.height);
if (surfaceMaxSize != null) {
nativeSetMaximumSize(nativePtr, surfaceMaxSize.width, surfaceMaxSize.height);
}
}
void configureWLSurface() {
if (log.isLoggable(PlatformLogger.Level.FINE)) {
log.fine(String.format("%s is configured to %dx%d with %dx scale", this, getBufferWidth(), getBufferHeight(), getBufferScale()));
log.fine(String.format("%s is configured to %dx%d pixels", this, getBufferWidth(), getBufferHeight()));
}
updateSurfaceData();
}
@@ -484,7 +489,7 @@ public class WLComponentPeer implements ComponentPeer {
public void commitToServer() {
performLocked(() -> {
if (getWLSurface(nativePtr) != 0) {
surfaceData.flush();
SurfaceData.convertTo(WLSurfaceDataExt.class, surfaceData).commit();
}
});
Toolkit.getDefaultToolkit().sync();
@@ -530,8 +535,8 @@ public class WLComponentPeer implements ComponentPeer {
if (sizeChanged) {
setSizeTo(newSize.width, newSize.height);
if (log.isLoggable(PlatformLogger.Level.FINE)) {
log.fine(String.format("%s is resizing its buffer to %dx%d with %dx scale",
this, getBufferWidth(), getBufferHeight(), getBufferScale()));
log.fine(String.format("%s is resizing its buffer to %dx%d pixels",
this, getBufferWidth(), getBufferHeight()));
}
updateSurfaceData();
layout();
@@ -542,11 +547,28 @@ public class WLComponentPeer implements ComponentPeer {
postPaintEvent();
}
boolean isSizeBeingConfigured() {
synchronized (dataLock) {
return sizeIsBeingConfigured;
}
}
private void setSizeIsBeingConfigured(boolean value) {
synchronized (dataLock) {
sizeIsBeingConfigured = value;
}
}
private void setSizeTo(int newWidth, int newHeight) {
Dimension newSize = constrainSize(newWidth, newHeight);
synchronized (dataLock) {
this.width = newSize.width;
this.height = newSize.height;
if (isSizeBeingConfigured() && wlSize.hasPixelSizeSet()) {
// Must be careful not to override the size of the Wayland surface because
// some implementations (Weston) react badly when the size of the surface
// mismatches the configured size. We can't always precisely derive the surface
// size from the Java (client) size because of scaling rounding errors.
wlSize.setJavaSize(newSize.width, newSize.height);
} else {
wlSize.deriveFromJavaSize(newSize.width, newSize.height);
}
}
@@ -557,9 +579,7 @@ public class WLComponentPeer implements ComponentPeer {
final Window toplevel = getToplevelFor(popupParent);
// We need to provide popup "parent" location relative to
// the surface it is painted upon:
final Point toplevelLocation = toplevel == null
? new Point(popupParent.getX(), popupParent.getY())
: SwingUtilities.convertPoint(popupParent, 0, 0, toplevel);
final Point toplevelLocation = getRelativeLocation(popupParent, toplevel);
final int parentX = javaUnitsToSurfaceUnits(toplevelLocation.x);
final int parentY = javaUnitsToSurfaceUnits(toplevelLocation.y);
int newXNative = javaUnitsToSurfaceUnits(newX);
@@ -577,34 +597,12 @@ public class WLComponentPeer implements ComponentPeer {
} );
}
public Rectangle getVisibleBounds() {
synchronized(dataLock) {
return new Rectangle(0, 0, width, height);
}
}
/**
* Represents the scale ratio of Wayland's backing buffer in pixel units
* to surface units. Wayland events are generated in surface units, while
* painting should be performed in pixel units.
* The ratio is enforced with nativeSetSurfaceSize().
*/
int getBufferScale() {
synchronized(dataLock) {
return wlBufferScale;
}
}
public int getBufferWidth() {
synchronized (dataLock) {
return (int)(width * effectiveScale);
}
return wlSize.getPixelWidth();
}
public int getBufferHeight() {
synchronized (dataLock) {
return (int)(height * effectiveScale);
}
return wlSize.getPixelHeight();
}
public Rectangle getBufferBounds() {
@@ -773,10 +771,16 @@ public class WLComponentPeer implements ComponentPeer {
return target.getSize();
}
void showWindowMenu(int x, int y) {
void showWindowMenu(long serial, int x, int y) {
// "This request must be used in response to some sort of user action like
// a button press, key press, or touch down event."
// So 'serial' must appertain to such an event.
assert serial != 0;
int xNative = javaUnitsToSurfaceUnits(x);
int yNative = javaUnitsToSurfaceUnits(y);
performLocked(() -> nativeShowWindowMenu(nativePtr, xNative, yNative));
performLocked(() -> nativeShowWindowMenu(serial, nativePtr, xNative, yNative));
}
@Override
@@ -854,10 +858,10 @@ public class WLComponentPeer implements ComponentPeer {
}
private void updateCursorImmediately(WLInputState inputState) {
WLComponentPeer peer = inputState.getPeer();
WLComponentPeer peer = inputState.peerForPointerEvents();
if (peer == null) return;
Cursor cursor = peer.getCursor(inputState.getPointerX(), inputState.getPointerY());
setCursor(cursor, getGraphicsDevice() != null ? getGraphicsDevice().getWlScale() : 1);
setCursor(cursor, getGraphicsDevice() != null ? getGraphicsDevice().getDisplayScale() : 1);
}
Cursor getCursor(int x, int y) {
@@ -872,6 +876,14 @@ public class WLComponentPeer implements ComponentPeer {
}
private static void setCursor(Cursor c, int scale) {
long serial = WLToolkit.getInputState().pointerEnterSerial();
if (serial == 0) {
if (log.isLoggable(Level.WARNING)) {
log.warning("setCursor aborted due to missing event serial");
}
return; // Wayland will ignore the request anyway
}
Cursor cursor;
if (c.getType() == Cursor.CUSTOM_CURSOR && !(c instanceof WLCustomCursor)) {
cursor = Cursor.getDefaultCursor();
@@ -896,7 +908,7 @@ public class WLComponentPeer implements ComponentPeer {
}
AWTAccessor.getCursorAccessor().setPData(cursor, scale, pData);
}
nativeSetCursor(pData, scale);
nativeSetCursor(pData, scale, serial);
});
}
@@ -968,16 +980,17 @@ public class WLComponentPeer implements ComponentPeer {
@Override
public boolean updateGraphicsData(GraphicsConfiguration gc) {
final int newWlScale = ((WLGraphicsConfig)gc).getWlScale();
final int newScale = ((WLGraphicsConfig)gc).getDisplayScale();
WLGraphicsDevice gd = ((WLGraphicsConfig) gc).getDevice();
gd.addWindow(this);
synchronized (dataLock) {
if (newWlScale != wlBufferScale) {
wlBufferScale = newWlScale;
if (newScale != displayScale) {
displayScale = newScale;
effectiveScale = ((WLGraphicsConfig)gc).getEffectiveScale();
wlSize.updateWithNewScale();
if (log.isLoggable(PlatformLogger.Level.FINE)) {
log.fine(String.format("%s is updating buffer to %dx%d with %dx scale", this, getBufferWidth(), getBufferHeight(), wlBufferScale));
log.fine(String.format("%s is updating buffer to %dx%d pixels", this, getBufferWidth(), getBufferHeight()));
}
updateSurfaceData();
postPaintEvent();
@@ -1020,7 +1033,16 @@ public class WLComponentPeer implements ComponentPeer {
}
final void activate() {
performLocked(() -> nativeActivate(nativePtr));
// "The serial can come from an input or focus event."
long serial = WLToolkit.getInputState().keyboardEnterSerial();
long surface = WLToolkit.getInputState().surfaceForKeyboardInput();
if (serial != 0) {
performLocked(() -> nativeActivate(serial, nativePtr, surface));
} else {
if (log.isLoggable(Level.WARNING)) {
log.warning("activate() aborted due to missing keyboard enter event serial");
}
}
}
private static native void initIDs();
@@ -1043,8 +1065,8 @@ public class WLComponentPeer implements ComponentPeer {
protected native void nativeDisposeFrame(long ptr);
private native long getWLSurface(long ptr);
private native void nativeStartDrag(long ptr);
private native void nativeStartResize(long ptr, int edges);
private native void nativeStartDrag(long serial, long ptr);
private native void nativeStartResize(long serial, long ptr, int edges);
private native void nativeSetTitle(long ptr, String title);
private native void nativeRequestMinimized(long ptr);
@@ -1058,20 +1080,22 @@ public class WLComponentPeer implements ComponentPeer {
private native void nativeSetWindowGeometry(long ptr, int x, int y, int width, int height);
private native void nativeSetMinimumSize(long ptr, int width, int height);
private native void nativeSetMaximumSize(long ptr, int width, int height);
private static native void nativeSetCursor(long pData, int scale);
private static native void nativeSetCursor(long pData, int scale, long pointerEnterSerial);
private static native long nativeGetPredefinedCursor(String name, int scale);
private static native long nativeDestroyPredefinedCursor(long pData);
private native void nativeShowWindowMenu(long ptr, int x, int y);
private native void nativeActivate(long ptr);
private native void nativeShowWindowMenu(long serial, long ptr, int x, int y);
private native void nativeActivate(long serial, long ptr, long activatingSurfacePtr);
static long getNativePtrFor(Component component) {
final ComponentAccessor acc = AWTAccessor.getComponentAccessor();
ComponentPeer peer = acc.getPeer(component);
return ((WLComponentPeer)peer).nativePtr;
}
static long getParentNativePtr(Component target) {
Component parent = target.getParent();
if (parent == null) return 0;
final ComponentAccessor acc = AWTAccessor.getComponentAccessor();
ComponentPeer peer = acc.getPeer(parent);
return ((WLComponentPeer)peer).nativePtr;
return parent == null ? 0 : getNativePtrFor(parent);
}
private final Object state_lock = new Object();
@@ -1152,19 +1176,128 @@ public class WLComponentPeer implements ComponentPeer {
}
}
if (e.hasAxisEvent() && e.getIsAxis0Valid()) {
final MouseEvent mouseEvent = new MouseWheelEvent(getTarget(),
MouseEvent.MOUSE_WHEEL,
timestamp,
newInputState.getModifiers(),
x, y,
xAbsolute, yAbsolute,
1,
isPopupTrigger,
MouseWheelEvent.WHEEL_UNIT_SCROLL,
1,
Integer.signum(e.getAxis0Value()) * WHEEL_SCROLL_AMOUNT);
postMouseEvent(mouseEvent);
if (e.hasAxisEvent()) {
convertPointerEventToMWEParameters(e, xAxisWheelRoundRotationsAccumulator, yAxisWheelRoundRotationsAccumulator, mweConversionInfo);
if (log.isLoggable(PlatformLogger.Level.FINE)) {
log.fine("{0} -> {1}", e, mweConversionInfo);
}
// macOS's and Windows' AWT implement the following logic, and so do we:
// Shift + a vertical scroll means a horizontal scroll.
// AWT/Swing components are also aware of it.
final boolean isShiftPressed = (newInputState.getModifiers() & KeyEvent.SHIFT_DOWN_MASK) != 0;
// These values decide whether a horizontal scrolling MouseWheelEvent will be created and posted
final int horizontalMWEScrollAmount;
final double horizontalMWEPreciseRotations;
final int horizontalMWERoundRotations;
// These values decide whether a vertical scrolling MouseWheelEvent will be created and posted
final int verticalMWEScrollAmount;
final double verticalMWEPreciseRotations;
final int verticalMWERoundRotations;
if (isShiftPressed) {
// Pressing Shift makes only a horizontal scrolling MouseWheelEvent possible
verticalMWEScrollAmount = 0;
verticalMWEPreciseRotations = 0;
verticalMWERoundRotations = 0;
// Now decide which axis's values will be used to generate the horizontal MouseWheelEvent
if (mweConversionInfo.xAxisDirectionSign == mweConversionInfo.yAxisDirectionSign) {
// The scrolling directions don't contradict each other.
// Let's pick the more influential axis.
final var xAxisUnitsToScroll = mweConversionInfo.xAxisMWEScrollAmount * (
Math.abs(mweConversionInfo.xAxisMWEPreciseRotations) > Math.abs(mweConversionInfo.xAxisMWERoundRotations)
? mweConversionInfo.xAxisMWEPreciseRotations
: mweConversionInfo.xAxisMWERoundRotations );
final var yAxisUnitsToScroll = mweConversionInfo.yAxisMWEScrollAmount * (
Math.abs(mweConversionInfo.yAxisMWEPreciseRotations) > Math.abs(mweConversionInfo.yAxisMWERoundRotations)
? mweConversionInfo.yAxisMWEPreciseRotations
: mweConversionInfo.yAxisMWERoundRotations );
if (xAxisUnitsToScroll > yAxisUnitsToScroll) {
horizontalMWEScrollAmount = mweConversionInfo.xAxisMWEScrollAmount;
horizontalMWEPreciseRotations = mweConversionInfo.xAxisMWEPreciseRotations;
horizontalMWERoundRotations = mweConversionInfo.xAxisMWERoundRotations;
} else {
horizontalMWEScrollAmount = mweConversionInfo.yAxisMWEScrollAmount;
horizontalMWEPreciseRotations = mweConversionInfo.yAxisMWEPreciseRotations;
horizontalMWERoundRotations = mweConversionInfo.yAxisMWERoundRotations;
}
} else if (mweConversionInfo.yAxisMWERoundRotations != 0 || mweConversionInfo.yAxisMWEPreciseRotations != 0) {
// The scrolling directions contradict.
// I think consistently choosing the Y axis values (unless they're zero) provides the most expected UI behavior here.
horizontalMWEScrollAmount = mweConversionInfo.yAxisMWEScrollAmount;
horizontalMWEPreciseRotations = mweConversionInfo.yAxisMWEPreciseRotations;
horizontalMWERoundRotations = mweConversionInfo.yAxisMWERoundRotations;
} else {
horizontalMWEScrollAmount = mweConversionInfo.xAxisMWEScrollAmount;
horizontalMWEPreciseRotations = mweConversionInfo.xAxisMWEPreciseRotations;
horizontalMWERoundRotations = mweConversionInfo.xAxisMWERoundRotations;
}
} else {
// Shift is not pressed, so both horizontal and vertical MouseWheelEvents are possible.
horizontalMWEScrollAmount = mweConversionInfo.xAxisMWEScrollAmount;
horizontalMWEPreciseRotations = mweConversionInfo.xAxisMWEPreciseRotations;
horizontalMWERoundRotations = mweConversionInfo.xAxisMWERoundRotations;
verticalMWEScrollAmount = mweConversionInfo.yAxisMWEScrollAmount;
verticalMWEPreciseRotations = mweConversionInfo.yAxisMWEPreciseRotations;
verticalMWERoundRotations = mweConversionInfo.yAxisMWERoundRotations;
}
if (e.xAxisHasStopEvent()) {
xAxisWheelRoundRotationsAccumulator.reset();
}
if (e.yAxisHasStopEvent()) {
yAxisWheelRoundRotationsAccumulator.reset();
}
if (verticalMWERoundRotations != 0 || verticalMWEPreciseRotations != 0) {
assert(verticalMWEScrollAmount > 0);
final MouseEvent mouseEvent = new MouseWheelEvent(getTarget(),
MouseEvent.MOUSE_WHEEL,
timestamp,
// Making sure the event will cause scrolling along the vertical axis
newInputState.getModifiers() & ~KeyEvent.SHIFT_DOWN_MASK,
x, y,
xAbsolute, yAbsolute,
1,
isPopupTrigger,
MouseWheelEvent.WHEEL_UNIT_SCROLL,
verticalMWEScrollAmount,
verticalMWERoundRotations,
verticalMWEPreciseRotations);
postMouseEvent(mouseEvent);
}
if (horizontalMWERoundRotations != 0 || horizontalMWEPreciseRotations != 0) {
assert(horizontalMWEScrollAmount > 0);
final MouseEvent mouseEvent = new MouseWheelEvent(getTarget(),
MouseEvent.MOUSE_WHEEL,
timestamp,
// Making sure the event will cause scrolling along the horizontal axis
newInputState.getModifiers() | KeyEvent.SHIFT_DOWN_MASK,
x, y,
xAbsolute, yAbsolute,
1,
isPopupTrigger,
MouseWheelEvent.WHEEL_UNIT_SCROLL,
horizontalMWEScrollAmount,
horizontalMWERoundRotations,
horizontalMWEPreciseRotations);
postMouseEvent(mouseEvent);
}
}
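The modifier handling above can be reduced to a tiny helper: when Shift is held, the event is posted as a horizontal scroll (with SHIFT_DOWN_MASK forced on), otherwise the vertical event has the mask forced off. `WheelAxisRouter` is a hypothetical name for illustration only:

```java
import java.awt.event.InputEvent;

// Hypothetical helper illustrating the modifier logic: AWT/Swing components
// treat a MouseWheelEvent carrying SHIFT_DOWN_MASK as a horizontal scroll.
final class WheelAxisRouter {
    // Ensure the event scrolls along the vertical axis.
    static int modifiersForVertical(int modifiers) {
        return modifiers & ~InputEvent.SHIFT_DOWN_MASK;
    }

    // Ensure the event scrolls along the horizontal axis.
    static int modifiersForHorizontal(int modifiers) {
        return modifiers | InputEvent.SHIFT_DOWN_MASK;
    }
}
```

This matches the two MouseWheelEvent constructions above, which clear or set SHIFT_DOWN_MASK on `newInputState.getModifiers()`.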
if (e.hasMotionEvent()) {
@@ -1195,12 +1328,182 @@ public class WLComponentPeer implements ComponentPeer {
}
}
void startDrag() {
performLocked(() -> nativeStartDrag(nativePtr));
/**
* Accumulates fractional parts of wheel rotation steps until their absolute sum represents one or more full step(s).
* This allows implementing smoother scrolling, e.g., the sequence of wl_pointer::axis events with values
* [0.2, 0.1, 0.4, 0.4] can be accumulated into 1.1=0.2+0.1+0.4+0.4, making it possible to
* generate a MouseWheelEvent with wheelRotation=1
* (instead of four attempts that would each yield wheelRotation=0 due to the double->int truncation)
*/
private static final class MouseWheelRoundRotationsAccumulator {
/**
* This method is intended to accumulate fractional numbers of wheel rotations.
*
* @param fractionalRotations - fractional number of wheel rotations (usually obtained from a {@code wl_pointer::axis} event)
* @return The number of wheel round rotations accumulated
* @see #accumulateSteps120Rotations
*/
public int accumulateFractionalRotations(double fractionalRotations) {
// The code assumes that the target component ({@link WLComponentPeer#target}) never changes.
// If it did, all the accumulating fields would have to be reset each time the target changed.
accumulatedFractionalRotations += fractionalRotations;
final int result = (int)accumulatedFractionalRotations;
accumulatedFractionalRotations -= result;
return result;
}
/**
* This method is intended to accumulate 1/120 fractions of a rotation step.
*
* @param steps120Rotations - a number of 1/120 parts of a wheel step (so that, e.g.,
* 30 means one quarter of a step,
* 240 means two steps,
* -240 means two steps in the negative direction,
* 540 means 4.5 steps).
* Usually obtained from a {@code wl_pointer::axis_discrete}/{@code axis_value120} event.
* @return The number of wheel round rotations accumulated
* @see #accumulateFractionalRotations
*/
public int accumulateSteps120Rotations(int steps120Rotations) {
// The code assumes that the target component ({@link WLComponentPeer#target}) never changes.
// If it did, all the accumulating fields would have to be reset each time the target changed.
accumulatedSteps120Rotations += steps120Rotations;
final int result = accumulatedSteps120Rotations / 120;
accumulatedSteps120Rotations %= 120;
return result;
}
public void reset() {
accumulatedFractionalRotations = 0;
accumulatedSteps120Rotations = 0;
}
private double accumulatedFractionalRotations = 0;
private int accumulatedSteps120Rotations = 0;
}
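The accumulator's two strategies can be demonstrated with a standalone sketch: fractional rotations are summed and the whole part is emitted (the remainder is carried over), while axis_value120 units are divided by 120 with the remainder kept. `RotationsAccumulator` is a hypothetical simplified copy for illustration:

```java
// Hypothetical standalone version of the accumulator described above:
// fractional wheel deltas are summed until one or more whole steps emerge.
final class RotationsAccumulator {
    private double fractional = 0;
    private int steps120 = 0;

    // Accumulate fractional rotations; return the whole steps accumulated so far.
    int accumulateFractional(double rotations) {
        fractional += rotations;
        int whole = (int) fractional; // truncate toward zero
        fractional -= whole;          // keep the remainder for later events
        return whole;
    }

    // Accumulate 1/120-step units; 120 units equal one full wheel step.
    int accumulateSteps120(int value120) {
        steps120 += value120;
        int whole = steps120 / 120;
        steps120 %= 120;
        return whole;
    }
}
```

With the sequence [0.2, 0.1, 0.4, 0.4] from the javadoc, the fourth call finally returns 1 whole rotation, matching the example above.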
private final MouseWheelRoundRotationsAccumulator xAxisWheelRoundRotationsAccumulator = new MouseWheelRoundRotationsAccumulator();
private final MouseWheelRoundRotationsAccumulator yAxisWheelRoundRotationsAccumulator = new MouseWheelRoundRotationsAccumulator();
private static final class PointerEventToMWEConversionInfo {
public double xAxisVector = 0;
public int xAxisSteps120 = 0;
public int xAxisDirectionSign = 0;
public double xAxisMWEPreciseRotations = 0;
public int xAxisMWERoundRotations = 0;
public int xAxisMWEScrollAmount = 0;
public double yAxisVector = 0;
public int yAxisSteps120 = 0;
public int yAxisDirectionSign = 0;
public double yAxisMWEPreciseRotations = 0;
public int yAxisMWERoundRotations = 0;
public int yAxisMWEScrollAmount = 0;
private final StringBuilder toStringBuf = new StringBuilder(1024);
@Override
public String toString() {
toStringBuf.setLength(0);
toStringBuf.append("PointerEventToMWEConversionInfo[")
.append("xAxisVector=" ).append(xAxisVector ).append(", ")
.append("xAxisSteps120=" ).append(xAxisSteps120 ).append(", ")
.append("xAxisDirectionSign=" ).append(xAxisDirectionSign ).append(", ")
.append("xAxisMWEPreciseRotations=").append(xAxisMWEPreciseRotations).append(", ")
.append("xAxisMWERoundRotations=" ).append(xAxisMWERoundRotations ).append(", ")
.append("xAxisMWEScrollAmount=" ).append(xAxisMWEScrollAmount ).append(", ")
.append("yAxisVector=" ).append(yAxisVector ).append(", ")
.append("yAxisSteps120=" ).append(yAxisSteps120 ).append(", ")
.append("yAxisDirectionSign=" ).append(yAxisDirectionSign ).append(", ")
.append("yAxisMWEPreciseRotations=").append(yAxisMWEPreciseRotations).append(", ")
.append("yAxisMWERoundRotations=" ).append(yAxisMWERoundRotations ).append(", ")
.append("yAxisMWEScrollAmount=" ).append(yAxisMWEScrollAmount )
.append(']');
return toStringBuf.toString();
}
}
private final PointerEventToMWEConversionInfo mweConversionInfo = new PointerEventToMWEConversionInfo();
private static void convertPointerEventToMWEParameters(
WLPointerEvent dispatchingEvent,
MouseWheelRoundRotationsAccumulator xAxisWheelRoundRotationsAccumulator,
MouseWheelRoundRotationsAccumulator yAxisWheelRoundRotationsAccumulator,
PointerEventToMWEConversionInfo mweConversionInfo) {
// WLPointerEvent -> MouseWheelEvent conversion constants.
// Please keep in mind that they're all related, so that changing one may require adjusting the others
// (or altering this conversion routine).
// XToolkit uses 3 units per wheel step, and so do we here to preserve the user experience
final int STEPS120_MWE_SCROLL_AMOUNT = 3;
// For touchpad scrolling, it's worth being able to scroll the minimum possible number of units (i.e. 1)
final int VECTOR_MWE_SCROLL_AMOUNT = 1;
// 0.28 has experimentally been found as providing a good balance between
// wheel scrolling sensitivity and touchpad scrolling sensitivity
final double VECTOR_LENGTH_TO_MWE_ROTATIONS_FACTOR = 0.28;
mweConversionInfo.xAxisVector = dispatchingEvent.xAxisHasVectorValue() ? dispatchingEvent.getXAxisVectorValue() : 0;
mweConversionInfo.xAxisSteps120 = dispatchingEvent.xAxisHasSteps120Value() ? dispatchingEvent.getXAxisSteps120Value() : 0;
// Converting the X axis Wayland values to MouseWheelEvent parameters.
// wl_pointer::axis_discrete/axis_value120 are preferred over wl_pointer::axis because
// they're closer to MouseWheelEvent by their nature.
if (mweConversionInfo.xAxisSteps120 != 0) {
mweConversionInfo.xAxisDirectionSign = Integer.signum(mweConversionInfo.xAxisSteps120);
mweConversionInfo.xAxisMWEPreciseRotations = mweConversionInfo.xAxisSteps120 / 120d;
mweConversionInfo.xAxisMWERoundRotations = xAxisWheelRoundRotationsAccumulator.accumulateSteps120Rotations(mweConversionInfo.xAxisSteps120);
// It would probably be better to calculate the scrollAmount taking the xAxisVector value into
// consideration, so that the wheel scrolling speed could be adjusted via some system settings.
// However, neither Gnome nor KDE currently provide such a setting, making it difficult to test
// how well such an approach would work. So leaving it as is for now.
mweConversionInfo.xAxisMWEScrollAmount = STEPS120_MWE_SCROLL_AMOUNT;
} else {
mweConversionInfo.xAxisDirectionSign = (int)Math.signum(mweConversionInfo.xAxisVector);
mweConversionInfo.xAxisMWEPreciseRotations = mweConversionInfo.xAxisVector * VECTOR_LENGTH_TO_MWE_ROTATIONS_FACTOR;
mweConversionInfo.xAxisMWERoundRotations = xAxisWheelRoundRotationsAccumulator.accumulateFractionalRotations(mweConversionInfo.xAxisMWEPreciseRotations);
mweConversionInfo.xAxisMWEScrollAmount = VECTOR_MWE_SCROLL_AMOUNT;
}
mweConversionInfo.yAxisVector = dispatchingEvent.yAxisHasVectorValue() ? dispatchingEvent.getYAxisVectorValue() : 0;
mweConversionInfo.yAxisSteps120 = dispatchingEvent.yAxisHasSteps120Value() ? dispatchingEvent.getYAxisSteps120Value() : 0;
// Converting the Y axis Wayland values to MouseWheelEvent parameters.
// (Currently, the routine is exactly like for X axis)
if (mweConversionInfo.yAxisSteps120 != 0) {
mweConversionInfo.yAxisDirectionSign = Integer.signum(mweConversionInfo.yAxisSteps120);
mweConversionInfo.yAxisMWEPreciseRotations = mweConversionInfo.yAxisSteps120 / 120d;
mweConversionInfo.yAxisMWERoundRotations = yAxisWheelRoundRotationsAccumulator.accumulateSteps120Rotations(mweConversionInfo.yAxisSteps120);
mweConversionInfo.yAxisMWEScrollAmount = STEPS120_MWE_SCROLL_AMOUNT;
} else {
mweConversionInfo.yAxisDirectionSign = (int)Math.signum(mweConversionInfo.yAxisVector);
mweConversionInfo.yAxisMWEPreciseRotations = mweConversionInfo.yAxisVector * VECTOR_LENGTH_TO_MWE_ROTATIONS_FACTOR;
mweConversionInfo.yAxisMWERoundRotations = yAxisWheelRoundRotationsAccumulator.accumulateFractionalRotations(mweConversionInfo.yAxisMWEPreciseRotations);
mweConversionInfo.yAxisMWEScrollAmount = VECTOR_MWE_SCROLL_AMOUNT;
}
}
- void startResize(int edges) {
- performLocked(() -> nativeStartResize(nativePtr, edges));
+ void startDrag(long serial) {
+ // "This request must be used in response to some sort of user action like a button press,
+ // key press, or touch down event. The passed serial is used to determine the type
+ // of interactive move (touch, pointer, etc)."
+ assert serial != 0;
+ performLocked(() -> nativeStartDrag(serial, nativePtr));
}
+ void startResize(long serial, int edges) {
+ // "This request must be used in response to some sort of user action like a button press,
+ // key press, or touch down event. The passed serial is used to determine the type
+ // of interactive resize (touch, pointer, etc)."
+ assert serial != 0;
+ performLocked(() -> nativeStartResize(serial, nativePtr, edges));
+ }
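Both requests above must carry the serial of the user input event that triggered them, and a zero serial is never a valid one, hence the assertions. A hedged sketch of the bookkeeping a toolkit needs for this, with hypothetical names (the real peer obtains the serial from its input event pipeline):

```java
// Sketch of serial tracking for interactive move/resize: the Wayland
// protocol requires these requests to reference the triggering input
// event's serial. Class and method names here are illustrative.
public class InputSerialTracker {
    private long lastInputSerial; // 0 means "no user input seen yet"

    /** Called for each qualifying input event (button press, key press, touch down). */
    void onInputEvent(long serial) {
        lastInputSerial = serial;
    }

    /** Returns the serial to attach to a move/resize request; throws if no
     *  user interaction has happened yet, since serial 0 is invalid. */
    long serialForInteractiveRequest() {
        if (lastInputSerial == 0) {
            throw new IllegalStateException("no input event to attribute the request to");
        }
        return lastInputSerial;
    }

    public static void main(String[] args) {
        InputSerialTracker t = new InputSerialTracker();
        t.onInputEvent(42);
        System.out.println(t.serialForInteractiveRequest()); // prints 42
    }
}
```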
/**
@@ -1212,7 +1515,7 @@ public class WLComponentPeer implements ComponentPeer {
return value;
} else {
synchronized (dataLock) {
- return (int)(value * wlBufferScale / effectiveScale);
+ return (int)(value * displayScale / effectiveScale);
}
}
}
@@ -1226,38 +1529,45 @@ public class WLComponentPeer implements ComponentPeer {
return value;
} else {
synchronized (dataLock) {
- return (int)(value * effectiveScale / wlBufferScale);
+ return (int)(value * effectiveScale / displayScale);
}
}
}
- void notifyConfigured(int newXNative, int newYNative, int newWidthNative, int newHeightNative, boolean active, boolean maximized) {
- int newWidth = surfaceUnitsToJavaUnits(newWidthNative);
- int newHeight = surfaceUnitsToJavaUnits(newHeightNative);
- final long wlSurfacePtr = getWLSurface(nativePtr);
- if (!surfaceAssigned) {
- SurfaceData.convertTo(WLSurfaceDataExt.class, surfaceData).assignSurface(wlSurfacePtr);
- surfaceAssigned = true;
- }
+ Dimension javaUnitsToSurfaceUnits(Dimension d) {
+ return new Dimension(javaUnitsToSurfaceUnits(d.width), javaUnitsToSurfaceUnits(d.height));
+ }
+ void notifyConfigured(int newSurfaceX, int newSurfaceY, int newSurfaceWidth, int newSurfaceHeight, boolean active, boolean maximized) {
+ // NB: The width and height, as well as X and Y arguments specify the size and the location
+ // of the window in surface-local coordinates.
if (log.isLoggable(PlatformLogger.Level.FINE)) {
- log.fine(String.format("%s configured to %dx%d", this, newWidth, newHeight));
+ log.fine(String.format("%s configured to %dx%d surface units", this, newSurfaceWidth, newSurfaceHeight));
}
boolean isWlPopup = targetIsWlPopup();
if (isWlPopup) { // Only popups provide (relative) location
- int newX = surfaceUnitsToJavaUnits(newXNative);
- int newY = surfaceUnitsToJavaUnits(newYNative);
+ int newX = surfaceUnitsToJavaUnits(newSurfaceX);
+ int newY = surfaceUnitsToJavaUnits(newSurfaceY);
setLocationTo(newX, newY);
}
- if (newWidth != 0 && newHeight != 0) performUnlocked(() -> target.setSize(newWidth, newHeight));
+ // From xdg-shell.xml: "If the width or height arguments are zero,
+ // it means the client should decide its own window dimension".
+ boolean clientDecidesDimension = newSurfaceWidth == 0 || newSurfaceHeight == 0;
+ if (!clientDecidesDimension) {
+ changeSizeToConfigured(newSurfaceWidth, newSurfaceHeight);
+ }
- if (newWidth == 0 || newHeight == 0 || isWlPopup) {
- // From xdg-shell.xml: "If the width or height arguments are zero,
- // it means the client should decide its own window dimension".
+ if (!surfaceAssigned) {
+ long wlSurfacePtr = getWLSurface(nativePtr);
+ SurfaceData.convertTo(WLSurfaceDataExt.class, surfaceData).assignSurface(wlSurfacePtr);
+ surfaceAssigned = true;
+ }
- // In case this is the first configure after setVisible(true), we
+ if (clientDecidesDimension || isWlPopup) {
+ // In case this is the first 'configure' after setVisible(true), we
// need to post the initial paint event for the window to appear on
// the screen. In the other case, this paint event is posted
// by setBounds() eventually called from target.setSize() above.
@@ -1269,6 +1579,22 @@ public class WLComponentPeer implements ComponentPeer {
}
}
private void changeSizeToConfigured(int newSurfaceWidth, int newSurfaceHeight) {
wlSize.deriveFromSurfaceSize(newSurfaceWidth, newSurfaceHeight);
int newWidth = wlSize.getJavaWidth();
int newHeight = wlSize.getJavaHeight();
try {
// Must not confuse the size given by the server with the size set by the user.
// The former originates from the surface size in surface-local coordinates,
// while the latter is set in the client (Java) units. These are not always
// precisely convertible.
setSizeIsBeingConfigured(true);
performUnlocked(() -> target.setSize(newWidth, newHeight));
} finally {
setSizeIsBeingConfigured(false);
}
}
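changeSizeToConfigured() wraps target.setSize() in a flag set inside try/finally so that the resize path can tell a server-dictated size (which originates in surface units) apart from a user-set size (in Java units) and avoid round-tripping it through an imprecise conversion. A minimal illustration of that guard-flag pattern, with invented names:

```java
// Sketch of the try/finally guard-flag pattern: a boolean marks sizes that
// the server pushed, so the shared resize path can treat them differently
// from sizes set by client code. All names here are illustrative.
public class ConfiguredSizeGuard {
    private boolean sizeIsBeingConfigured;
    String lastPath = "none"; // records which branch handled the last setSize()

    void setSize(int width, int height) {
        // Server-originated sizes are accepted as-is; user-originated sizes
        // would be converted from Java units to surface units here.
        lastPath = sizeIsBeingConfigured ? "server" : "user";
    }

    void changeSizeToConfigured(int surfaceWidth, int surfaceHeight) {
        try {
            sizeIsBeingConfigured = true;
            setSize(surfaceWidth, surfaceHeight);
        } finally {
            sizeIsBeingConfigured = false; // always reset, even on exception
        }
    }

    public static void main(String[] args) {
        ConfiguredSizeGuard g = new ConfiguredSizeGuard();
        g.changeSizeToConfigured(800, 600);
        System.out.println(g.lastPath); // prints "server"
        g.setSize(800, 600);
        System.out.println(g.lastPath); // prints "user"
    }
}
```

The finally block matters: if setSize() throws, the flag still resets, so a later user-initiated resize is not misclassified as server-originated.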
void notifyEnteredOutput(int wlOutputID) {
// NB: May also be called from native code whenever the corresponding wl_surface enters a new output
synchronized (devices) {
@@ -1322,8 +1648,8 @@ public class WLComponentPeer implements ComponentPeer {
// Wayland's output and are removed as soon as we have left.
synchronized (devices) {
for (WLGraphicsDevice gd : devices) {
- if (gd.getWlScale() > scale) {
- scale = gd.getWlScale();
+ if (gd.getDisplayScale() > scale) {
+ scale = gd.getDisplayScale();
theDevice = gd;
}
}
@@ -1361,7 +1687,6 @@ public class WLComponentPeer implements ComponentPeer {
return bounds;
}
private Dimension constrainSize(int width, int height) {
Dimension maxBounds = getMaxBufferBounds();
return new Dimension(
@@ -1443,4 +1768,109 @@ public class WLComponentPeer implements ComponentPeer {
}
return result;
}
private class WLSize {
/**
* Represents the full size of the component in "client" units as returned by Component.getSize().
*/
private final Dimension javaSize = new Dimension(); // in the client (Java) space, protected by dataLock
/**
* Represents the full size of the component in screen pixels.
* The SurfaceData associated with this component takes its size from this value.
*/
private final Dimension pixelSize = new Dimension(); // in pixels, protected by dataLock
/**
* Represents the full size of the component in "surface-local" units;
* these are the units that Wayland uses in most of its API.
* Unless the debug scale is used (WLGraphicsEnvironment.isDebugScaleEnabled()), it is identical
* to javaSize.
*/
private final Dimension surfaceSize = new Dimension(); // in surface units, protected by dataLock
void deriveFromJavaSize(int width, int height) {
synchronized (dataLock) {
javaSize.width = width;
javaSize.height = height;
pixelSize.width = (int) (width * effectiveScale);
pixelSize.height = (int) (height * effectiveScale);
surfaceSize.width = javaUnitsToSurfaceUnits(width);
surfaceSize.height = javaUnitsToSurfaceUnits(height);
}
}
void deriveFromSurfaceSize(int width, int height) {
synchronized (dataLock) {
javaSize.width = surfaceUnitsToJavaUnits(width);
javaSize.height = surfaceUnitsToJavaUnits(height);
pixelSize.width = width * displayScale;
pixelSize.height = height * displayScale;
surfaceSize.width = width;
surfaceSize.height = height;
}
}
void updateWithNewScale() {
synchronized (dataLock) {
pixelSize.width = (int)(javaSize.width * effectiveScale);
pixelSize.height = (int)(javaSize.height * effectiveScale);
surfaceSize.width = javaUnitsToSurfaceUnits(javaSize.width);
surfaceSize.height = javaUnitsToSurfaceUnits(javaSize.height);
}
}
boolean hasPixelSizeSet() {
synchronized (dataLock) {
return pixelSize.width > 0 && pixelSize.height > 0;
}
}
void setJavaSize(int width, int height) {
synchronized (dataLock) {
javaSize.width = width;
javaSize.height = height;
}
}
int getPixelWidth() {
synchronized (dataLock) {
return pixelSize.width;
}
}
int getPixelHeight() {
synchronized (dataLock) {
return pixelSize.height;
}
}
int getJavaWidth() {
synchronized (dataLock) {
return javaSize.width;
}
}
int getJavaHeight() {
synchronized (dataLock) {
return javaSize.height;
}
}
int getSurfaceWidth() {
synchronized (dataLock) {
return surfaceSize.width;
}
}
int getSurfaceHeight() {
synchronized (dataLock) {
return surfaceSize.height;
}
}
public String toString() {
return "WLSize[client=" + javaSize + ", pixel=" + pixelSize + ", surface=" + surfaceSize + "]";
}
}
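WLSize keeps three representations of one size in sync: Java (client) units, surface-local units, and device pixels. A simplified standalone sketch of those conversions, assuming fixed scale factors (in the real class, effectiveScale and displayScale come from the peer's state and all accesses are guarded by dataLock); all names are illustrative:

```java
// Sketch of the three coordinate spaces tracked by WLSize above. The
// scale values and the identity between Java and surface units (which
// holds unless a debug scale is enabled) are simplifying assumptions.
public class SizeSpaces {
    private final double effectiveScale; // Java units -> pixels
    private final int displayScale;      // surface units -> pixels (an integer on Wayland)

    private int javaWidth, javaHeight;
    private int surfaceWidth, surfaceHeight;
    private int pixelWidth, pixelHeight;

    SizeSpaces(double effectiveScale, int displayScale) {
        this.effectiveScale = effectiveScale;
        this.displayScale = displayScale;
    }

    /** Derive all three sizes from a size set by client code (Java units). */
    void deriveFromJavaSize(int w, int h) {
        javaWidth = w;
        javaHeight = h;
        pixelWidth = (int) (w * effectiveScale);
        pixelHeight = (int) (h * effectiveScale);
        surfaceWidth = w;  // surface units coincide with Java units here
        surfaceHeight = h;
    }

    /** Derive all three sizes from a size dictated by the server (surface units). */
    void deriveFromSurfaceSize(int w, int h) {
        surfaceWidth = w;
        surfaceHeight = h;
        pixelWidth = w * displayScale;
        pixelHeight = h * displayScale;
        javaWidth = w;
        javaHeight = h;
    }

    int getPixelWidth()  { return pixelWidth; }
    int getPixelHeight() { return pixelHeight; }

    public static void main(String[] args) {
        // A 200% HiDPI setup: displayScale 2, effectiveScale 2.0.
        SizeSpaces s = new SizeSpaces(2.0, 2);
        s.deriveFromJavaSize(400, 300);
        System.out.println(s.getPixelWidth() + "x" + s.getPixelHeight()); // prints 800x600
    }
}
```

The point of storing all three is that each consumer reads the space it needs directly (SurfaceData wants pixels, Wayland wants surface units, Component.getSize() wants Java units) instead of re-deriving and possibly disagreeing on rounding.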
}


@@ -110,18 +110,11 @@ public abstract class WLDecoratedPeer extends WLWindowPeer {
}
@Override
- void notifyConfigured(int newX, int newY, int newWidthNative, int newHeightNative, boolean active, boolean maximized) {
- super.notifyConfigured(newX, newY, newWidthNative, newHeightNative, active, maximized);
+ void notifyConfigured(int newSurfaceX, int newSurfaceY, int newSurfaceWidth, int newSurfaceHeight, boolean active, boolean maximized) {
+ super.notifyConfigured(newSurfaceX, newSurfaceY, newSurfaceWidth, newSurfaceHeight, active, maximized);
decoration.setActive(active);
}
@Override
public Rectangle getVisibleBounds() {
// TODO: modify if our decorations ever acquire special effects that
// do not count into "visible bounds" of the window
return super.getVisibleBounds();
}
@Override
public void setBounds(int newX, int newY, int newWidth, int newHeight, int op) {
super.setBounds(newX, newY, newWidth, newHeight, op);

Some files were not shown because too many files have changed in this diff.