Compare commits


44 Commits

Author SHA1 Message Date
Xiaolong Peng
8f8fda7c80 8373048: Genshen: Remove dead code from Shenandoah
Reviewed-by: wkemper
2025-12-03 22:46:18 +00:00
Xiaolong Peng
db2a5420a2 8372861: Genshen: Override parallel_region_stride of ShenandoahResetBitmapClosure to a reasonable value for better parallelism
Reviewed-by: kdnilsen, shade, wkemper
2025-12-03 22:43:17 +00:00
Serguei Spitsyn
1294d55b19 8372769: Test runtime/handshake/HandshakeDirectTest.java failed - JVMTI ERROR 13
Reviewed-by: lmesnik, pchilanomate, cjplummer, amenkov
2025-12-03 22:42:47 +00:00
Evgeny Nikitin
9b386014a0 8373049: Update JCStress test suite
Reviewed-by: epavlova, lmesnik
2025-12-03 21:58:17 +00:00
Volodymyr Paprotski
70e2bc876a 8372816: New test sun/security/provider/acvp/ML_DSA_Intrinsic_Test.java succeeds in case of error
Reviewed-by: azeller, mdoerr
2025-12-03 21:32:29 +00:00
Alexander Zvegintsev
5ea2b64021 8372977: unnecessary gthread-2.0 loading
Reviewed-by: prr, kizune
2025-12-03 20:03:33 +00:00
Patricio Chilano Mateo
e534ee9932 8364343: Virtual Thread transition management needs to be independent of JVM TI
Co-authored-by: Alan Bateman <alanb@openjdk.org>
Reviewed-by: coleenp, dholmes, sspitsyn
2025-12-03 20:01:45 +00:00
Brian Burkhalter
ba777f6610 8372851: Modify java/io/File/GetXSpace.java to print path on failure of native call
Reviewed-by: jpai, naoto
2025-12-03 19:58:53 +00:00
Brian Burkhalter
8a5db916af 8171432: (fs) WindowsWatchService.Poller::run does not call ReadDirectoryChangesW after a ERROR_NOTIFY_ENUM_DIR
Reviewed-by: alanb, djelinski
2025-12-03 19:58:28 +00:00
Phil Race
aff25f135a 4690476: NegativeArraySizeException from AffineTransformOp with shear
Reviewed-by: psadhukhan, jdv
2025-12-03 18:20:31 +00:00
Markus Grönlund
e93b10d084 8365400: Enhance JFR to emit file and module metadata for class loading
Reviewed-by: coleenp, egahlin
2025-12-03 18:12:58 +00:00
Joel Sikström
8d80778e05 8373023: [REDO] Remove the default value of InitialRAMPercentage
Reviewed-by: stefank, sjohanss, aboldtch
2025-12-03 18:02:06 +00:00
Justin Lu
fa6ca0bbd1 8362428: Update IANA Language Subtag Registry to Version 2025-08-25
Reviewed-by: lancea, naoto, iris
2025-12-03 17:25:05 +00:00
Chris Plummer
0bcef61a6d 8372957: After JDK-8282441 JDWP might allow some invalid FrameIDs to be used
Reviewed-by: amenkov, sspitsyn
2025-12-03 17:15:37 +00:00
Chris Plummer
c432150397 8372809: Test vmTestbase/nsk/jdi/ThreadReference/isSuspended/issuspended001/TestDescription.java failed: JVMTI_ERROR_THREAD_NOT_ALIVE
Reviewed-by: amenkov, sspitsyn
2025-12-03 16:37:10 +00:00
Daniel Fuchs
af8977e406 8372951: The property jdk.httpclient.quic.maxBidiStreams should be renamed to jdk.internal
8365794: StreamLimitTest vs H3StreamLimitReachedTest: consider renaming or merging

Reviewed-by: jpai
2025-12-03 15:32:46 +00:00
Albert Mingkun Yang
6d5bf9c801 8372999: Parallel: Old generation min size constraint broken
Reviewed-by: stefank, jsikstro
2025-12-03 15:30:14 +00:00
Axel Boldt-Christmas
3d54a802e3 8372995: SerialGC: Allow SerialHeap::allocate_loaded_archive_space expand old_gen
Reviewed-by: ayang, jsikstro
2025-12-03 15:21:11 +00:00
Nizar Benalla
1d753f1161 8373010: Update starting-next-release.html after JDK-8372940
Reviewed-by: jpai, erikj
2025-12-03 15:14:57 +00:00
Volodymyr Paprotski
829b85813a 8372703: Test compiler/arguments/TestCodeEntryAlignment.java failed: assert(allocates2(pc)) failed: not in CodeBuffer memory
Reviewed-by: mhaessig, dfenacci, thartmann
2025-12-03 14:53:35 +00:00
Erik Joelsson
87c4b01ea3 8372943: Restore --with-tools-dir
Reviewed-by: mikael, tbell, shade
2025-12-03 14:38:53 +00:00
Erik Joelsson
44e2d499f8 8372705: The riscv-64 cross-compilation build is failing in the CI
Reviewed-by: dholmes, shade
2025-12-03 14:38:32 +00:00
Joel Sikström
c0636734bd 8372993: Serial: max_eden_size is too small after JDK-8368740
Reviewed-by: ayang, aboldtch, stefank
2025-12-03 14:34:05 +00:00
Thomas Schatzl
135661b438 8372179: Remove Unused ConcurrentHashTable::MultiGetHandle
Reviewed-by: dholmes, iwalulya
2025-12-03 13:36:55 +00:00
Alan Bateman
afb6a0c2fe 8372958: SocketInputStream.read throws SocketException instead of returning -1 when input shutdown
Reviewed-by: djelinski, michaelm
2025-12-03 13:03:51 +00:00
Kerem Kat
abb75ba656 8372587: Put jdk/jfr/jvm/TestWaste.java into the ProblemList
Reviewed-by: dholmes
2025-12-03 13:01:32 +00:00
Galder Zamarreño
a655ea4845 8371792: Refactor barrier loop tests out of TestIfMinMax
Reviewed-by: chagedorn, epeter, bmaillard
2025-12-03 12:31:26 +00:00
Galder Zamarreño
125d1820f1 8372393: Document requirement for separate metallib installation with Xcode 26.1.1
Reviewed-by: erikj
2025-12-03 11:12:00 +00:00
Aleksey Shipilev
3f447edf0e 8372862: AArch64: Fix GetAndSet-acquire costs after JDK-8372188
Reviewed-by: dlong, mhaessig
2025-12-03 10:55:12 +00:00
Igor Rudenko
170ebdc5b7 8346657: Improve out of bounds exception messages for MemorySegments
Reviewed-by: jvernee, liach, mcimadamore
2025-12-03 10:37:55 +00:00
Richard Reingruber
804ce0a239 8370473: C2: Better Alignment of Vector Spill Slots
Reviewed-by: goetz, mdoerr
2025-12-03 10:29:09 +00:00
Casper Norrbin
f1a4d1bfde 8372615: Many container tests fail when running rootless on cgroup v1
Reviewed-by: sgehwolf, dholmes
2025-12-03 10:06:01 +00:00
Casper Norrbin
94977063ba 8358706: Integer overflow with -XX:MinOopMapAllocation=-1
Reviewed-by: phubner, coleenp
2025-12-03 10:03:50 +00:00
Jonas Norlinder
858d2e434d 8372584: [Linux]: Replace reading proc to get thread user CPU time with clock_gettime
Reviewed-by: dholmes, kevinw, redestad
2025-12-03 09:35:59 +00:00
Erik Österlund
3e04e11482 8372738: ZGC: C2 allocation reloc promotion deopt race
Reviewed-by: aboldtch, stefank
2025-12-03 09:28:30 +00:00
Aleksey Shipilev
177f3404df 8372733: GHA: Bump to Ubuntu 24.04
Reviewed-by: erikj, ayang
2025-12-03 09:24:33 +00:00
Ramkumar Sunderbabu
a25e6f6462 8319158: Parallel: Make TestObjectTenuringFlags use createTestJavaProcessBuilder
Reviewed-by: stefank, aboldtch
2025-12-03 09:22:13 +00:00
Jaikiran Pai
e65fd45dc7 8366101: Replace the use of ThreadTracker with ScopedValue in java.util.jar.JarFile
Reviewed-by: vyazici, alanb
2025-12-03 09:17:08 +00:00
root
b3e063c2c3 8372710: Update ProcessBuilder/Basic regex
Reviewed-by: shade, amitkumar
2025-12-03 09:04:11 +00:00
Dean Long
a1e8694109 8371306: JDK-8367002 behavior might not match existing HotSpot behavior.
Reviewed-by: thartmann, dholmes
2025-12-03 09:01:40 +00:00
Thomas Schatzl
2139c8c6e6 8372571: ResourceHashTable for some AOT data structures miss placement operator when allocating
Reviewed-by: aboldtch, jsjolen, kvn
2025-12-03 08:08:14 +00:00
Matthias Baesken
8f3d0ade11 8371893: [macOS] use dead_strip linker option to reduce binary size
Reviewed-by: erikj, lucy, serb
2025-12-03 08:06:15 +00:00
Prasanta Sadhukhan
530493fed4 8364146: JList getScrollableUnitIncrement returns 0
Reviewed-by: prr, tr
2025-12-03 02:46:02 +00:00
Joe Darcy
1f206e5e12 8372850: Update comment in SourceVersion for language evolution history for changes in 26
Reviewed-by: liach
2025-12-03 00:27:42 +00:00
190 changed files with 3715 additions and 2309 deletions

View File

@@ -59,7 +59,7 @@ on:
jobs:
build-linux:
name: build
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
container:
image: alpine:3.20

View File

@@ -48,7 +48,7 @@ on:
jobs:
build-cross-compile:
name: build
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
strategy:
fail-fast: false

View File

@@ -75,7 +75,7 @@ on:
jobs:
build-linux:
name: build
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
strategy:
fail-fast: false
@@ -115,9 +115,21 @@ jobs:
if [[ '${{ inputs.apt-architecture }}' != '' ]]; then
sudo dpkg --add-architecture ${{ inputs.apt-architecture }}
fi
sudo apt-get update
sudo apt-get install --only-upgrade apt
sudo apt-get install gcc-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} g++-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} libxrandr-dev${{ steps.arch.outputs.suffix }} libxtst-dev${{ steps.arch.outputs.suffix }} libcups2-dev${{ steps.arch.outputs.suffix }} libasound2-dev${{ steps.arch.outputs.suffix }} ${{ inputs.apt-extra-packages }}
sudo apt update
sudo apt install --only-upgrade apt
sudo apt install \
gcc-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} \
g++-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} \
libasound2-dev${{ steps.arch.outputs.suffix }} \
libcups2-dev${{ steps.arch.outputs.suffix }} \
libfontconfig1-dev${{ steps.arch.outputs.suffix }} \
libx11-dev${{ steps.arch.outputs.suffix }} \
libxext-dev${{ steps.arch.outputs.suffix }} \
libxrandr-dev${{ steps.arch.outputs.suffix }} \
libxrender-dev${{ steps.arch.outputs.suffix }} \
libxt-dev${{ steps.arch.outputs.suffix }} \
libxtst-dev${{ steps.arch.outputs.suffix }} \
${{ inputs.apt-extra-packages }}
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${{ inputs.gcc-major-version }} 100 --slave /usr/bin/g++ g++ /usr/bin/g++-${{ inputs.gcc-major-version }}
- name: 'Configure'

View File

@@ -57,7 +57,7 @@ jobs:
prepare:
name: 'Prepare the run'
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
env:
# List of platforms to exclude by default
EXCLUDED_PLATFORMS: 'alpine-linux-x64'
@@ -405,7 +405,7 @@ jobs:
with:
platform: linux-x64
bootjdk-platform: linux-x64
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
dry-run: ${{ needs.prepare.outputs.dry-run == 'true' }}
debug-suffix: -debug
@@ -419,7 +419,7 @@ jobs:
with:
platform: linux-x64
bootjdk-platform: linux-x64
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
dry-run: ${{ needs.prepare.outputs.dry-run == 'true' }}
static-suffix: "-static"

View File

@@ -541,6 +541,11 @@ href="#apple-xcode">Apple Xcode</a> on some strategies to deal with
this.</p>
<p>It is recommended that you use at least macOS 14 and Xcode 15.4, but
earlier versions may also work.</p>
<p>Starting with Xcode 26, introduced in macOS 26, the Metal toolchain
no longer comes bundled with Xcode, so it needs to be installed
separately. This can be done either via Xcode's Settings/Components
UI or on the command line by calling
<code>xcodebuild -downloadComponent metalToolchain</code>.</p>
<p>The standard macOS environment contains the basic tooling needed to
build, but for external libraries a package manager is recommended. The
JDK uses <a href="https://brew.sh/">homebrew</a> in the examples, but

View File

@@ -352,6 +352,11 @@ on some strategies to deal with this.
It is recommended that you use at least macOS 14 and Xcode 15.4, but
earlier versions may also work.
Starting with Xcode 26, introduced in macOS 26, the Metal toolchain no longer
comes bundled with Xcode, so it needs to be installed separately. This can
be done either via Xcode's Settings/Components UI or on the command line by
calling `xcodebuild -downloadComponent metalToolchain`.
The standard macOS environment contains the basic tooling needed to build, but
for external libraries a package manager is recommended. The JDK uses
[homebrew](https://brew.sh/) in the examples, but feel free to use whatever

View File

@@ -119,6 +119,9 @@ cover the new source version</li>
and
<code>test/langtools/tools/javac/preview/classReaderTest/Client.preview.out</code>:
update expected messages for preview errors and warnings</li>
<li><code>test/langtools/tools/javac/versions/Versions.java</code>: add
new source version to the set of valid sources and add new enum constant
for the new class file version.</li>
</ul>
</body>
</html>

View File

@@ -353,7 +353,12 @@ AC_DEFUN_ONCE([BASIC_SETUP_DEVKIT],
[set up toolchain on Mac OS using a path to an Xcode installation])])
UTIL_DEPRECATED_ARG_WITH(sys-root)
UTIL_DEPRECATED_ARG_WITH(tools-dir)
AC_ARG_WITH([tools-dir], [AS_HELP_STRING([--with-tools-dir],
[Point to a nonstandard Visual Studio installation location on Windows by
specifying any existing directory 2 or 3 levels below the installation
root.])]
)
if test "x$with_xcode_path" != x; then
if test "x$OPENJDK_BUILD_OS" = "xmacosx"; then

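With the option restored, a Windows build can again point configure at an existing Visual Studio directory two or three levels below the installation root; a hypothetical invocation (the path is illustrative, not taken from this change):

bash configure --with-tools-dir="C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Auxiliary/Build"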
View File

@@ -34,7 +34,7 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS],
FLAGS_SETUP_LDFLAGS_CPU_DEP([TARGET])
# Setup the build toolchain
FLAGS_SETUP_LDFLAGS_CPU_DEP([BUILD], [OPENJDK_BUILD_])
FLAGS_SETUP_LDFLAGS_CPU_DEP([BUILD], [OPENJDK_BUILD_], [BUILD_])
AC_SUBST(ADLC_LDFLAGS)
])
@@ -52,11 +52,6 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
# add --no-as-needed to disable default --as-needed link flag on some GCC toolchains
# add --icf=all (Identical Code Folding — merges identical functions)
BASIC_LDFLAGS="-Wl,-z,defs -Wl,-z,relro -Wl,-z,now -Wl,--no-as-needed -Wl,--exclude-libs,ALL"
if test "x$LINKER_TYPE" = "xgold"; then
if test x$DEBUG_LEVEL = xrelease; then
BASIC_LDFLAGS="$BASIC_LDFLAGS -Wl,--icf=all"
fi
fi
# Linux : remove unused code+data in link step
if test "x$ENABLE_LINKTIME_GC" = xtrue; then
@@ -108,6 +103,9 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
# Setup OS-dependent LDFLAGS
if test "x$OPENJDK_TARGET_OS" = xmacosx && test "x$TOOLCHAIN_TYPE" = xclang; then
if test x$DEBUG_LEVEL = xrelease; then
BASIC_LDFLAGS_JDK_ONLY="$BASIC_LDFLAGS_JDK_ONLY -Wl,-dead_strip"
fi
# FIXME: We should really generalize SetSharedLibraryOrigin instead.
OS_LDFLAGS_JVM_ONLY="-Wl,-rpath,@loader_path/. -Wl,-rpath,@loader_path/.."
OS_LDFLAGS="-mmacosx-version-min=$MACOSX_VERSION_MIN -Wl,-reproducible"
@@ -166,7 +164,8 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
################################################################################
# $1 - Either BUILD or TARGET to pick the correct OS/CPU variables to check
# conditionals against.
# $2 - Optional prefix for each variable defined.
# $2 - Optional prefix for each variable defined (OPENJDK_BUILD_ or nothing).
# $3 - Optional prefix for toolchain variables (BUILD_ or nothing).
AC_DEFUN([FLAGS_SETUP_LDFLAGS_CPU_DEP],
[
# Setup CPU-dependent basic LDFLAGS. These can differ between the target and
@@ -200,6 +199,12 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_CPU_DEP],
fi
fi
if test "x${$3LD_TYPE}" = "xgold"; then
if test x$DEBUG_LEVEL = xrelease; then
$1_CPU_LDFLAGS="${$1_CPU_LDFLAGS} -Wl,--icf=all"
fi
fi
# Export variables according to old definitions, prefix with $2 if present.
LDFLAGS_JDK_COMMON="$BASIC_LDFLAGS $BASIC_LDFLAGS_JDK_ONLY \
$OS_LDFLAGS $DEBUGLEVEL_LDFLAGS_JDK_ONLY ${$2EXTRA_LDFLAGS}"

View File

@@ -516,7 +516,7 @@ AC_DEFUN([TOOLCHAIN_EXTRACT_LD_VERSION],
if [ [[ "$LINKER_VERSION_STRING" == *gold* ]] ]; then
[ LINKER_VERSION_NUMBER=`$ECHO $LINKER_VERSION_STRING | \
$SED -e 's/.* \([0-9][0-9]*\(\.[0-9][0-9]*\)*\).*) .*/\1/'` ]
LINKER_TYPE=gold
$1_TYPE=gold
else
[ LINKER_VERSION_NUMBER=`$ECHO $LINKER_VERSION_STRING | \
$SED -e 's/.* \([0-9][0-9]*\(\.[0-9][0-9]*\)*\).*/\1/'` ]

View File

@@ -2003,6 +2003,9 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
if (bottom_type()->isa_vect() && !bottom_type()->isa_vectmask()) {
uint ireg = ideal_reg();
DEBUG_ONLY(int algm = MIN2(RegMask::num_registers(ireg), (int)Matcher::stack_alignment_in_slots()) * VMRegImpl::stack_slot_size);
assert((src_lo_rc != rc_stack) || is_aligned(src_offset, algm), "unaligned vector spill sp offset %d (src)", src_offset);
assert((dst_lo_rc != rc_stack) || is_aligned(dst_offset, algm), "unaligned vector spill sp offset %d (dst)", dst_offset);
if (ireg == Op_VecA && masm) {
int sve_vector_reg_size_in_bytes = Matcher::scalable_vector_reg_size(T_BYTE);
if (src_lo_rc == rc_stack && dst_lo_rc == rc_stack) {

View File

@@ -695,7 +695,7 @@ instruct getAndSetP(indirect mem, iRegP newval, iRegPNoSp oldval) %{
instruct getAndSetIAcq(indirect mem, iRegI newval, iRegINoSp oldval) %{
predicate(needs_acquiring_load_exclusive(n));
match(Set oldval (GetAndSetI mem newval));
ins_cost(2*VOLATILE_REF_COST);
ins_cost(VOLATILE_REF_COST);
format %{ "atomic_xchgw_acq $oldval, $newval, [$mem]" %}
ins_encode %{
__ atomic_xchgalw($oldval$$Register, $newval$$Register, as_Register($mem$$base));
@@ -706,7 +706,7 @@ instruct getAndSetIAcq(indirect mem, iRegI newval, iRegINoSp oldval) %{
instruct getAndSetLAcq(indirect mem, iRegL newval, iRegLNoSp oldval) %{
predicate(needs_acquiring_load_exclusive(n));
match(Set oldval (GetAndSetL mem newval));
ins_cost(2*VOLATILE_REF_COST);
ins_cost(VOLATILE_REF_COST);
format %{ "atomic_xchg_acq $oldval, $newval, [$mem]" %}
ins_encode %{
__ atomic_xchgal($oldval$$Register, $newval$$Register, as_Register($mem$$base));
@@ -717,7 +717,7 @@ instruct getAndSetLAcq(indirect mem, iRegL newval, iRegLNoSp oldval) %{
instruct getAndSetNAcq(indirect mem, iRegN newval, iRegNNoSp oldval) %{
predicate(needs_acquiring_load_exclusive(n) && n->as_LoadStore()->barrier_data() == 0);
match(Set oldval (GetAndSetN mem newval));
ins_cost(2*VOLATILE_REF_COST);
ins_cost(VOLATILE_REF_COST);
format %{ "atomic_xchgw_acq $oldval, $newval, [$mem]" %}
ins_encode %{
__ atomic_xchgalw($oldval$$Register, $newval$$Register, as_Register($mem$$base));
@@ -728,7 +728,7 @@ instruct getAndSetNAcq(indirect mem, iRegN newval, iRegNNoSp oldval) %{
instruct getAndSetPAcq(indirect mem, iRegP newval, iRegPNoSp oldval) %{
predicate(needs_acquiring_load_exclusive(n) && (n->as_LoadStore()->barrier_data() == 0));
match(Set oldval (GetAndSetP mem newval));
ins_cost(2*VOLATILE_REF_COST);
ins_cost(VOLATILE_REF_COST);
format %{ "atomic_xchg_acq $oldval, $newval, [$mem]" %}
ins_encode %{
__ atomic_xchgal($oldval$$Register, $newval$$Register, as_Register($mem$$base));

View File

@@ -187,7 +187,7 @@ ifelse($1$3,PAcq,INDENT(predicate(needs_acquiring_load_exclusive(n) && (n->as_Lo
$3,Acq,INDENT(predicate(needs_acquiring_load_exclusive(n));),
`dnl')
match(Set oldval (GetAndSet$1 mem newval));
ins_cost(`'ifelse($4,Acq,,2*)VOLATILE_REF_COST);
ins_cost(`'ifelse($3,Acq,,2*)VOLATILE_REF_COST);
format %{ "atomic_xchg$2`'ifelse($3,Acq,_acq) $oldval, $newval, [$mem]" %}
ins_encode %{
__ atomic_xchg`'ifelse($3,Acq,al)$2($oldval$$Register, $newval$$Register, as_Register($mem$$base));

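The root cause of the regenerated aarch64.ad entries above sits in this macro: m4's ifelse(a,b,c,d) expands to c when a equals b and to d otherwise, and the acquire marker is carried in argument $3 (as the predicate and format lines already test). The old code tested $4, which does not carry the acquire marker, so the 2* prefix was emitted even for the acquire variants, producing the ins_cost(2*VOLATILE_REF_COST) values corrected in the .ad hunks above.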
View File

@@ -1795,10 +1795,13 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
return size; // Self copy, no move.
if (bottom_type()->isa_vect() != nullptr && ideal_reg() == Op_VecX) {
int src_offset = ra_->reg2offset(src_lo);
int dst_offset = ra_->reg2offset(dst_lo);
DEBUG_ONLY(int algm = MIN2(RegMask::num_registers(ideal_reg()), (int)Matcher::stack_alignment_in_slots()) * VMRegImpl::stack_slot_size);
assert((src_lo_rc != rc_stack) || is_aligned(src_offset, algm), "unaligned vector spill sp offset %d (src)", src_offset);
assert((dst_lo_rc != rc_stack) || is_aligned(dst_offset, algm), "unaligned vector spill sp offset %d (dst)", dst_offset);
// Memory->Memory Spill.
if (src_lo_rc == rc_stack && dst_lo_rc == rc_stack) {
int src_offset = ra_->reg2offset(src_lo);
int dst_offset = ra_->reg2offset(dst_lo);
if (masm) {
__ ld(R0, src_offset, R1_SP);
__ std(R0, dst_offset, R1_SP);
@@ -1806,26 +1809,20 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
__ std(R0, dst_offset+8, R1_SP);
}
size += 16;
#ifndef PRODUCT
if (st != nullptr) {
st->print("%-7s [R1_SP + #%d] -> [R1_SP + #%d] \t// vector spill copy", "SPILL", src_offset, dst_offset);
}
#endif // !PRODUCT
}
// VectorRegister->Memory Spill.
else if (src_lo_rc == rc_vec && dst_lo_rc == rc_stack) {
VectorSRegister Rsrc = as_VectorRegister(Matcher::_regEncode[src_lo]).to_vsr();
int dst_offset = ra_->reg2offset(dst_lo);
if (PowerArchitecturePPC64 >= 9) {
if (is_aligned(dst_offset, 16)) {
if (masm) {
__ stxv(Rsrc, dst_offset, R1_SP); // matches storeV16_Power9
}
size += 4;
} else {
// Other alignment can be used by Vector API (VectorPayload in rearrangeOp,
// observed with VectorRearrangeTest.java on Power9).
if (masm) {
__ addi(R0, R1_SP, dst_offset);
__ stxvx(Rsrc, R0); // matches storeV16_Power9 (regarding element ordering)
}
size += 8;
if (masm) {
__ stxv(Rsrc, dst_offset, R1_SP); // matches storeV16_Power9
}
size += 4;
} else {
if (masm) {
__ addi(R0, R1_SP, dst_offset);
@@ -1833,24 +1830,25 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
}
size += 8;
}
#ifndef PRODUCT
if (st != nullptr) {
if (PowerArchitecturePPC64 >= 9) {
st->print("%-7s %s, [R1_SP + #%d] \t// vector spill copy", "STXV", Matcher::regName[src_lo], dst_offset);
} else {
st->print("%-7s R0, R1_SP, %d \t// vector spill copy\n\t"
"%-7s %s, [R0] \t// vector spill copy", "ADDI", dst_offset, "STXVD2X", Matcher::regName[src_lo]);
}
}
#endif // !PRODUCT
}
// Memory->VectorRegister Spill.
else if (src_lo_rc == rc_stack && dst_lo_rc == rc_vec) {
VectorSRegister Rdst = as_VectorRegister(Matcher::_regEncode[dst_lo]).to_vsr();
int src_offset = ra_->reg2offset(src_lo);
if (PowerArchitecturePPC64 >= 9) {
if (is_aligned(src_offset, 16)) {
if (masm) {
__ lxv(Rdst, src_offset, R1_SP);
}
size += 4;
} else {
if (masm) {
__ addi(R0, R1_SP, src_offset);
__ lxvx(Rdst, R0);
}
size += 8;
if (masm) {
__ lxv(Rdst, src_offset, R1_SP);
}
size += 4;
} else {
if (masm) {
__ addi(R0, R1_SP, src_offset);
@@ -1858,6 +1856,16 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
}
size += 8;
}
#ifndef PRODUCT
if (st != nullptr) {
if (PowerArchitecturePPC64 >= 9) {
st->print("%-7s %s, [R1_SP + #%d] \t// vector spill copy", "LXV", Matcher::regName[dst_lo], src_offset);
} else {
st->print("%-7s R0, R1_SP, %d \t// vector spill copy\n\t"
"%-7s %s, [R0] \t// vector spill copy", "ADDI", src_offset, "LXVD2X", Matcher::regName[dst_lo]);
}
}
#endif // !PRODUCT
}
// VectorRegister->VectorRegister.
else if (src_lo_rc == rc_vec && dst_lo_rc == rc_vec) {
@@ -1867,6 +1875,12 @@ uint MachSpillCopyNode::implementation(C2_MacroAssembler *masm, PhaseRegAlloc *r
__ xxlor(Rdst, Rsrc, Rsrc);
}
size += 4;
#ifndef PRODUCT
if (st != nullptr) {
st->print("%-7s %s, %s, %s\t// vector spill copy",
"XXLOR", Matcher::regName[dst_lo], Matcher::regName[src_lo], Matcher::regName[src_lo]);
}
#endif // !PRODUCT
}
else {
ShouldNotReachHere(); // No VR spill.

View File

@@ -73,7 +73,7 @@
do_arch_blob, \
do_arch_entry, \
do_arch_entry_init) \
do_arch_blob(compiler, 109000 WINDOWS_ONLY(+2000)) \
do_arch_blob(compiler, 120000 WINDOWS_ONLY(+2000)) \
do_stub(compiler, vector_float_sign_mask) \
do_arch_entry(x86, compiler, vector_float_sign_mask, \
vector_float_sign_mask, vector_float_sign_mask) \

View File

@@ -4305,7 +4305,7 @@ OSReturn os::get_native_priority(const Thread* const thread,
// For reference, please, see IEEE Std 1003.1-2004:
// http://www.unix.org/single_unix_specification
jlong os::Linux::total_thread_cpu_time(clockid_t clockid) {
jlong os::Linux::thread_cpu_time(clockid_t clockid) {
struct timespec tp;
int status = clock_gettime(clockid, &tp);
assert(status == 0, "clock_gettime error: %s", os::strerror(errno));
@@ -4960,20 +4960,42 @@ int os::open(const char *path, int oflag, int mode) {
return fd;
}
// Since kernel v2.6.12 the Linux ABI has had support for encoding the clock
// types in the last three bits. Bit 2 indicates whether a cpu clock refers to a
// thread or a process. Bits 1 and 0 give the type: PROF=0, VIRT=1, SCHED=2, or
// FD=3. The clock CPUCLOCK_VIRT (0b001) reports the thread's consumed user
// time. POSIX compliant implementations of pthread_getcpuclockid return the
// clock CPUCLOCK_SCHED (0b010) which reports the thread's consumed system+user
// time (as mandated by the POSIX standard POSIX.1-2024/IEEE Std 1003.1-2024
// §3.90).
static bool get_thread_clockid(Thread* thread, clockid_t* clockid, bool total) {
constexpr clockid_t CLOCK_TYPE_MASK = 3;
constexpr clockid_t CPUCLOCK_VIRT = 1;
int rc = pthread_getcpuclockid(thread->osthread()->pthread_id(), clockid);
if (rc != 0) {
// It's possible to encounter a terminated native thread that failed
// to detach itself from the VM - which should result in ESRCH.
assert_status(rc == ESRCH, rc, "pthread_getcpuclockid failed");
return false;
}
if (!total) {
clockid_t clockid_tmp = *clockid;
clockid_tmp = (clockid_tmp & ~CLOCK_TYPE_MASK) | CPUCLOCK_VIRT;
*clockid = clockid_tmp;
}
return true;
}
static jlong user_thread_cpu_time(Thread *thread);
static jlong total_thread_cpu_time(Thread *thread) {
clockid_t clockid;
int rc = pthread_getcpuclockid(thread->osthread()->pthread_id(),
&clockid);
if (rc == 0) {
return os::Linux::total_thread_cpu_time(clockid);
} else {
// It's possible to encounter a terminated native thread that failed
// to detach itself from the VM - which should result in ESRCH.
assert_status(rc == ESRCH, rc, "pthread_getcpuclockid failed");
return -1;
}
clockid_t clockid;
bool success = get_thread_clockid(thread, &clockid, true);
return success ? os::Linux::thread_cpu_time(clockid) : -1;
}
// current_thread_cpu_time(bool) and thread_cpu_time(Thread*, bool)
@@ -4984,7 +5006,7 @@ static jlong total_thread_cpu_time(Thread *thread) {
// the fast estimate available on the platform.
jlong os::current_thread_cpu_time() {
return os::Linux::total_thread_cpu_time(CLOCK_THREAD_CPUTIME_ID);
return os::Linux::thread_cpu_time(CLOCK_THREAD_CPUTIME_ID);
}
jlong os::thread_cpu_time(Thread* thread) {
@@ -4993,7 +5015,7 @@ jlong os::thread_cpu_time(Thread* thread) {
jlong os::current_thread_cpu_time(bool user_sys_cpu_time) {
if (user_sys_cpu_time) {
return os::Linux::total_thread_cpu_time(CLOCK_THREAD_CPUTIME_ID);
return os::Linux::thread_cpu_time(CLOCK_THREAD_CPUTIME_ID);
} else {
return user_thread_cpu_time(Thread::current());
}
@@ -5007,46 +5029,11 @@ jlong os::thread_cpu_time(Thread *thread, bool user_sys_cpu_time) {
}
}
// -1 on error.
static jlong user_thread_cpu_time(Thread *thread) {
pid_t tid = thread->osthread()->thread_id();
char *s;
char stat[2048];
size_t statlen;
char proc_name[64];
int count;
long sys_time, user_time;
char cdummy;
int idummy;
long ldummy;
FILE *fp;
clockid_t clockid;
bool success = get_thread_clockid(thread, &clockid, false);
os::snprintf_checked(proc_name, 64, "/proc/self/task/%d/stat", tid);
fp = os::fopen(proc_name, "r");
if (fp == nullptr) return -1;
statlen = fread(stat, 1, 2047, fp);
stat[statlen] = '\0';
fclose(fp);
// Skip pid and the command string. Note that we could be dealing with
// weird command names, e.g. user could decide to rename java launcher
// to "java 1.4.2 :)", then the stat file would look like
// 1234 (java 1.4.2 :)) R ... ...
// We don't really need to know the command string, just find the last
// occurrence of ")" and then start parsing from there. See bug 4726580.
s = strrchr(stat, ')');
if (s == nullptr) return -1;
// Skip blank chars
do { s++; } while (s && isspace((unsigned char) *s));
count = sscanf(s,"%c %d %d %d %d %d %lu %lu %lu %lu %lu %lu %lu",
&cdummy, &idummy, &idummy, &idummy, &idummy, &idummy,
&ldummy, &ldummy, &ldummy, &ldummy, &ldummy,
&user_time, &sys_time);
if (count != 13) return -1;
return (jlong)user_time * (1000000000 / os::Posix::clock_tics_per_second());
return success ? os::Linux::thread_cpu_time(clockid) : -1;
}
void os::current_thread_cpu_time_info(jvmtiTimerInfo *info_ptr) {

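For illustration, a minimal standalone sketch of the clockid rewriting described in the comment above; this is plain POSIX/Linux C++, not HotSpot code, and the CPUCLOCK_* constants mirror the kernel ABI values quoted there:

#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main() {
  clockid_t clockid;
  if (pthread_getcpuclockid(pthread_self(), &clockid) != 0) return 1;
  // Bits 1 and 0 select the clock type: PROF=0, VIRT=1, SCHED=2, FD=3.
  const clockid_t CLOCK_TYPE_MASK = 3;
  const clockid_t CPUCLOCK_VIRT = 1;
  // Rewrite the SCHED clock returned by POSIX into the user-time-only clock.
  clockid_t user_clock = (clockid & ~CLOCK_TYPE_MASK) | CPUCLOCK_VIRT;
  struct timespec total, user;
  clock_gettime(clockid, &total);    // user + system time
  clock_gettime(user_clock, &user);  // user time only
  printf("total %ld.%09ld s, user %ld.%09ld s\n",
         (long)total.tv_sec, total.tv_nsec, (long)user.tv_sec, user.tv_nsec);
  return 0;
}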
View File

@@ -142,7 +142,7 @@ class os::Linux {
static bool manually_expand_stack(JavaThread * t, address addr);
static void expand_stack_to(address bottom);
static jlong total_thread_cpu_time(clockid_t clockid);
static jlong thread_cpu_time(clockid_t clockid);
static jlong sendfile(int out_fd, int in_fd, jlong* offset, jlong count);

View File

@@ -86,9 +86,9 @@ void AOTMappedHeapWriter::init() {
if (CDSConfig::is_dumping_heap()) {
Universe::heap()->collect(GCCause::_java_lang_system_gc);
_buffer_offset_to_source_obj_table = new BufferOffsetToSourceObjectTable(/*size (prime)*/36137, /*max size*/1 * M);
_buffer_offset_to_source_obj_table = new (mtClassShared) BufferOffsetToSourceObjectTable(/*size (prime)*/36137, /*max size*/1 * M);
_dumped_interned_strings = new (mtClass)DumpedInternedStrings(INITIAL_TABLE_SIZE, MAX_TABLE_SIZE);
_fillers = new FillersTable();
_fillers = new (mtClassShared) FillersTable();
_requested_bottom = nullptr;
_requested_top = nullptr;

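On the new (mtClassShared) syntax above: HotSpot overloads operator new with a memory-tag parameter so Native Memory Tracking can attribute the C-heap allocation, and omitting that placement argument was the bug being fixed here. A simplified, self-contained C++ sketch of the pattern; the enum and the malloc-backed allocator are stand-ins, not HotSpot's real declarations:

#include <cstddef>
#include <cstdio>
#include <cstdlib>

enum MemTag { mtClass, mtClassShared };

struct TaggedObj {
  // Placement-style overload: the tag rides along with the allocation request.
  void* operator new(std::size_t size, MemTag tag) {
    std::printf("alloc %zu bytes, tag %d\n", size, (int)tag);
    return std::malloc(size);
  }
  // Matching delete, used if the constructor throws.
  void operator delete(void* p, MemTag) { std::free(p); }
  void operator delete(void* p) { std::free(p); }
};

int main() {
  TaggedObj* o = new (mtClassShared) TaggedObj(); // tagged allocation
  delete o;
  return 0;
}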
View File

@@ -357,7 +357,7 @@ InstanceKlass* LambdaProxyClassDictionary::load_and_init_lambda_proxy_class(Inst
InstanceKlass* nest_host = caller_ik->nest_host(THREAD);
assert(nest_host == shared_nest_host, "mismatched nest host");
EventClassLoad class_load_start_event;
EventClassLoad class_load_event;
// Add to class hierarchy, and do possible deoptimizations.
lambda_ik->add_to_hierarchy(THREAD);
@@ -368,8 +368,8 @@ InstanceKlass* LambdaProxyClassDictionary::load_and_init_lambda_proxy_class(Inst
if (JvmtiExport::should_post_class_load()) {
JvmtiExport::post_class_load(THREAD, lambda_ik);
}
if (class_load_start_event.should_commit()) {
SystemDictionary::post_class_load_event(&class_load_start_event, lambda_ik, ClassLoaderData::class_loader_data(class_loader()));
if (class_load_event.should_commit()) {
JFR_ONLY(SystemDictionary::post_class_load_event(&class_load_event, lambda_ik, ClassLoaderData::class_loader_data(class_loader()));)
}
lambda_ik->initialize(CHECK_NULL);

View File

@@ -89,9 +89,6 @@
#if INCLUDE_CDS
#include "classfile/systemDictionaryShared.hpp"
#endif
#if INCLUDE_JFR
#include "jfr/support/jfrTraceIdExtension.hpp"
#endif
// We generally try to create the oops directly when parsing, rather than
// allocating temporary data structures and copying the bytes twice. A
@@ -5272,8 +5269,6 @@ void ClassFileParser::fill_instance_klass(InstanceKlass* ik,
}
}
JFR_ONLY(INIT_ID(ik);)
// If we reach here, all is well.
// Now remove the InstanceKlass* from the _klass_to_deallocate field
// in order for it to not be destroyed in the ClassFileParser destructor.

View File

@@ -500,6 +500,8 @@ class ClassFileParser {
InstanceKlass* create_instance_klass(bool cf_changed_in_CFLH, const ClassInstanceInfo& cl_inst_info, TRAPS);
const ClassFileStream& stream() const { return *_stream; }
const ClassFileStream* clone_stream() const;
void set_klass_to_deallocate(InstanceKlass* klass);

View File

@@ -1684,8 +1684,8 @@ int java_lang_Thread::_name_offset;
int java_lang_Thread::_contextClassLoader_offset;
int java_lang_Thread::_eetop_offset;
int java_lang_Thread::_jvmti_thread_state_offset;
int java_lang_Thread::_jvmti_VTMS_transition_disable_count_offset;
int java_lang_Thread::_jvmti_is_in_VTMS_transition_offset;
int java_lang_Thread::_vthread_transition_disable_count_offset;
int java_lang_Thread::_is_in_vthread_transition_offset;
int java_lang_Thread::_interrupted_offset;
int java_lang_Thread::_interruptLock_offset;
int java_lang_Thread::_tid_offset;
@@ -1745,34 +1745,34 @@ void java_lang_Thread::set_jvmti_thread_state(oop java_thread, JvmtiThreadState*
java_thread->address_field_put(_jvmti_thread_state_offset, (address)state);
}
int java_lang_Thread::VTMS_transition_disable_count(oop java_thread) {
return java_thread->int_field(_jvmti_VTMS_transition_disable_count_offset);
int java_lang_Thread::vthread_transition_disable_count(oop java_thread) {
jint* addr = java_thread->field_addr<jint>(_vthread_transition_disable_count_offset);
return AtomicAccess::load(addr);
}
void java_lang_Thread::inc_VTMS_transition_disable_count(oop java_thread) {
assert(JvmtiVTMSTransition_lock->owned_by_self(), "Must be locked");
int val = VTMS_transition_disable_count(java_thread);
java_thread->int_field_put(_jvmti_VTMS_transition_disable_count_offset, val + 1);
void java_lang_Thread::inc_vthread_transition_disable_count(oop java_thread) {
assert(VThreadTransition_lock->owned_by_self(), "Must be locked");
jint* addr = java_thread->field_addr<jint>(_vthread_transition_disable_count_offset);
int val = AtomicAccess::load(addr);
AtomicAccess::store(addr, val + 1);
}
void java_lang_Thread::dec_VTMS_transition_disable_count(oop java_thread) {
assert(JvmtiVTMSTransition_lock->owned_by_self(), "Must be locked");
int val = VTMS_transition_disable_count(java_thread);
assert(val > 0, "VTMS_transition_disable_count should never be negative");
java_thread->int_field_put(_jvmti_VTMS_transition_disable_count_offset, val - 1);
void java_lang_Thread::dec_vthread_transition_disable_count(oop java_thread) {
assert(VThreadTransition_lock->owned_by_self(), "Must be locked");
jint* addr = java_thread->field_addr<jint>(_vthread_transition_disable_count_offset);
int val = AtomicAccess::load(addr);
AtomicAccess::store(addr, val - 1);
}
bool java_lang_Thread::is_in_VTMS_transition(oop java_thread) {
return java_thread->bool_field_volatile(_jvmti_is_in_VTMS_transition_offset);
bool java_lang_Thread::is_in_vthread_transition(oop java_thread) {
jboolean* addr = java_thread->field_addr<jboolean>(_is_in_vthread_transition_offset);
return AtomicAccess::load(addr);
}
void java_lang_Thread::set_is_in_VTMS_transition(oop java_thread, bool val) {
assert(is_in_VTMS_transition(java_thread) != val, "already %s transition", val ? "inside" : "outside");
java_thread->bool_field_put_volatile(_jvmti_is_in_VTMS_transition_offset, val);
}
int java_lang_Thread::is_in_VTMS_transition_offset() {
return _jvmti_is_in_VTMS_transition_offset;
void java_lang_Thread::set_is_in_vthread_transition(oop java_thread, bool val) {
assert(is_in_vthread_transition(java_thread) != val, "already %s transition", val ? "inside" : "outside");
jboolean* addr = java_thread->field_addr<jboolean>(_is_in_vthread_transition_offset);
AtomicAccess::store(addr, (jboolean)val);
}
void java_lang_Thread::clear_scopedValueBindings(oop java_thread) {

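The rewritten accessors above all follow the same pattern: writers still mutate the injected fields while holding VThreadTransition_lock (enforced by the asserts), while readers go through field_addr<T> plus AtomicAccess::load so they can sample the counter or flag without taking the lock; the lock is what keeps the separate load and store in the increment and decrement paths race-free.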
View File

@@ -375,8 +375,8 @@ class java_lang_Class : AllStatic {
#define THREAD_INJECTED_FIELDS(macro) \
macro(java_lang_Thread, jvmti_thread_state, intptr_signature, false) \
macro(java_lang_Thread, jvmti_VTMS_transition_disable_count, int_signature, false) \
macro(java_lang_Thread, jvmti_is_in_VTMS_transition, bool_signature, false) \
macro(java_lang_Thread, vthread_transition_disable_count, int_signature, false) \
macro(java_lang_Thread, is_in_vthread_transition, bool_signature, false) \
JFR_ONLY(macro(java_lang_Thread, jfr_epoch, short_signature, false))
class java_lang_Thread : AllStatic {
@@ -390,8 +390,8 @@ class java_lang_Thread : AllStatic {
static int _contextClassLoader_offset;
static int _eetop_offset;
static int _jvmti_thread_state_offset;
static int _jvmti_VTMS_transition_disable_count_offset;
static int _jvmti_is_in_VTMS_transition_offset;
static int _vthread_transition_disable_count_offset;
static int _is_in_vthread_transition_offset;
static int _interrupted_offset;
static int _interruptLock_offset;
static int _tid_offset;
@@ -444,12 +444,15 @@ class java_lang_Thread : AllStatic {
static JvmtiThreadState* jvmti_thread_state(oop java_thread);
static void set_jvmti_thread_state(oop java_thread, JvmtiThreadState* state);
static int VTMS_transition_disable_count(oop java_thread);
static void inc_VTMS_transition_disable_count(oop java_thread);
static void dec_VTMS_transition_disable_count(oop java_thread);
static bool is_in_VTMS_transition(oop java_thread);
static void set_is_in_VTMS_transition(oop java_thread, bool val);
static int is_in_VTMS_transition_offset();
static int vthread_transition_disable_count(oop java_thread);
static void inc_vthread_transition_disable_count(oop java_thread);
static void dec_vthread_transition_disable_count(oop java_thread);
static int vthread_transition_disable_count_offset() { return _vthread_transition_disable_count_offset; }
static bool is_in_vthread_transition(oop java_thread);
static void set_is_in_vthread_transition(oop java_thread, bool val);
static int is_in_vthread_transition_offset() { return _is_in_vthread_transition_offset; }
// Clear all scoped value bindings on error
static void clear_scopedValueBindings(oop java_thread);

View File

@@ -37,7 +37,7 @@
#include "runtime/handles.inline.hpp"
#include "utilities/macros.hpp"
#if INCLUDE_JFR
#include "jfr/support/jfrKlassExtension.hpp"
#include "jfr/jfr.hpp"
#endif
@@ -99,6 +99,9 @@ InstanceKlass* KlassFactory::check_shared_class_file_load_hook(
new_ik->set_classpath_index(path_index);
}
JFR_ONLY(Jfr::on_klass_creation(new_ik, parser, THREAD);)
return new_ik;
}
}
@@ -213,7 +216,7 @@ InstanceKlass* KlassFactory::create_from_stream(ClassFileStream* stream,
result->set_cached_class_file(cached_class_file);
}
JFR_ONLY(ON_KLASS_CREATION(result, parser, THREAD);)
JFR_ONLY(Jfr::on_klass_creation(result, parser, THREAD);)
#if INCLUDE_CDS
if (CDSConfig::is_dumping_archive()) {

View File

@@ -560,15 +560,6 @@ static InstanceKlass* handle_parallel_loading(JavaThread* current,
return nullptr;
}
void SystemDictionary::post_class_load_event(EventClassLoad* event, const InstanceKlass* k, const ClassLoaderData* init_cld) {
assert(event != nullptr, "invariant");
assert(k != nullptr, "invariant");
event->set_loadedClass(k);
event->set_definingClassLoader(k->class_loader_data());
event->set_initiatingClassLoader(init_cld);
event->commit();
}
// SystemDictionary::resolve_instance_class_or_null is the main function for class name resolution.
// After checking if the InstanceKlass already exists, it checks for ClassCircularityError and
// whether the thread must wait for loading in parallel. It eventually calls load_instance_class,
@@ -582,7 +573,7 @@ InstanceKlass* SystemDictionary::resolve_instance_class_or_null(Symbol* name,
assert(name != nullptr && !Signature::is_array(name) &&
!Signature::has_envelope(name), "invalid class name: %s", name == nullptr ? "nullptr" : name->as_C_string());
EventClassLoad class_load_start_event;
EventClassLoad class_load_event;
HandleMark hm(THREAD);
@@ -713,8 +704,8 @@ InstanceKlass* SystemDictionary::resolve_instance_class_or_null(Symbol* name,
return nullptr;
}
if (class_load_start_event.should_commit()) {
post_class_load_event(&class_load_start_event, loaded_class, loader_data);
if (class_load_event.should_commit()) {
JFR_ONLY(post_class_load_event(&class_load_event, loaded_class, loader_data);)
}
// Make sure we have the right class in the dictionary
@@ -789,7 +780,7 @@ InstanceKlass* SystemDictionary::resolve_hidden_class_from_stream(
const ClassLoadInfo& cl_info,
TRAPS) {
EventClassLoad class_load_start_event;
EventClassLoad class_load_event;
ClassLoaderData* loader_data;
// - for hidden classes that are not strong: create a new CLD that has a class holder and
@@ -819,15 +810,16 @@ InstanceKlass* SystemDictionary::resolve_hidden_class_from_stream(
k->add_to_hierarchy(THREAD);
// But, do not add to dictionary.
if (class_load_event.should_commit()) {
JFR_ONLY(post_class_load_event(&class_load_event, k, loader_data);)
}
k->link_class(CHECK_NULL);
// notify jvmti
if (JvmtiExport::should_post_class_load()) {
JvmtiExport::post_class_load(THREAD, k);
}
if (class_load_start_event.should_commit()) {
post_class_load_event(&class_load_start_event, k, loader_data);
}
return k;
}
@@ -1154,6 +1146,17 @@ void SystemDictionary::load_shared_class_misc(InstanceKlass* ik, ClassLoaderData
}
}
#if INCLUDE_JFR
void SystemDictionary::post_class_load_event(EventClassLoad* event, const InstanceKlass* k, const ClassLoaderData* init_cld) {
assert(event != nullptr, "invariant");
assert(k != nullptr, "invariant");
event->set_loadedClass(k);
event->set_definingClassLoader(k->class_loader_data());
event->set_initiatingClassLoader(init_cld);
event->commit();
}
#endif // INCLUDE_JFR
// This is much more lightweight than SystemDictionary::resolve_or_null
// - There's only a single Java thread at this point. No need for placeholder.
// - All supertypes of ik have been loaded
@@ -1182,6 +1185,8 @@ void SystemDictionary::preload_class(Handle class_loader, InstanceKlass* ik, TRA
}
#endif
EventClassLoad class_load_event;
ClassLoaderData* loader_data = ClassLoaderData::class_loader_data(class_loader());
oop java_mirror = ik->archived_java_mirror();
precond(java_mirror != nullptr);
@@ -1203,6 +1208,10 @@ void SystemDictionary::preload_class(Handle class_loader, InstanceKlass* ik, TRA
update_dictionary(THREAD, ik, loader_data);
}
if (class_load_event.should_commit()) {
JFR_ONLY(post_class_load_event(&class_load_event, ik, loader_data);)
}
assert(ik->is_loaded(), "Must be in at least loaded state");
}
@@ -1380,15 +1389,6 @@ InstanceKlass* SystemDictionary::load_instance_class(Symbol* name,
return loaded_class;
}
static void post_class_define_event(InstanceKlass* k, const ClassLoaderData* def_cld) {
EventClassDefine event;
if (event.should_commit()) {
event.set_definedClass(k);
event.set_definingClassLoader(def_cld);
event.commit();
}
}
void SystemDictionary::define_instance_class(InstanceKlass* k, Handle class_loader, TRAPS) {
ClassLoaderData* loader_data = k->class_loader_data();
@@ -1440,7 +1440,6 @@ void SystemDictionary::define_instance_class(InstanceKlass* k, Handle class_load
if (JvmtiExport::should_post_class_load()) {
JvmtiExport::post_class_load(THREAD, k);
}
post_class_define_event(k, loader_data);
}
// Support parallel classloading

View File

@@ -326,11 +326,10 @@ private:
static void restore_archived_method_handle_intrinsics_impl(TRAPS) NOT_CDS_RETURN;
protected:
// Used by AOTLinkedClassBulkLoader, LambdaProxyClassDictionary, and SystemDictionaryShared
// Used by AOTLinkedClassBulkLoader, LambdaProxyClassDictionary, VMClasses and SystemDictionaryShared
static bool add_loader_constraint(Symbol* name, Klass* klass_being_linked, Handle loader1,
Handle loader2);
static void post_class_load_event(EventClassLoad* event, const InstanceKlass* k, const ClassLoaderData* init_cld);
static InstanceKlass* load_shared_class(InstanceKlass* ik,
Handle class_loader,
Handle protection_domain,
@@ -342,6 +341,9 @@ protected:
static InstanceKlass* find_or_define_instance_class(Symbol* class_name,
Handle class_loader,
InstanceKlass* k, TRAPS);
JFR_ONLY(static void post_class_load_event(EventClassLoad* event,
const InstanceKlass* k,
const ClassLoaderData* init_cld);)
public:
static bool is_system_class_loader(oop class_loader);

View File

@@ -35,6 +35,7 @@
#include "classfile/vmClasses.hpp"
#include "classfile/vmSymbols.hpp"
#include "gc/shared/collectedHeap.hpp"
#include "jfr/jfrEvents.hpp"
#include "memory/metaspaceClosure.hpp"
#include "memory/universe.hpp"
#include "oops/instanceKlass.hpp"
@@ -240,6 +241,8 @@ void vmClasses::resolve_shared_class(InstanceKlass* klass, ClassLoaderData* load
return;
}
EventClassLoad class_load_event;
// add super and interfaces first
InstanceKlass* super = klass->super();
if (super != nullptr && super->class_loader_data() == nullptr) {
@@ -261,6 +264,10 @@ void vmClasses::resolve_shared_class(InstanceKlass* klass, ClassLoaderData* load
dictionary->add_klass(THREAD, klass->name(), klass);
klass->add_to_hierarchy(THREAD);
assert(klass->is_loaded(), "Must be in at least loaded state");
if (class_load_event.should_commit()) {
JFR_ONLY(SystemDictionary::post_class_load_event(&class_load_event, klass, loader_data);)
}
}
#endif // INCLUDE_CDS

View File

@@ -649,10 +649,10 @@ class methodHandle;
do_intrinsic(_Continuation_unpin, jdk_internal_vm_Continuation, unpin_name, void_method_signature, F_SN) \
\
/* java/lang/VirtualThread */ \
do_intrinsic(_notifyJvmtiVThreadStart, java_lang_VirtualThread, notifyJvmtiStart_name, void_method_signature, F_RN) \
do_intrinsic(_notifyJvmtiVThreadEnd, java_lang_VirtualThread, notifyJvmtiEnd_name, void_method_signature, F_RN) \
do_intrinsic(_notifyJvmtiVThreadMount, java_lang_VirtualThread, notifyJvmtiMount_name, bool_void_signature, F_RN) \
do_intrinsic(_notifyJvmtiVThreadUnmount, java_lang_VirtualThread, notifyJvmtiUnmount_name, bool_void_signature, F_RN) \
do_intrinsic(_vthreadEndFirstTransition, java_lang_VirtualThread, endFirstTransition_name, void_method_signature, F_RN) \
do_intrinsic(_vthreadStartFinalTransition, java_lang_VirtualThread, startFinalTransition_name, void_method_signature, F_RN) \
do_intrinsic(_vthreadStartTransition, java_lang_VirtualThread, startTransition_name, bool_void_signature, F_RN) \
do_intrinsic(_vthreadEndTransition, java_lang_VirtualThread, endTransition_name, bool_void_signature, F_RN) \
do_intrinsic(_notifyJvmtiVThreadDisableSuspend, java_lang_VirtualThread, notifyJvmtiDisableSuspend_name, bool_void_signature, F_SN) \
\
/* support for UnsafeConstants */ \

View File

@@ -395,10 +395,10 @@ class SerializeClosure;
template(run_finalization_name, "runFinalization") \
template(dispatchUncaughtException_name, "dispatchUncaughtException") \
template(loadClass_name, "loadClass") \
template(notifyJvmtiStart_name, "notifyJvmtiStart") \
template(notifyJvmtiEnd_name, "notifyJvmtiEnd") \
template(notifyJvmtiMount_name, "notifyJvmtiMount") \
template(notifyJvmtiUnmount_name, "notifyJvmtiUnmount") \
template(startTransition_name, "startTransition") \
template(endTransition_name, "endTransition") \
template(startFinalTransition_name, "startFinalTransition") \
template(endFirstTransition_name, "endFirstTransition") \
template(notifyJvmtiDisableSuspend_name, "notifyJvmtiDisableSuspend") \
template(doYield_name, "doYield") \
template(enter_name, "enter") \
@@ -497,8 +497,8 @@ class SerializeClosure;
template(java_lang_Boolean_signature, "Ljava/lang/Boolean;") \
template(url_code_signer_array_void_signature, "(Ljava/net/URL;[Ljava/security/CodeSigner;)V") \
template(jvmti_thread_state_name, "jvmti_thread_state") \
template(jvmti_VTMS_transition_disable_count_name, "jvmti_VTMS_transition_disable_count") \
template(jvmti_is_in_VTMS_transition_name, "jvmti_is_in_VTMS_transition") \
template(vthread_transition_disable_count_name, "vthread_transition_disable_count") \
template(is_in_vthread_transition_name, "is_in_vthread_transition") \
template(module_entry_name, "module_entry") \
template(resolved_references_name, "<resolved_references>") \
template(init_lock_name, "<init_lock>") \

View File

@@ -1346,18 +1346,16 @@ void AOTCodeAddressTable::init_extrs() {
SET_ADDRESS(_extrs, OptoRuntime::multianewarray4_C);
SET_ADDRESS(_extrs, OptoRuntime::multianewarray5_C);
SET_ADDRESS(_extrs, OptoRuntime::multianewarrayN_C);
#if INCLUDE_JVMTI
SET_ADDRESS(_extrs, SharedRuntime::notify_jvmti_vthread_start);
SET_ADDRESS(_extrs, SharedRuntime::notify_jvmti_vthread_end);
SET_ADDRESS(_extrs, SharedRuntime::notify_jvmti_vthread_mount);
SET_ADDRESS(_extrs, SharedRuntime::notify_jvmti_vthread_unmount);
#endif
SET_ADDRESS(_extrs, OptoRuntime::complete_monitor_locking_C);
SET_ADDRESS(_extrs, OptoRuntime::monitor_notify_C);
SET_ADDRESS(_extrs, OptoRuntime::monitor_notifyAll_C);
SET_ADDRESS(_extrs, OptoRuntime::rethrow_C);
SET_ADDRESS(_extrs, OptoRuntime::slow_arraycopy_C);
SET_ADDRESS(_extrs, OptoRuntime::register_finalizer_C);
SET_ADDRESS(_extrs, OptoRuntime::vthread_end_first_transition_C);
SET_ADDRESS(_extrs, OptoRuntime::vthread_start_final_transition_C);
SET_ADDRESS(_extrs, OptoRuntime::vthread_start_transition_C);
SET_ADDRESS(_extrs, OptoRuntime::vthread_end_transition_C);
#if defined(AARCH64)
SET_ADDRESS(_extrs, JavaThread::verify_cross_modify_fence_failure);
#endif // AARCH64

View File

@@ -891,9 +891,23 @@ void ParallelScavengeHeap::resize_after_young_gc(bool is_survivor_overflowing) {
// Consider if should shrink old-gen
if (!is_survivor_overflowing) {
// Upper bound for a single step shrink
size_t max_shrink_bytes = SpaceAlignment;
assert(old_gen()->capacity_in_bytes() >= old_gen()->min_gen_size(), "inv");
// Old gen min_gen_size constraint.
const size_t max_shrink_bytes_gen_size_constraint = old_gen()->capacity_in_bytes() - old_gen()->min_gen_size();
// Per-step delta to avoid too aggressive shrinking.
const size_t max_shrink_bytes_per_step_constraint = SpaceAlignment;
// Combining the above two constraints.
const size_t max_shrink_bytes = MIN2(max_shrink_bytes_gen_size_constraint,
max_shrink_bytes_per_step_constraint);
size_t shrink_bytes = _size_policy->compute_old_gen_shrink_bytes(old_gen()->free_in_bytes(), max_shrink_bytes);
assert(old_gen()->capacity_in_bytes() >= shrink_bytes, "inv");
assert(old_gen()->capacity_in_bytes() - shrink_bytes >= old_gen()->min_gen_size(), "inv");
if (shrink_bytes != 0) {
if (MinHeapFreeRatio != 0) {
size_t new_capacity = old_gen()->capacity_in_bytes() - shrink_bytes;

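Illustrative numbers for the combined bound (hypothetical sizes): with old-gen capacity at 64 MB, min_gen_size at 62 MB, and SpaceAlignment at 4 MB, the old code could shrink by the full 4 MB step and land below the minimum; MIN2(64 MB - 62 MB, 4 MB) now caps the shrink at 2 MB, which is exactly what the added asserts verify.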
View File

@@ -236,7 +236,10 @@ DefNewGeneration::DefNewGeneration(ReservedSpace rs,
// These values are exported as performance counters.
uintx size = _virtual_space.reserved_size();
_max_survivor_size = compute_survivor_size(size, SpaceAlignment);
_max_eden_size = size - (2*_max_survivor_size);
// Eden might grow to be almost as large as the entire young generation.
// We approximate this as the entire virtual space.
_max_eden_size = size;
// allocate the performance counters

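Illustrative numbers (hypothetical sizes): with a 64 MB young generation and compute_survivor_size yielding 8 MB, the old counter ceiling was 64 - 2*8 = 48 MB; since survivor spaces can shrink below their maximum at runtime, eden can grow past that figure, so the ceiling is now approximated by the full 64 MB reserved size.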
View File

@@ -147,7 +147,8 @@ GrowableArray<MemoryPool*> SerialHeap::memory_pools() {
HeapWord* SerialHeap::allocate_loaded_archive_space(size_t word_size) {
MutexLocker ml(Heap_lock);
return old_gen()->allocate(word_size);
HeapWord* const addr = old_gen()->allocate(word_size);
return addr != nullptr ? addr : old_gen()->expand_and_allocate(word_size);
}
void SerialHeap::complete_loaded_archive_space(MemRegion archive_space) {

View File

@@ -291,7 +291,7 @@
"size on systems with small physical memory size") \
range(0.0, 100.0) \
\
product(double, InitialRAMPercentage, 0.2, \
product(double, InitialRAMPercentage, 0.0, \
"Percentage of real memory used for initial heap size") \
range(0.0, 100.0) \
\

View File

@@ -175,7 +175,6 @@ ShenandoahRegionPartitions::ShenandoahRegionPartitions(size_t max_regions, Shena
void ShenandoahFreeSet::account_for_pip_regions(size_t mutator_regions, size_t mutator_bytes,
size_t collector_regions, size_t collector_bytes) {
shenandoah_assert_heaplocked();
size_t region_size_bytes = ShenandoahHeapRegion::region_size_bytes();
// We have removed all of these regions from their respective partition. Each pip region is "in" the NotFree partition.
// We want to account for all pip pad memory as if it had been consumed from within the Mutator partition.
@@ -1605,18 +1604,13 @@ HeapWord* ShenandoahFreeSet::try_allocate_in(ShenandoahHeapRegion* r, Shenandoah
}
}
size_t ac = alloc_capacity(r);
ShenandoahFreeSetPartitionId orig_partition;
ShenandoahGeneration* request_generation = nullptr;
if (req.is_mutator_alloc()) {
request_generation = _heap->mode()->is_generational()? _heap->young_generation(): _heap->global_generation();
orig_partition = ShenandoahFreeSetPartitionId::Mutator;
} else if (req.is_old()) {
request_generation = _heap->old_generation();
orig_partition = ShenandoahFreeSetPartitionId::OldCollector;
} else {
// Not old collector alloc, so this is a young collector gclab or shared allocation
request_generation = _heap->mode()->is_generational()? _heap->young_generation(): _heap->global_generation();
orig_partition = ShenandoahFreeSetPartitionId::Collector;
}
if (alloc_capacity(r) < PLAB::min_size() * HeapWordSize) {
@@ -1688,7 +1682,6 @@ HeapWord* ShenandoahFreeSet::allocate_contiguous(ShenandoahAllocRequest& req, bo
idx_t num = ShenandoahHeapRegion::required_regions(words_size * HeapWordSize);
assert(req.is_young(), "Humongous regions always allocated in YOUNG");
ShenandoahGeneration* generation = _heap->generation_for(req.affiliation());
// Check if there are enough regions left to satisfy allocation.
if (num > (idx_t) _partitions.count(ShenandoahFreeSetPartitionId::Mutator)) {
@@ -1833,107 +1826,7 @@ HeapWord* ShenandoahFreeSet::allocate_contiguous(ShenandoahAllocRequest& req, bo
}
class ShenandoahRecycleTrashedRegionClosure final : public ShenandoahHeapRegionClosure {
private:
static const ssize_t SentinelUsed = -1;
static const ssize_t SentinelIndex = -1;
static const size_t MaxSavedRegions = 128;
ShenandoahRegionPartitions* _partitions;
volatile size_t _recycled_region_count;
ssize_t _region_indices[MaxSavedRegions];
ssize_t _region_used[MaxSavedRegions];
void get_lock_and_flush_buffer(size_t region_count, size_t overflow_region_used, size_t overflow_region_index) {
ShenandoahHeap* heap = ShenandoahHeap::heap();
ShenandoahHeapLocker locker(heap->lock());
size_t recycled_regions = AtomicAccess::load(&_recycled_region_count);
size_t region_tallies[int(ShenandoahRegionPartitions::NumPartitions)];
size_t used_byte_tallies[int(ShenandoahRegionPartitions::NumPartitions)];
for (int p = 0; p < int(ShenandoahRegionPartitions::NumPartitions); p++) {
region_tallies[p] = 0;
used_byte_tallies[p] = 0;
}
ShenandoahFreeSetPartitionId p = _partitions->membership(overflow_region_index);
used_byte_tallies[int(p)] += overflow_region_used;
if (region_count <= recycled_regions) {
// _recycled_region_count has not been decremented after I incremented it to obtain region_count, so I will
// try to flush the buffer.
// Multiple worker threads may attempt to flush this buffer. The first thread to acquire the lock does the work.
// _recycled_region_count is only decreased while holding the heap lock.
if (region_count > recycled_regions) {
region_count = recycled_regions;
}
for (size_t i = 0; i < region_count; i++) {
ssize_t used;
// wait for other threads to finish updating their entries within the region buffer before processing entry
do {
used = _region_used[i];
} while (used == SentinelUsed);
ssize_t index;
do {
index = _region_indices[i];
} while (index == SentinelIndex);
ShenandoahFreeSetPartitionId p = _partitions->membership(index);
assert(p != ShenandoahFreeSetPartitionId::NotFree, "Trashed regions should be in a free partition");
used_byte_tallies[int(p)] += used;
region_tallies[int(p)]++;
}
if (region_count > 0) {
for (size_t i = 0; i < MaxSavedRegions; i++) {
_region_indices[i] = SentinelIndex;
_region_used[i] = SentinelUsed;
}
}
// Almost the last thing we do before releasing the lock is to set _recycled_region_count to 0. What happens next?
//
// 1. Any worker thread that attempted to buffer a new region while we were flushing the buffer will have seen
// that _recycled_region_count > MaxSavedRegions. All such worker threads will first wait for the lock, then
// discover that the _recycled_region_count is zero, then, while holding the lock, they will process the
// region so it doesn't have to be placed into the buffer. This handles the large majority of cases.
//
// 2. However, there's a race that can happen, which will result in somewhat different behavior. Suppose
// this thread resets _recycled_region_count to 0. Then some other worker thread increments _recycled_region_count
// in order to store its region in the buffer, and suppose this happens before all of the other worker threads
// which are waiting to acquire the heap lock have finished their efforts to flush the buffer. If this happens,
// then the workers who are waiting to acquire the heap lock and flush the buffer will find that _recycled_region_count
// has decreased from the value it held when they last tried to increment its value. In this case, these worker
// threads will process their overflow region while holding the lock, but they will not attempt to process regions
// newly placed into the buffer. Otherwise, confusion could result.
//
// Assumption: all worker threads who are attempting to acquire lock and flush buffer will finish their efforts before
// the buffer once again overflows.
// How could we avoid depending on this assumption?
// 1. Let MaxSavedRegions be as large as number of regions, or at least as large as the collection set.
// 2. Keep a count of how many times the buffer has been flushed per instantiation of the
// ShenandoahRecycleTrashedRegionClosure object, and only consult/update this value while holding the heap lock.
// Need to think about how this helps resolve the race.
_recycled_region_count = 0;
} else {
// Some other thread has already processed the buffer, resetting _recycled_region_count to zero. Its current value
// may be greater than zero because other workers may have accumulated entries into the buffer. But it is "extremely"
// unlikely that it will overflow again before all waiting workers have had a chance to clear their state. While I've
// got the heap lock, I'll go ahead and update the global state for my overflow region. I'll let other heap regions
// accumulate in the buffer to be processed when the buffer is once again full.
region_count = 0;
}
for (size_t p = 0; p < int(ShenandoahRegionPartitions::NumPartitions); p++) {
_partitions->decrease_used(ShenandoahFreeSetPartitionId(p), used_byte_tallies[p]);
}
}
public:
ShenandoahRecycleTrashedRegionClosure(ShenandoahRegionPartitions* p): ShenandoahHeapRegionClosure() {
_partitions = p;
_recycled_region_count = 0;
for (size_t i = 0; i < MaxSavedRegions; i++) {
_region_indices[i] = SentinelIndex;
_region_used[i] = SentinelUsed;
}
}
void heap_region_do(ShenandoahHeapRegion* r) {
r->try_recycle();
}
@@ -1950,14 +1843,12 @@ void ShenandoahFreeSet::recycle_trash() {
ShenandoahHeap* heap = ShenandoahHeap::heap();
heap->assert_gc_workers(heap->workers()->active_workers());
ShenandoahRecycleTrashedRegionClosure closure(&_partitions);
ShenandoahRecycleTrashedRegionClosure closure;
heap->parallel_heap_region_iterate(&closure);
}
bool ShenandoahFreeSet::transfer_one_region_from_mutator_to_old_collector(size_t idx, size_t alloc_capacity) {
ShenandoahGenerationalHeap* gen_heap = ShenandoahGenerationalHeap::heap();
ShenandoahYoungGeneration* young_gen = gen_heap->young_generation();
ShenandoahOldGeneration* old_gen = gen_heap->old_generation();
size_t region_size_bytes = ShenandoahHeapRegion::region_size_bytes();
assert(alloc_capacity == region_size_bytes, "Region must be empty");
if (young_unaffiliated_regions() > 0) {
@@ -1985,7 +1876,6 @@ bool ShenandoahFreeSet::flip_to_old_gc(ShenandoahHeapRegion* r) {
assert(_partitions.partition_id_matches(idx, ShenandoahFreeSetPartitionId::Mutator), "Should be in mutator view");
assert(can_allocate_from(r), "Should not be allocated");
ShenandoahGenerationalHeap* gen_heap = ShenandoahGenerationalHeap::heap();
const size_t region_alloc_capacity = alloc_capacity(r);
if (transfer_one_region_from_mutator_to_old_collector(idx, region_alloc_capacity)) {
@@ -2133,7 +2023,6 @@ void ShenandoahFreeSet::find_regions_with_alloc_capacity(size_t &young_trashed_r
size_t total_mutator_regions = 0;
size_t total_old_collector_regions = 0;
bool is_generational = _heap->mode()->is_generational();
size_t num_regions = _heap->num_regions();
for (size_t idx = 0; idx < num_regions; idx++) {
ShenandoahHeapRegion* region = _heap->get_region(idx);
@@ -2222,7 +2111,6 @@ void ShenandoahFreeSet::find_regions_with_alloc_capacity(size_t &young_trashed_r
}
} else {
assert(_partitions.membership(idx) == ShenandoahFreeSetPartitionId::NotFree, "Region should have been retired");
size_t ac = alloc_capacity(region);
size_t humongous_waste_bytes = 0;
if (region->is_humongous_start()) {
oop obj = cast_to_oop(region->bottom());
@@ -3120,7 +3008,6 @@ void ShenandoahFreeSet::log_status() {
size_t total_used = 0;
size_t total_free = 0;
size_t total_free_ext = 0;
size_t total_trashed_free = 0;
for (idx_t idx = _partitions.leftmost(ShenandoahFreeSetPartitionId::Mutator);
idx <= _partitions.rightmost(ShenandoahFreeSetPartitionId::Mutator); idx++) {

View File

@@ -76,6 +76,9 @@ public:
}
}
// Bitmap reset task is heavy-weight and benefits from much smaller tasks than the default.
size_t parallel_region_stride() override { return 8; }
bool is_thread_safe() override { return true; }
};
@@ -524,7 +527,6 @@ size_t ShenandoahGeneration::select_aged_regions(const size_t old_promotion_rese
assert_no_in_place_promotions();
auto const heap = ShenandoahGenerationalHeap::heap();
ShenandoahYoungGeneration* young_gen = heap->young_generation();
ShenandoahFreeSet* free_set = heap->free_set();
bool* const candidate_regions_for_promotion_by_copy = heap->collection_set()->preselected_regions();
ShenandoahMarkingContext* const ctx = heap->marking_context();
@@ -562,7 +564,6 @@ size_t ShenandoahGeneration::select_aged_regions(const size_t old_promotion_rese
size_t pip_mutator_bytes = 0;
size_t pip_collector_bytes = 0;
size_t min_remnant_size = PLAB::min_size() * HeapWordSize;
for (idx_t i = 0; i < num_regions; i++) {
ShenandoahHeapRegion* const r = heap->get_region(i);
if (r->is_empty() || !r->has_live() || !r->is_young() || !r->is_regular()) {

View File

@@ -688,19 +688,6 @@ void ShenandoahGenerationalHeap::reset_generation_reserves() {
old_generation()->set_promoted_reserve(0);
}
void ShenandoahGenerationalHeap::TransferResult::print_on(const char* when, outputStream* ss) const {
auto heap = ShenandoahGenerationalHeap::heap();
ShenandoahYoungGeneration* const young_gen = heap->young_generation();
ShenandoahOldGeneration* const old_gen = heap->old_generation();
const size_t young_available = young_gen->available();
const size_t old_available = old_gen->available();
ss->print_cr("After %s, %s %zu regions to %s to prepare for next gc, old available: "
PROPERFMT ", young_available: " PROPERFMT,
when,
success? "successfully transferred": "failed to transfer", region_count, region_destination,
PROPERFMTARGS(old_available), PROPERFMTARGS(young_available));
}
void ShenandoahGenerationalHeap::coalesce_and_fill_old_regions(bool concurrent) {
class ShenandoahGlobalCoalesceAndFill : public WorkerTask {
private:

View File

@@ -132,24 +132,12 @@ public:
bool requires_barriers(stackChunkOop obj) const override;
// Used for logging the result of a region transfer outside the heap lock
struct TransferResult {
bool success;
size_t region_count;
const char* region_destination;
void print_on(const char* when, outputStream* ss) const;
};
// Zeros out the evacuation and promotion reserves
void reset_generation_reserves();
// Computes the optimal size for the old generation, represented as a surplus or deficit of old regions
void compute_old_generation_balance(size_t old_xfer_limit, size_t old_cset_regions);
// Transfers surplus old regions to young, or takes regions from young to satisfy old region deficit
TransferResult balance_generations();
// Balances generations, coalesces and fills old regions if necessary
void complete_degenerated_cycle();
void complete_concurrent_cycle();

View File

@@ -1962,7 +1962,7 @@ void ShenandoahHeap::parallel_heap_region_iterate(ShenandoahHeapRegionClosure* b
assert(blk->is_thread_safe(), "Only thread-safe closures here");
const uint active_workers = workers()->active_workers();
const size_t n_regions = num_regions();
size_t stride = ShenandoahParallelRegionStride;
size_t stride = blk->parallel_region_stride();
if (stride == 0 && active_workers > 1) {
// Automatically derive the stride to balance the work between threads
// evenly. Do not try to split work if below the reasonable threshold.

View File

@@ -113,6 +113,7 @@ public:
class ShenandoahHeapRegionClosure : public StackObj {
public:
virtual void heap_region_do(ShenandoahHeapRegion* r) = 0;
virtual size_t parallel_region_stride() { return ShenandoahParallelRegionStride; }
virtual bool is_thread_safe() { return false; }
};
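Taken together, these hunks let each closure choose its own work-distribution granularity: parallel_heap_region_iterate now asks the closure for a stride instead of always reading the global ShenandoahParallelRegionStride. A minimal sketch of that dispatch pattern, with hypothetical names (RegionClosure, HeavyClosure, iterate) standing in for the real classes:

#include <cstddef>

// Hypothetical sketch of the virtual-stride dispatch introduced above.
struct RegionClosure {
  virtual void do_region(size_t idx) = 0;
  // Default mirrors the old behavior: one shared tuning value (0 = derive).
  virtual size_t stride() const { return 0; }
  virtual ~RegionClosure() {}
};

struct HeavyClosure : public RegionClosure {
  void do_region(size_t idx) override { (void)idx; /* expensive work */ }
  // Heavy-weight tasks benefit from much smaller chunks, as the bitmap
  // reset closure earlier in this change does by returning 8.
  size_t stride() const override { return 8; }
};

static void iterate(RegionClosure* blk, size_t n_regions, unsigned workers) {
  size_t stride = blk->stride();        // ask the closure, not a global flag
  if (stride == 0) {
    // Derive a balanced stride, as parallel_heap_region_iterate does.
    stride = (workers > 1) ? (n_regions / workers + 1) : n_regions;
  }
  for (size_t begin = 0; begin < n_regions; begin += stride) {
    const size_t end = (begin + stride < n_regions) ? begin + stride : n_regions;
    for (size_t i = begin; i < end; i++) {
      blk->do_region(i);                // a worker would claim one chunk at a time
    }
  }
}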

View File

@@ -157,7 +157,7 @@ inline void ShenandoahHeapRegion::increase_live_data_gc_words(size_t s) {
}
inline void ShenandoahHeapRegion::internal_increase_live_data(size_t s) {
size_t new_live_data = AtomicAccess::add(&_live_data, s, memory_order_relaxed);
AtomicAccess::add(&_live_data, s, memory_order_relaxed);
}
inline void ShenandoahHeapRegion::clear_live_data() {

View File

@@ -44,6 +44,10 @@ public:
}
}
size_t parallel_region_stride() override {
return _closure->parallel_region_stride();
}
bool is_thread_safe() override {
return _closure->is_thread_safe();
}
@@ -64,6 +68,10 @@ public:
}
}
size_t parallel_region_stride() override {
return _closure->parallel_region_stride();
}
bool is_thread_safe() override {
return _closure->is_thread_safe();
}

View File

@@ -62,7 +62,6 @@ class ShenandoahRegulatorThread: public ConcurrentGCThread {
bool start_old_cycle() const;
bool start_young_cycle() const;
bool start_global_cycle() const;
bool resume_old_cycle();
// The generational mode can only unload classes in a global cycle. The regulator
// thread itself will trigger a global cycle if metaspace is out of memory.

View File

@@ -233,8 +233,6 @@ public:
inline bool is_write_card_dirty(size_t card_index) const;
inline void mark_card_as_dirty(size_t card_index);
inline void mark_range_as_dirty(size_t card_index, size_t num_cards);
inline void mark_card_as_clean(size_t card_index);
inline void mark_range_as_clean(size_t card_index, size_t num_cards);
inline bool is_card_dirty(HeapWord* p) const;
inline bool is_write_card_dirty(HeapWord* p) const;
inline void mark_card_as_dirty(HeapWord* p);

View File

@@ -217,11 +217,10 @@ static void deoptimize_allocation(JavaThread* thread) {
void ZBarrierSet::on_slowpath_allocation_exit(JavaThread* thread, oop new_obj) {
const ZPage* const page = ZHeap::heap()->page(to_zaddress(new_obj));
const ZPageAge age = page->age();
if (age == ZPageAge::old) {
if (!page->allows_raw_null()) {
// We promised C2 that its allocations would end up in young gen. This object
// breaks that promise. Take a few steps in the interpreter instead, which has
// no such assumptions about where an object resides.
// is too old to guarantee that. Take a few steps in the interpreter instead,
// which does not elide barriers based on the age of an object.
deoptimize_allocation(thread);
}
}

View File

@@ -190,7 +190,8 @@ void ZGeneration::flip_age_pages(const ZRelocationSetSelector* selector) {
ZRendezvousHandshakeClosure cl;
Handshake::execute(&cl);
_relocate.barrier_flip_promoted_pages(_relocation_set.flip_promoted_pages());
_relocate.barrier_promoted_pages(_relocation_set.flip_promoted_pages(),
_relocation_set.relocate_promoted_pages());
}
static double fragmentation_limit(ZGenerationId generation) {

View File

@@ -41,7 +41,8 @@ ZPage::ZPage(ZPageType type, ZPageAge age, const ZVirtualMemory& vmem, ZMultiPar
_top(to_zoffset_end(start())),
_livemap(object_max_count()),
_remembered_set(),
_multi_partition_tracker(multi_partition_tracker) {
_multi_partition_tracker(multi_partition_tracker),
_relocate_promoted(false) {
assert(!_virtual.is_null(), "Should not be null");
assert((_type == ZPageType::small && size() == ZPageSizeSmall) ||
(_type == ZPageType::medium && ZPageSizeMediumMin <= size() && size() <= ZPageSizeMediumMax) ||
@@ -70,6 +71,14 @@ ZPage* ZPage::clone_for_promotion() const {
return page;
}
bool ZPage::allows_raw_null() const {
return is_young() && !AtomicAccess::load(&_relocate_promoted);
}
void ZPage::set_is_relocate_promoted() {
AtomicAccess::store(&_relocate_promoted, true);
}
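The new _relocate_promoted flag is a one-way latch: allows_raw_null() answers false as soon as the page is tagged for relocation-promotion, so C2's young-allocation assumption is only trusted while the flag is still clear. A standalone sketch of the same latch using std::atomic in place of AtomicAccess (names hypothetical):

#include <atomic>

// Hypothetical sketch of the one-way promotion latch on a page.
struct Page {
  bool young = true;
  std::atomic<bool> relocate_promoted{false};

  // Mirrors ZPage::allows_raw_null(): raw null stores are tolerated only
  // while the page is young and not yet selected for relocation-promotion.
  bool allows_raw_null() const {
    return young && !relocate_promoted.load(std::memory_order_relaxed);
  }

  // Mirrors ZPage::set_is_relocate_promoted(): set once, never cleared.
  void set_relocate_promoted() {
    relocate_promoted.store(true, std::memory_order_relaxed);
  }
};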
ZGeneration* ZPage::generation() {
return ZGeneration::generation(_generation_id);
}

View File

@@ -52,6 +52,7 @@ private:
ZLiveMap _livemap;
ZRememberedSet _remembered_set;
ZMultiPartitionTracker* const _multi_partition_tracker;
volatile bool _relocate_promoted;
const char* type_to_string() const;
@@ -103,6 +104,9 @@ public:
ZPageAge age() const;
bool allows_raw_null() const;
void set_is_relocate_promoted();
uint32_t seqnum() const;
bool is_allocating() const;
bool is_relocatable() const;

View File

@@ -1366,27 +1366,35 @@ public:
class ZPromoteBarrierTask : public ZTask {
private:
ZArrayParallelIterator<ZPage*> _iter;
ZArrayParallelIterator<ZPage*> _flip_promoted_iter;
ZArrayParallelIterator<ZPage*> _relocate_promoted_iter;
public:
ZPromoteBarrierTask(const ZArray<ZPage*>* pages)
ZPromoteBarrierTask(const ZArray<ZPage*>* flip_promoted_pages,
const ZArray<ZPage*>* relocate_promoted_pages)
: ZTask("ZPromoteBarrierTask"),
_iter(pages) {}
_flip_promoted_iter(flip_promoted_pages),
_relocate_promoted_iter(relocate_promoted_pages) {}
virtual void work() {
SuspendibleThreadSetJoiner sts_joiner;
for (ZPage* page; _iter.next(&page);) {
// When promoting an object (and before relocate start), we must ensure that all
// contained zpointers are store good. The marking code ensures that for non-null
// pointers, but null pointers are ignored. This code ensures that even null pointers
// are made store good, for the promoted objects.
page->object_iterate([&](oop obj) {
ZIterator::basic_oop_iterate_safe(obj, ZBarrier::promote_barrier_on_young_oop_field);
});
auto promote_barriers = [&](ZArrayParallelIterator<ZPage*>* iter) {
for (ZPage* page; iter->next(&page);) {
// When promoting an object (and before relocate start), we must ensure that all
// contained zpointers are store good. The marking code ensures that for non-null
// pointers, but null pointers are ignored. This code ensures that even null pointers
// are made store good, for the promoted objects.
page->object_iterate([&](oop obj) {
ZIterator::basic_oop_iterate_safe(obj, ZBarrier::promote_barrier_on_young_oop_field);
});
SuspendibleThreadSet::yield();
}
SuspendibleThreadSet::yield();
}
};
promote_barriers(&_flip_promoted_iter);
promote_barriers(&_relocate_promoted_iter);
}
};
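The refactoring above folds the per-page barrier loop into a local lambda so it can drain both page sets without duplication, each worker claiming pages through an atomic-index iterator. The shape of that pattern, separated from ZGC specifics (all names hypothetical):

#include <atomic>
#include <cstdio>
#include <vector>

// Hypothetical sketch of ZPromoteBarrierTask::work(): a claim-by-atomic-index
// iterator and one lambda applied to two page lists.
struct ParallelIter {
  const std::vector<int>* pages;
  std::atomic<size_t> next{0};
  explicit ParallelIter(const std::vector<int>* p) : pages(p) {}
  bool claim(int* out) {
    size_t i = next.fetch_add(1);
    if (i >= pages->size()) return false;
    *out = (*pages)[i];
    return true;
  }
};

static void work(ParallelIter* flip_iter, ParallelIter* reloc_iter) {
  auto promote_barriers = [](ParallelIter* iter) {
    for (int page; iter->claim(&page);) {
      // Real code: make all zpointers in promoted objects store good,
      // including nulls, then yield to the suspendible thread set.
      printf("barrier on page %d\n", page);
    }
  };
  promote_barriers(flip_iter);
  promote_barriers(reloc_iter);
}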
@@ -1395,8 +1403,9 @@ void ZRelocate::flip_age_pages(const ZArray<ZPage*>* pages) {
workers()->run(&flip_age_task);
}
void ZRelocate::barrier_flip_promoted_pages(const ZArray<ZPage*>* pages) {
ZPromoteBarrierTask promote_barrier_task(pages);
void ZRelocate::barrier_promoted_pages(const ZArray<ZPage*>* flip_promoted_pages,
const ZArray<ZPage*>* relocate_promoted_pages) {
ZPromoteBarrierTask promote_barrier_task(flip_promoted_pages, relocate_promoted_pages);
workers()->run(&promote_barrier_task);
}

View File

@@ -119,7 +119,8 @@ public:
void relocate(ZRelocationSet* relocation_set);
void flip_age_pages(const ZArray<ZPage*>* pages);
void barrier_flip_promoted_pages(const ZArray<ZPage*>* pages);
void barrier_promoted_pages(const ZArray<ZPage*>* flip_promoted_pages,
const ZArray<ZPage*>* relocate_promoted_pages);
void synchronize();
void desynchronize();

View File

@@ -38,6 +38,7 @@
class ZRelocationSetInstallTask : public ZTask {
private:
ZRelocationSet* _relocation_set;
ZForwardingAllocator* const _allocator;
ZForwarding** _forwardings;
const size_t _nforwardings;
@@ -54,16 +55,6 @@ private:
page->log_msg(" (relocation selected)");
_forwardings[index] = forwarding;
if (forwarding->is_promotion()) {
// Before promoting an object (and before relocate start), we must ensure that all
// contained zpointers are store good. The marking code ensures that for non-null
// pointers, but null pointers are ignored. This code ensures that even null pointers
// are made store good, for the promoted objects.
page->object_iterate([&](oop obj) {
ZIterator::basic_oop_iterate_safe(obj, ZBarrier::promote_barrier_on_young_oop_field);
});
}
}
void install_small(ZForwarding* forwarding, size_t index) {
@@ -78,10 +69,18 @@ private:
return ZRelocate::compute_to_age(page->age());
}
void track_if_promoted(ZPage* page, ZForwarding* forwarding, ZArray<ZPage*>& relocate_promoted) {
if (forwarding->is_promotion()) {
page->set_is_relocate_promoted();
relocate_promoted.append(page);
}
}
public:
ZRelocationSetInstallTask(ZForwardingAllocator* allocator, const ZRelocationSetSelector* selector)
ZRelocationSetInstallTask(ZRelocationSet* relocation_set, const ZRelocationSetSelector* selector)
: ZTask("ZRelocationSetInstallTask"),
_allocator(allocator),
_relocation_set(relocation_set),
_allocator(&relocation_set->_allocator),
_forwardings(nullptr),
_nforwardings((size_t)selector->selected_small()->length() + (size_t)selector->selected_medium()->length()),
_small(selector->selected_small()),
@@ -108,11 +107,14 @@ public:
// Join the STS to block out VMThreads while running promote_barrier_on_young_oop_field
SuspendibleThreadSetJoiner sts_joiner;
ZArray<ZPage*> relocate_promoted;
// Allocate and install forwardings for small pages
for (size_t page_index; _small_iter.next_index(&page_index);) {
ZPage* page = _small->at(int(page_index));
ZForwarding* const forwarding = ZForwarding::alloc(_allocator, page, to_age(page));
install_small(forwarding, (size_t)_medium->length() + page_index);
track_if_promoted(page, forwarding, relocate_promoted);
SuspendibleThreadSet::yield();
}
@@ -122,9 +124,12 @@ public:
ZPage* page = _medium->at(int(page_index));
ZForwarding* const forwarding = ZForwarding::alloc(_allocator, page, to_age(page));
install_medium(forwarding, page_index);
track_if_promoted(page, forwarding, relocate_promoted);
SuspendibleThreadSet::yield();
}
_relocation_set->register_relocate_promoted(relocate_promoted);
}
ZForwarding** forwardings() const {
@@ -143,6 +148,7 @@ ZRelocationSet::ZRelocationSet(ZGeneration* generation)
_nforwardings(0),
_promotion_lock(),
_flip_promoted_pages(),
_relocate_promoted_pages(),
_in_place_relocate_promoted_pages() {}
ZWorkers* ZRelocationSet::workers() const {
@@ -157,9 +163,13 @@ ZArray<ZPage*>* ZRelocationSet::flip_promoted_pages() {
return &_flip_promoted_pages;
}
ZArray<ZPage*>* ZRelocationSet::relocate_promoted_pages() {
return &_relocate_promoted_pages;
}
void ZRelocationSet::install(const ZRelocationSetSelector* selector) {
// Install relocation set
ZRelocationSetInstallTask task(&_allocator, selector);
ZRelocationSetInstallTask task(this, selector);
workers()->run(&task);
_forwardings = task.forwardings();
@@ -189,6 +199,7 @@ void ZRelocationSet::reset(ZPageAllocator* page_allocator) {
destroy_and_clear(page_allocator, &_in_place_relocate_promoted_pages);
destroy_and_clear(page_allocator, &_flip_promoted_pages);
_relocate_promoted_pages.clear();
}
void ZRelocationSet::register_flip_promoted(const ZArray<ZPage*>& pages) {
@@ -199,6 +210,18 @@ void ZRelocationSet::register_flip_promoted(const ZArray<ZPage*>& pages) {
}
}
void ZRelocationSet::register_relocate_promoted(const ZArray<ZPage*>& pages) {
if (pages.is_empty()) {
return;
}
ZLocker<ZLock> locker(&_promotion_lock);
for (ZPage* const page : pages) {
assert(!_relocate_promoted_pages.contains(page), "no duplicates allowed");
_relocate_promoted_pages.append(page);
}
}
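register_relocate_promoted follows the same publish discipline as register_flip_promoted: each install task accumulates pages into a task-local array and merges it into the shared set exactly once, under _promotion_lock, asserting no duplicates. A generic sketch of that accumulate-then-publish pattern (hypothetical names, standard containers standing in for ZArray and ZLock):

#include <algorithm>
#include <cassert>
#include <mutex>
#include <vector>

// Hypothetical sketch: per-task accumulation, single locked publish.
struct PromotedSet {
  std::mutex lock;
  std::vector<int> pages;

  void register_promoted(const std::vector<int>& local) {
    if (local.empty()) {
      return;                         // avoid taking the lock for nothing
    }
    std::lock_guard<std::mutex> g(lock);
    for (int p : local) {
      assert(std::find(pages.begin(), pages.end(), p) == pages.end());
      pages.push_back(p);             // "no duplicates allowed"
    }
  }
};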
void ZRelocationSet::register_in_place_relocate_promoted(ZPage* page) {
ZLocker<ZLock> locker(&_promotion_lock);
assert(!_in_place_relocate_promoted_pages.contains(page), "no duplicates allowed");

View File

@@ -37,6 +37,7 @@ class ZWorkers;
class ZRelocationSet {
template <bool> friend class ZRelocationSetIteratorImpl;
friend class ZRelocationSetInstallTask;
private:
ZGeneration* _generation;
@@ -45,6 +46,7 @@ private:
size_t _nforwardings;
ZLock _promotion_lock;
ZArray<ZPage*> _flip_promoted_pages;
ZArray<ZPage*> _relocate_promoted_pages;
ZArray<ZPage*> _in_place_relocate_promoted_pages;
ZWorkers* workers() const;
@@ -58,8 +60,10 @@ public:
void reset(ZPageAllocator* page_allocator);
ZGeneration* generation() const;
ZArray<ZPage*>* flip_promoted_pages();
ZArray<ZPage*>* relocate_promoted_pages();
void register_flip_promoted(const ZArray<ZPage*>& pages);
void register_relocate_promoted(const ZArray<ZPage*>& pages);
void register_in_place_relocate_promoted(ZPage* page);
};

View File

@@ -1100,16 +1100,16 @@ JVM_GetEnclosingMethodInfo(JNIEnv* env, jclass ofClass);
* Virtual thread support.
*/
JNIEXPORT void JNICALL
JVM_VirtualThreadStart(JNIEnv* env, jobject vthread);
JVM_VirtualThreadEndFirstTransition(JNIEnv* env, jobject vthread);
JNIEXPORT void JNICALL
JVM_VirtualThreadEnd(JNIEnv* env, jobject vthread);
JVM_VirtualThreadStartFinalTransition(JNIEnv* env, jobject vthread);
JNIEXPORT void JNICALL
JVM_VirtualThreadMount(JNIEnv* env, jobject vthread, jboolean hide);
JVM_VirtualThreadStartTransition(JNIEnv* env, jobject vthread, jboolean is_mount);
JNIEXPORT void JNICALL
JVM_VirtualThreadUnmount(JNIEnv* env, jobject vthread, jboolean hide);
JVM_VirtualThreadEndTransition(JNIEnv* env, jobject vthread, jboolean is_mount);
JNIEXPORT void JNICALL
JVM_VirtualThreadDisableSuspend(JNIEnv* env, jclass clazz, jboolean enter);

View File

@@ -1427,10 +1427,10 @@ static void transform(InstanceKlass*& ik, ClassFileParser& parser, JavaThread* t
} else {
JfrClassTransformer::cache_class_file_data(new_ik, stream, thread);
}
JfrClassTransformer::copy_traceid(ik, new_ik);
if (is_instrumented && JdkJfrEvent::is_subklass(new_ik)) {
bless_commit_method(new_ik);
}
JfrClassTransformer::copy_traceid(ik, new_ik);
JfrClassTransformer::rewrite_klass_pointer(ik, new_ik, parser, thread);
}

View File

@@ -22,6 +22,7 @@
*
*/
#include "classfile/classFileParser.hpp"
#include "jfr/instrumentation/jfrEventClassTransformer.hpp"
#include "jfr/jfr.hpp"
#include "jfr/jni/jfrJavaSupport.hpp"
@@ -31,6 +32,7 @@
#include "jfr/recorder/repository/jfrEmergencyDump.hpp"
#include "jfr/recorder/repository/jfrRepository.hpp"
#include "jfr/recorder/service/jfrOptionSet.hpp"
#include "jfr/support/jfrClassDefineEvent.hpp"
#include "jfr/support/jfrKlassExtension.hpp"
#include "jfr/support/jfrResolution.hpp"
#include "jfr/support/jfrThreadLocal.hpp"
@@ -78,13 +80,15 @@ void Jfr::on_unloading_classes() {
}
void Jfr::on_klass_creation(InstanceKlass*& ik, ClassFileParser& parser, TRAPS) {
JfrTraceId::assign(ik);
if (IS_EVENT_OR_HOST_KLASS(ik)) {
JfrEventClassTransformer::on_klass_creation(ik, parser, THREAD);
return;
}
if (JfrMethodTracer::in_use()) {
} else if (JfrMethodTracer::in_use()) {
JfrMethodTracer::on_klass_creation(ik, parser, THREAD);
}
if (!parser.is_internal()) {
JfrClassDefineEvent::on_creation(ik, parser, THREAD);
}
}
void Jfr::on_klass_redefinition(const InstanceKlass* ik, const InstanceKlass* scratch_klass) {
@@ -168,3 +172,11 @@ bool Jfr::on_flight_recorder_option(const JavaVMOption** option, char* delimiter
bool Jfr::on_start_flight_recording_option(const JavaVMOption** option, char* delimiter) {
return JfrOptionSet::parse_start_flight_recording_option(option, delimiter);
}
void Jfr::on_restoration(const Klass* k, JavaThread* jt) {
assert(k != nullptr, "invariant");
JfrTraceId::restore(k);
if (k->is_instance_klass()) {
JfrClassDefineEvent::on_restoration(InstanceKlass::cast(k), jt);
}
}

View File

@@ -25,6 +25,7 @@
#ifndef SHARE_JFR_JFR_HPP
#define SHARE_JFR_JFR_HPP
#include "jfr/utilities/jfrTypes.hpp"
#include "memory/allStatic.hpp"
#include "oops/oopsHierarchy.hpp"
#include "utilities/exceptions.hpp"
@@ -78,6 +79,7 @@ class Jfr : AllStatic {
static void initialize_main_thread(JavaThread* jt);
static bool has_sample_request(JavaThread* jt);
static void check_and_process_sample_request(JavaThread* jt);
static void on_restoration(const Klass* k, JavaThread* jt);
};
#endif // SHARE_JFR_JFR_HPP

View File

@@ -215,6 +215,7 @@
<Event name="ClassDefine" category="Java Virtual Machine, Class Loading" label="Class Define" thread="true" stackTrace="true" startTime="false">
<Field type="Class" name="definedClass" label="Defined Class" />
<Field type="ClassLoader" name="definingClassLoader" label="Defining Class Loader" />
<Field type="Symbol" name="source" label="Source" />
</Event>
<Event name="ClassRedefinition" category="Java Virtual Machine, Class Loading" label="Class Redefinition" thread="false" stackTrace="false" startTime="false">

View File

@@ -30,11 +30,12 @@
#include "classfile/vmClasses.hpp"
#include "jfr/leakprofiler/checkpoint/objectSampleCheckpoint.hpp"
#include "jfr/recorder/checkpoint/types/jfrTypeSet.hpp"
#include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp"
#include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.inline.hpp"
#include "jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp"
#include "jfr/recorder/checkpoint/types/traceid/jfrTraceIdLoadBarrier.inline.hpp"
#include "jfr/recorder/jfrRecorder.hpp"
#include "jfr/support/jfrKlassUnloading.hpp"
#include "jfr/support/jfrSymbolTable.inline.hpp"
#include "jfr/support/methodtracer/jfrInstrumentedClass.hpp"
#include "jfr/support/methodtracer/jfrMethodTracer.hpp"
#include "jfr/utilities/jfrHashtable.hpp"
@@ -1262,7 +1263,7 @@ static size_t teardown() {
clear_klasses_and_methods();
clear_method_tracer_klasses();
JfrKlassUnloading::clear();
_artifacts->increment_checkpoint_id();
_artifacts->clear();
_initial_type_set = true;
} else {
_initial_type_set = false;

View File

@@ -23,18 +23,19 @@
*/
#include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp"
#include "jfr/support/jfrSymbolTable.inline.hpp"
#include "jfr/utilities/jfrPredicate.hpp"
#include "jfr/utilities/jfrRelation.hpp"
#include "oops/instanceKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/symbol.hpp"
JfrArtifactSet::JfrArtifactSet(bool class_unload, bool previous_epoch) : _symbol_table(nullptr),
_klass_set(nullptr),
JfrArtifactSet::JfrArtifactSet(bool class_unload, bool previous_epoch) : _klass_set(nullptr),
_klass_loader_set(nullptr),
_klass_loader_leakp_set(nullptr),
_total_count(0),
_class_unload(class_unload) {
_class_unload(class_unload),
_previous_epoch(previous_epoch) {
initialize(class_unload, previous_epoch);
assert(!previous_epoch || _klass_loader_leakp_set != nullptr, "invariant");
assert(_klass_loader_set != nullptr, "invariant");
@@ -47,12 +48,7 @@ static unsigned initial_klass_loader_leakp_set_size = 64;
void JfrArtifactSet::initialize(bool class_unload, bool previous_epoch) {
_class_unload = class_unload;
if (_symbol_table == nullptr) {
_symbol_table = JfrSymbolTable::create();
assert(_symbol_table != nullptr, "invariant");
}
assert(_symbol_table != nullptr, "invariant");
_symbol_table->set_class_unload(class_unload);
_previous_epoch = previous_epoch;
_total_count = 0;
// Resource allocations. Keep in this allocation order.
if (previous_epoch) {
@@ -63,45 +59,33 @@ void JfrArtifactSet::initialize(bool class_unload, bool previous_epoch) {
}
void JfrArtifactSet::clear() {
if (_symbol_table != nullptr) {
_symbol_table->clear();
}
assert(_previous_epoch, "invariant");
JfrSymbolTable::clear_previous_epoch();
assert(_klass_loader_leakp_set != nullptr, "invariant");
initial_klass_loader_leakp_set_size = MAX2(initial_klass_loader_leakp_set_size, _klass_loader_leakp_set->table_size());
}
JfrArtifactSet::~JfrArtifactSet() {
delete _symbol_table;
// _klass_loader_set, _klass_loader_leakp_set and
// _klass_list will be cleared by a ResourceMark
}
traceid JfrArtifactSet::bootstrap_name(bool leakp) {
return _symbol_table->bootstrap_name(leakp);
}
traceid JfrArtifactSet::mark_hidden_klass_name(const Klass* klass, bool leakp) {
assert(klass->is_instance_klass(), "invariant");
return _symbol_table->mark_hidden_klass_name((const InstanceKlass*)klass, leakp);
}
traceid JfrArtifactSet::mark(uintptr_t hash, const Symbol* sym, bool leakp) {
return _symbol_table->mark(hash, sym, leakp);
return JfrSymbolTable::bootstrap_name(leakp);
}
traceid JfrArtifactSet::mark(const Klass* klass, bool leakp) {
return _symbol_table->mark(klass, leakp);
return JfrSymbolTable::mark(klass, leakp, _class_unload, _previous_epoch);
}
traceid JfrArtifactSet::mark(const Symbol* symbol, bool leakp) {
return _symbol_table->mark(symbol, leakp);
}
traceid JfrArtifactSet::mark(uintptr_t hash, const char* const str, bool leakp) {
return _symbol_table->mark(hash, str, leakp);
return JfrSymbolTable::mark(symbol, leakp, _class_unload, _previous_epoch);
}
bool JfrArtifactSet::has_klass_entries() const {
return _klass_set->is_nonempty();
}
static inline bool not_in_set(JfrArtifactSet::JfrKlassSet* set, const Klass* k) {
assert(set != nullptr, "invariant");
assert(k != nullptr, "invariant");
@@ -129,10 +113,3 @@ size_t JfrArtifactSet::total_count() const {
initial_klass_loader_set_size = MAX2(initial_klass_loader_set_size, _klass_loader_set->table_size());
return _total_count;
}
void JfrArtifactSet::increment_checkpoint_id() {
assert(_symbol_table != nullptr, "invariant");
_symbol_table->increment_checkpoint_id();
assert(_klass_loader_leakp_set != nullptr, "invariant");
initial_klass_loader_leakp_set_size = MAX2(initial_klass_loader_leakp_set_size, _klass_loader_leakp_set->table_size());
}

View File

@@ -207,12 +207,12 @@ class JfrArtifactSet : public JfrCHeapObj {
typedef JfrSet<JfrArtifactSetConfig> JfrKlassSet;
private:
JfrSymbolTable* _symbol_table;
JfrKlassSet* _klass_set;
JfrKlassSet* _klass_loader_set;
JfrKlassSet* _klass_loader_leakp_set;
size_t _total_count;
bool _class_unload;
bool _previous_epoch;
public:
JfrArtifactSet(bool class_unload, bool previous_epoch);
@@ -222,32 +222,20 @@ class JfrArtifactSet : public JfrCHeapObj {
void initialize(bool class_unload, bool previous_epoch);
void clear();
traceid mark(uintptr_t hash, const Symbol* sym, bool leakp);
traceid mark(const Klass* klass, bool leakp);
traceid mark(const Symbol* symbol, bool leakp);
traceid mark(uintptr_t hash, const char* const str, bool leakp);
traceid mark_hidden_klass_name(const Klass* klass, bool leakp);
traceid bootstrap_name(bool leakp);
const JfrSymbolTable::SymbolEntry* map_symbol(const Symbol* symbol) const;
const JfrSymbolTable::SymbolEntry* map_symbol(uintptr_t hash) const;
const JfrSymbolTable::StringEntry* map_string(uintptr_t hash) const;
bool has_klass_entries() const;
size_t total_count() const;
void register_klass(const Klass* k);
bool should_do_cld_klass(const Klass* k, bool leakp);
void increment_checkpoint_id();
template <typename T>
void iterate_symbols(T& functor) {
_symbol_table->iterate_symbols(functor);
}
void iterate_symbols(T& functor);
template <typename T>
void iterate_strings(T& functor) {
_symbol_table->iterate_strings(functor);
}
void iterate_strings(T& functor);
template <typename Writer>
void tally(Writer& writer) {

View File

@@ -0,0 +1,42 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_JFR_RECORDER_CHECKPOINT_TYPES_JFRTYPESETUTILS_INLINE_HPP
#define SHARE_JFR_RECORDER_CHECKPOINT_TYPES_JFRTYPESETUTILS_INLINE_HPP
#include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp"
#include "jfr/support/jfrSymbolTable.inline.hpp"
template <typename T>
inline void JfrArtifactSet::iterate_symbols(T& functor) {
JfrSymbolTable::iterate_symbols(functor, _previous_epoch);
}
template <typename T>
inline void JfrArtifactSet::iterate_strings(T& functor) {
JfrSymbolTable::iterate_strings(functor, _previous_epoch);
}
#endif // SHARE_JFR_RECORDER_CHECKPOINT_TYPES_JFRTYPESETUTILS_INLINE_HPP

View File

@@ -109,6 +109,7 @@ static void check_klass(const Klass* klass) {
void JfrTraceId::assign(const Klass* klass) {
assert(klass != nullptr, "invariant");
assert(klass->trace_id() == 0, "invariant");
klass->set_trace_id(next_class_id());
check_klass(klass);
const Klass* const super = klass->super();

View File

@@ -43,6 +43,7 @@
#include "jfr/recorder/stacktrace/jfrStackTraceRepository.hpp"
#include "jfr/recorder/storage/jfrStorage.hpp"
#include "jfr/recorder/stringpool/jfrStringPool.hpp"
#include "jfr/support/jfrSymbolTable.hpp"
#include "jfr/support/jfrThreadLocal.hpp"
#include "jfr/utilities/jfrTime.hpp"
#include "jfr/writers/jfrJavaEventWriter.hpp"
@@ -315,6 +316,9 @@ bool JfrRecorder::create_components() {
if (!create_thread_group_manager()) {
return false;
}
if (!create_symbol_table()) {
return false;
}
return true;
}
@@ -413,6 +417,10 @@ bool JfrRecorder::create_thread_group_manager() {
return JfrThreadGroupManager::create();
}
bool JfrRecorder::create_symbol_table() {
return JfrSymbolTable::create();
}
void JfrRecorder::destroy_components() {
JfrJvmtiAgent::destroy();
if (_post_box != nullptr) {
@@ -453,6 +461,7 @@ void JfrRecorder::destroy_components() {
}
JfrEventThrottler::destroy();
JfrThreadGroupManager::destroy();
JfrSymbolTable::destroy();
}
bool JfrRecorder::create_recorder_thread() {

View File

@@ -57,6 +57,7 @@ class JfrRecorder : public JfrCHeapObj {
static bool create_thread_sampler();
static bool create_cpu_time_thread_sampling();
static bool create_event_throttler();
static bool create_symbol_table();
static bool create_components();
static void destroy_components();
static void on_recorder_thread_exit();

View File

@@ -39,6 +39,7 @@
#include "jfr/recorder/storage/jfrStorageControl.hpp"
#include "jfr/recorder/stringpool/jfrStringPool.hpp"
#include "jfr/support/jfrDeprecationManager.hpp"
#include "jfr/support/jfrSymbolTable.inline.hpp"
#include "jfr/utilities/jfrAllocation.hpp"
#include "jfr/utilities/jfrThreadIterator.hpp"
#include "jfr/utilities/jfrTime.hpp"
@@ -450,6 +451,7 @@ void JfrRecorderService::clear() {
void JfrRecorderService::pre_safepoint_clear() {
_storage.clear();
JfrStackTraceRepository::clear();
JfrSymbolTable::allocate_next_epoch();
}
void JfrRecorderService::invoke_safepoint_clear() {
@@ -558,6 +560,7 @@ void JfrRecorderService::pre_safepoint_write() {
}
write_storage(_storage, _chunkwriter);
write_stacktrace(_stack_trace_repository, _chunkwriter, true);
JfrSymbolTable::allocate_next_epoch();
}
void JfrRecorderService::invoke_safepoint_write() {

View File

@@ -0,0 +1,188 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "cds/aotClassLocation.hpp"
#include "classfile/classFileParser.hpp"
#include "classfile/classFileStream.hpp"
#include "classfile/classLoaderData.inline.hpp"
#include "jfr/instrumentation/jfrClassTransformer.hpp"
#include "jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp"
#include "jfr/support/jfrClassDefineEvent.hpp"
#include "jfr/support/jfrSymbolTable.hpp"
#include "jfrfiles/jfrEventClasses.hpp"
#include "oops/instanceKlass.hpp"
#include "runtime/javaThread.hpp"
/*
* Two cases for JDK modules as outlined by JEP 200: The Modular JDK.
*
* The modular structure of the JDK implements the following principles:
* 1. Standard modules, whose specifications are governed by the JCP, have names starting with the string "java.".
* 2. All other modules are merely part of the JDK, and have names starting with the string "jdk.".
* */
static inline bool is_jdk_module(const char* module_name) {
assert(module_name != nullptr, "invariant");
return strstr(module_name, "java.") == module_name || strstr(module_name, "jdk.") == module_name;
}
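The strstr(s, p) == s idiom above is a prefix test: strstr returns a pointer to the first occurrence of p in s, so the match is a prefix exactly when that pointer equals s itself. An equivalent spelling with strncmp, shown only for illustration:

#include <cstring>

// Equivalent prefix test; "java." and "jdk." are the JEP 200 module prefixes.
static bool has_prefix(const char* s, const char* prefix) {
  return strncmp(s, prefix, strlen(prefix)) == 0;
}

static bool is_jdk_module_alt(const char* module_name) {
  return has_prefix(module_name, "java.") || has_prefix(module_name, "jdk.");
}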
static inline bool is_unnamed_module(const ModuleEntry* module) {
return module == nullptr || !module->is_named();
}
static inline bool is_jdk_module(const ModuleEntry* module, JavaThread* jt) {
assert(jt != nullptr, "invariant");
if (is_unnamed_module(module)) {
return false;
}
const Symbol* const module_symbol = module->name();
assert(module_symbol != nullptr, "invariant");
return is_jdk_module(module_symbol->as_C_string());
}
static inline bool is_jdk_module(const InstanceKlass* ik, JavaThread* jt) {
assert(ik != nullptr, "invariant");
assert(jt != nullptr, "invariant");
return is_jdk_module(ik->module(), jt);
}
static traceid module_path(const InstanceKlass* ik, JavaThread* jt) {
assert(ik != nullptr, "invariant");
const ModuleEntry* const module_entry = ik->module();
if (is_unnamed_module(module_entry)) {
return 0;
}
const char* const module_name = module_entry->name()->as_C_string();
assert(module_name != nullptr, "invariant");
if (is_jdk_module(module_name)) {
const size_t module_name_len = strlen(module_name);
char* const path = NEW_RESOURCE_ARRAY_IN_THREAD(jt, char, module_name_len + 6); // "jrt:/"
jio_snprintf(path, module_name_len + 6, "%s%s", "jrt:/", module_name);
return JfrSymbolTable::add(path);
}
return 0;
}
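The +6 above is strlen("jrt:/") plus the terminating NUL; the same sizing pattern recurs below with "instance of " (+13) and with the file-type URL (+3 for ":/" plus the NUL). A small self-checking sketch of the arithmetic, using standard snprintf in place of jio_snprintf:

#include <cassert>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical check of the "prefix + name + NUL" sizing used above.
int main() {
  const char* module_name = "java.base";
  const size_t len = strlen(module_name) + 6;  // strlen("jrt:/") == 5, +1 NUL
  std::vector<char> path(len);
  int written = snprintf(path.data(), len, "jrt:/%s", module_name);
  assert(written >= 0 && (size_t)written == len - 1);  // exactly fits
  puts(path.data());                                   // "jrt:/java.base"
  return 0;
}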
static traceid caller_path(const InstanceKlass* ik, JavaThread* jt) {
assert(ik != nullptr, "invariant");
assert(jt != nullptr, "invariant");
assert(ik->class_loader_data()->is_the_null_class_loader_data(), "invariant");
const Klass* const caller = jt->security_get_caller_class(1);
// caller can be null, for example, during a JVMTI VM_Init hook
if (caller != nullptr) {
const char* caller_name = caller->external_name();
assert(caller_name != nullptr, "invariant");
const size_t caller_name_len = strlen(caller_name);
char* const path = NEW_RESOURCE_ARRAY_IN_THREAD(jt, char, caller_name_len + 13); // "instance of "
jio_snprintf(path, caller_name_len + 13, "%s%s", "instance of ", caller_name);
return JfrSymbolTable::add(path);
}
return 0;
}
static traceid class_loader_path(const InstanceKlass* ik, JavaThread* jt) {
assert(ik != nullptr, "invariant");
assert(jt != nullptr, "invariant");
assert(!ik->class_loader_data()->is_the_null_class_loader_data(), "invariant");
oop class_loader = ik->class_loader_data()->class_loader();
const char* class_loader_name = class_loader->klass()->external_name();
return class_loader_name != nullptr ? JfrSymbolTable::add(class_loader_name) : 0;
}
static inline bool is_not_retransforming(const InstanceKlass* ik, JavaThread* jt) {
return JfrClassTransformer::find_existing_klass(ik, jt) == nullptr;
}
static traceid get_source(const InstanceKlass* ik, JavaThread* jt) {
traceid source_id = 0;
if (is_jdk_module(ik, jt)) {
source_id = module_path(ik, jt);
} else if (ik->class_loader_data()->is_the_null_class_loader_data()) {
source_id = caller_path(ik, jt);
} else {
source_id = class_loader_path(ik, jt);
}
return source_id;
}
static traceid get_source(const AOTClassLocation* cl, JavaThread* jt) {
assert(cl != nullptr, "invariant");
assert(!cl->is_modules_image(), "invariant");
const char* const path = cl->path();
assert(path != nullptr, "invariant");
size_t len = strlen(path);
const char* file_type = cl->file_type_string();
assert(file_type != nullptr, "invariant");
len += strlen(file_type) + 3; // ":/" + null
char* const url = NEW_RESOURCE_ARRAY_IN_THREAD(jt, char, len);
jio_snprintf(url, len, "%s%s%s", file_type, ":/", path);
return JfrSymbolTable::add(url);
}
static inline void send_event(const InstanceKlass* ik, traceid source_id) {
EventClassDefine event;
event.set_definedClass(ik);
event.set_definingClassLoader(ik->class_loader_data());
event.set_source(source_id);
event.commit();
}
void JfrClassDefineEvent::on_creation(const InstanceKlass* ik, const ClassFileParser& parser, JavaThread* jt) {
assert(ik != nullptr, "invariant");
assert(ik->trace_id() != 0, "invariant");
assert(!parser.is_internal(), "invariant");
assert(jt != nullptr, "invariant");
if (EventClassDefine::is_enabled() && is_not_retransforming(ik, jt)) {
ResourceMark rm(jt);
traceid source_id = 0;
const ClassFileStream& stream = parser.stream();
if (stream.source() != nullptr) {
if (stream.from_boot_loader_modules_image()) {
assert(is_jdk_module(ik, jt), "invariant");
source_id = module_path(ik, jt);
} else {
source_id = JfrSymbolTable::add(stream.source());
}
} else {
source_id = get_source(ik, jt);
}
send_event(ik, source_id);
}
}
void JfrClassDefineEvent::on_restoration(const InstanceKlass* ik, JavaThread* jt) {
assert(ik != nullptr, "invariant");
assert(ik->trace_id() != 0, "invariant");
assert(jt != nullptr, "invariant");
if (EventClassDefine::is_enabled()) {
ResourceMark rm(jt);
assert(is_not_retransforming(ik, jt), "invariant");
const int index = ik->shared_classpath_index();
assert(index >= 0, "invariant");
const AOTClassLocation* const cl = AOTClassLocationConfig::runtime()->class_location_at(index);
assert(cl != nullptr, "invariant");
send_event(ik, cl->is_modules_image() ? module_path(ik, jt) : get_source(cl, jt));
}
}

View File

@@ -0,0 +1,40 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_JFR_SUPPORT_JFRCLASSDEFINEEVENT_HPP
#define SHARE_JFR_SUPPORT_JFRCLASSDEFINEEVENT_HPP
#include "memory/allStatic.hpp"
class ClassFileParser;
class InstanceKlass;
class JavaThread;
class JfrClassDefineEvent : AllStatic {
public:
static void on_creation(const InstanceKlass* ik, const ClassFileParser& parser, JavaThread* jt);
static void on_restoration(const InstanceKlass* ik, JavaThread* jt);
};
#endif // SHARE_JFR_SUPPORT_JFRCLASSDEFINEEVENT_HPP

View File

@@ -40,6 +40,5 @@
#define EVENT_STICKY_BIT 8192
#define IS_EVENT_KLASS(ptr) (((ptr)->trace_id() & (JDK_JFR_EVENT_KLASS | JDK_JFR_EVENT_SUBKLASS)) != 0)
#define IS_EVENT_OR_HOST_KLASS(ptr) (((ptr)->trace_id() & (JDK_JFR_EVENT_KLASS | JDK_JFR_EVENT_SUBKLASS | EVENT_HOST_KLASS)) != 0)
#define ON_KLASS_CREATION(k, p, t) Jfr::on_klass_creation(k, p, t)
#endif // SHARE_JFR_SUPPORT_JFRKLASSEXTENSION_HPP

View File

@@ -24,131 +24,32 @@
#include "classfile/classLoaderData.hpp"
#include "classfile/javaClasses.hpp"
#include "jfr/support/jfrSymbolTable.hpp"
#include "jfr/recorder/jfrRecorder.hpp"
#include "jfr/support/jfrSymbolTable.inline.hpp"
#include "oops/klass.hpp"
#include "oops/symbol.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/mutexLocker.hpp"
// incremented on each rotation
static u8 checkpoint_id = 1;
JfrSymbolTable::Impl* JfrSymbolTable::_epoch_0 = nullptr;
JfrSymbolTable::Impl* JfrSymbolTable::_epoch_1 = nullptr;
JfrSymbolTable::StringEntry* JfrSymbolTable::_bootstrap = nullptr;
// creates a unique id by combining a checkpoint relative symbol id (2^24)
// with the current checkpoint id (2^40)
#define CREATE_SYMBOL_ID(sym_id) (((u8)((checkpoint_id << 24) | sym_id)))
static traceid create_symbol_id(traceid artifact_id) {
return artifact_id != 0 ? CREATE_SYMBOL_ID(artifact_id) : 0;
}
static uintptr_t string_hash(const char* str) {
return java_lang_String::hash_code(reinterpret_cast<const jbyte*>(str), static_cast<int>(strlen(str)));
}
static JfrSymbolTable::StringEntry* bootstrap = nullptr;
static JfrSymbolTable* _instance = nullptr;
static JfrSymbolTable& instance() {
assert(_instance != nullptr, "invariant");
return *_instance;
}
JfrSymbolTable* JfrSymbolTable::create() {
assert(_instance == nullptr, "invariant");
assert_locked_or_safepoint(ClassLoaderDataGraph_lock);
_instance = new JfrSymbolTable();
return _instance;
}
void JfrSymbolTable::destroy() {
assert_locked_or_safepoint(ClassLoaderDataGraph_lock);
if (_instance != nullptr) {
delete _instance;
_instance = nullptr;
}
assert(_instance == nullptr, "invariant");
}
JfrSymbolTable::JfrSymbolTable() :
_symbols(new Symbols(this)),
_strings(new Strings(this)),
_symbol_list(nullptr),
_string_list(nullptr),
_symbol_query(nullptr),
_string_query(nullptr),
_id_counter(1),
_class_unload(false) {
assert(_symbols != nullptr, "invariant");
assert(_strings != nullptr, "invariant");
bootstrap = new StringEntry(0, (const char*)&BOOTSTRAP_LOADER_NAME);
assert(bootstrap != nullptr, "invariant");
bootstrap->set_id(create_symbol_id(1));
_string_list = bootstrap;
}
JfrSymbolTable::~JfrSymbolTable() {
clear();
delete _symbols;
delete _strings;
delete bootstrap;
}
void JfrSymbolTable::clear() {
assert(_symbols != nullptr, "invariant");
if (_symbols->has_entries()) {
_symbols->clear_entries();
}
assert(!_symbols->has_entries(), "invariant");
assert(_strings != nullptr, "invariant");
if (_strings->has_entries()) {
_strings->clear_entries();
}
assert(!_strings->has_entries(), "invariant");
_symbol_list = nullptr;
_id_counter = 1;
_symbol_query = nullptr;
_string_query = nullptr;
assert(bootstrap != nullptr, "invariant");
bootstrap->reset();
_string_list = bootstrap;
}
void JfrSymbolTable::set_class_unload(bool class_unload) {
_class_unload = class_unload;
}
void JfrSymbolTable::increment_checkpoint_id() {
assert_lock_strong(ClassLoaderDataGraph_lock);
clear();
++checkpoint_id;
}
JfrSymbolCallback::JfrSymbolCallback() : _id_counter(2) {} // 1 is reserved for "bootstrap" entry
template <typename T>
inline void JfrSymbolTable::assign_id(T* entry) {
inline void JfrSymbolCallback::assign_id(const T* entry) {
assert(entry != nullptr, "invariant");
assert(entry->id() == 0, "invariant");
entry->set_id(create_symbol_id(++_id_counter));
entry->set_id(AtomicAccess::fetch_then_add(&_id_counter, (traceid)1));
}
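assign_id now hands out ids with a single atomic fetch-and-add rather than a mutation guarded by an external lock, which is what makes the new table safe for concurrent insertion; the counter starts at 2 because id 1 is reserved for the bootstrap loader entry. A standalone equivalent with std::atomic (names hypothetical):

#include <atomic>
#include <cstdint>

// Hypothetical sketch of the lock-free id assignment in JfrSymbolCallback.
struct IdAssigner {
  std::atomic<uint64_t> next{2};     // 1 is reserved for the bootstrap entry

  uint64_t assign() {
    // fetch_add returns the pre-increment value, so ids stay dense and
    // unique even when many inserting threads race here.
    return next.fetch_add(1, std::memory_order_relaxed);
  }
};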
void JfrSymbolTable::on_link(const SymbolEntry* entry) {
void JfrSymbolCallback::on_link(const JfrSymbolTable::SymbolEntry* entry) {
assign_id(entry);
const_cast<Symbol*>(entry->literal())->increment_refcount();
entry->set_list_next(_symbol_list);
_symbol_list = entry;
}
bool JfrSymbolTable::on_equals(uintptr_t hash, const SymbolEntry* entry) {
assert(entry != nullptr, "invariant");
assert(entry->hash() == hash, "invariant");
assert(_symbol_query != nullptr, "invariant");
return _symbol_query == entry->literal();
}
void JfrSymbolTable::on_unlink(const SymbolEntry* entry) {
void JfrSymbolCallback::on_unlink(const JfrSymbolTable::SymbolEntry* entry) {
assert(entry != nullptr, "invariant");
const_cast<Symbol*>(entry->literal())->decrement_refcount();
}
@@ -162,75 +63,241 @@ static const char* resource_to_c_heap_string(const char* resource_str) {
return c_string;
}
void JfrSymbolTable::on_link(const StringEntry* entry) {
void JfrSymbolCallback::on_link(const JfrSymbolTable::StringEntry* entry) {
assign_id(entry);
const_cast<StringEntry*>(entry)->set_literal(resource_to_c_heap_string(entry->literal()));
entry->set_list_next(_string_list);
_string_list = entry;
const_cast<JfrSymbolTable::StringEntry*>(entry)->set_literal(resource_to_c_heap_string(entry->literal()));
}
static bool string_compare(const char* query, const char* candidate) {
assert(query != nullptr, "invariant");
assert(candidate != nullptr, "invariant");
const size_t length = strlen(query);
return strncmp(query, candidate, length) == 0;
}
bool JfrSymbolTable::on_equals(uintptr_t hash, const StringEntry* entry) {
void JfrSymbolCallback::on_unlink(const JfrSymbolTable::StringEntry* entry) {
assert(entry != nullptr, "invariant");
assert(entry->hash() == hash, "invariant");
assert(_string_query != nullptr, "invariant");
return string_compare(_string_query, entry->literal());
JfrCHeapObj::free(const_cast<char*>(entry->literal()), strlen(entry->literal()) + 1);
}
void JfrSymbolTable::on_unlink(const StringEntry* entry) {
assert(entry != nullptr, "invariant");
JfrCHeapObj::free(const_cast<char*>(entry->literal()), strlen(entry->literal() + 1));
static JfrSymbolCallback* _callback = nullptr;
template <typename T, typename IdType>
JfrSymbolTableEntry<T, IdType>::JfrSymbolTableEntry(unsigned hash, const T& data) :
JfrConcurrentHashtableEntry<T, IdType>(hash, data), _serialized(false), _unloading(false), _leakp(false) {}
template <typename T, typename IdType>
bool JfrSymbolTableEntry<T, IdType>::on_equals(const char* str) {
assert(str != nullptr, "invariant");
return strcmp((const char*)this->literal(), str) == 0;
}
static const constexpr unsigned max_capacity = 1 << 30;
static inline unsigned calculate_capacity(unsigned size, unsigned capacity) {
assert(is_power_of_2(capacity), "invariant");
assert(capacity <= max_capacity, "invariant");
double load_factor = (double)size / (double)capacity;
if (load_factor < 0.75) {
return capacity;
}
do {
capacity <<= 1;
assert(is_power_of_2(capacity), "invariant");
guarantee(capacity <= max_capacity, "overflow");
load_factor = (double)size / (double)capacity;
} while (load_factor >= 0.75);
return capacity;
}
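calculate_capacity keeps the next epoch's tables below a 0.75 load factor by doubling: for example, a previous epoch that ended with 400 entries in a 512-slot table has load 0.78, so the replacement table starts at 1024 slots (load 0.39). A standalone restatement of the same growth rule, runnable as a quick check:

#include <cassert>
#include <cstdio>

// Standalone restatement of calculate_capacity's growth rule.
static unsigned next_capacity(unsigned size, unsigned capacity) {
  const unsigned max_capacity = 1u << 30;
  while ((double)size / (double)capacity >= 0.75) {
    capacity <<= 1;                  // stay a power of two
    assert(capacity <= max_capacity);
  }
  return capacity;
}

int main() {
  printf("%u\n", next_capacity(400, 512));   // 1024: 400/512 = 0.78 >= 0.75
  printf("%u\n", next_capacity(100, 512));   // 512 unchanged: load 0.20
  return 0;
}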
bool JfrSymbolTable::create() {
assert(_callback == nullptr, "invariant");
// Allocate callback instance before tables.
_callback = new JfrSymbolCallback();
if (_callback == nullptr) {
return false;
}
assert(_bootstrap == nullptr, "invariant");
_bootstrap = new StringEntry(0, (const char*)&BOOTSTRAP_LOADER_NAME);
if (_bootstrap == nullptr) {
return false;
}
_bootstrap->set_id(1);
assert(this_epoch_table() == nullptr, "invariant");
Impl* table = new JfrSymbolTable::Impl();
if (table == nullptr) {
return false;
}
set_this_epoch(table);
assert(previous_epoch_table() == nullptr, "invariant");
return true;
}
void JfrSymbolTable::destroy() {
if (_callback != nullptr) {
delete _callback;
_callback = nullptr;
}
if (_bootstrap != nullptr) {
delete _bootstrap;
_bootstrap = nullptr;
}
if (_epoch_0 != nullptr) {
delete _epoch_0;
_epoch_0 = nullptr;
}
if (_epoch_1 != nullptr) {
delete _epoch_1;
_epoch_1 = nullptr;
}
}
void JfrSymbolTable::allocate_next_epoch() {
assert(nullptr == previous_epoch_table(), "invariant");
const Impl* const current = this_epoch_table();
assert(current != nullptr, "invariant");
const unsigned next_symbols_capacity = calculate_capacity(current->symbols_size(), current->symbols_capacity());
const unsigned next_strings_capacity = calculate_capacity(current->strings_size(), current->strings_capacity());
assert(_callback != nullptr, "invariant");
// Install the new table in the previous-epoch slot; it becomes the current table at the next epoch flip.
set_previous_epoch(new JfrSymbolTable::Impl(next_symbols_capacity, next_strings_capacity));
assert(this_epoch_table() != nullptr, "invariant");
assert(previous_epoch_table() != nullptr, "invariant");
}
void JfrSymbolTable::clear_previous_epoch() {
Impl* const table = previous_epoch_table();
assert(table != nullptr, "invariant");
set_previous_epoch(nullptr);
delete table;
assert(_bootstrap != nullptr, "invariant");
_bootstrap->reset();
assert(!_bootstrap->is_serialized(), "invariant");
}
void JfrSymbolTable::set_this_epoch(JfrSymbolTable::Impl* table) {
assert(table != nullptr, "invariant");
const u1 epoch = JfrTraceIdEpoch::current();
if (epoch == 0) {
_epoch_0 = table;
} else {
_epoch_1 = table;
}
}
void JfrSymbolTable::set_previous_epoch(JfrSymbolTable::Impl* table) {
const u1 epoch = JfrTraceIdEpoch::previous();
if (epoch == 0) {
_epoch_0 = table;
} else {
_epoch_1 = table;
}
}
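set_this_epoch and set_previous_epoch implement a two-slot double buffer keyed on the JFR epoch bit: mutators write into the current epoch's slot while the previous epoch's slot is drained and rebuilt during rotation. A compact sketch of the slot selection, with a plain bool standing in for JfrTraceIdEpoch and void* for the table type (names hypothetical):

// Hypothetical sketch of the epoch-indexed double buffer.
struct SymbolTables {
  void* epoch_slot[2] = {nullptr, nullptr};
  bool current_epoch = false;        // stands in for JfrTraceIdEpoch::current()

  void* this_epoch() const { return epoch_slot[current_epoch ? 1 : 0]; }
  void* previous_epoch() const { return epoch_slot[current_epoch ? 0 : 1]; }

  void set_previous_epoch(void* table) {
    // Installed here, the table becomes current after the next epoch flip,
    // exactly as allocate_next_epoch() arranges above.
    epoch_slot[current_epoch ? 0 : 1] = table;
  }
};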
inline bool JfrSymbolTable::Impl::has_symbol_entries() const {
return _symbols->is_nonempty();
}
inline bool JfrSymbolTable::Impl::has_string_entries() const {
return _strings->is_nonempty();
}
inline bool JfrSymbolTable::Impl::has_entries() const {
return has_symbol_entries() || has_string_entries();
}
inline unsigned JfrSymbolTable::Impl::symbols_capacity() const {
return _symbols->capacity();
}
inline unsigned JfrSymbolTable::Impl::symbols_size() const {
return _symbols->size();
}
inline unsigned JfrSymbolTable::Impl::strings_capacity() const {
return _strings->capacity();
}
inline unsigned JfrSymbolTable::Impl::strings_size() const {
return _strings->size();
}
bool JfrSymbolTable::has_entries(bool previous_epoch /* false */) {
const Impl* table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->has_entries();
}
bool JfrSymbolTable::has_symbol_entries(bool previous_epoch /* false */) {
const Impl* table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->has_symbol_entries();
}
bool JfrSymbolTable::has_string_entries(bool previous_epoch /* false */) {
const Impl* table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->has_string_entries();
}
traceid JfrSymbolTable::bootstrap_name(bool leakp) {
assert(bootstrap != nullptr, "invariant");
assert(_bootstrap != nullptr, "invariant");
if (leakp) {
bootstrap->set_leakp();
_bootstrap->set_leakp();
}
return bootstrap->id();
return _bootstrap->id();
}
traceid JfrSymbolTable::mark(const Symbol* sym, bool leakp /* false */) {
JfrSymbolTable::Impl::Impl(unsigned symbols_capacity /* 0*/, unsigned strings_capacity /* 0 */) :
_symbols(new Symbols(_callback, symbols_capacity)), _strings(new Strings(_callback, strings_capacity)) {}
JfrSymbolTable::Impl::~Impl() {
delete _symbols;
delete _strings;
}
traceid JfrSymbolTable::Impl::mark(const Symbol* sym, bool leakp /* false */, bool class_unload /* false */) {
assert(sym != nullptr, "invariant");
return mark(sym->identity_hash(), sym, leakp);
return mark(sym->identity_hash(), sym, leakp, class_unload);
}
traceid JfrSymbolTable::mark(uintptr_t hash, const Symbol* sym, bool leakp) {
traceid JfrSymbolTable::Impl::mark(unsigned hash, const Symbol* sym, bool leakp, bool class_unload /* false */) {
assert(sym != nullptr, "invariant");
assert(_symbols != nullptr, "invariant");
_symbol_query = sym;
const SymbolEntry& entry = _symbols->lookup_put(hash, sym);
if (_class_unload) {
entry.set_unloading();
}
const SymbolEntry* entry = _symbols->lookup_put(hash, sym);
assert(entry != nullptr, "invariant");
if (leakp) {
entry.set_leakp();
entry->set_leakp();
} else if (class_unload) {
entry->set_unloading();
}
return entry.id();
return entry->id();
}
traceid JfrSymbolTable::mark(const char* str, bool leakp /* false*/) {
return mark(string_hash(str), str, leakp);
traceid JfrSymbolTable::mark(const Symbol* sym, bool leakp /* false */, bool class_unload /* false */, bool previous_epoch /* false */) {
Impl* const table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->mark(sym->identity_hash(), sym, leakp, class_unload);
}
traceid JfrSymbolTable::mark(uintptr_t hash, const char* str, bool leakp) {
static inline unsigned string_hash(const char* str) {
return java_lang_String::hash_code(reinterpret_cast<const jbyte*>(str), static_cast<int>(strlen(str)));
}
traceid JfrSymbolTable::Impl::mark(const char* str, bool leakp /* false*/, bool class_unload /* false */) {
return mark(string_hash(str), str, leakp, class_unload);
}
traceid JfrSymbolTable::Impl::mark(unsigned hash, const char* str, bool leakp, bool class_unload /* false */) {
assert(str != nullptr, "invariant");
assert(_strings != nullptr, "invariant");
_string_query = str;
const StringEntry& entry = _strings->lookup_put(hash, str);
if (_class_unload) {
entry.set_unloading();
}
const StringEntry* entry = _strings->lookup_put(hash, str);
assert(entry != nullptr, "invariant");
if (leakp) {
entry.set_leakp();
entry->set_leakp();
} else if (class_unload) {
entry->set_unloading();
}
return entry.id();
return entry->id();
}
traceid JfrSymbolTable::mark(unsigned hash, const char* str, bool leakp, bool class_unload /* false */, bool previous_epoch /* false */) {
Impl* const table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->mark(hash, str, leakp, class_unload);
}
/*
@@ -241,40 +308,47 @@ traceid JfrSymbolTable::mark(uintptr_t hash, const char* str, bool leakp) {
*
* Caller needs ResourceMark.
*/
traceid JfrSymbolTable::mark_hidden_klass_name(const Klass* k, bool leakp) {
inline traceid JfrSymbolTable::Impl::mark_hidden_klass_name(const Klass* k, bool leakp, bool class_unload /* false */) {
assert(k != nullptr, "invariant");
assert(k->is_hidden(), "invariant");
const uintptr_t hash = k->name()->identity_hash();
return mark(hash, k->external_name(), leakp);
return mark(k->name()->identity_hash(), k->external_name(), leakp, class_unload);
}
traceid JfrSymbolTable::mark(const Klass* k, bool leakp) {
traceid JfrSymbolTable::Impl::mark(const Klass* k, bool leakp, bool class_unload /* false */) {
assert(k != nullptr, "invariant");
traceid symbol_id = 0;
if (k->is_hidden()) {
symbol_id = mark_hidden_klass_name(k, leakp);
symbol_id = mark_hidden_klass_name(k, leakp, class_unload);
} else {
Symbol* const sym = k->name();
if (sym != nullptr) {
symbol_id = mark(sym, leakp);
symbol_id = mark(sym, leakp, class_unload);
}
}
assert(symbol_id > 0, "a symbol handler must mark the symbol for writing");
return symbol_id;
}
template <typename T>
traceid JfrSymbolTable::add_impl(const T* sym) {
assert(sym != nullptr, "invariant");
assert(_instance != nullptr, "invariant");
assert_locked_or_safepoint(ClassLoaderDataGraph_lock);
return instance().mark(sym);
traceid JfrSymbolTable::mark(const Klass* k, bool leakp, bool class_unload /* false */, bool previous_epoch /* false */) {
Impl* const table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
return table->mark(k, leakp, class_unload);
}
traceid JfrSymbolTable::add(const Symbol* sym) {
return add_impl(sym);
inline traceid JfrSymbolTable::Impl::add(const Symbol* sym) {
assert(sym != nullptr, "invariant");
return _symbols->lookup_put(sym->identity_hash(), sym)->id();
}
traceid JfrSymbolTable::Impl::add(const char* str) {
assert(str != nullptr, "invariant");
return _strings->lookup_put(string_hash(str), str)->id();
}
inline traceid JfrSymbolTable::add(const Symbol* sym) {
return this_epoch_table()->add(sym);
}
traceid JfrSymbolTable::add(const char* str) {
return add_impl(str);
return this_epoch_table()->add(str);
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,32 +25,58 @@
#ifndef SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_HPP
#define SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_HPP
#include "jfr/utilities/jfrHashtable.hpp"
#include "jfr/utilities/jfrAllocation.hpp"
#include "jfr/utilities/jfrConcurrentHashtable.hpp"
#include "jfr/utilities/jfrTypes.hpp"
template <typename T, typename IdType>
class JfrSymbolTableEntry : public JfrConcurrentHashtableEntry<T, IdType> {
public:
JfrSymbolTableEntry(unsigned hash, const T& data);
bool is_serialized() const { return _serialized; }
void set_serialized() const { _serialized = true; }
bool is_unloading() const { return _unloading; }
void set_unloading() const { _unloading = true; }
bool is_leakp() const { return _leakp; }
void set_leakp() const { _leakp = true; }
void reset() const {
_serialized = false;
_unloading = false;
_leakp = false;
}
bool on_equals(const Symbol* sym) {
assert(sym != nullptr, "invariant");
return sym == (const Symbol*)this->literal();
}
bool on_equals(const char* str);
private:
mutable bool _serialized;
mutable bool _unloading;
mutable bool _leakp;
};
class JfrSymbolCallback : public JfrCHeapObj {
friend class JfrSymbolTable;
public:
typedef JfrConcurrentHashTableHost<const Symbol*, traceid, JfrSymbolTableEntry, JfrSymbolCallback> Symbols;
typedef JfrConcurrentHashTableHost<const char*, traceid, JfrSymbolTableEntry, JfrSymbolCallback> Strings;
void on_link(const Symbols::Entry* entry);
void on_unlink(const Symbols::Entry* entry);
void on_link(const Strings::Entry* entry);
void on_unlink(const Strings::Entry* entry);
private:
traceid _id_counter;
JfrSymbolCallback();
template <typename T>
void assign_id(const T* entry);
};
/*
* This table maps an oop/Symbol* or a char* to the Jfr type 'Symbol'.
*
@@ -58,88 +84,93 @@ class ListEntry : public JfrHashtableEntry<T, IdType> {
* which is represented in the binary format as a sequence of checkpoint events.
* The returned id can be used as a foreign key, but please note that the id is
* epoch-relative, and is therefore only valid in the current epoch / chunk.
* The table is cleared as part of rotation.
*
* Caller must ensure mutual exclusion by means of the ClassLoaderDataGraph_lock or by safepointing.
*/
class JfrSymbolTable : public AllStatic {
friend class JfrArtifactSet;
template <typename, typename, template<typename, typename> class, typename, unsigned>
friend class JfrConcurrentHashTableHost;
friend class JfrRecorder;
friend class JfrRecorderService;
friend class JfrSymbolCallback;
typedef JfrConcurrentHashTableHost<const Symbol*, traceid, JfrSymbolTableEntry, JfrSymbolCallback> Symbols;
typedef JfrConcurrentHashTableHost<const char*, traceid, JfrSymbolTableEntry, JfrSymbolCallback> Strings;
public:
typedef Symbols::Entry SymbolEntry;
typedef Strings::Entry StringEntry;
static traceid add(const Symbol* sym);
static traceid add(const char* str);
private:
class Impl : public JfrCHeapObj {
friend class JfrSymbolTable;
private:
Symbols* _symbols;
Strings* _strings;
Impl(unsigned symbol_capacity = 0, unsigned string_capacity = 0);
~Impl();
void clear();
traceid add(const Symbol* sym);
traceid add(const char* str);
traceid mark(unsigned hash, const Symbol* sym, bool leakp, bool class_unload = false);
traceid mark(const Klass* k, bool leakp, bool class_unload = false);
traceid mark(const Symbol* sym, bool leakp = false, bool class_unload = false);
traceid mark(const char* str, bool leakp = false, bool class_unload = false);
traceid mark(unsigned hash, const char* str, bool leakp, bool class_unload = false);
traceid mark_hidden_klass_name(const Klass* k, bool leakp, bool class_unload = false);
bool has_entries() const;
bool has_symbol_entries() const;
bool has_string_entries() const;
unsigned symbols_capacity() const;
unsigned symbols_size() const;
unsigned strings_capacity() const;
unsigned strings_size() const;
template <typename Functor>
void iterate_symbols(Functor& functor);
template <typename Functor>
void iterate_strings(Functor& functor);
};
static Impl* _epoch_0;
static Impl* _epoch_1;
static StringEntry* _bootstrap;
static bool create();
static void destroy();
static Impl* this_epoch_table();
static Impl* previous_epoch_table();
static Impl* epoch_table_selector(u1 epoch);
static void set_this_epoch(Impl* table);
static void set_previous_epoch(Impl* table);
static void clear_previous_epoch();
static void allocate_next_epoch();
static bool has_entries(bool previous_epoch = false);
static bool has_symbol_entries(bool previous_epoch = false);
static bool has_string_entries(bool previous_epoch = false);
static traceid mark(const Klass* k, bool leakp, bool class_unload = false, bool previous_epoch = false);
static traceid mark(const Symbol* sym, bool leakp = false, bool class_unload = false, bool previous_epoch = false);
static traceid mark(unsigned hash, const char* str, bool leakp, bool class_unload = false, bool previous_epoch = false);
static traceid bootstrap_name(bool leakp);
template <typename Functor>
static void iterate_symbols(Functor& functor, bool previous_epoch = false);
template <typename Functor>
static void iterate_strings(Functor& functor, bool previous_epoch = false);
};
#endif // SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_HPP


@@ -0,0 +1,72 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_INLINE_HPP
#define SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_INLINE_HPP
#include "jfr/support/jfrSymbolTable.hpp"
#include "jfr/recorder/checkpoint/types/traceid/jfrTraceIdEpoch.hpp"
#include "jfr/utilities/jfrConcurrentHashtable.inline.hpp"
inline JfrSymbolTable::Impl* JfrSymbolTable::epoch_table_selector(u1 epoch) {
return epoch == 0 ? _epoch_0 : _epoch_1;
}
inline JfrSymbolTable::Impl* JfrSymbolTable::this_epoch_table() {
return epoch_table_selector(JfrTraceIdEpoch::current());
}
inline JfrSymbolTable::Impl* JfrSymbolTable::previous_epoch_table() {
return epoch_table_selector(JfrTraceIdEpoch::previous());
}
template <typename Functor>
inline void JfrSymbolTable::Impl::iterate_symbols(Functor& functor) {
_symbols->iterate_entry(functor);
}
template <typename Functor>
inline void JfrSymbolTable::iterate_symbols(Functor& functor, bool previous_epoch /* false */) {
Impl* const table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
table->iterate_symbols(functor);
}
template <typename Functor>
inline void JfrSymbolTable::Impl::iterate_strings(Functor& functor) {
_strings->iterate_entry(functor);
}
template <typename Functor>
inline void JfrSymbolTable::iterate_strings(Functor& functor, bool previous_epoch /* false */) {
Impl* const table = previous_epoch ? previous_epoch_table() : this_epoch_table();
assert(table != nullptr, "invariant");
if (!functor(_bootstrap)) {
return;
}
table->iterate_strings(functor);
}
#endif // SHARE_JFR_SUPPORT_JFRSYMBOLTABLE_INLINE_HPP
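The selectors above double-buffer the table by epoch: writers always hit the current instance while the previous epoch's instance is drained during serialization. A standalone sketch of the scheme (illustrative only; the real selector reads JfrTraceIdEpoch and rotation happens at chunk boundaries):

#include <cstdint>

// Two tables indexed by a single epoch bit. Flipping the bit retires the
// old "current" table to "previous" so it can be iterated and cleared
// without blocking new insertions into the fresh epoch.
template <typename Table>
class EpochPair {
  Table _tables[2];
  uint8_t _epoch = 0;
public:
  Table& current()  { return _tables[_epoch]; }
  Table& previous() { return _tables[_epoch ^ 1]; }
  void rotate() {
    _epoch ^= 1;        // previous() now names the just-retired table
    current().clear();  // the new epoch starts empty
  }
};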


@@ -42,7 +42,6 @@
#define ASSIGN_PRIMITIVE_CLASS_ID(data) JfrTraceId::assign_primitive_klass_id()
#define REMOVE_ID(k) JfrTraceId::remove(k);
#define REMOVE_METHOD_ID(method) JfrTraceId::remove(method);
static constexpr const uint16_t cleared_epoch_bits = 512 | 256;


@@ -0,0 +1,139 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_HPP
#define SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_HPP
#include "jfr/utilities/jfrLinkedList.hpp"
#include "memory/allocation.hpp"
template <typename T, typename IdType, template <typename, typename> class TableEntry>
class JfrConcurrentAscendingId {
private:
IdType _id;
public:
JfrConcurrentAscendingId() : _id(1) {}
// Callbacks.
void on_link(TableEntry<T, IdType>* entry);
bool on_equals(unsigned hash, const TableEntry<T, IdType>* entry);
};
template <typename T, typename IdType>
class JfrConcurrentHashtableEntry : public CHeapObj<mtTracing> {
template <typename, typename>
friend class JfrLinkedList;
private:
typedef JfrConcurrentHashtableEntry<T, IdType> Entry;
Entry* _next;
T _literal; // ref to item in table.
mutable IdType _id;
unsigned _hash;
public:
JfrConcurrentHashtableEntry(unsigned hash, const T& data) : _next(nullptr), _literal(data), _id(0), _hash(hash) {}
unsigned hash() const { return _hash; }
T literal() const { return _literal; }
T* literal_addr() { return &_literal; }
void set_literal(T s) { _literal = s; }
void set_next(Entry* next) { _next = next; }
Entry* next() const { return _next; }
Entry** next_addr() { return &_next; }
IdType id() const { return _id; }
void set_id(IdType id) const { _id = id; }
T& value() const { return *const_cast<Entry*>(this)->literal_addr(); }
const T* value_addr() const { return const_cast<Entry*>(this)->literal_addr(); }
};
template <typename T, typename IdType, template <typename, typename> class TableEntry>
class JfrConcurrentHashtable : public CHeapObj<mtTracing> {
public:
typedef TableEntry<T, IdType> Entry;
typedef JfrLinkedList<Entry> Bucket;
unsigned capacity() const { return _capacity; }
unsigned size() const;
protected:
JfrConcurrentHashtable(unsigned size);
~JfrConcurrentHashtable();
unsigned index(unsigned hash) const {
return hash & _mask;
}
Bucket& bucket(unsigned idx) { return _buckets[idx]; }
Bucket* bucket_addr(unsigned idx) { return &_buckets[idx]; }
Entry* head(unsigned idx) { return bucket(idx).head(); }
bool try_add(unsigned idx, Entry* entry, Entry* next);
template <typename Callback>
void iterate(Callback& cb);
template <typename Callback>
void iterate(unsigned idx, Callback& cb);
template <typename Callback>
static void iterate(Entry* entry, Callback& cb);
void unlink_entry(Entry* entry);
private:
Bucket* _buckets;
unsigned _capacity;
unsigned _mask;
unsigned _size;
};
template <typename T, typename IdType, template <typename, typename> class TableEntry,
typename Callback = JfrConcurrentAscendingId<T, IdType, TableEntry>, unsigned TABLE_CAPACITY = 1024>
class JfrConcurrentHashTableHost : public JfrConcurrentHashtable<T, IdType, TableEntry> {
public:
typedef TableEntry<T, IdType> Entry;
JfrConcurrentHashTableHost(unsigned initial_capacity = 0);
JfrConcurrentHashTableHost(Callback* cb, unsigned initial_capacity = 0);
~JfrConcurrentHashTableHost();
// lookup entry, will put if not found
Entry* lookup_put(unsigned hash, const T& data);
// id retrieval
IdType id(unsigned hash, const T& data);
bool is_empty() const;
bool is_nonempty() const { return !is_empty(); }
template <typename Functor>
void iterate_value(Functor& f);
template <typename Functor>
void iterate_entry(Functor& f);
private:
Callback* _callback;
Entry* new_entry(unsigned hash, const T& data);
};
#endif // SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_HPP
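JfrConcurrentAscendingId spells out the callback contract the host expects: on_link runs once, when an entry is first published, and hands it an id. A standalone sketch of the same hook with std::atomic (illustrative types, not the HotSpot code):

#include <atomic>
#include <cstdint>

struct Node {
  uint64_t id = 0;
};

// Default-style callback: ids come from an atomic post-increment, so two
// threads publishing concurrently can never hand out the same id.
class AscendingId {
  std::atomic<uint64_t> _next{1};
public:
  void on_link(Node* n) { n->id = _next.fetch_add(1); }
  void on_unlink(Node*) {}  // nothing to reclaim in this sketch
};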


@@ -0,0 +1,254 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_INLINE_HPP
#define SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_INLINE_HPP
#include "jfr/utilities/jfrConcurrentHashtable.hpp"
#include "jfr/utilities/jfrLinkedList.inline.hpp"
#include "memory/allocation.inline.hpp"
#include "nmt/memTracker.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/debug.hpp"
#include "utilities/macros.hpp"
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline void JfrConcurrentAscendingId<T, IdType, TableEntry>::on_link(TableEntry<T, IdType>* entry) {
assert(entry != nullptr, "invariant");
assert(entry->id() == 0, "invariant");
entry->set_id(AtomicAccess::fetch_then_add(&_id, static_cast<IdType>(1)));
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline bool JfrConcurrentAscendingId<T, IdType, TableEntry>::on_equals(unsigned hash, const TableEntry<T, IdType>* entry) {
assert(entry != nullptr, "invariant");
assert(entry->hash() == hash, "invariant");
return true;
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline JfrConcurrentHashtable<T, IdType, TableEntry>::JfrConcurrentHashtable(unsigned initial_capacity) :
_buckets(nullptr), _capacity(initial_capacity), _mask(initial_capacity - 1), _size(0) {
assert(initial_capacity >= 2, "invariant");
assert(is_power_of_2(initial_capacity), "invariant");
_buckets = NEW_C_HEAP_ARRAY2(Bucket, initial_capacity, mtTracing, CURRENT_PC);
memset((void*)_buckets, 0, initial_capacity * sizeof(Bucket));
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline JfrConcurrentHashtable<T, IdType, TableEntry>::~JfrConcurrentHashtable() {
FREE_C_HEAP_ARRAY(Bucket, _buckets);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline unsigned JfrConcurrentHashtable<T, IdType, TableEntry>::size() const {
return AtomicAccess::load(&_size);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
template <typename Callback>
inline void JfrConcurrentHashtable<T, IdType, TableEntry>::iterate(unsigned idx, Callback& cb) {
assert(idx < _capacity, "invariant");
bucket(idx).iterate(cb);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
template <typename Callback>
inline void JfrConcurrentHashtable<T, IdType, TableEntry>::iterate(Callback& cb) {
for (unsigned i = 0; i < _capacity; ++i) {
iterate(i, cb);
}
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
template <typename Callback>
inline void JfrConcurrentHashtable<T, IdType, TableEntry>::iterate(TableEntry<T, IdType>* entry, Callback& cb) {
Bucket::iterate(entry, cb);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline bool JfrConcurrentHashtable<T, IdType, TableEntry>::try_add(unsigned idx, TableEntry<T, IdType>* entry, TableEntry<T, IdType>* next) {
assert(entry != nullptr, "invariant");
entry->set_next(next);
const bool added = bucket(idx).try_add(entry, next);
if (added) {
AtomicAccess::inc(&_size);
}
return added;
}
template <typename T, typename IdType, template <typename, typename> class TableEntry>
inline void JfrConcurrentHashtable<T, IdType, TableEntry>::unlink_entry(TableEntry<T, IdType>* entry) {
AtomicAccess::dec(&_size);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::JfrConcurrentHashTableHost(unsigned initial_capacity /* 0 */) :
JfrConcurrentHashtable<T, IdType, TableEntry>(initial_capacity == 0 ? TABLE_CAPACITY : initial_capacity), _callback(new Callback()) {}
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::JfrConcurrentHashTableHost(Callback* cb, unsigned initial_capacity /* 0 */) :
JfrConcurrentHashtable<T, IdType, TableEntry>(initial_capacity == 0 ? TABLE_CAPACITY : initial_capacity), _callback(cb) {}
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline bool JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::is_empty() const {
return this->size() == 0;
}
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline TableEntry<T, IdType>* JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::new_entry(unsigned hash, const T& data) {
Entry* const entry = new Entry(hash, data);
assert(entry != nullptr, "invariant");
assert(0 == entry->id(), "invariant");
_callback->on_link(entry);
assert(0 != entry->id(), "invariant");
return entry;
}
template <typename T, typename Entry, typename Callback>
class JfrConcurrentHashtableLookup {
private:
Callback* const _cb;
const T& _data;
Entry* _found;
unsigned _hash;
public:
JfrConcurrentHashtableLookup(unsigned hash, const T& data, Callback* cb) : _cb(cb), _data(data), _found(nullptr), _hash(hash) {}
bool process(Entry* entry) {
assert(entry != nullptr, "invariant");
if (entry->hash() == _hash && entry->on_equals(_data)) {
_found = entry;
return false;
}
return true;
}
bool found() const { return _found != nullptr; }
Entry* result() const { return _found; }
};
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline TableEntry<T, IdType>* JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::lookup_put(unsigned hash, const T& data) {
JfrConcurrentHashtableLookup<T, Entry, Callback> lookup(hash, data, _callback);
const unsigned idx = this->index(hash);
Entry* entry = nullptr;
while (true) {
assert(!lookup.found(), "invariant");
Entry* next = this->head(idx);
if (next != nullptr) {
JfrConcurrentHashtable<T, IdType, TableEntry>::iterate(next, lookup);
if (lookup.found()) {
if (entry != nullptr) {
_callback->on_unlink(entry);
delete entry;
}
entry = lookup.result();
break;
}
}
if (entry == nullptr) {
entry = new_entry(hash, data);
}
assert(entry != nullptr, "invariant");
if (this->try_add(idx, entry, next)) {
break;
}
// Concurrent insertion to this bucket. Retry.
}
assert(entry != nullptr, "invariant");
return entry;
}
// id retrieval
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline IdType JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::id(unsigned hash, const T& data) {
assert(data != nullptr, "invariant");
const Entry* const entry = lookup_put(hash, data);
assert(entry != nullptr, "invariant");
assert(entry->id() > 0, "invariant");
return entry->id();
}
template <typename Entry, typename Callback>
class JfrConcurrentHashtableClear {
private:
Callback* const _cb;
public:
JfrConcurrentHashtableClear(Callback* cb) : _cb(cb) {}
bool process(const Entry* entry) {
assert(entry != nullptr, "invariant");
_cb->on_unlink(entry);
delete entry;
return true;
}
};
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
inline JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::~JfrConcurrentHashTableHost() {
JfrConcurrentHashtableClear<Entry, Callback> cls(_callback);
this->iterate(cls);
}
template <typename Entry, typename Functor>
class JfrConcurrentHashtableValueDelegator {
private:
Functor& _f;
public:
JfrConcurrentHashtableValueDelegator(Functor& f) : _f(f) {}
bool process(const Entry* entry) {
assert(entry != nullptr, "invariant");
return _f(entry->value());
}
};
template <typename Entry, typename Functor>
class JfrConcurrentHashtableEntryDelegator {
private:
Functor& _f;
public:
JfrConcurrentHashtableEntryDelegator(Functor& f) : _f(f) {}
bool process(const Entry* entry) {
assert(entry != nullptr, "invariant");
return _f(entry);
}
};
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
template <typename Functor>
inline void JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::iterate_value(Functor& f) {
JfrConcurrentHashtableValueDelegator<Entry, Functor> delegator(f);
this->iterate(delegator);
}
template <typename T, typename IdType, template <typename, typename> class TableEntry, typename Callback, unsigned TABLE_CAPACITY>
template <typename Functor>
inline void JfrConcurrentHashTableHost<T, IdType, TableEntry, Callback, TABLE_CAPACITY>::iterate_entry(Functor& f) {
JfrConcurrentHashtableEntryDelegator<Entry, Functor> delegator(f);
this->iterate(delegator);
}
#endif // SHARE_JFR_UTILITIES_JFRCONCURRENTHASHTABLE_INLINE_HPP
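lookup_put() above is an optimistic insert: scan the bucket from the observed head; if the key is absent, CAS a freshly built node onto that head; if the CAS loses, another thread changed the bucket, so rescan and possibly discard the speculative node. A compact standalone version of the same loop over std::atomic (not the HotSpot code; nodes are never removed here, which keeps the bare CAS free of ABA):

#include <atomic>

struct Node {
  int key;
  Node* next;
};

// Insert key if absent; return the winning node either way.
Node* lookup_put(std::atomic<Node*>& head, int key) {
  Node* fresh = nullptr;
  for (;;) {
    Node* snapshot = head.load(std::memory_order_acquire);
    for (Node* n = snapshot; n != nullptr; n = n->next) {
      if (n->key == key) {
        delete fresh;   // we raced and lost (or the key pre-existed)
        return n;
      }
    }
    if (fresh == nullptr) fresh = new Node{key, nullptr};
    fresh->next = snapshot;     // link against the head we scanned from
    if (head.compare_exchange_weak(snapshot, fresh,
                                   std::memory_order_release,
                                   std::memory_order_relaxed)) {
      return fresh;             // published atomically
    }
    // head moved underneath us: rescan with the new snapshot
  }
}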


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -43,10 +43,13 @@ class JfrLinkedList : public AllocPolicy {
bool is_empty() const;
bool is_nonempty() const;
void add(NodePtr node);
bool try_add(NodePtr node, NodePtr next);
void add_list(NodePtr first);
NodePtr remove();
template <typename Callback>
void iterate(Callback& cb);
template <typename Callback>
static void iterate(NodePtr node, Callback& cb);
NodePtr head() const;
NodePtr excise(NodePtr prev, NodePtr node);
bool in_list(const NodeType* node) const;


@@ -30,10 +30,10 @@
#include "runtime/atomicAccess.hpp"
template <typename NodeType, typename AllocPolicy>
inline JfrLinkedList<NodeType, AllocPolicy>::JfrLinkedList() : _head(nullptr) {}
template <typename NodeType, typename AllocPolicy>
inline bool JfrLinkedList<NodeType, AllocPolicy>::initialize() {
return true;
}
@@ -62,6 +62,13 @@ inline void JfrLinkedList<NodeType, AllocPolicy>::add(NodeType* node) {
} while (AtomicAccess::cmpxchg(&_head, next, node) != next);
}
template <typename NodeType, typename AllocPolicy>
inline bool JfrLinkedList<NodeType, AllocPolicy>::try_add(NodeType* node, NodeType* next) {
assert(node != nullptr, "invariant");
assert(node->_next == next, "invariant");
return head() == next && AtomicAccess::cmpxchg(&_head, next, node) == next;
}
template <typename NodeType, typename AllocPolicy>
inline NodeType* JfrLinkedList<NodeType, AllocPolicy>::remove() {
NodePtr node;
@@ -76,19 +83,24 @@ inline NodeType* JfrLinkedList<NodeType, AllocPolicy>::remove() {
template <typename NodeType, typename AllocPolicy>
template <typename Callback>
inline void JfrLinkedList<NodeType, AllocPolicy>::iterate(Callback& cb) {
JfrLinkedList<NodeType, AllocPolicy>::iterate(head(), cb);
}
template <typename NodeType, typename AllocPolicy>
template <typename Callback>
inline void JfrLinkedList<NodeType, AllocPolicy>::iterate(NodeType* node, Callback& cb) {
while (node != nullptr) {
NodePtr next = (NodePtr)node->_next;
if (!cb.process(node)) {
return;
}
node = next;
}
}
template <typename NodeType, typename AllocPolicy>
inline NodeType* JfrLinkedList<NodeType, AllocPolicy>::excise(NodeType* prev, NodeType* node) {
NodePtr next = (NodePtr)node->_next;
if (prev == nullptr) {
prev = AtomicAccess::cmpxchg(&_head, node, next);
@@ -106,7 +118,7 @@ NodeType* JfrLinkedList<NodeType, AllocPolicy>::excise(NodeType* prev, NodeType*
}
template <typename NodeType, typename AllocPolicy>
inline bool JfrLinkedList<NodeType, AllocPolicy>::in_list(const NodeType* node) const {
assert(node != nullptr, "invariant");
const NodeType* current = head();
while (current != nullptr) {
@@ -119,7 +131,7 @@ bool JfrLinkedList<NodeType, AllocPolicy>::in_list(const NodeType* node) const {
}
template <typename NodeType, typename AllocPolicy>
inline NodeType* JfrLinkedList<NodeType, AllocPolicy>::cut() {
NodePtr node;
do {
node = head();
@@ -128,7 +140,7 @@ NodeType* JfrLinkedList<NodeType, AllocPolicy>::cut() {
}
template <typename NodeType, typename AllocPolicy>
inline void JfrLinkedList<NodeType, AllocPolicy>::clear() {
cut();
}
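try_add() is the single-shot primitive underneath that retry loop: it publishes a node only if the list head is still the snapshot the node was linked against, and the caller retries the whole lookup on failure. Roughly, over std::atomic (illustrative):

#include <atomic>

struct Node { Node* _next; };

// Mirrors the new JfrLinkedList::try_add contract: node->_next must
// already equal next, and the push succeeds only if no other thread
// pushed onto this head in the meantime.
bool try_add(std::atomic<Node*>& head, Node* node, Node* next) {
  node->_next = next;   // link before publish
  Node* expected = next;
  return head.compare_exchange_strong(expected, node,
                                      std::memory_order_release,
                                      std::memory_order_relaxed);
}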


@@ -37,6 +37,7 @@
#include "runtime/continuationEntry.hpp"
#include "runtime/deoptimization.hpp"
#include "runtime/flags/jvmFlag.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.hpp"
#include "runtime/osThread.hpp"
#include "runtime/sharedRuntime.hpp"
@@ -259,13 +260,13 @@
nonstatic_field(JavaThread, _om_cache, OMCache) \
nonstatic_field(JavaThread, _cont_entry, ContinuationEntry*) \
nonstatic_field(JavaThread, _unlocked_inflated_monitor, ObjectMonitor*) \
nonstatic_field(JavaThread, _is_in_vthread_transition, bool) \
JVMTI_ONLY(nonstatic_field(JavaThread, _is_disable_suspend, bool)) \
\
nonstatic_field(ContinuationEntry, _pin_count, uint32_t) \
nonstatic_field(LockStack, _top, uint32_t) \
\
static_field(MountUnmountDisabler, _notify_jvmti_events, bool) \
\
static_field(java_lang_Class, _klass_offset, int) \
static_field(java_lang_Class, _array_klass_offset, int) \
@@ -435,7 +436,7 @@
JFR_ONLY(nonstatic_field(Thread, _jfr_thread_local, JfrThreadLocal)) \
\
static_field(java_lang_Thread, _tid_offset, int) \
static_field(java_lang_Thread, _is_in_vthread_transition_offset, int) \
JFR_ONLY(static_field(java_lang_Thread, _jfr_epoch_offset, int)) \
\
JFR_ONLY(nonstatic_field(JfrThreadLocal, _vthread_id, traceid)) \
@@ -877,10 +878,6 @@
declare_function(SharedRuntime::enable_stack_reserved_zone) \
declare_function(SharedRuntime::frem) \
declare_function(SharedRuntime::drem) \
\
declare_function(os::dll_load) \
declare_function(os::dll_lookup) \


@@ -57,6 +57,10 @@
#include "utilities/rotate_bits.hpp"
#include "utilities/stack.inline.hpp"
#if INCLUDE_JFR
#include "jfr/jfr.hpp"
#endif
void Klass::set_java_mirror(Handle m) {
assert(!m.is_null(), "New mirror should never be null.");
assert(_java_mirror.is_empty(), "should only be used to initialize mirror");
@@ -862,7 +866,6 @@ void Klass::restore_unshareable_info(ClassLoaderData* loader_data, Handle protec
assert(is_klass(), "ensure C++ vtable is restored");
assert(in_aot_cache(), "must be set");
assert(secondary_supers()->length() >= (int)population_count(_secondary_supers_bitmap), "must be");
if (log_is_enabled(Trace, aot, unshareable)) {
ResourceMark rm(THREAD);
oop class_loader = loader_data->class_loader();
@@ -876,10 +879,13 @@ void Klass::restore_unshareable_info(ClassLoaderData* loader_data, Handle protec
if (class_loader_data() == nullptr) {
set_class_loader_data(loader_data);
}
// Add to class loader list first before creating the mirror
// (same order as class file parsing)
loader_data->add_class(this);
JFR_ONLY(Jfr::on_restoration(this, THREAD);)
Handle loader(THREAD, loader_data->class_loader());
ModuleEntry* module_entry = nullptr;
Klass* k = this;


@@ -859,11 +859,11 @@ bool C2Compiler::is_intrinsic_supported(vmIntrinsics::ID id) {
case vmIntrinsics::_VectorBinaryLibOp:
return EnableVectorSupport && Matcher::supports_vector_calling_convention();
case vmIntrinsics::_blackhole:
case vmIntrinsics::_vthreadEndFirstTransition:
case vmIntrinsics::_vthreadStartFinalTransition:
case vmIntrinsics::_vthreadStartTransition:
case vmIntrinsics::_vthreadEndTransition:
#if INCLUDE_JVMTI
case vmIntrinsics::_notifyJvmtiVThreadDisableSuspend:
#endif
break;


@@ -144,7 +144,7 @@ public:
private:
// Number of registers this live range uses when it colors
uint16_t _num_regs; // byte size of the value divided by the slot size, which is 4
// except _num_regs is kill count for fat_proj
// For scalable register, num_regs may not be the actual physical register size.
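In slot terms (one 32-bit slot = 4 bytes) the new comment works out as below; a standalone compile-time check (illustrative):

#include <cstdint>

constexpr uint16_t num_regs(unsigned byte_size) { return byte_size / 4; }

static_assert(num_regs(4)  == 1, "int/float: a single register");
static_assert(num_regs(8)  == 2, "long/double: an adjacent pair");
static_assert(num_regs(16) == 4, "VecX (128-bit): an aligned set of four");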


@@ -55,6 +55,7 @@
#include "prims/jvmtiThreadState.hpp"
#include "prims/unsafe.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.hpp"
#include "runtime/sharedRuntime.hpp"
#include "runtime/stubRoutines.hpp"
@@ -479,15 +480,15 @@ bool LibraryCallKit::try_to_inline(int predicate) {
case vmIntrinsics::_Continuation_pin: return inline_native_Continuation_pinning(false);
case vmIntrinsics::_Continuation_unpin: return inline_native_Continuation_pinning(true);
case vmIntrinsics::_vthreadEndFirstTransition: return inline_native_vthread_end_transition(CAST_FROM_FN_PTR(address, OptoRuntime::vthread_end_first_transition_Java()),
"endFirstTransition", true);
case vmIntrinsics::_vthreadStartFinalTransition: return inline_native_vthread_start_transition(CAST_FROM_FN_PTR(address, OptoRuntime::vthread_start_final_transition_Java()),
"startFinalTransition", true);
case vmIntrinsics::_vthreadStartTransition: return inline_native_vthread_start_transition(CAST_FROM_FN_PTR(address, OptoRuntime::vthread_start_transition_Java()),
"startTransition", false);
case vmIntrinsics::_vthreadEndTransition: return inline_native_vthread_end_transition(CAST_FROM_FN_PTR(address, OptoRuntime::vthread_end_transition_Java()),
"endTransition", false);
#if INCLUDE_JVMTI
case vmIntrinsics::_notifyJvmtiVThreadDisableSuspend: return inline_native_notify_jvmti_sync();
#endif
@@ -3042,46 +3043,80 @@ bool LibraryCallKit::inline_native_time_funcs(address funcAddr, const char* func
return true;
}
//--------------------inline_native_vthread_start_transition--------------------
// inline void startTransition(boolean is_mount);
// inline void startFinalTransition();
// Pseudocode of implementation:
//
// java_lang_Thread::set_is_in_vthread_transition(vthread, true);
// carrier->set_is_in_vthread_transition(true);
// OrderAccess::storeload();
// int disable_requests = java_lang_Thread::vthread_transition_disable_count(vthread)
// + global_vthread_transition_disable_count();
// if (disable_requests > 0) {
// slow path: runtime call
// }
bool LibraryCallKit::inline_native_vthread_start_transition(address funcAddr, const char* funcName, bool is_final_transition) {
Node* vt_oop = _gvn.transform(must_be_not_null(argument(0), true)); // VirtualThread this argument
IdealKit ideal(this);
Node* thread = ideal.thread();
Node* jt_addr = basic_plus_adr(thread, in_bytes(JavaThread::is_in_vthread_transition_offset()));
Node* vt_addr = basic_plus_adr(vt_oop, java_lang_Thread::is_in_vthread_transition_offset());
access_store_at(nullptr, jt_addr, _gvn.type(jt_addr)->is_ptr(), ideal.ConI(1), TypeInt::BOOL, T_BOOLEAN, IN_NATIVE | MO_UNORDERED);
access_store_at(nullptr, vt_addr, _gvn.type(vt_addr)->is_ptr(), ideal.ConI(1), TypeInt::BOOL, T_BOOLEAN, IN_NATIVE | MO_UNORDERED);
insert_mem_bar(Op_MemBarVolatile);
ideal.sync_kit(this);
Node* global_disable_addr = makecon(TypeRawPtr::make((address)MountUnmountDisabler::global_vthread_transition_disable_count_address()));
Node* global_disable = ideal.load(ideal.ctrl(), global_disable_addr, TypeInt::INT, T_INT, Compile::AliasIdxRaw, true /*require_atomic_access*/);
Node* vt_disable_addr = basic_plus_adr(vt_oop, java_lang_Thread::vthread_transition_disable_count_offset());
Node* vt_disable = ideal.load(ideal.ctrl(), vt_disable_addr, TypeInt::INT, T_INT, Compile::AliasIdxRaw, true /*require_atomic_access*/);
Node* disabled = _gvn.transform(new AddINode(global_disable, vt_disable));
ideal.if_then(disabled, BoolTest::ne, ideal.ConI(0)); {
sync_kit(ideal);
Node* is_mount = is_final_transition ? ideal.ConI(0) : _gvn.transform(argument(1));
const TypeFunc* tf = OptoRuntime::vthread_transition_Type();
make_runtime_call(RC_NO_LEAF, tf, funcAddr, funcName, TypePtr::BOTTOM, vt_oop, is_mount);
ideal.sync_kit(this);
}
ideal.end_if();
final_sync(ideal);
return true;
}
bool LibraryCallKit::inline_native_vthread_end_transition(address funcAddr, const char* funcName, bool is_first_transition) {
Node* vt_oop = _gvn.transform(must_be_not_null(argument(0), true)); // VirtualThread this argument
IdealKit ideal(this);
Node* _notify_jvmti_addr = makecon(TypeRawPtr::make((address)MountUnmountDisabler::notify_jvmti_events_address()));
Node* _notify_jvmti = ideal.load(ideal.ctrl(), _notify_jvmti_addr, TypeInt::BOOL, T_BOOLEAN, Compile::AliasIdxRaw);
ideal.if_then(_notify_jvmti, BoolTest::eq, ideal.ConI(1)); {
sync_kit(ideal);
Node* is_mount = is_first_transition ? ideal.ConI(1) : _gvn.transform(argument(1));
const TypeFunc* tf = OptoRuntime::vthread_transition_Type();
make_runtime_call(RC_NO_LEAF, tf, funcAddr, funcName, TypePtr::BOTTOM, vt_oop, is_mount);
ideal.sync_kit(this);
} ideal.else_(); {
Node* thread = ideal.thread();
Node* jt_addr = basic_plus_adr(thread, in_bytes(JavaThread::is_in_vthread_transition_offset()));
Node* vt_addr = basic_plus_adr(vt_oop, java_lang_Thread::is_in_vthread_transition_offset());
sync_kit(ideal);
access_store_at(nullptr, jt_addr, _gvn.type(jt_addr)->is_ptr(), ideal.ConI(0), TypeInt::BOOL, T_BOOLEAN, IN_NATIVE | MO_UNORDERED);
access_store_at(nullptr, vt_addr, _gvn.type(vt_addr)->is_ptr(), ideal.ConI(0), TypeInt::BOOL, T_BOOLEAN, IN_NATIVE | MO_UNORDERED);
ideal.sync_kit(this);
} ideal.end_if();
final_sync(ideal);
return true;
}
#if INCLUDE_JVMTI
// Always update the is_disable_suspend bit.
bool LibraryCallKit::inline_native_notify_jvmti_sync() {
if (!DoJVMTIVirtualThreadTransitions) {
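The fast path in the pseudocode above is one half of a Dekker-style handshake: the transitioning thread stores its in-transition flag, takes a storeload barrier, then reads the disable counters; the disabling side increments a counter, synchronizes the same way, then waits for in-flight transitions to drain. A standalone sketch of the two sides with seq_cst atomics (illustrative; not MountUnmountDisabler itself, and reduced to a single thread's flag):

#include <atomic>

std::atomic<bool> in_transition{false};  // per virtual/carrier thread in reality
std::atomic<int>  disable_count{0};      // per-thread plus global in reality

// Transitioning side: publish the flag, then check for disablers.
// seq_cst supplies the store->load ordering the MemBarVolatile provides.
void start_transition(void (*slow_path)()) {
  in_transition.store(true);
  if (disable_count.load() > 0) {
    slow_path();   // coordinate with the disabler in the runtime
  }
}

// Disabling side: after this returns, no thread can slip into a new
// transition unseen, and the in-flight one has been waited out.
void disable_transitions() {
  disable_count.fetch_add(1);
  while (in_transition.load()) {
    // spin or park until the transition completes
  }
}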


@@ -275,9 +275,11 @@ class LibraryCallKit : public GraphKit {
bool inline_native_Continuation_pinning(bool unpin);
bool inline_native_time_funcs(address method, const char* funcName);
bool inline_native_vthread_start_transition(address funcAddr, const char* funcName, bool is_final_transition);
bool inline_native_vthread_end_transition(address funcAddr, const char* funcName, bool is_first_transition);
#if INCLUDE_JVMTI
bool inline_native_notify_jvmti_funcs(address funcAddr, const char* funcName, bool is_start, bool is_end);
bool inline_native_notify_jvmti_hide();
bool inline_native_notify_jvmti_sync();
#endif


@@ -285,13 +285,12 @@ void Matcher::match( ) {
_parm_regs[i].set_pair(reg2, reg1);
}
// Allocated register sets are aligned to their size. Offsets to the stack
// pointer have to be aligned to the size of the access. For this _new_SP is
// aligned to the size of the largest register set with the stack alignment as
// limit and a minimum of SlotsPerLong (2).
int vector_alignment = MIN2(C->max_vector_size(), stack_alignment_in_bytes()) / VMRegImpl::stack_slot_size;
_new_SP = OptoReg::Name(align_up(_in_arg_limit, MAX2((int)RegMask::SlotsPerLong, vector_alignment)));
// Compute highest outgoing stack argument as
// _new_SP + out_preserve_stack_slots + max(outgoing argument size).
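As a worked example of the new rounding (hypothetical numbers): with 16-byte VecX vectors, 16-byte stack alignment and 4-byte slots, vector_alignment is MIN2(16, 16) / 4 = 4 slots, so _new_SP rounds _in_arg_limit up to a multiple of MAX2(2, 4) = 4. A standalone check:

#include <algorithm>
#include <cassert>

constexpr int align_up(int v, int a) { return (v + a - 1) & -a; }  // a must be a power of two

int main() {
  const int slot_size        = 4;    // VMRegImpl::stack_slot_size
  const int max_vector_size  = 16;   // bytes (VecX)
  const int stack_alignment  = 16;   // bytes
  const int slots_per_long   = 2;    // RegMask::SlotsPerLong
  const int vector_alignment = std::min(max_vector_size, stack_alignment) / slot_size;  // 4 slots
  const int in_arg_limit     = 10;   // hypothetical incoming-argument extent, in slots
  assert(align_up(in_arg_limit, std::max(slots_per_long, vector_alignment)) == 12);
  return 0;
}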


@@ -354,16 +354,12 @@ public:
}
// SlotsPerLong is 2, since slots are 32 bits and longs are 64 bits.
// We allocate single registers for 32 bit values and register pairs for 64
// bit values. The number of registers allocated for vectors matches their size. E.g. for 128 bit
// vectors (VecX) we allocate a set of 4 registers. Allocated sets are adjacent and aligned.
// See RegMask::find_first_set(), is_aligned_pairs(), is_aligned_sets(), and the padding added before
// Matcher::_new_SP to keep allocated pairs and sets aligned properly.
// Note that this alignment requirement is internal to the allocator, and independent of any
// particular platform.
enum { SlotsPerLong = 2,
SlotsPerVecA = 4,


@@ -66,6 +66,7 @@
#include "runtime/handles.inline.hpp"
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/javaCalls.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/sharedRuntime.hpp"
#include "runtime/signature.hpp"
#include "runtime/stackWatermarkSet.hpp"
@@ -92,12 +93,9 @@
#define C2_STUB_FIELD_NAME(name) _ ## name ## _Java
#define C2_STUB_FIELD_DEFINE(name, f, t, r) \
address OptoRuntime:: C2_STUB_FIELD_NAME(name) = nullptr;
C2_STUBS_DO(C2_BLOB_FIELD_DEFINE, C2_STUB_FIELD_DEFINE)
#undef C2_BLOB_FIELD_DEFINE
#undef C2_STUB_FIELD_DEFINE
// This should be called in an assertion at the start of OptoRuntime routines
// which are entered from compiled code (all of them)
@@ -153,23 +151,9 @@ static bool check_compiled_frame(JavaThread* thread) {
pass_retpc); \
if (C2_STUB_FIELD_NAME(name) == nullptr) { return false; } \
bool OptoRuntime::generate(ciEnv* env) {
C2_STUBS_DO(GEN_C2_BLOB, GEN_C2_STUB)
return true;
}
@@ -182,8 +166,6 @@ bool OptoRuntime::generate(ciEnv* env) {
#undef C2_STUB_NAME
#undef GEN_C2_STUB
// #undef gen
const TypeFunc* OptoRuntime::_new_instance_Type = nullptr;
@@ -257,12 +239,10 @@ const TypeFunc* OptoRuntime::_updateBytesCRC32C_Type = nullptr;
const TypeFunc* OptoRuntime::_updateBytesAdler32_Type = nullptr;
const TypeFunc* OptoRuntime::_osr_end_Type = nullptr;
const TypeFunc* OptoRuntime::_register_finalizer_Type = nullptr;
const TypeFunc* OptoRuntime::_vthread_transition_Type = nullptr;
#if INCLUDE_JFR
const TypeFunc* OptoRuntime::_class_id_load_barrier_Type = nullptr;
#endif // INCLUDE_JFR
const TypeFunc* OptoRuntime::_dtrace_method_entry_exit_Type = nullptr;
const TypeFunc* OptoRuntime::_dtrace_object_alloc_Type = nullptr;
@@ -572,6 +552,26 @@ JRT_BLOCK_ENTRY(void, OptoRuntime::monitor_notifyAll_C(oopDesc* obj, JavaThread*
JRT_BLOCK_END;
JRT_END
JRT_ENTRY(void, OptoRuntime::vthread_end_first_transition_C(oopDesc* vt, jboolean is_mount, JavaThread* current))
MountUnmountDisabler::end_transition(current, vt, true /*is_mount*/, true /*is_thread_start*/);
JRT_END
JRT_ENTRY(void, OptoRuntime::vthread_start_final_transition_C(oopDesc* vt, jboolean is_mount, JavaThread* current))
java_lang_Thread::set_is_in_vthread_transition(vt, false);
current->set_is_in_vthread_transition(false);
MountUnmountDisabler::start_transition(current, vt, false /*is_mount */, true /*is_thread_end*/);
JRT_END
JRT_ENTRY(void, OptoRuntime::vthread_start_transition_C(oopDesc* vt, jboolean is_mount, JavaThread* current))
java_lang_Thread::set_is_in_vthread_transition(vt, false);
current->set_is_in_vthread_transition(false);
MountUnmountDisabler::start_transition(current, vt, is_mount, false /*is_thread_end*/);
JRT_END
JRT_ENTRY(void, OptoRuntime::vthread_end_transition_C(oopDesc* vt, jboolean is_mount, JavaThread* current))
MountUnmountDisabler::end_transition(current, vt, is_mount, false /*is_thread_start*/);
JRT_END
static const TypeFunc* make_new_instance_Type() {
// create input type (domain)
const Type **fields = TypeTuple::fields(1);
@@ -587,8 +587,7 @@ static const TypeFunc* make_new_instance_Type() {
return TypeFunc::make(domain, range);
}
static const TypeFunc* make_vthread_transition_Type() {
// create input type (domain)
const Type **fields = TypeTuple::fields(2);
fields[TypeFunc::Parms+0] = TypeInstPtr::NOTNULL; // VirtualThread oop
@@ -602,7 +601,6 @@ static const TypeFunc* make_notify_jvmti_vthread_Type() {
return TypeFunc::make(domain,range);
}
static const TypeFunc* make_athrow_Type() {
// create input type (domain)
@@ -2336,12 +2334,10 @@ void OptoRuntime::initialize_types() {
_updateBytesAdler32_Type = make_updateBytesAdler32_Type();
_osr_end_Type = make_osr_end_Type();
_register_finalizer_Type = make_register_finalizer_Type();
_vthread_transition_Type = make_vthread_transition_Type();
JFR_ONLY(
_class_id_load_barrier_Type = make_class_id_load_barrier_Type();
)
_dtrace_method_entry_exit_Type = make_dtrace_method_entry_exit_Type();
_dtrace_object_alloc_Type = make_dtrace_object_alloc_Type();
}


@@ -115,15 +115,12 @@ class OptoRuntime : public AllStatic {
#define C2_STUB_FIELD_NAME(name) _ ## name ## _Java
#define C2_STUB_FIELD_DECLARE(name, f, t, r) \
static address C2_STUB_FIELD_NAME(name) ;
C2_STUBS_DO(C2_BLOB_FIELD_DECLARE, C2_STUB_FIELD_DECLARE)
#undef C2_BLOB_FIELD_DECLARE
#undef C2_STUB_FIELD_NAME
#undef C2_STUB_FIELD_DECLARE
// static TypeFunc* data members
static const TypeFunc* _new_instance_Type;
@@ -197,12 +194,10 @@ class OptoRuntime : public AllStatic {
static const TypeFunc* _updateBytesAdler32_Type;
static const TypeFunc* _osr_end_Type;
static const TypeFunc* _register_finalizer_Type;
static const TypeFunc* _vthread_transition_Type;
#if INCLUDE_JFR
static const TypeFunc* _class_id_load_barrier_Type;
#endif // INCLUDE_JFR
static const TypeFunc* _dtrace_method_entry_exit_Type;
static const TypeFunc* _dtrace_object_alloc_Type;
@@ -239,6 +234,11 @@ public:
static void monitor_notify_C(oopDesc* obj, JavaThread* current);
static void monitor_notifyAll_C(oopDesc* obj, JavaThread* current);
static void vthread_end_first_transition_C(oopDesc* vt, jboolean hide, JavaThread* current);
static void vthread_start_final_transition_C(oopDesc* vt, jboolean hide, JavaThread* current);
static void vthread_start_transition_C(oopDesc* vt, jboolean hide, JavaThread* current);
static void vthread_end_transition_C(oopDesc* vt, jboolean hide, JavaThread* current);
private:
// Implicit exception support
@@ -293,12 +293,11 @@ private:
static address slow_arraycopy_Java() { return _slow_arraycopy_Java; }
static address register_finalizer_Java() { return _register_finalizer_Java; }
static address vthread_end_first_transition_Java() { return _vthread_end_first_transition_Java; }
static address vthread_start_final_transition_Java() { return _vthread_start_final_transition_Java; }
static address vthread_start_transition_Java() { return _vthread_start_transition_Java; }
static address vthread_end_transition_Java() { return _vthread_end_transition_Java; }
static UncommonTrapBlob* uncommon_trap_blob() { return _uncommon_trap_blob; }
static ExceptionBlob* exception_blob() { return _exception_blob; }
@@ -718,6 +717,27 @@ private:
return _register_finalizer_Type;
}
static inline const TypeFunc* vthread_transition_Type() {
assert(_vthread_transition_Type != nullptr, "should be initialized");
return _vthread_transition_Type;
}
static inline const TypeFunc* vthread_end_first_transition_Type() {
return vthread_transition_Type();
}
static inline const TypeFunc* vthread_start_final_transition_Type() {
return vthread_transition_Type();
}
static inline const TypeFunc* vthread_start_transition_Type() {
return vthread_transition_Type();
}
static inline const TypeFunc* vthread_end_transition_Type() {
return vthread_transition_Type();
}
#if INCLUDE_JFR
static inline const TypeFunc* class_id_load_barrier_Type() {
assert(_class_id_load_barrier_Type != nullptr, "should be initialized");
@@ -725,13 +745,6 @@ private:
}
#endif // INCLUDE_JFR
// Dtrace support. entry and exit probes have the same signature
static inline const TypeFunc* dtrace_method_entry_exit_Type() {
assert(_dtrace_method_entry_exit_Type != nullptr, "should be initialized");


@@ -84,6 +84,7 @@
#include "runtime/javaThread.hpp"
#include "runtime/jfieldIDWorkaround.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/os.inline.hpp"
#include "runtime/osThread.hpp"
#include "runtime/perfData.hpp"
@@ -3661,68 +3662,24 @@ JVM_LEAF(jint, JVM_FindSignal(const char *name))
return os::get_signal_number(name);
JVM_END
JVM_ENTRY(void, JVM_VirtualThreadEndFirstTransition(JNIEnv* env, jobject vthread))
oop vt = JNIHandles::resolve_external_guard(vthread);
MountUnmountDisabler::end_transition(thread, vt, true /*is_mount*/, true /*is_thread_start*/);
JVM_END
JVM_ENTRY(void, JVM_VirtualThreadStartFinalTransition(JNIEnv* env, jobject vthread))
oop vt = JNIHandles::resolve_external_guard(vthread);
MountUnmountDisabler::start_transition(thread, vt, false /*is_mount */, true /*is_thread_end*/);
JVM_END
JVM_ENTRY(void, JVM_VirtualThreadStartTransition(JNIEnv* env, jobject vthread, jboolean is_mount))
oop vt = JNIHandles::resolve_external_guard(vthread);
MountUnmountDisabler::start_transition(thread, vt, is_mount, false /*is_thread_end*/);
JVM_END
JVM_ENTRY(void, JVM_VirtualThreadEndTransition(JNIEnv* env, jobject vthread, jboolean is_mount))
oop vt = JNIHandles::resolve_external_guard(vthread);
MountUnmountDisabler::end_transition(thread, vt, is_mount, false /*is_thread_start*/);
JVM_END
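
Taken together, the jvm.cpp hunks above replace four JVMTI-conditional entry points (JVM_VirtualThreadStart/End/Mount/Unmount) with thin wrappers over a new runtime class. The declaration below is a hedged reconstruction from the call sites visible in this diff only; the real header, runtime/mountUnmountDisabler.hpp, is added by this change but not shown, so constructor forms, parameter names, and the base class are assumptions (note the fourth argument is commented is_thread_end at some call sites and is_thread_start at others).

// Sketch inferred from usage in this diff -- not the actual header.
class MountUnmountDisabler /* base class unknown; its predecessor was AnyObj */ {
 public:
  // Scoped disabling, mirroring JvmtiVTMSTransitionDisabler's constructors:
  MountUnmountDisabler();                              // default: disable for all vthreads
  MountUnmountDisabler(jthread thread);                // one vthread, or all if platform/null
  MountUnmountDisabler(oop vthread);                   // seen in Handshake::execute below
  MountUnmountDisabler(bool is_suspender_or_resumer);  // suspend/resume paths
  ~MountUnmountDisabler();                             // re-enable transitions on scope exit

  // Mount/unmount transition protocol driven from the entry points above:
  static void start_transition(JavaThread* thread, oop vthread, bool is_mount, bool is_thread_end);
  static void end_transition(JavaThread* thread, oop vthread, bool is_mount, bool is_thread_start);

  // Global switch for posting JVMTI virtual thread events:
  static bool notify_jvmti_events();
  static void set_notify_jvmti_events(bool enable, bool is_onload = false);
};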
// Notification from VirtualThread about disabling JVMTI Suspend in a sync critical section.
@@ -3772,6 +3729,7 @@ JVM_ENTRY(jobject, JVM_TakeVirtualThreadListToUnblock(JNIEnv* env, jclass ignore
parkEvent->park();
}
JVM_END
/*
* Return the current class's class file version. The low order 16 bits of the
* returned jint contain the class's major version. The high order 16 bits


@@ -66,6 +66,7 @@
#include "runtime/javaThread.inline.hpp"
#include "runtime/jfieldIDWorkaround.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.inline.hpp"
#include "runtime/os.hpp"
#include "runtime/osThread.hpp"
@@ -147,7 +148,7 @@ jvmtiError
JvmtiEnv::SetThreadLocalStorage(jthread thread, const void* data) {
JavaThread* current = JavaThread::current();
JvmtiThreadState* state = nullptr;
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current);
JavaThread* java_thread = nullptr;
@@ -200,7 +201,7 @@ JvmtiEnv::GetThreadLocalStorage(jthread thread, void** data_ptr) {
VM_ENTRY_BASE(jvmtiError, JvmtiEnv::GetThreadLocalStorage , current_thread)
DEBUG_ONLY(VMNativeEntryWrapper __vew;)
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -561,7 +562,7 @@ JvmtiEnv::SetNativeMethodPrefixes(jint prefix_count, char** prefixes) {
// size_of_callbacks - pre-checked to be greater than or equal to 0
jvmtiError
JvmtiEnv::SetEventCallbacks(const jvmtiEventCallbacks* callbacks, jint size_of_callbacks) {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
JvmtiEventController::set_event_callbacks(this, callbacks, size_of_callbacks);
return JVMTI_ERROR_NONE;
} /* end SetEventCallbacks */
@@ -585,7 +586,7 @@ JvmtiEnv::SetEventNotificationMode(jvmtiEventMode mode, jvmtiEvent event_type, j
if (event_type == JVMTI_EVENT_CLASS_FILE_LOAD_HOOK && enabled) {
record_class_file_load_hook_enabled();
}
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
if (event_thread == nullptr) {
// Can be called at Agent_OnLoad() time with event_thread == nullptr
@@ -867,7 +868,7 @@ JvmtiEnv::GetJLocationFormat(jvmtiJlocationFormat* format_ptr) {
jvmtiError
JvmtiEnv::GetThreadState(jthread thread, jint* thread_state_ptr) {
JavaThread* current_thread = JavaThread::current();
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -939,7 +940,7 @@ JvmtiEnv::SuspendThread(jthread thread) {
jvmtiError err;
{
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
ThreadsListHandle tlh(current);
JavaThread* java_thread = nullptr;
oop thread_oop = nullptr;
@@ -949,7 +950,7 @@ JvmtiEnv::SuspendThread(jthread thread) {
return err;
}
// Do not use JvmtiVTMSTransitionDisabler in context of self suspend to avoid deadlocks.
// Do not use MountUnmountDisabler in context of self suspend to avoid deadlocks.
if (java_thread != current) {
err = suspend_thread(thread_oop, java_thread, /* single_suspend */ true);
return err;
@@ -974,7 +975,7 @@ JvmtiEnv::SuspendThreadList(jint request_count, const jthread* request_list, jvm
int self_idx = -1;
{
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
ThreadsListHandle tlh(current);
for (int i = 0; i < request_count; i++) {
@@ -1007,7 +1008,7 @@ JvmtiEnv::SuspendThreadList(jint request_count, const jthread* request_list, jvm
}
}
// Self suspend after all other suspends if necessary.
// Do not use JvmtiVTMSTransitionDisabler in context of self suspend to avoid deadlocks.
// Do not use MountUnmountDisabler in context of self suspend to avoid deadlocks.
if (self_tobj() != nullptr) {
// there should not be any error for current java_thread
results[self_idx] = suspend_thread(self_tobj(), current, /* single_suspend */ true);
@@ -1028,7 +1029,7 @@ JvmtiEnv::SuspendAllVirtualThreads(jint except_count, const jthread* except_list
{
ResourceMark rm(current);
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
ThreadsListHandle tlh(current);
GrowableArray<jthread>* elist = new GrowableArray<jthread>(except_count);
@@ -1078,7 +1079,7 @@ JvmtiEnv::SuspendAllVirtualThreads(jint except_count, const jthread* except_list
}
}
// Self suspend after all other suspends if necessary.
// Do not use JvmtiVTMSTransitionDisabler in context of self suspend to avoid deadlocks.
// Do not use MountUnmountDisabler in context of self suspend to avoid deadlocks.
if (self_tobj() != nullptr) {
suspend_thread(self_tobj(), current, /* single_suspend */ false);
}
@@ -1089,7 +1090,7 @@ JvmtiEnv::SuspendAllVirtualThreads(jint except_count, const jthread* except_list
// thread - NOT protected by ThreadsListHandle and NOT pre-checked
jvmtiError
JvmtiEnv::ResumeThread(jthread thread) {
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
JavaThread* current = JavaThread::current();
ThreadsListHandle tlh(current);
@@ -1111,7 +1112,7 @@ jvmtiError
JvmtiEnv::ResumeThreadList(jint request_count, const jthread* request_list, jvmtiError* results) {
oop thread_oop = nullptr;
JavaThread* java_thread = nullptr;
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
ThreadsListHandle tlh;
for (int i = 0; i < request_count; i++) {
@@ -1150,7 +1151,7 @@ JvmtiEnv::ResumeAllVirtualThreads(jint except_count, const jthread* except_list)
return err;
}
ResourceMark rm;
JvmtiVTMSTransitionDisabler disabler(true);
MountUnmountDisabler disabler(true);
GrowableArray<jthread>* elist = new GrowableArray<jthread>(except_count);
// Collect threads from except_list for which suspended status must be restored (only for VirtualThread case)
@@ -1196,7 +1197,7 @@ jvmtiError
JvmtiEnv::StopThread(jthread thread, jobject exception) {
JavaThread* current_thread = JavaThread::current();
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
oop thread_oop = nullptr;
@@ -1234,7 +1235,7 @@ JvmtiEnv::InterruptThread(jthread thread) {
JavaThread* current_thread = JavaThread::current();
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -1280,7 +1281,7 @@ JvmtiEnv::GetThreadInfo(jthread thread, jvmtiThreadInfo* info_ptr) {
JavaThread* java_thread = nullptr;
oop thread_oop = nullptr;
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
// if thread is null the current thread is used
@@ -1369,7 +1370,7 @@ JvmtiEnv::GetOwnedMonitorInfo(jthread thread, jint* owned_monitor_count_ptr, job
JavaThread* calling_thread = JavaThread::current();
HandleMark hm(calling_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(calling_thread);
JavaThread* java_thread = nullptr;
@@ -1424,7 +1425,7 @@ JvmtiEnv::GetOwnedMonitorStackDepthInfo(jthread thread, jint* monitor_info_count
JavaThread* calling_thread = JavaThread::current();
HandleMark hm(calling_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(calling_thread);
JavaThread* java_thread = nullptr;
@@ -1707,7 +1708,7 @@ JvmtiEnv::GetThreadListStackTraces(jint thread_count, const jthread* thread_list
*stack_info_ptr = op.stack_info();
}
} else {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
// JVMTI get stack traces at safepoint.
VM_GetThreadListStackTraces op(this, thread_count, thread_list, max_frame_count);
@@ -1740,7 +1741,7 @@ JvmtiEnv::PopFrame(jthread thread) {
if (thread == nullptr) {
return JVMTI_ERROR_INVALID_THREAD;
}
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -1795,7 +1796,7 @@ JvmtiEnv::GetFrameLocation(jthread thread, jint depth, jmethodID* method_ptr, jl
jvmtiError
JvmtiEnv::NotifyFramePop(jthread thread, jint depth) {
ResourceMark rm;
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
JavaThread* current = JavaThread::current();
ThreadsListHandle tlh(current);
@@ -1823,7 +1824,7 @@ JvmtiEnv::NotifyFramePop(jthread thread, jint depth) {
jvmtiError
JvmtiEnv::ClearAllFramePops(jthread thread) {
ResourceMark rm;
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
JavaThread* current = JavaThread::current();
ThreadsListHandle tlh(current);
@@ -2084,7 +2085,7 @@ JvmtiEnv::GetLocalObject(jthread thread, jint depth, jint slot, jobject* value_p
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2125,7 +2126,7 @@ JvmtiEnv::GetLocalInstance(jthread thread, jint depth, jobject* value_ptr){
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2167,7 +2168,7 @@ JvmtiEnv::GetLocalInt(jthread thread, jint depth, jint slot, jint* value_ptr) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2209,7 +2210,7 @@ JvmtiEnv::GetLocalLong(jthread thread, jint depth, jint slot, jlong* value_ptr)
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2251,7 +2252,7 @@ JvmtiEnv::GetLocalFloat(jthread thread, jint depth, jint slot, jfloat* value_ptr
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2293,7 +2294,7 @@ JvmtiEnv::GetLocalDouble(jthread thread, jint depth, jint slot, jdouble* value_p
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2334,7 +2335,7 @@ JvmtiEnv::SetLocalObject(jthread thread, jint depth, jint slot, jobject value) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2371,7 +2372,7 @@ JvmtiEnv::SetLocalInt(jthread thread, jint depth, jint slot, jint value) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2408,7 +2409,7 @@ JvmtiEnv::SetLocalLong(jthread thread, jint depth, jint slot, jlong value) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2445,7 +2446,7 @@ JvmtiEnv::SetLocalFloat(jthread thread, jint depth, jint slot, jfloat value) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2482,7 +2483,7 @@ JvmtiEnv::SetLocalDouble(jthread thread, jint depth, jint slot, jdouble value) {
// doit_prologue(), but after doit() is finished with it.
ResourceMark rm(current_thread);
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2574,7 +2575,7 @@ JvmtiEnv::ClearBreakpoint(Method* method, jlocation location) {
jvmtiError
JvmtiEnv::SetFieldAccessWatch(fieldDescriptor* fdesc_ptr) {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
// make sure we haven't set this watch before
if (fdesc_ptr->is_field_access_watched()) return JVMTI_ERROR_DUPLICATE;
fdesc_ptr->set_is_field_access_watched(true);
@@ -2587,7 +2588,7 @@ JvmtiEnv::SetFieldAccessWatch(fieldDescriptor* fdesc_ptr) {
jvmtiError
JvmtiEnv::ClearFieldAccessWatch(fieldDescriptor* fdesc_ptr) {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
// make sure we have a watch to clear
if (!fdesc_ptr->is_field_access_watched()) return JVMTI_ERROR_NOT_FOUND;
fdesc_ptr->set_is_field_access_watched(false);
@@ -2600,7 +2601,7 @@ JvmtiEnv::ClearFieldAccessWatch(fieldDescriptor* fdesc_ptr) {
jvmtiError
JvmtiEnv::SetFieldModificationWatch(fieldDescriptor* fdesc_ptr) {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
// make sure we haven't set this watch before
if (fdesc_ptr->is_field_modification_watched()) return JVMTI_ERROR_DUPLICATE;
fdesc_ptr->set_is_field_modification_watched(true);
@@ -2613,7 +2614,7 @@ JvmtiEnv::SetFieldModificationWatch(fieldDescriptor* fdesc_ptr) {
jvmtiError
JvmtiEnv::ClearFieldModificationWatch(fieldDescriptor* fdesc_ptr) {
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
// make sure we have a watch to clear
if (!fdesc_ptr->is_field_modification_watched()) return JVMTI_ERROR_NOT_FOUND;
fdesc_ptr->set_is_field_modification_watched(false);
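
Every jvmtiEnv.cpp hunk above is the same mechanical substitution: a scoped JvmtiVTMSTransitionDisabler becomes a scoped MountUnmountDisabler with identical constructor arguments, still declared before the ThreadsListHandle so the target's mount state is pinned before the thread is resolved. A minimal sketch of the recurring shape (SomeThreadFunction is a hypothetical stand-in, not a function from this patch):

jvmtiError
JvmtiEnv::SomeThreadFunction(jthread thread) {  // hypothetical example
  JavaThread* current = JavaThread::current();
  MountUnmountDisabler disabler(thread);  // pin the target's mount state first
  ThreadsListHandle tlh(current);         // now safe to resolve thread -> JavaThread/oop
  JavaThread* java_thread = nullptr;
  oop thread_oop = nullptr;
  // ... resolve the target and do the actual work ...
  return JVMTI_ERROR_NONE;
}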


@@ -51,6 +51,7 @@
#include "runtime/javaThread.inline.hpp"
#include "runtime/jfieldIDWorkaround.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.inline.hpp"
#include "runtime/osThread.hpp"
#include "runtime/signature.hpp"
@@ -697,7 +698,7 @@ JvmtiEnvBase::check_and_skip_hidden_frames(bool is_in_VTMS_transition, javaVFram
javaVFrame*
JvmtiEnvBase::check_and_skip_hidden_frames(JavaThread* jt, javaVFrame* jvf) {
jvf = check_and_skip_hidden_frames(jt->is_in_VTMS_transition(), jvf);
jvf = check_and_skip_hidden_frames(jt->is_in_vthread_transition(), jvf);
return jvf;
}
@@ -719,7 +720,7 @@ JvmtiEnvBase::get_vthread_jvf(oop vthread) {
return nullptr;
}
vframeStream vfs(java_thread);
assert(!java_thread->is_in_VTMS_transition(), "invariant");
assert(!java_thread->is_in_vthread_transition(), "invariant");
jvf = vfs.at_end() ? nullptr : vfs.asJavaVFrame();
jvf = check_and_skip_hidden_frames(false, jvf);
} else {
@@ -1693,8 +1694,7 @@ private:
// jt->jvmti_vthread() for VTMS transition protocol.
void correct_jvmti_thread_states() {
for (JavaThread* jt : ThreadsListHandle()) {
if (jt->is_in_VTMS_transition()) {
jt->set_VTMS_transition_mark(true);
if (jt->is_in_vthread_transition()) {
continue; // no need in JvmtiThreadState correction below if in transition
}
correct_jvmti_thread_state(jt);
@@ -1711,7 +1711,7 @@ public:
if (_enable) {
correct_jvmti_thread_states();
}
JvmtiVTMSTransitionDisabler::set_VTMS_notify_jvmti_events(_enable);
MountUnmountDisabler::set_notify_jvmti_events(_enable);
}
};
@@ -1722,7 +1722,7 @@ JvmtiEnvBase::enable_virtual_threads_notify_jvmti() {
if (!Continuations::enabled()) {
return false;
}
if (JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events()) {
if (MountUnmountDisabler::notify_jvmti_events()) {
return false; // already enabled
}
VM_SetNotifyJvmtiEventsMode op(true);
@@ -1738,10 +1738,10 @@ JvmtiEnvBase::disable_virtual_threads_notify_jvmti() {
if (!Continuations::enabled()) {
return false;
}
if (!JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events()) {
if (!MountUnmountDisabler::notify_jvmti_events()) {
return false; // already disabled
}
JvmtiVTMSTransitionDisabler disabler(true); // ensure there are no other disablers
MountUnmountDisabler disabler(true); // ensure there are no other disablers
VM_SetNotifyJvmtiEventsMode op(false);
VMThread::execute(&op);
return true;
@@ -1769,7 +1769,6 @@ JvmtiEnvBase::suspend_thread(oop thread_oop, JavaThread* java_thread, bool singl
// Platform thread or mounted vthread cases.
assert(java_thread != nullptr, "sanity check");
assert(!java_thread->is_in_VTMS_transition(), "sanity check");
// Don't allow hidden thread suspend request.
if (java_thread->is_hidden_from_external_view()) {
@@ -1828,7 +1827,6 @@ JvmtiEnvBase::resume_thread(oop thread_oop, JavaThread* java_thread, bool single
// Platform thread or mounted vthread cases.
assert(java_thread != nullptr, "sanity check");
assert(!java_thread->is_in_VTMS_transition(), "sanity check");
// Don't allow hidden thread resume request.
if (java_thread->is_hidden_from_external_view()) {
@@ -2008,12 +2006,12 @@ class AdapterClosure : public HandshakeClosure {
};
// Supports platform and virtual threads.
// JvmtiVTMSTransitionDisabler is always set by this function.
// MountUnmountDisabler is always set by this function.
void
JvmtiHandshake::execute(JvmtiUnitedHandshakeClosure* hs_cl, jthread target) {
JavaThread* current = JavaThread::current();
HandleMark hm(current);
JvmtiVTMSTransitionDisabler disabler(target);
MountUnmountDisabler disabler(target);
ThreadsListHandle tlh(current);
JavaThread* java_thread = nullptr;
oop thread_obj = nullptr;
@@ -2030,7 +2028,7 @@ JvmtiHandshake::execute(JvmtiUnitedHandshakeClosure* hs_cl, jthread target) {
// Supports platform and virtual threads.
// A virtual thread is always identified by the target_h oop handle.
// The target_jt is always nullptr for an unmounted virtual thread.
// JvmtiVTMSTransitionDisabler has to be set before call to this function.
// MountUnmountDisabler has to be set before call to this function.
void
JvmtiHandshake::execute(JvmtiUnitedHandshakeClosure* hs_cl, ThreadsListHandle* tlh,
JavaThread* target_jt, Handle target_h) {
@@ -2038,7 +2036,7 @@ JvmtiHandshake::execute(JvmtiUnitedHandshakeClosure* hs_cl, ThreadsListHandle* t
bool is_virtual = java_lang_VirtualThread::is_instance(target_h());
bool self = target_jt == current;
assert(!Continuations::enabled() || self || !is_virtual || current->is_VTMS_transition_disabler(), "sanity check");
assert(!Continuations::enabled() || self || !is_virtual || current->is_vthread_transition_disabler(), "sanity check");
hs_cl->set_target_jt(target_jt); // can be needed in the virtual thread case
hs_cl->set_is_virtual(is_virtual); // can be needed in the virtual thread case
@@ -2211,7 +2209,7 @@ JvmtiEnvBase::force_early_return(jthread thread, jvalue value, TosState tos) {
JavaThread* current_thread = JavaThread::current();
HandleMark hm(current_thread);
JvmtiVTMSTransitionDisabler disabler(thread);
MountUnmountDisabler disabler(thread);
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread = nullptr;
@@ -2612,7 +2610,7 @@ PrintStackTraceClosure::do_thread_impl(Thread *target) {
"is_VTMS_transition_disabler: %d, is_in_VTMS_transition = %d\n",
tname, java_thread->name(), java_thread->is_exiting(),
java_thread->is_suspended(), java_thread->is_carrier_thread_suspended(), is_vt_suspended,
java_thread->is_VTMS_transition_disabler(), java_thread->is_in_VTMS_transition());
java_thread->is_vthread_transition_disabler(), java_thread->is_in_vthread_transition());
if (java_thread->has_last_Java_frame()) {
RegisterMap reg_map(java_thread,


@@ -61,6 +61,7 @@
#include "runtime/javaThread.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/keepStackGCProcessed.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.inline.hpp"
#include "runtime/os.hpp"
#include "runtime/osThread.hpp"
@@ -412,7 +413,7 @@ JvmtiExport::get_jvmti_interface(JavaVM *jvm, void **penv, jint version) {
if (Continuations::enabled()) {
// Virtual threads support for agents loaded into running VM.
// There is a performance impact when VTMS transitions are enabled.
if (!JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events()) {
if (!MountUnmountDisabler::notify_jvmti_events()) {
JvmtiEnvBase::enable_virtual_threads_notify_jvmti();
}
}
@@ -426,7 +427,7 @@ JvmtiExport::get_jvmti_interface(JavaVM *jvm, void **penv, jint version) {
if (Continuations::enabled()) {
// Virtual threads support for agents loaded at startup.
// There is a performance impact when VTMS transitions are enabled.
JvmtiVTMSTransitionDisabler::set_VTMS_notify_jvmti_events(true);
MountUnmountDisabler::set_notify_jvmti_events(true, true /*is_onload*/);
}
return JNI_OK;
@@ -1639,7 +1640,7 @@ void JvmtiExport::post_vthread_end(jobject vthread) {
JVMTI_JAVA_THREAD_EVENT_CALLBACK_BLOCK(thread)
jvmtiEventVirtualThreadEnd callback = env->callbacks()->VirtualThreadEnd;
if (callback != nullptr) {
(*callback)(env->jvmti_external(), jem.jni_env(), vthread);
(*callback)(env->jvmti_external(), jem.jni_env(), jem.jni_thread());
}
}
}
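
Note the callback fix folded into this hunk: the VirtualThreadEnd callback previously received the caller's vthread reference directly, while other event callbacks pass the JNI reference produced by the event mark; presumably jem.jni_thread() is that per-event JNI local. The one-line before/after:

(*callback)(env->jvmti_external(), jem.jni_env(), vthread);           // before: caller's handle
(*callback)(env->jvmti_external(), jem.jni_env(), jem.jni_thread());  // after: handle from the event mark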
@@ -2924,13 +2925,13 @@ void JvmtiExport::vthread_post_monitor_waited(JavaThread *current, ObjectMonitor
Handle vthread(current, current->vthread());
// Finish the VTMS transition temporarily to post the event.
JvmtiVTMSTransitionDisabler::VTMS_vthread_mount((jthread)vthread.raw_value(), false);
MountUnmountDisabler::end_transition(current, vthread(), true /*is_mount*/, false /*is_thread_start*/);
// Post event.
JvmtiExport::post_monitor_waited(current, obj_mntr, timed_out);
// Go back to VTMS transition state.
JvmtiVTMSTransitionDisabler::VTMS_vthread_unmount((jthread)vthread.raw_value(), true);
MountUnmountDisabler::start_transition(current, vthread(), false /*is_mount*/, false /*is_thread_start*/);
}
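
vthread_post_monitor_waited keeps its original structure: MONITOR_WAITED cannot be posted from inside a mount transition, so the transition is ended just long enough to post the event and then re-entered, now through the MountUnmountDisabler API. In outline (the exact calls from the hunk above):

MountUnmountDisabler::end_transition(current, vthread(), true /*is_mount*/, false /*is_thread_start*/);    // leave the transition
JvmtiExport::post_monitor_waited(current, obj_mntr, timed_out);                                            // event can be posted now
MountUnmountDisabler::start_transition(current, vthread(), false /*is_mount*/, false /*is_thread_start*/); // go back into it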
void JvmtiExport::post_vm_object_alloc(JavaThread *thread, oop object) {


@@ -29,6 +29,7 @@
#include "runtime/handles.inline.hpp"
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
// the list of extension functions
GrowableArray<jvmtiExtensionFunctionInfo*>* JvmtiExtensions::_ext_functions;
@@ -77,7 +78,7 @@ static jvmtiError JNICALL GetVirtualThread(const jvmtiEnv* env, ...) {
va_end(ap);
ThreadInVMfromNative tiv(current_thread);
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
ThreadsListHandle tlh(current_thread);
jvmtiError err;
@@ -135,7 +136,7 @@ static jvmtiError JNICALL GetCarrierThread(const jvmtiEnv* env, ...) {
MACOS_AARCH64_ONLY(ThreadWXEnable __wx(WXWrite, current_thread));
ThreadInVMfromNative tiv(current_thread);
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
ThreadsListHandle tlh(current_thread);
JavaThread* java_thread;


@@ -57,6 +57,7 @@
#include "runtime/javaCalls.hpp"
#include "runtime/javaThread.inline.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/mutex.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/safepoint.hpp"
@@ -3028,7 +3029,7 @@ void JvmtiTagMap::iterate_over_reachable_objects(jvmtiHeapRootCallback heap_root
jvmtiObjectReferenceCallback object_ref_callback,
const void* user_data) {
// VTMS transitions must be disabled before the EscapeBarrier.
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
JavaThread* jt = JavaThread::current();
EscapeBarrier eb(true, jt);
@@ -3056,7 +3057,7 @@ void JvmtiTagMap::iterate_over_objects_reachable_from_object(jobject object,
Arena dead_object_arena(mtServiceability);
GrowableArray<jlong> dead_objects(&dead_object_arena, 10, 0, 0);
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
{
MutexLocker ml(Heap_lock);
@@ -3076,7 +3077,7 @@ void JvmtiTagMap::follow_references(jint heap_filter,
const void* user_data)
{
// VTMS transitions must be disabled before the EscapeBarrier.
JvmtiVTMSTransitionDisabler disabler;
MountUnmountDisabler disabler;
oop obj = JNIHandles::resolve(object);
JavaThread* jt = JavaThread::current();


@@ -209,479 +209,6 @@ JvmtiThreadState::periodic_clean_up() {
}
}
//
// Virtual Threads Mount State transition (VTMS transition) mechanism
//
// VTMS transitions for one virtual thread are disabled while it is positive
volatile int JvmtiVTMSTransitionDisabler::_VTMS_transition_disable_for_one_count = 0;
// VTMS transitions for all virtual threads are disabled while it is positive
volatile int JvmtiVTMSTransitionDisabler::_VTMS_transition_disable_for_all_count = 0;
// There is an active suspender or resumer.
volatile bool JvmtiVTMSTransitionDisabler::_SR_mode = false;
// Notifications from VirtualThread about VTMS events are enabled.
bool JvmtiVTMSTransitionDisabler::_VTMS_notify_jvmti_events = false;
// The JvmtiVTMSTransitionDisabler sync protocol is enabled if this count > 0.
volatile int JvmtiVTMSTransitionDisabler::_sync_protocol_enabled_count = 0;
// JvmtiVTMSTransitionDisabler sync protocol is enabled permanently after seeing a suspender.
volatile bool JvmtiVTMSTransitionDisabler::_sync_protocol_enabled_permanently = false;
#ifdef ASSERT
void
JvmtiVTMSTransitionDisabler::print_info() {
log_error(jvmti)("_VTMS_transition_disable_for_one_count: %d\n", _VTMS_transition_disable_for_one_count);
log_error(jvmti)("_VTMS_transition_disable_for_all_count: %d\n\n", _VTMS_transition_disable_for_all_count);
int attempts = 10000;
for (JavaThreadIteratorWithHandle jtiwh; JavaThread *java_thread = jtiwh.next(); ) {
if (java_thread->VTMS_transition_mark()) {
log_error(jvmti)("jt: %p VTMS_transition_mark: %d\n",
(void*)java_thread, java_thread->VTMS_transition_mark());
}
ResourceMark rm;
// Handshake with target.
PrintStackTraceClosure pstc;
Handshake::execute(&pstc, java_thread);
}
}
#endif
// disable VTMS transitions for one virtual thread
// disable VTMS transitions for all threads if thread is nullptr or a platform thread
JvmtiVTMSTransitionDisabler::JvmtiVTMSTransitionDisabler(jthread thread)
: _is_SR(false),
_is_virtual(false),
_is_self(false),
_thread(thread)
{
if (!Continuations::enabled()) {
return; // JvmtiVTMSTransitionDisabler is no-op without virtual threads
}
if (Thread::current_or_null() == nullptr) {
return; // Detached thread, can be a call from Agent_OnLoad.
}
JavaThread* current = JavaThread::current();
oop thread_oop = JNIHandles::resolve_external_guard(thread);
_is_virtual = java_lang_VirtualThread::is_instance(thread_oop);
if (thread == nullptr ||
(!_is_virtual && thread_oop == current->threadObj()) ||
(_is_virtual && thread_oop == current->vthread())) {
_is_self = true;
return; // no need for current thread to disable and enable transitions for itself
}
if (!sync_protocol_enabled_permanently()) {
JvmtiVTMSTransitionDisabler::inc_sync_protocol_enabled_count();
}
// Target can be virtual or platform thread.
// If target is a platform thread then we have to disable VTMS transitions for all threads.
// This is for several reasons:
// - carrier threads can mount virtual threads which may cause incorrect behavior
// - there is no mechanism to disable transitions for a specific carrier thread yet
if (_is_virtual) {
VTMS_transition_disable_for_one(); // disable VTMS transitions for one virtual thread
} else {
VTMS_transition_disable_for_all(); // disable VTMS transitions for all virtual threads
}
}
// disable VTMS transitions for all virtual threads
JvmtiVTMSTransitionDisabler::JvmtiVTMSTransitionDisabler(bool is_SR)
: _is_SR(is_SR),
_is_virtual(false),
_is_self(false),
_thread(nullptr)
{
if (!Continuations::enabled()) {
return; // JvmtiVTMSTransitionDisabler is no-op without virtual threads
}
if (Thread::current_or_null() == nullptr) {
return; // Detached thread, can be a call from Agent_OnLoad.
}
if (!sync_protocol_enabled_permanently()) {
JvmtiVTMSTransitionDisabler::inc_sync_protocol_enabled_count();
if (is_SR) {
AtomicAccess::store(&_sync_protocol_enabled_permanently, true);
}
}
VTMS_transition_disable_for_all();
}
JvmtiVTMSTransitionDisabler::~JvmtiVTMSTransitionDisabler() {
if (!Continuations::enabled()) {
return; // JvmtiVTMSTransitionDisabler is a no-op without virtual threads
}
if (Thread::current_or_null() == nullptr) {
return; // Detached thread, can be a call from Agent_OnLoad.
}
if (_is_self) {
return; // no need for current thread to disable and enable transitions for itself
}
if (_is_virtual) {
VTMS_transition_enable_for_one(); // enable VTMS transitions for one virtual thread
} else {
VTMS_transition_enable_for_all(); // enable VTMS transitions for all virtual threads
}
if (!sync_protocol_enabled_permanently()) {
JvmtiVTMSTransitionDisabler::dec_sync_protocol_enabled_count();
}
}
// disable VTMS transitions for one virtual thread
void
JvmtiVTMSTransitionDisabler::VTMS_transition_disable_for_one() {
assert(_thread != nullptr, "sanity check");
JavaThread* thread = JavaThread::current();
HandleMark hm(thread);
Handle vth = Handle(thread, JNIHandles::resolve_external_guard(_thread));
assert(java_lang_VirtualThread::is_instance(vth()), "sanity check");
MonitorLocker ml(JvmtiVTMSTransition_lock);
while (_SR_mode) { // suspender or resumer is a JvmtiVTMSTransitionDisabler monopolist
ml.wait(10); // wait while there is an active suspender or resumer
}
AtomicAccess::inc(&_VTMS_transition_disable_for_one_count);
java_lang_Thread::inc_VTMS_transition_disable_count(vth());
while (java_lang_Thread::is_in_VTMS_transition(vth())) {
ml.wait(10); // wait while the virtual thread is in transition
}
#ifdef ASSERT
thread->set_is_VTMS_transition_disabler(true);
#endif
}
// disable VTMS transitions for all virtual threads
void
JvmtiVTMSTransitionDisabler::VTMS_transition_disable_for_all() {
JavaThread* thread = JavaThread::current();
int attempts = 50000;
{
MonitorLocker ml(JvmtiVTMSTransition_lock);
assert(!thread->is_in_VTMS_transition(), "VTMS_transition sanity check");
while (_SR_mode) { // Suspender or resumer is a JvmtiVTMSTransitionDisabler monopolist.
ml.wait(10); // Wait while there is an active suspender or resumer.
}
if (_is_SR) {
_SR_mode = true;
while (_VTMS_transition_disable_for_all_count > 0 ||
_VTMS_transition_disable_for_one_count > 0) {
ml.wait(10); // Wait while there is any active jvmtiVTMSTransitionDisabler.
}
}
AtomicAccess::inc(&_VTMS_transition_disable_for_all_count);
// Block while some mount/unmount transitions are in progress.
// Debug version fails and prints diagnostic information.
for (JavaThreadIteratorWithHandle jtiwh; JavaThread *jt = jtiwh.next(); ) {
while (jt->VTMS_transition_mark()) {
if (ml.wait(10)) {
attempts--;
}
DEBUG_ONLY(if (attempts == 0) break;)
}
}
assert(!thread->is_VTMS_transition_disabler(), "VTMS_transition sanity check");
#ifdef ASSERT
if (attempts > 0) {
thread->set_is_VTMS_transition_disabler(true);
}
#endif
}
#ifdef ASSERT
if (attempts == 0) {
print_info();
fatal("stuck in JvmtiVTMSTransitionDisabler::VTMS_transition_disable");
}
#endif
}
// enable VTMS transitions for one virtual thread
void
JvmtiVTMSTransitionDisabler::VTMS_transition_enable_for_one() {
JavaThread* thread = JavaThread::current();
HandleMark hm(thread);
Handle vth = Handle(thread, JNIHandles::resolve_external_guard(_thread));
if (!java_lang_VirtualThread::is_instance(vth())) {
return; // no-op if _thread is not a virtual thread
}
MonitorLocker ml(JvmtiVTMSTransition_lock);
java_lang_Thread::dec_VTMS_transition_disable_count(vth());
AtomicAccess::dec(&_VTMS_transition_disable_for_one_count);
if (_VTMS_transition_disable_for_one_count == 0) {
ml.notify_all();
}
#ifdef ASSERT
thread->set_is_VTMS_transition_disabler(false);
#endif
}
// enable VTMS transitions for all virtual threads
void
JvmtiVTMSTransitionDisabler::VTMS_transition_enable_for_all() {
JavaThread* current = JavaThread::current();
{
MonitorLocker ml(JvmtiVTMSTransition_lock);
assert(_VTMS_transition_disable_for_all_count > 0, "VTMS_transition sanity check");
if (_is_SR) { // Disabler is suspender or resumer.
_SR_mode = false;
}
AtomicAccess::dec(&_VTMS_transition_disable_for_all_count);
if (_VTMS_transition_disable_for_all_count == 0 || _is_SR) {
ml.notify_all();
}
#ifdef ASSERT
current->set_is_VTMS_transition_disabler(false);
#endif
}
}
void
JvmtiVTMSTransitionDisabler::start_VTMS_transition(jthread vthread, bool is_mount) {
JavaThread* thread = JavaThread::current();
oop vt = JNIHandles::resolve_external_guard(vthread);
assert(!thread->is_in_VTMS_transition(), "VTMS_transition sanity check");
// Avoid using MonitorLocker on performance critical path, use
// two-level synchronization with lock-free operations on state bits.
assert(!thread->VTMS_transition_mark(), "sanity check");
thread->set_VTMS_transition_mark(true); // Try to enter VTMS transition section optimistically.
java_lang_Thread::set_is_in_VTMS_transition(vt, true);
if (!sync_protocol_enabled()) {
thread->set_is_in_VTMS_transition(true);
return;
}
HandleMark hm(thread);
Handle vth = Handle(thread, vt);
int attempts = 50000;
// Do not allow suspends inside VTMS transitions.
// Block while transitions are disabled or there are suspend requests.
int64_t thread_id = java_lang_Thread::thread_id(vth()); // Cannot use oops while blocked.
if (_VTMS_transition_disable_for_all_count > 0 ||
java_lang_Thread::VTMS_transition_disable_count(vth()) > 0 ||
thread->is_suspended() ||
JvmtiVTSuspender::is_vthread_suspended(thread_id)
) {
// Slow path: undo unsuccessful optimistic set of the VTMS_transition_mark.
// It can cause an extra waiting cycle for VTMS transition disablers.
thread->set_VTMS_transition_mark(false);
java_lang_Thread::set_is_in_VTMS_transition(vth(), false);
while (true) {
MonitorLocker ml(JvmtiVTMSTransition_lock);
// Do not allow suspends inside VTMS transitions.
// Block while transitions are disabled or there are suspend requests.
if (_VTMS_transition_disable_for_all_count > 0 ||
java_lang_Thread::VTMS_transition_disable_count(vth()) > 0 ||
thread->is_suspended() ||
JvmtiVTSuspender::is_vthread_suspended(thread_id)
) {
// Block while transitions are disabled or there are suspend requests.
if (ml.wait(200)) {
attempts--;
}
DEBUG_ONLY(if (attempts == 0) break;)
continue; // ~ThreadBlockInVM has handshake-based suspend point.
}
thread->set_VTMS_transition_mark(true);
java_lang_Thread::set_is_in_VTMS_transition(vth(), true);
break;
}
}
#ifdef ASSERT
if (attempts == 0) {
log_error(jvmti)("start_VTMS_transition: thread->is_suspended: %d is_vthread_suspended: %d\n\n",
thread->is_suspended(), JvmtiVTSuspender::is_vthread_suspended(thread_id));
print_info();
fatal("stuck in JvmtiVTMSTransitionDisabler::start_VTMS_transition");
}
#endif
// Enter VTMS transition section.
thread->set_is_in_VTMS_transition(true);
}
void
JvmtiVTMSTransitionDisabler::finish_VTMS_transition(jthread vthread, bool is_mount) {
JavaThread* thread = JavaThread::current();
assert(thread->is_in_VTMS_transition(), "sanity check");
thread->set_is_in_VTMS_transition(false);
oop vt = JNIHandles::resolve_external_guard(vthread);
java_lang_Thread::set_is_in_VTMS_transition(vt, false);
assert(thread->VTMS_transition_mark(), "sanity check");
thread->set_VTMS_transition_mark(false);
if (!sync_protocol_enabled()) {
return;
}
int64_t thread_id = java_lang_Thread::thread_id(vt);
// Unblock waiting VTMS transition disablers.
if (_VTMS_transition_disable_for_one_count > 0 ||
_VTMS_transition_disable_for_all_count > 0) {
MonitorLocker ml(JvmtiVTMSTransition_lock);
ml.notify_all();
}
// In unmount case the carrier thread is attached after unmount transition.
// Check and block it if there was external suspend request.
int attempts = 10000;
if (!is_mount && thread->is_carrier_thread_suspended()) {
while (true) {
MonitorLocker ml(JvmtiVTMSTransition_lock);
// Block while there are suspend requests.
if ((!is_mount && thread->is_carrier_thread_suspended()) ||
(is_mount && JvmtiVTSuspender::is_vthread_suspended(thread_id))
) {
// Block while there are suspend requests.
if (ml.wait(200)) {
attempts--;
}
DEBUG_ONLY(if (attempts == 0) break;)
continue;
}
break;
}
}
#ifdef ASSERT
if (attempts == 0) {
log_error(jvmti)("finish_VTMS_transition: thread->is_suspended: %d is_vthread_suspended: %d\n\n",
thread->is_suspended(), JvmtiVTSuspender::is_vthread_suspended(thread_id));
print_info();
fatal("stuck in JvmtiVTMSTransitionDisabler::finish_VTMS_transition");
}
#endif
}
// set VTMS transition bit value in JavaThread and java.lang.VirtualThread object
void JvmtiVTMSTransitionDisabler::set_is_in_VTMS_transition(JavaThread* thread, jobject vthread, bool in_trans) {
oop vt = JNIHandles::resolve_external_guard(vthread);
java_lang_Thread::set_is_in_VTMS_transition(vt, in_trans);
thread->set_is_in_VTMS_transition(in_trans);
}
void
JvmtiVTMSTransitionDisabler::VTMS_vthread_start(jobject vthread) {
VTMS_mount_end(vthread);
JavaThread* thread = JavaThread::current();
assert(!thread->is_in_VTMS_transition(), "sanity check");
// If interp_only_mode has been enabled then we must eagerly create JvmtiThreadState
// objects for globally enabled virtual thread filtered events. Otherwise,
// it is an important optimization to create JvmtiThreadState objects lazily.
// This optimization is disabled when watchpoint capabilities are present, to
// work around a bug where virtual thread frames may not be deoptimized in time.
if (JvmtiThreadState::seen_interp_only_mode() ||
JvmtiExport::should_post_field_access() ||
JvmtiExport::should_post_field_modification()){
JvmtiEventController::thread_started(thread);
}
if (JvmtiExport::should_post_vthread_start()) {
JvmtiExport::post_vthread_start(vthread);
}
// post VirtualThreadMount event after VirtualThreadStart
if (JvmtiExport::should_post_vthread_mount()) {
JvmtiExport::post_vthread_mount(vthread);
}
}
void
JvmtiVTMSTransitionDisabler::VTMS_vthread_end(jobject vthread) {
JavaThread* thread = JavaThread::current();
assert(!thread->is_in_VTMS_transition(), "sanity check");
// post VirtualThreadUnmount event before VirtualThreadEnd
if (JvmtiExport::should_post_vthread_unmount()) {
JvmtiExport::post_vthread_unmount(vthread);
}
if (JvmtiExport::should_post_vthread_end()) {
JvmtiExport::post_vthread_end(vthread);
}
VTMS_unmount_begin(vthread, /* last_unmount */ true);
if (thread->jvmti_thread_state() != nullptr) {
JvmtiExport::cleanup_thread(thread);
assert(thread->jvmti_thread_state() == nullptr, "should be null");
assert(java_lang_Thread::jvmti_thread_state(JNIHandles::resolve(vthread)) == nullptr, "should be null");
}
thread->rebind_to_jvmti_thread_state_of(thread->threadObj());
}
void
JvmtiVTMSTransitionDisabler::VTMS_vthread_mount(jobject vthread, bool hide) {
if (hide) {
VTMS_mount_begin(vthread);
} else {
VTMS_mount_end(vthread);
if (JvmtiExport::should_post_vthread_mount()) {
JvmtiExport::post_vthread_mount(vthread);
}
}
}
void
JvmtiVTMSTransitionDisabler::VTMS_vthread_unmount(jobject vthread, bool hide) {
if (hide) {
if (JvmtiExport::should_post_vthread_unmount()) {
JvmtiExport::post_vthread_unmount(vthread);
}
VTMS_unmount_begin(vthread, /* last_unmount */ false);
} else {
VTMS_unmount_end(vthread);
}
}
void
JvmtiVTMSTransitionDisabler::VTMS_mount_begin(jobject vthread) {
JavaThread* thread = JavaThread::current();
assert(!thread->is_in_VTMS_transition(), "sanity check");
start_VTMS_transition(vthread, /* is_mount */ true);
}
void
JvmtiVTMSTransitionDisabler::VTMS_mount_end(jobject vthread) {
JavaThread* thread = JavaThread::current();
oop vt = JNIHandles::resolve(vthread);
thread->rebind_to_jvmti_thread_state_of(vt);
assert(thread->is_in_VTMS_transition(), "sanity check");
finish_VTMS_transition(vthread, /* is_mount */ true);
}
void
JvmtiVTMSTransitionDisabler::VTMS_unmount_begin(jobject vthread, bool last_unmount) {
JavaThread* thread = JavaThread::current();
assert(!thread->is_in_VTMS_transition(), "sanity check");
start_VTMS_transition(vthread, /* is_mount */ false);
if (!last_unmount) {
thread->rebind_to_jvmti_thread_state_of(thread->threadObj());
}
}
void
JvmtiVTMSTransitionDisabler::VTMS_unmount_end(jobject vthread) {
JavaThread* thread = JavaThread::current();
assert(thread->is_in_VTMS_transition(), "sanity check");
finish_VTMS_transition(vthread, /* is_mount */ false);
}
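
The roughly 470 lines deleted from jvmtiThreadState.cpp implemented the handshake between transition disablers and mounting/unmounting threads that MountUnmountDisabler now owns. Stripped of the suspend/resume monopolist mode, the per-vthread counts, and the lock-free fast path layered on top, the core of the removed protocol reduces to two counters under one monitor. A self-contained model for illustration only (std::mutex and std::condition_variable stand in for HotSpot's MonitorLocker; all names are invented):

#include <condition_variable>
#include <mutex>

class TransitionGate {
  std::mutex _lock;
  std::condition_variable _cv;
  int _disablers = 0;      // models _VTMS_transition_disable_for_all_count
  int _in_transition = 0;  // models the per-thread VTMS_transition_mark bits

 public:
  // Disabler side (constructor/destructor of the removed class):
  void disable() {
    std::unique_lock<std::mutex> ml(_lock);
    ++_disablers;                                          // block new transitions first...
    _cv.wait(ml, [this] { return _in_transition == 0; });  // ...then drain in-flight ones
  }
  void enable() {
    std::unique_lock<std::mutex> ml(_lock);
    if (--_disablers == 0) _cv.notify_all();  // unblock waiting transitions
  }
  // Transition side (start_VTMS_transition / finish_VTMS_transition):
  void start_transition() {
    std::unique_lock<std::mutex> ml(_lock);
    _cv.wait(ml, [this] { return _disablers == 0; });  // wait out active disablers
    ++_in_transition;
  }
  void finish_transition() {
    std::unique_lock<std::mutex> ml(_lock);
    if (--_in_transition == 0) _cv.notify_all();  // wake disablers waiting to drain
  }
};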
//
// Virtual Threads Suspend/Resume management
//


@@ -72,67 +72,6 @@ class JvmtiEnvThreadStateIterator : public StackObj {
JvmtiEnvThreadState* next(JvmtiEnvThreadState* ets);
};
///////////////////////////////////////////////////////////////
//
// class JvmtiVTMSTransitionDisabler
//
// Virtual Thread Mount State Transition (VTMS transition) mechanism
//
class JvmtiVTMSTransitionDisabler : public AnyObj {
private:
static volatile int _VTMS_transition_disable_for_one_count; // transitions for one virtual thread are disabled while it is positive
static volatile int _VTMS_transition_disable_for_all_count; // transitions for all virtual threads are disabled while it is positive
static volatile bool _SR_mode; // there is an active suspender or resumer
static volatile int _sync_protocol_enabled_count; // current number of JvmtiVTMSTransitionDisablers enabled sync protocol
static volatile bool _sync_protocol_enabled_permanently; // seen a suspender: JvmtiVTMSTransitionDisabler protocol is enabled permanently
bool _is_SR; // is suspender or resumer
bool _is_virtual; // target thread is virtual
bool _is_self; // JvmtiVTMSTransitionDisabler is a no-op for current platform, carrier or virtual thread
jthread _thread; // virtual thread to disable transitions for, no-op if it is a platform thread
DEBUG_ONLY(static void print_info();)
void VTMS_transition_disable_for_one();
void VTMS_transition_disable_for_all();
void VTMS_transition_enable_for_one();
void VTMS_transition_enable_for_all();
public:
static bool _VTMS_notify_jvmti_events; // enable notifications from VirtualThread about VTMS events
static bool VTMS_notify_jvmti_events() { return _VTMS_notify_jvmti_events; }
static void set_VTMS_notify_jvmti_events(bool val) { _VTMS_notify_jvmti_events = val; }
static void inc_sync_protocol_enabled_count() { AtomicAccess::inc(&_sync_protocol_enabled_count); }
static void dec_sync_protocol_enabled_count() { AtomicAccess::dec(&_sync_protocol_enabled_count); }
static int sync_protocol_enabled_count() { return AtomicAccess::load(&_sync_protocol_enabled_count); }
static bool sync_protocol_enabled_permanently() { return AtomicAccess::load(&_sync_protocol_enabled_permanently); }
static bool sync_protocol_enabled() { return sync_protocol_enabled_permanently() || sync_protocol_enabled_count() > 0; }
// parameter is_SR: suspender or resumer
JvmtiVTMSTransitionDisabler(bool is_SR = false);
JvmtiVTMSTransitionDisabler(jthread thread);
~JvmtiVTMSTransitionDisabler();
// set VTMS transition bit value in JavaThread and java.lang.VirtualThread object
static void set_is_in_VTMS_transition(JavaThread* thread, jobject vthread, bool in_trans);
static void start_VTMS_transition(jthread vthread, bool is_mount);
static void finish_VTMS_transition(jthread vthread, bool is_mount);
static void VTMS_vthread_start(jobject vthread);
static void VTMS_vthread_end(jobject vthread);
static void VTMS_vthread_mount(jobject vthread, bool hide);
static void VTMS_vthread_unmount(jobject vthread, bool hide);
static void VTMS_mount_begin(jobject vthread);
static void VTMS_mount_end(jobject vthread);
static void VTMS_unmount_begin(jobject vthread, bool last_unmount);
static void VTMS_unmount_end(jobject vthread);
};
///////////////////////////////////////////////////////////////
//
// class VirtualThreadList


@@ -35,6 +35,7 @@
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/javaThread.inline.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/osThread.hpp"
#include "runtime/vframe.inline.hpp"
#include "runtime/vframe_hp.hpp"
@@ -56,66 +57,50 @@ JVM_ENTRY(void, CONT_unpin(JNIEnv* env, jclass cls)) {
}
JVM_END
#if INCLUDE_JVMTI
class JvmtiUnmountBeginMark : public StackObj {
class UnmountBeginMark : public StackObj {
Handle _vthread;
JavaThread* _current;
freeze_result _result;
bool _failed;
public:
JvmtiUnmountBeginMark(JavaThread* t) :
UnmountBeginMark(JavaThread* t) :
_vthread(t, t->vthread()), _current(t), _result(freeze_pinned_native), _failed(false) {
assert(!_current->is_in_VTMS_transition(), "must be");
assert(!_current->is_in_vthread_transition(), "must be");
if (JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events()) {
JvmtiVTMSTransitionDisabler::VTMS_vthread_unmount((jthread)_vthread.raw_value(), true);
MountUnmountDisabler::start_transition(_current, _vthread(), false /*is_mount*/, false /*is_thread_start*/);
// Don't preempt if there is a pending popframe or earlyret operation. This can
// be installed in start_VTMS_transition() so we need to check it here.
if (JvmtiExport::can_pop_frame() || JvmtiExport::can_force_early_return()) {
JvmtiThreadState* state = _current->jvmti_thread_state();
if (_current->has_pending_popframe() || (state != nullptr && state->is_earlyret_pending())) {
_failed = true;
}
}
// Don't preempt in case there is an async exception installed since
// we would incorrectly throw it during the unmount logic in the carrier.
if (_current->has_async_exception_condition()) {
// Don't preempt if there is a pending popframe or earlyret operation. This can
// be installed in process_at_transition_start() so we need to check it here.
if (JvmtiExport::can_pop_frame() || JvmtiExport::can_force_early_return()) {
JvmtiThreadState* state = _current->jvmti_thread_state();
if (_current->has_pending_popframe() || (state != nullptr && state->is_earlyret_pending())) {
_failed = true;
}
} else {
_current->set_is_in_VTMS_transition(true);
java_lang_Thread::set_is_in_VTMS_transition(_vthread(), true);
}
// Don't preempt in case there is an async exception installed since
// we would incorrectly throw it during the unmount logic in the carrier.
if (_current->has_async_exception_condition()) {
_failed = true;
}
}
~JvmtiUnmountBeginMark() {
~UnmountBeginMark() {
assert(!_current->is_suspended(), "must be");
assert(_current->is_in_VTMS_transition(), "must be");
assert(java_lang_Thread::is_in_VTMS_transition(_vthread()), "must be");
// Read it again since for late binding agents the flag could have
// been set while blocked in the allocation path during freeze.
bool jvmti_present = JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events();
assert(_current->is_in_vthread_transition(), "must be");
if (_result != freeze_ok) {
// Undo transition
if (jvmti_present) {
JvmtiVTMSTransitionDisabler::VTMS_vthread_mount((jthread)_vthread.raw_value(), false);
} else {
_current->set_is_in_VTMS_transition(false);
java_lang_Thread::set_is_in_VTMS_transition(_vthread(), false);
}
MountUnmountDisabler::end_transition(_current, _vthread(), true /*is_mount*/, false /*is_thread_start*/);
}
}
void set_result(freeze_result res) { _result = res; }
bool failed() { return _failed; }
};
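
The renamed UnmountBeginMark keeps the result-driven rollback of its predecessor: the constructor enters the unmount transition and records the cases where preemption must be abandoned (pending popframe/earlyret, installed async exception), and the destructor undoes the transition only if the freeze did not succeed. A standalone sketch of that shape (hypothetical names; the real calls are elided into comments):

struct ScopedUnmountAttempt {    // illustrative only
  bool _failed   = false;        // constructor found a reason not to preempt
  bool _froze_ok = false;        // set via set_result(freeze_ok)
  ScopedUnmountAttempt() {
    // start_transition(current, vthread, /*is_mount=*/false, /*is_thread_start=*/false);
    // then check popframe / earlyret / async-exception conditions and set _failed
  }
  ~ScopedUnmountAttempt() {
    if (!_froze_ok) {
      // undo: end_transition(current, vthread, /*is_mount=*/true, /*is_thread_start=*/false);
    }
  }
  void set_result(bool froze_ok) { _froze_ok = froze_ok; }
  bool failed() const { return _failed; }
};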
#if INCLUDE_JVMTI
static bool is_vthread_safe_to_preempt_for_jvmti(JavaThread* current) {
if (current->is_in_VTMS_transition()) {
if (current->is_in_vthread_transition()) {
// We are at the end of a mount transition.
return false;
}
@@ -150,11 +135,11 @@ freeze_result Continuation::try_preempt(JavaThread* current, oop continuation) {
return freeze_pinned_native;
}
JVMTI_ONLY(JvmtiUnmountBeginMark jubm(current);)
JVMTI_ONLY(if (jubm.failed()) return freeze_pinned_native;)
UnmountBeginMark ubm(current);
if (ubm.failed()) return freeze_pinned_native;
freeze_result res = CAST_TO_FN_PTR(FreezeContFnT, freeze_preempt_entry())(current, current->last_Java_sp());
log_trace(continuations, preempt)("try_preempt: %d", res);
JVMTI_ONLY(jubm.set_result(res);)
ubm.set_result(res);
if (current->has_pending_exception()) {
assert(res == freeze_exception, "expecting an exception result from freeze");


@@ -58,6 +58,7 @@
#include "runtime/javaThread.inline.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/keepStackGCProcessed.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/objectMonitor.inline.hpp"
#include "runtime/orderAccess.hpp"
#include "runtime/prefetch.inline.hpp"
@@ -1690,7 +1691,7 @@ static void jvmti_mount_end(JavaThread* current, ContinuationWrapper& cont, fram
AnchorMark am(current, top); // Set anchor so that the stack is walkable.
JRT_BLOCK
JvmtiVTMSTransitionDisabler::VTMS_vthread_mount((jthread)vth.raw_value(), false);
MountUnmountDisabler::end_transition(current, vth(), true /*is_mount*/, false /*is_thread_start*/);
if (current->pending_contended_entered_event()) {
// No monitor JVMTI events for ObjectLocker case.
@@ -2629,19 +2630,21 @@ intptr_t* ThawBase::handle_preempted_continuation(intptr_t* sp, Continuation::pr
DEBUG_ONLY(verify_frame_kind(top, preempt_kind);)
NOT_PRODUCT(int64_t tid = _thread->monitor_owner_id();)
#if INCLUDE_JVMTI
// Finish the VTMS transition.
assert(_thread->is_in_VTMS_transition(), "must be");
assert(_thread->is_in_vthread_transition(), "must be");
bool is_vthread = Continuation::continuation_scope(_cont.continuation()) == java_lang_VirtualThread::vthread_scope();
if (is_vthread) {
if (JvmtiVTMSTransitionDisabler::VTMS_notify_jvmti_events()) {
#if INCLUDE_JVMTI
if (MountUnmountDisabler::notify_jvmti_events()) {
jvmti_mount_end(_thread, _cont, top, preempt_kind);
} else {
_thread->set_is_in_VTMS_transition(false);
java_lang_Thread::set_is_in_VTMS_transition(_thread->vthread(), false);
} else
#endif
{ // Faster version of MountUnmountDisabler::end_transition() to avoid
// unnecessary extra instructions from jvmti_mount_end().
java_lang_Thread::set_is_in_vthread_transition(_thread->vthread(), false);
_thread->set_is_in_vthread_transition(false);
}
}
#endif
if (fast_case) {
// If we thawed in the slow path the runtime stub/native wrapper frame already


@@ -162,7 +162,7 @@ void JVMFlag::print_on(outputStream* st, bool withComments, bool printRanges) co
// uintx ThresholdTolerance = 10 {product} {default}
// size_t TLABSize = 0 {product} {default}
// uintx SurvivorRatio = 8 {product} {default}
// double InitialRAMPercentage = 1.562500 {product} {default}
// double InitialRAMPercentage = 0.000000 {product} {default}
// ccstr CompileCommandFile = MyFile.cmd {product} {command line}
// ccstrlist CompileOnly = Method1
// CompileOnly += Method2 {product} {command line}


@@ -1671,8 +1671,9 @@ const int ObjectAlignmentInBytes = 8;
"putback") \
\
/* new oopmap storage allocation */ \
develop(intx, MinOopMapAllocation, 8, \
develop(int, MinOopMapAllocation, 8, \
"Minimum number of OopMap entries in an OopMapSet") \
range(0, max_jint) \
\
/* recompilation */ \
product_pd(intx, CompileThreshold, \


@@ -34,6 +34,7 @@
#include "runtime/handshake.hpp"
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/javaThread.inline.hpp"
#include "runtime/mountUnmountDisabler.hpp"
#include "runtime/os.hpp"
#include "runtime/osThread.hpp"
#include "runtime/stackWatermarkSet.hpp"
@@ -361,6 +362,27 @@ void Handshake::execute(HandshakeClosure* hs_cl) {
VMThread::execute(&handshake);
}
void Handshake::execute(HandshakeClosure* hs_cl, oop vthread) {
assert(java_lang_VirtualThread::is_instance(vthread), "");
Handle vth(JavaThread::current(), vthread);
MountUnmountDisabler md(vthread);
oop carrier_thread = java_lang_VirtualThread::carrier_thread(vth());
if (carrier_thread != nullptr) {
JavaThread* target = java_lang_Thread::thread(carrier_thread);
assert(target != nullptr, "");
// Technically there is no need for a ThreadsListHandle since the target
// will block if it tries to unmount the vthread, so it can never exit.
ThreadsListHandle tlh(JavaThread::current());
assert(tlh.includes(target), "");
execute(hs_cl, &tlh, target);
assert(target->threadObj() == java_lang_VirtualThread::carrier_thread(vth()), "");
} else {
// unmounted vthread, execute closure with the current thread
hs_cl->do_thread(nullptr);
}
}
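
The new overload removes mount-state bookkeeping from callers: the MountUnmountDisabler pins the vthread's state for the duration, a mounted vthread is handshaked through its carrier thread, and an unmounted one is processed directly on the current thread, with do_thread(nullptr) signalling that the vthread's state is frozen in the heap. A hedged usage sketch (InspectVThread is a hypothetical closure, not part of this patch):

class InspectVThread : public HandshakeClosure {
 public:
  InspectVThread() : HandshakeClosure("InspectVThread") {}
  void do_thread(Thread* target) override {
    // target: the carrier JavaThread while the vthread is mounted,
    // nullptr when it is unmounted (inspect the continuation in the heap instead)
  }
};

InspectVThread cl;
Handshake::execute(&cl, vthread_oop);  // vthread_oop: a java.lang.VirtualThread instance oop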
void Handshake::execute(HandshakeClosure* hs_cl, JavaThread* target) {
// tlh == nullptr means we rely on a ThreadsListHandle somewhere
// in the caller's context (and we sanity check for that).


@@ -69,6 +69,7 @@ class Handshake : public AllStatic {
// This version of execute() relies on a ThreadListHandle somewhere in
// the caller's context to protect target (and we sanity check for that).
static void execute(HandshakeClosure* hs_cl, JavaThread* target);
static void execute(HandshakeClosure* hs_cl, oop vthread);
// This version of execute() is used when you have a ThreadListHandle in
// hand and are using it to protect target. If tlh == nullptr, then we
// sanity check for a ThreadListHandle somewhere in the caller's context

Some files were not shown because too many files have changed in this diff.