Compare commits

...

110 Commits

Author SHA1 Message Date
Thomas Schatzl
e8f2cd8f3d 8347052: Update java man page documentation to reflect current state of the UseNUMA flag
Reviewed-by: ayang
Backport-of: ea774b74e8
2025-07-18 11:36:08 +00:00
SendaoYan
e599ee4a88 8361827: [TESTBUG] serviceability/HeapDump/UnmountedVThreadNativeMethodAtTop.java throws OutOfMemoryError
Reviewed-by: rrich, lmesnik
Backport-of: cbb3d23e19
2025-07-18 06:08:46 +00:00
David Holmes
3a8e9dfe85 8362565: ProblemList jdk/jfr/event/io/TestIOTopFrame.java
Reviewed-by: egahlin
Backport-of: 04c0b130f0
2025-07-18 02:39:10 +00:00
William Kemper
347084bfbd 8360288: Shenandoah crash at size_given_klass in op_degenerated
Reviewed-by: shade
Backport-of: 3b44d7bfa4
2025-07-17 16:50:07 +00:00
SendaoYan
5cc7a31b3f 8361869: Tests which call ThreadController should be marked as /native
Reviewed-by: jpai
Backport-of: 3bacf7ea85
2025-07-17 12:50:53 +00:00
SendaoYan
f1f6452e01 8358004: Delete applications/scimark/Scimark.java test
Reviewed-by: coleenp
Backport-of: a5c9bc7032
2025-07-17 12:41:34 +00:00
Erik Gahlin
331adac38e 8361639: JFR: Incorrect top frame for I/O events
Reviewed-by: mgronlun
Backport-of: 1a6cbe421f
2025-07-17 12:20:22 +00:00
Brian Burkhalter
e989c1d138 8362429: AssertionError in File.listFiles(FileFilter | FilenameFilter)
Reviewed-by: alanb
Backport-of: be0161a8e6
2025-07-17 06:57:58 +00:00
Boris Ulasevich
5129887dfe 8362250: ARM32: forward_exception_entry missing return address
Reviewed-by: shade
Backport-of: 6ed81641b1
2025-07-17 01:29:57 +00:00
Brian Burkhalter
69ea85ee12 8361587: AssertionError in File.listFiles() when path is empty and -esa is enabled
Reviewed-by: alanb
Backport-of: eefbfdce31
2025-07-16 15:35:50 +00:00
Erik Gahlin
93260d639e 8361640: JFR: RandomAccessFile::readLine emits events for each character
Reviewed-by: mgronlun, alanb
Backport-of: 9bef2d1610
2025-07-16 15:35:30 +00:00
Tobias Hartmann
b67fb82a03 8362171: C2 fails with unexpected node in SuperWord truncation: ModI
Reviewed-by: chagedorn
Backport-of: 70c1ff7e15
2025-07-16 14:51:08 +00:00
Johannes Bechberger
a626c1d92c 8358619: Fix interval recomputation in CPU Time Profiler
Reviewed-by: jbachorik
Backport-of: c70258ca1c
2025-07-16 10:13:57 +00:00
Johannes Bechberger
533211af73 8358621: Reduce busy waiting in the worst case at the synchronization point returning from native in CPU Time Profiler
Reviewed-by: shade
Backport-of: d2082c58ff
2025-07-16 07:31:23 +00:00
Erik Gahlin
07bb0e3e2f 8362097: JFR: Active Settings view broken
Reviewed-by: mgronlun
Backport-of: 25e509b0db
2025-07-16 06:56:09 +00:00
Tobias Hartmann
60196a6b6f 8361952: Installation of MethodData::extra_data_lock() misses synchronization on reader side
Reviewed-by: chagedorn
Backport-of: 272e66d017
2025-07-16 06:33:47 +00:00
Brent Christian
0e6bf00550 Merge
Reviewed-by: jpai
2025-07-16 03:57:34 +00:00
Calvin Cheung
e1926a6d0e 8361328: cds/appcds/dynamicArchive/TestAutoCreateSharedArchive.java archive timestamps comparison failed
Reviewed-by: matsaave, iklam
Backport-of: 4a351e3e57
2025-07-15 21:46:00 +00:00
David Holmes
03a67a969b 8356942: invokeinterface Throws AbstractMethodError Instead of IncompatibleClassChangeError
Reviewed-by: iklam, coleenp
Backport-of: f36147b326
2025-07-15 20:56:47 +00:00
Chris Plummer
cf92877aa5 8361905: Problem list serviceability/sa/ClhsdbThreadContext.java on Windows due to JDK-8356704
Reviewed-by: sspitsyn
Backport-of: f7e8d255cc
2025-07-15 18:29:32 +00:00
Phil Race
121f5a72e4 8360147: Better Glyph drawing redux
Reviewed-by: rhalade, ahgross, psadhukhan, jdv
2025-07-15 19:00:48 +05:30
Phil Race
52e1e739af 8355884: [macos] java/awt/Frame/I18NTitle.java fails on MacOS
Reviewed-by: kcr, dmarkov, aivanov, honkar, kizune
2025-07-15 19:00:48 +05:30
Darragh Clarke
5ae719c8fc 8350991: Improve HTTP client header handling
Reviewed-by: rhalade, dfuchs, michaelm
2025-07-15 19:00:47 +05:30
Kevin Driver
3ec6eb6482 8349594: Enhance TLS protocol support
Reviewed-by: rhalade, ahgross, wetmore, jnimeh
2025-07-15 19:00:47 +05:30
Christian Hagedorn
fae2345971 8349584: Improve compiler processing
Reviewed-by: rhalade, ahgross, epeter, thartmann
2025-07-15 19:00:47 +05:30
Prasanta Sadhukhan
6e490a465a 8349111: Enhance Swing supports
Reviewed-by: rhalade, jdv, prr
2025-07-15 19:00:47 +05:30
Phil Race
2555b5a632 8348989: Better Glyph drawing
Reviewed-by: mschoene, psadhukhan, jdv, rhalade
2025-07-15 19:00:47 +05:30
Volkan Yazici
caac8172ad 8349551: Failures in tests after JDK-8345625
Reviewed-by: jpai, dfuchs
2025-07-15 19:00:47 +05:30
Volkan Yazici
d1ea951d39 8345625: Better HTTP connections
Reviewed-by: skoivu, rhalade, ahgross, dfuchs, jpai, aefimov
2025-07-15 19:00:47 +05:30
Tobias Hartmann
7aa3f31724 8359678: C2: assert(static_cast<T1>(result) == thing) caused by ReverseBytesNode::Value()
Reviewed-by: chagedorn
Backport-of: e5ab210713
2025-07-15 11:35:53 +00:00
Richard Reingruber
ce85123f3a 8361602: [TESTBUG] serviceability/HeapDump/UnmountedVThreadNativeMethodAtTop.java deadlocks on exception
Reviewed-by: clanger, cjplummer
Backport-of: 917d0182cb
2025-07-15 08:02:44 +00:00
William Kemper
20fc8f74d5 8361529: GenShen: Fix bad assert in swap card tables
Reviewed-by: shade
Backport-of: 1de2acea77
2025-07-14 16:50:47 +00:00
Tobias Hartmann
dd82a0922b 8350177: C2 SuperWord: Integer.numberOfLeadingZeros, numberOfTrailingZeros, reverse and bitCount have input types wrongly truncated for byte and short
Reviewed-by: chagedorn
Backport-of: 77bd417c99
2025-07-14 07:31:27 +00:00
Srinivas Vamsi Parasa
9f21845262 8360775: Fix Shenandoah GC test failures when APX is enabled
Reviewed-by: shade, sviswanathan, jbhateja
Backport-of: 1c560727b8
2025-07-14 02:55:02 +00:00
Srinivas Vamsi Parasa
c5d0f1bc5e 8360776: Disable Intel APX by default and enable it with -XX:+UnlockExperimentalVMOptions -XX:+UseAPX in all builds
Reviewed-by: kvn, sviswanathan
Backport-of: 26b002805a
2025-07-12 21:34:48 +00:00
Chen Liang
c374ac6df4 8361615: CodeBuilder::parameterSlot throws undocumented IOOBE
Reviewed-by: asotona
Backport-of: c9bea77342
2025-07-11 22:52:41 +00:00
Boris Ulasevich
44f5dfef97 8358183: [JVMCI] crash accessing nmethod::jvmci_name in CodeCache::aggregate
Reviewed-by: thartmann
Backport-of: 74822ce12a
2025-07-11 11:59:32 +00:00
David Holmes
9adc480ec3 8361447: [REDO] Checked version of JNI Release<type>ArrayElements needs to filter out known wrapped arrays
8361754: New test runtime/jni/checked/TestCharArrayReleasing.java can cause disk full errors

Reviewed-by: coleenp
Backport-of: f67e435431
2025-07-11 00:21:36 +00:00
William Kemper
4d5211ccb0 8357976: GenShen crash in swap_card_tables: Should be clean
Reviewed-by: shade
Backport-of: 382f870cd5
2025-07-10 22:26:14 +00:00
Vladimir Kozlov
e92f387ab5 8360942: [ubsan] aotCache tests trigger runtime error: applying non-zero offset 16 to null pointer in CodeBlob::relocation_end()
Reviewed-by: shade, thartmann
Backport-of: dedcce0450
2025-07-10 17:04:29 +00:00
Chris Plummer
96380509b3 8360312: Serviceability Agent tests fail with JFR enabled due to unknown thread type JfrRecorderThread
Reviewed-by: kevinw, sspitsyn
Backport-of: 712d866b72
2025-07-10 15:43:11 +00:00
Brian Burkhalter
9b99ed8b39 8361299: (bf) CharBuffer.getChars(int,int,char[],int) violates pre-existing specification
Reviewed-by: liach, alanb
Backport-of: 6249259c80
2025-07-10 15:14:31 +00:00
Richard Reingruber
532b1c732e 8360599: [TESTBUG] DumpThreadsWithEliminatedLock.java fails because of unstable inlining
Reviewed-by: mdoerr, kevinw
Backport-of: fea73c1d40
2025-07-10 07:34:40 +00:00
Erik Gahlin
1de8943731 8361175: JFR: Document differences between method sample events
Reviewed-by: mgronlun
Backport-of: 63e08d4af7
2025-07-09 15:32:57 +00:00
Jan Lahoda
50751da562 8361570: Incorrect 'sealed is not allowed here' compile-time error
Reviewed-by: liach, vromero
Backport-of: 853319439e
2025-07-09 13:41:05 +00:00
Jan Lahoda
21cb2acda0 8361445: javac crashes on unresolvable constant in @SuppressWarnings
Reviewed-by: liach, asotona
Backport-of: 0bd2f9cba2
2025-07-09 05:07:20 +00:00
Vicente Romero
0e4422b284 8361214: An anonymous class is erroneously being classified as an abstract class
Reviewed-by: liach
Backport-of: 05c9eec8d0
2025-07-08 21:13:43 +00:00
Ioi Lam
1e985168d6 8358680: AOT cache creation fails: no strings should have been added
Reviewed-by: shade, kvn
Backport-of: 3daa03c30f
2025-07-08 19:02:36 +00:00
Ioi Lam
b8965318c1 8360164: AOT cache creation crashes in ~ThreadTotalCPUTimeClosure()
Reviewed-by: shade, kvn
Backport-of: 7d7e60c8ae
2025-07-08 17:36:10 +00:00
Ioi Lam
afe6bd6910 8336147: Clarify CDS documentation about static vs dynamic archive
Reviewed-by: shade
Backport-of: 854de8c9c6
2025-07-08 17:34:39 +00:00
Erik Gahlin
b3b5595362 8361338: JFR: Min and max time in MethodTime event is confusing
Reviewed-by: shade
Backport-of: f3e0588d0b
2025-07-08 14:03:56 +00:00
Jan Lahoda
5e716fd7d1 8359596: Behavior change when both -Xlint:options and -Xlint:-options flags are given
Reviewed-by: liach
Backport-of: 3525a40f39
2025-07-08 07:16:25 +00:00
Roger Riggs
3e93b98baf 8354872: Clarify java.lang.Process resource cleanup
Reviewed-by: iris
Backport-of: afb4a1be9e
2025-07-07 22:18:03 +00:00
Manukumar V S
9a73987f9b 8359889: java/awt/MenuItem/SetLabelTest.java inadvertently triggers clicks on items pinned to the taskbar
Reviewed-by: abhiscxk, aivanov
Backport-of: b7fcd0b235
2025-07-07 13:14:30 +00:00
Erik Gahlin
8707167ef3 8358750: JFR: EventInstrumentation MASK_THROTTLE* constants should be computed in longs
Reviewed-by: mgronlun
Backport-of: 77e69e02eb
2025-07-04 15:07:32 +00:00
Erik Gahlin
e3bd9c6e1c 8360287: JFR: PlatformTracer class should be loaded lazily
Reviewed-by: mgronlun
Backport-of: 8ea544c33f
2025-07-03 18:34:38 +00:00
Martin Doerr
993215f3dd 8361259: JDK25: Backout JDK-8258229
Reviewed-by: mhaessig, thartmann, dlong
2025-07-03 08:52:23 +00:00
Martin Doerr
8a98738f44 8361183: JDK-8360887 needs fixes to avoid cycles and better tests (aix)
Reviewed-by: mbaesken
Backport-of: c460f842bf
2025-07-03 08:46:22 +00:00
Ashutosh Mehra
ab01396209 8361101: AOTCodeAddressTable::_stubs_addr not initialized/freed properly
Reviewed-by: shade
Backport-of: 3066a67e62
2025-07-02 17:49:52 +00:00
Kevin Walls
92268e17be 8359870: JVM crashes in AccessInternal::PostRuntimeDispatch
Reviewed-by: alanb, sspitsyn
Backport-of: 13a3927855
2025-07-02 16:59:29 +00:00
Martin Doerr
a98a5e54fc 8360887: (fs) Files.getFileAttributeView returns unusable FileAttributeView if UserDefinedFileAttributeView unavailable (aix)
Reviewed-by: mbaesken
Backport-of: 0572b6ece7
2025-07-02 15:34:12 +00:00
Aleksey Shipilev
b245c517e3 8359436: AOTCompileEagerly should not be diagnostic
Reviewed-by: kvn
Backport-of: e138297323
2025-07-02 11:52:28 +00:00
Tobias Hartmann
0a151c68d6 8358179: Performance regression in Math.cbrt
Reviewed-by: epeter
Backport-of: 38f59f84c9
2025-07-02 08:23:19 +00:00
Jaikiran Pai
554e38dd5a 8359337: XML/JAXP tests that make network connections should ensure that no proxy is selected
Reviewed-by: dfuchs, iris, joehw
Backport-of: 7583a7b857
2025-07-02 01:36:10 +00:00
Aleksey Shipilev
b5b0b3a33a 8360201: JFR: Initialize JfrThreadLocal::_sampling_critical_section
Reviewed-by: zgu
Backport-of: 5c1f77fab1
2025-06-30 13:28:03 +00:00
David Holmes
0dc9e8447b 8358645: Access violation in ThreadsSMRSupport::print_info_on during thread dump
Reviewed-by: shade, dcubed
Backport-of: 334683e634
2025-06-30 01:06:46 +00:00
Alisen Chung
12ffb0c131 8359761: JDK 25 RDP1 L10n resource files update
Reviewed-by: jlu, aivanov
Backport-of: da7080fffb
2025-06-27 19:28:15 +00:00
Roland Westrelin
eaaaae5be9 8356708: C2: loop strip mining expansion doesn't take sunk stores into account
Reviewed-by: thartmann, epeter
Backport-of: c11f36e620
2025-06-27 16:27:33 +00:00
Jaikiran Pai
926c900efa 8359830: Incorrect os.version reported on macOS Tahoe 26 (Beta)
Reviewed-by: rriggs
Backport-of: 8df6b2c4a3
2025-06-27 02:18:57 +00:00
Roman Kennke
658f80e392 8355319: Update Manpage for Compact Object Headers (Production)
Reviewed-by: coleenp
Backport-of: 75ce44aa84
2025-06-26 12:32:36 +00:00
Martin Doerr
274a2dd729 8360405: [PPC64] some environments don't support mfdscr instruction
Reviewed-by: haosun, rrich
Backport-of: f71d64fbeb
2025-06-26 09:14:18 +00:00
Michael McMahon
a84946dde4 8359268: 3 JNI exception pending defect groups in 2 files
Reviewed-by: dfuchs, djelinski
Backport-of: 1fa090524a
2025-06-25 16:17:18 +00:00
Igor Veresov
fdb3e37c71 8359788: Internal Error: assert(get_instanceKlass()->is_loaded()) failed: must be at least loaded
Reviewed-by: shade
Backport-of: 5c4f92ba9a
2025-06-25 16:12:45 +00:00
Hannes Wallnöfer
80cb773b7e 8328848: Inaccuracy in the documentation of the -group option
Reviewed-by: liach
Backport-of: f8de5bc582
2025-06-25 05:40:18 +00:00
Hannes Wallnöfer
a576952039 8359024: Accessibility bugs in API documentation
Reviewed-by: liach
Backport-of: 9a726df373
2025-06-25 05:36:31 +00:00
Anthony Scarpino
b89f364842 8358099: PEM spec updates
Reviewed-by: weijun
Backport-of: 78158f30ae
2025-06-24 19:32:07 +00:00
Coleen Phillimore
0694cc1d52 8352075: Perf regression accessing fields
Reviewed-by: shade, iklam
Backport-of: e18277b470
2025-06-24 17:10:28 +00:00
Markus Grönlund
a3abaadc15 8360403: Disable constant pool ID assert during troubleshooting
Reviewed-by: egahlin
Backport-of: cbcf401170
2025-06-24 16:49:43 +00:00
Aleksey Shipilev
7cc1f82b84 8360042: GHA: Bump MSVC to 14.44
Reviewed-by: serb
Backport-of: 72679c94ee
2025-06-24 05:48:20 +00:00
William Kemper
636b56374e 8357550: GenShen crashes during freeze: assert(!chunk->requires_barriers()) failed
Reviewed-by: shade
Backport-of: 17cf49746d
2025-06-23 21:03:04 +00:00
Phil Race
fe9efb75b0 8358526: Clarify behavior of java.awt.HeadlessException constructed with no-args
Reviewed-by: honkar, tr, azvegint
Backport-of: 81985d422d
2025-06-23 17:05:48 +00:00
Erik Gahlin
ca6b165003 8359895: JFR: method-timing view doesn't work
Reviewed-by: mgronlun
Backport-of: 984d7f9cdf
2025-06-23 13:09:03 +00:00
Erik Gahlin
d5aa225451 8359242: JFR: Missing help text for method trace and timing
Reviewed-by: mgronlun
Backport-of: e57a214e2a
2025-06-23 12:22:30 +00:00
Matthias Bläsing
79a85df074 8353950: Clipboard interaction on Windows is unstable
8332271: Reading data from the clipboard from multiple threads crashes the JVM

Reviewed-by: prr
Backport-of: 92be7821f5
2025-06-20 21:49:26 +00:00
Jaikiran Pai
41928aed7d 8359709: java.net.HttpURLConnection sends unexpected "Host" request header in some cases after JDK-8344190
Reviewed-by: dfuchs
Backport-of: 57266064a7
2025-06-20 09:47:26 +00:00
Tobias Hartmann
3f6b0c69c3 8359386: Fix incorrect value for max_size of C2CodeStub when APX is used
Reviewed-by: mhaessig, epeter
Backport-of: b52af182c4
2025-06-20 08:29:10 +00:00
SendaoYan
36b185a930 8359402: Test CloseDescriptors.java should throw SkippedException when there is no lsof/sctp
Reviewed-by: jpai
Backport-of: a16d23557b
2025-06-20 06:26:52 +00:00
Erik Gahlin
c832f001e4 8359593: JFR: Instrumentation of java.lang.String corrupts recording
Reviewed-by: mgronlun
Backport-of: 2f2acb2e3f
2025-06-19 14:19:16 +00:00
Vladimir Kozlov
e5ac75a35b 8359646: C1 crash in AOTCodeAddressTable::add_C_string
Reviewed-by: shade, thartmann
Backport-of: 96070212ad
2025-06-19 13:41:06 +00:00
Erik Gahlin
b79ca5f03b 8359248: JFR: Help text for -XX:StartFlightRecording:report-on-exit should explain option can be repeated
Reviewed-by: mgronlun
Backport-of: fedd0a0ee3
2025-06-19 12:56:19 +00:00
Stuart Marks
5bcea92eaa 8338140: (str) Add notes to String.trim and String.isEmpty pointing to newer APIs
Reviewed-by: naoto, bpb, liach
Backport-of: 06d804a0f0
2025-06-17 20:45:27 +00:00
Damon Fenacci
cc4e9716ac 8358129: compiler/startup/StartupOutput.java runs into out of memory on Windows after JDK-8347406
Reviewed-by: shade
Backport-of: 534a8605e5
2025-06-17 13:10:06 +00:00
Roland Westrelin
46cfc1e194 8358334: C2/Shenandoah: incorrect execution with Unsafe
Reviewed-by: thartmann
Backport-of: 1fcede053c
2025-06-17 08:06:58 +00:00
Rajan Halade
ae71782e77 8359170: Add 2 TLS and 2 CS Sectigo roots
Reviewed-by: mullan
Backport-of: 9586817cea
2025-06-17 06:10:35 +00:00
Ioi Lam
753700182d 8355556: JVM crash because archived method handle intrinsics are not restored
Reviewed-by: shade
Backport-of: 366650a438
2025-06-17 04:36:41 +00:00
SendaoYan
eb727dcb51 8359272: Several vmTestbase/compact tests timed out on large memory machine
Reviewed-by: ayang
Backport-of: a0fb35c837
2025-06-17 00:43:52 +00:00
Johannes Bechberger
b6cacfcbc8 8359135: New test TestCPUTimeSampleThrottling fails intermittently
Reviewed-by: mdoerr
Backport-of: 3f0fef2c9c
2025-06-16 16:20:54 +00:00
Hamlin Li
d870a48880 8358892: RISC-V: jvm crash when running dacapo sunflow after JDK-8352504
8359045: RISC-V: construct test to verify invocation of C2_MacroAssembler::enc_cmove_cmp_fp => BoolTest::ge/gt

Reviewed-by: fyang
Backport-of: 9d060574e5
2025-06-16 11:18:32 +00:00
Fernando Guallini
2ea2f74f92 8358171: Additional code coverage for PEM API
Reviewed-by: rhalade, ascarpino
Backport-of: b2e7cda6a0
2025-06-16 09:54:18 +00:00
Alan Bateman
077ce2edc7 8358764: (sc) SocketChannel.close when thread blocked in read causes connection to be reset (win)
Reviewed-by: iris, jpai
Backport-of: e5196fc24d
2025-06-16 09:19:56 +00:00
Tobias Hartmann
2a3294571a 8359327: Incorrect AVX3Threshold results in code buffer overflows on APX targets
Reviewed-by: chagedorn
Backport-of: e7f63ba310
2025-06-16 08:48:49 +00:00
SendaoYan
3877746eb9 8359181: Error messages generated by configure --help after 8301197
Reviewed-by: ihse
Backport-of: 7b7136b4ec
2025-06-15 12:25:17 +00:00
Tobias Hartmann
3bd80fe3ba 8357782: JVM JIT Causes Static Initialization Order Issue
Reviewed-by: shade
Backport-of: e8ef93ae9d
2025-06-15 09:05:56 +00:00
Tobias Hartmann
03232d4a5d 8359200: Memory corruption in MStack::push
Reviewed-by: epeter, shade
Backport-of: ed39e17e34
2025-06-15 09:04:55 +00:00
Daniel Fuchs
4111730845 8359364: java/net/URL/EarlyOrDelayedParsing test fails intermittently
Reviewed-by: alanb
Backport-of: 57cabc6d74
2025-06-13 16:54:40 +00:00
Kevin Walls
74ea38e406 8358701: Remove misleading javax.management.remote API doc wording about JMX spec, and historic link to JMXMP
Reviewed-by: alanb
Backport-of: 66535fe26d
2025-06-13 14:28:14 +00:00
Tobias Hartmann
839a91e14b 8357982: Fix several failing BMI tests with -XX:+UseAPX
Reviewed-by: chagedorn
Backport-of: c98dffa186
2025-06-12 11:11:41 +00:00
Daniel Fuchs
aa4f79eaec 8358617: java/net/HttpURLConnection/HttpURLConnectionExpectContinueTest.java fails with 403 due to system proxies
Reviewed-by: jpai
Backport-of: a377773fa7
2025-06-11 16:22:34 +00:00
Stuart Marks
c7df72ff0f 8358809: Improve link to stdin.encoding from java.lang.IO
Reviewed-by: naoto
Backport-of: d024f58e61
2025-06-07 00:56:45 +00:00
Rajan Halade
80e066e733 8345414: Google CAInterop test failures
Reviewed-by: weijun
Backport-of: 8e9ba788ae
2025-06-06 21:31:33 +00:00
347 changed files with 9303 additions and 1924 deletions

View File

@@ -310,7 +310,7 @@ jobs:
uses: ./.github/workflows/build-windows.yml
with:
platform: windows-x64
msvc-toolset-version: '14.43'
msvc-toolset-version: '14.44'
msvc-toolset-architecture: 'x86.x64'
configure-arguments: ${{ github.event.inputs.configure-arguments }}
make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -322,7 +322,7 @@ jobs:
uses: ./.github/workflows/build-windows.yml
with:
platform: windows-aarch64
msvc-toolset-version: '14.43'
msvc-toolset-version: '14.44'
msvc-toolset-architecture: 'arm64'
make-target: 'hotspot'
extra-conf-options: '--openjdk-target=aarch64-unknown-cygwin'

View File

@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright (c) 2012, 2023, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2012, 2025, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -366,7 +366,7 @@ EOT
# Print additional help, e.g. a list of toolchains and JVM features.
# This must be done by the autoconf script.
( CONFIGURE_PRINT_ADDITIONAL_HELP=true . $generated_script PRINTF=printf )
( CONFIGURE_PRINT_ADDITIONAL_HELP=true . $generated_script PRINTF=printf ECHO=echo )
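# Editor's note (assumption): the generated autoconf script presumably references $ECHO
# in addition to $PRINTF when printing this additional help, so passing ECHO=echo into
# the subshell avoids the errors described in "8359181: Error messages generated by
# configure --help after 8301197" from the commit list above.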
cat <<EOT

View File

@@ -456,13 +456,13 @@ SliderDemo.horizontal=Horizontal
SliderDemo.vertical=Vertikal
SliderDemo.plain=Einfach
SliderDemo.a_plain_slider=Ein einfacher Schieberegler
SliderDemo.majorticks=Grobteilungen
SliderDemo.majorticksdescription=Ein Schieberegler mit Grobteilungsmarkierungen
SliderDemo.ticks=Feinteilungen, Teilungen zum Einrasten und Labels
SliderDemo.minorticks=Feinteilungen
SliderDemo.minorticksdescription=Ein Schieberegler mit Grob- und Feinteilungen, mit Teilungen, in die der Schieberegler einrastet, wobei einige Teilungen mit einem sichtbaren Label versehen sind
SliderDemo.majorticks=Hauptteilstriche
SliderDemo.majorticksdescription=Ein Schieberegler mit Hauptteilstrichen
SliderDemo.ticks=Hilfsteilstriche, zum Einrasten und Beschriften
SliderDemo.minorticks=Hilfsteilstriche
SliderDemo.minorticksdescription=Ein Schieberegler mit Haupt- und Hilfsteilstrichen, in die der Schieberegler einrastet, wobei einige Teilstriche mit einer sichtbaren Beschriftung versehen sind
SliderDemo.disabled=Deaktiviert
SliderDemo.disableddescription=Ein Schieberegler mit Grob- und Feinteilungen, der nicht aktiviert ist (kann nicht bearbeitet werden)
SliderDemo.disableddescription=Ein Schieberegler mit Haupt- und Hilfsteilstrichen, der nicht aktiviert ist (kann nicht bearbeitet werden)
### SplitPane Demo ###

View File

@@ -8888,13 +8888,8 @@ instruct TailCalljmpInd(IPRegP jump_target, inline_cache_regP method_ptr) %{
match(TailCall jump_target method_ptr);
ins_cost(CALL_COST);
format %{ "MOV Rexception_pc, LR\n\t"
"jump $jump_target \t! $method_ptr holds method" %}
format %{ "jump $jump_target \t! $method_ptr holds method" %}
ins_encode %{
__ mov(Rexception_pc, LR); // this is used only to call
// StubRoutines::forward_exception_entry()
// which expects PC of exception in
// R5. FIXME?
__ jump($jump_target$$Register);
%}
ins_pipe(tail_call);
@@ -8939,8 +8934,10 @@ instruct ForwardExceptionjmp()
match(ForwardException);
ins_cost(CALL_COST);
format %{ "b forward_exception_stub" %}
format %{ "MOV Rexception_pc, LR\n\t"
"b forward_exception_entry" %}
ins_encode %{
__ mov(Rexception_pc, LR);
// OK to trash Rtemp, because Rtemp is used by stub
__ jump(StubRoutines::forward_exception_entry(), relocInfo::runtime_call_type, Rtemp);
%}

View File

@@ -3928,8 +3928,10 @@ void MacroAssembler::kernel_crc32_vpmsum_aligned(Register crc, Register buf, Reg
Label L_outer_loop, L_inner_loop, L_last;
// Set DSCR pre-fetch to deepest.
load_const_optimized(t0, VM_Version::_dscr_val | 7);
mtdscr(t0);
if (VM_Version::has_mfdscr()) {
load_const_optimized(t0, VM_Version::_dscr_val | 7);
mtdscr(t0);
}
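// Editor's note: this has_mfdscr() guard (repeated in the hunks below) appears to be
// needed because some environments do not implement the DSCR move instructions; see
// "8360405: [PPC64] some environments don't support mfdscr instruction" in the commit
// list and the "may be unavailable (QEMU)" remark in vm_version_ppc.hpp further down.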
mtvrwz(VCRC, crc); // crc lives in VCRC, now
@@ -4073,8 +4075,10 @@ void MacroAssembler::kernel_crc32_vpmsum_aligned(Register crc, Register buf, Reg
// ********** Main loop end **********
// Restore DSCR pre-fetch value.
load_const_optimized(t0, VM_Version::_dscr_val);
mtdscr(t0);
if (VM_Version::has_mfdscr()) {
load_const_optimized(t0, VM_Version::_dscr_val);
mtdscr(t0);
}
// ********** Simple loop for remaining 16 byte blocks **********
{

View File

@@ -952,8 +952,10 @@ class StubGenerator: public StubCodeGenerator {
address start_pc = __ pc();
Register tmp1 = R6_ARG4;
// probably copy stub would have changed value reset it.
__ load_const_optimized(tmp1, VM_Version::_dscr_val);
__ mtdscr(tmp1);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp1, VM_Version::_dscr_val);
__ mtdscr(tmp1);
}
__ li(R3_RET, 0); // return 0
__ blr();
return start_pc;
@@ -1070,9 +1072,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// If supported set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. Not 16-byte align as
@@ -1092,8 +1095,10 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_10); // Dec CTR and loop if not zero.
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
} // FasterArrayCopy
@@ -1344,8 +1349,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// If supported set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. It's not aligned 16-byte
@@ -1365,8 +1372,11 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_9); // Dec CTR and loop if not zero.
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
} // FasterArrayCopy
__ bind(l_6);
@@ -1527,9 +1537,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// Set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. Not 16-byte align as
@@ -1549,9 +1560,10 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_7); // Dec CTR and loop if not zero.
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
} // FasterArrayCopy
@@ -1672,9 +1684,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// Set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. Not 16-byte align as
@@ -1694,8 +1707,10 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_4);
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
__ cmpwi(CR0, R5_ARG3, 0);
__ beq(CR0, l_6);
@@ -1788,9 +1803,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// Set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. Not 16-byte align as
@@ -1810,8 +1826,10 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_5); // Dec CTR and loop if not zero.
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
} // FasterArrayCopy
@@ -1910,9 +1928,10 @@ class StubGenerator: public StubCodeGenerator {
__ dcbt(R3_ARG1, 0);
// Set DSCR pre-fetch to deepest.
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val | 7);
__ mtdscr(tmp2);
}
__ li(tmp1, 16);
// Backbranch target aligned to 32-byte. Not 16-byte align as
@@ -1932,8 +1951,10 @@ class StubGenerator: public StubCodeGenerator {
__ bdnz(l_4);
// Restore DSCR pre-fetch value.
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
if (VM_Version::has_mfdscr()) {
__ load_const_optimized(tmp2, VM_Version::_dscr_val);
__ mtdscr(tmp2);
}
__ cmpwi(CR0, R5_ARG3, 0);
__ beq(CR0, l_1);

View File

@@ -80,7 +80,9 @@ void VM_Version::initialize() {
"%zu on this machine", PowerArchitecturePPC64);
// Power 8: Configure Data Stream Control Register.
config_dscr();
if (VM_Version::has_mfdscr()) {
config_dscr();
}
if (!UseSIGTRAP) {
MSG(TrapBasedICMissChecks);
@@ -170,7 +172,8 @@ void VM_Version::initialize() {
// Create and print feature-string.
char buf[(num_features+1) * 16]; // Max 16 chars per feature.
jio_snprintf(buf, sizeof(buf),
"ppc64 sha aes%s%s",
"ppc64 sha aes%s%s%s",
(has_mfdscr() ? " mfdscr" : ""),
(has_darn() ? " darn" : ""),
(has_brw() ? " brw" : "")
// Make sure number of %s matches num_features!
@@ -488,6 +491,7 @@ void VM_Version::determine_features() {
uint32_t *code = (uint32_t *)a->pc();
// Keep R3_ARG1 unmodified, it contains &field (see below).
// Keep R4_ARG2 unmodified, it contains offset = 0 (see below).
a->mfdscr(R0);
a->darn(R7);
a->brw(R5, R6);
a->blr();
@@ -524,6 +528,7 @@ void VM_Version::determine_features() {
// determine which instructions are legal.
int feature_cntr = 0;
if (code[feature_cntr++]) features |= mfdscr_m;
if (code[feature_cntr++]) features |= darn_m;
if (code[feature_cntr++]) features |= brw_m;

View File

@@ -32,12 +32,14 @@
class VM_Version: public Abstract_VM_Version {
protected:
enum Feature_Flag {
mfdscr,
darn,
brw,
num_features // last entry to count features
};
enum Feature_Flag_Set {
unknown_m = 0,
mfdscr_m = (1 << mfdscr ),
darn_m = (1 << darn ),
brw_m = (1 << brw ),
all_features_m = (unsigned long)-1
@@ -67,8 +69,9 @@ public:
static bool is_determine_features_test_running() { return _is_determine_features_test_running; }
// CPU instruction support
static bool has_darn() { return (_features & darn_m) != 0; }
static bool has_brw() { return (_features & brw_m) != 0; }
static bool has_mfdscr() { return (_features & mfdscr_m) != 0; } // Power8, but may be unavailable (QEMU)
static bool has_darn() { return (_features & darn_m) != 0; }
static bool has_brw() { return (_features & brw_m) != 0; }
// Assembler testing
static void allow_all();

View File

@@ -2170,15 +2170,13 @@ void C2_MacroAssembler::enc_cmove_cmp_fp(int cmpFlag, FloatRegister op1, FloatRe
cmov_cmp_fp_le(op1, op2, dst, src, is_single);
break;
case BoolTest::ge:
assert(false, "Should go to BoolTest::le case");
ShouldNotReachHere();
cmov_cmp_fp_ge(op1, op2, dst, src, is_single);
break;
case BoolTest::lt:
cmov_cmp_fp_lt(op1, op2, dst, src, is_single);
break;
case BoolTest::gt:
assert(false, "Should go to BoolTest::lt case");
ShouldNotReachHere();
cmov_cmp_fp_gt(op1, op2, dst, src, is_single);
break;
default:
assert(false, "unsupported compare condition");

View File

@@ -1268,12 +1268,19 @@ void MacroAssembler::cmov_gtu(Register cmp1, Register cmp2, Register dst, Regist
}
// ----------- cmove, compare float -----------
//
// For CmpF/D + CMoveI/L, ordered ones are quite straight and simple,
// so, just list behaviour of unordered ones as follow.
//
// Set dst (CMoveI (Binary cop (CmpF/D op1 op2)) (Binary dst src))
// (If one or both inputs to the compare are NaN, then)
// 1. (op1 lt op2) => true => CMove: dst = src
// 2. (op1 le op2) => true => CMove: dst = src
// 3. (op1 gt op2) => false => CMove: dst = dst
// 4. (op1 ge op2) => false => CMove: dst = dst
// 5. (op1 eq op2) => false => CMove: dst = dst
// 6. (op1 ne op2) => true => CMove: dst = src
// Move src to dst only if cmp1 == cmp2,
// otherwise leave dst unchanged, including the case where one of them is NaN.
// Clarification:
// java code : cmp1 != cmp2 ? dst : src
// transformed to : CMove dst, (cmp1 eq cmp2), dst, src
void MacroAssembler::cmov_cmp_fp_eq(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
@@ -1289,7 +1296,7 @@ void MacroAssembler::cmov_cmp_fp_eq(FloatRegister cmp1, FloatRegister cmp2, Regi
Label no_set;
if (is_single) {
// jump if cmp1 != cmp2, including the case of NaN
// not jump (i.e. move src to dst) if cmp1 == cmp2
// fallthrough (i.e. move src to dst) if cmp1 == cmp2
float_bne(cmp1, cmp2, no_set);
} else {
double_bne(cmp1, cmp2, no_set);
@@ -1298,11 +1305,6 @@ void MacroAssembler::cmov_cmp_fp_eq(FloatRegister cmp1, FloatRegister cmp2, Regi
bind(no_set);
}
// Keep dst unchanged only if cmp1 == cmp2,
// otherwise move src to dst, including the case where one of them is NaN.
// Clarification:
// java code : cmp1 == cmp2 ? dst : src
// transformed to : CMove dst, (cmp1 ne cmp2), dst, src
void MacroAssembler::cmov_cmp_fp_ne(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
@@ -1318,7 +1320,7 @@ void MacroAssembler::cmov_cmp_fp_ne(FloatRegister cmp1, FloatRegister cmp2, Regi
Label no_set;
if (is_single) {
// jump if cmp1 == cmp2
// not jump (i.e. move src to dst) if cmp1 != cmp2, including the case of NaN
// fallthrough (i.e. move src to dst) if cmp1 != cmp2, including the case of NaN
float_beq(cmp1, cmp2, no_set);
} else {
double_beq(cmp1, cmp2, no_set);
@@ -1327,14 +1329,6 @@ void MacroAssembler::cmov_cmp_fp_ne(FloatRegister cmp1, FloatRegister cmp2, Regi
bind(no_set);
}
// When cmp1 <= cmp2 or any of them is NaN then dst = src, otherwise, dst = dst
// Clarification
// scenario 1:
// java code : cmp2 < cmp1 ? dst : src
// transformed to : CMove dst, (cmp1 le cmp2), dst, src
// scenario 2:
// java code : cmp1 > cmp2 ? dst : src
// transformed to : CMove dst, (cmp1 le cmp2), dst, src
void MacroAssembler::cmov_cmp_fp_le(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
@@ -1350,7 +1344,7 @@ void MacroAssembler::cmov_cmp_fp_le(FloatRegister cmp1, FloatRegister cmp2, Regi
Label no_set;
if (is_single) {
// jump if cmp1 > cmp2
// not jump (i.e. move src to dst) if cmp1 <= cmp2 or either is NaN
// fallthrough (i.e. move src to dst) if cmp1 <= cmp2 or either is NaN
float_bgt(cmp1, cmp2, no_set);
} else {
double_bgt(cmp1, cmp2, no_set);
@@ -1359,14 +1353,30 @@ void MacroAssembler::cmov_cmp_fp_le(FloatRegister cmp1, FloatRegister cmp2, Regi
bind(no_set);
}
// When cmp1 < cmp2 or any of them is NaN then dst = src, otherwise, dst = dst
// Clarification
// scenario 1:
// java code : cmp2 <= cmp1 ? dst : src
// transformed to : CMove dst, (cmp1 lt cmp2), dst, src
// scenario 2:
// java code : cmp1 >= cmp2 ? dst : src
// transformed to : CMove dst, (cmp1 lt cmp2), dst, src
void MacroAssembler::cmov_cmp_fp_ge(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
fle_s(t0, cmp2, cmp1);
} else {
fle_d(t0, cmp2, cmp1);
}
czero_nez(dst, dst, t0);
czero_eqz(t0 , src, t0);
orr(dst, dst, t0);
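// Editor's note: t0 = (cmp2 <= cmp1), and the fle compare yields 0 for NaN inputs;
// czero_nez clears dst when t0 != 0, czero_eqz clears the copy of src when t0 == 0,
// so the orr leaves dst = t0 ? src : dst, matching the branch-based path below.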
return;
}
Label no_set;
if (is_single) {
// jump if cmp1 < cmp2 or either is NaN
// fallthrough (i.e. move src to dst) if cmp1 >= cmp2
float_blt(cmp1, cmp2, no_set, false, true);
} else {
double_blt(cmp1, cmp2, no_set, false, true);
}
mv(dst, src);
bind(no_set);
}
void MacroAssembler::cmov_cmp_fp_lt(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
@@ -1382,7 +1392,7 @@ void MacroAssembler::cmov_cmp_fp_lt(FloatRegister cmp1, FloatRegister cmp2, Regi
Label no_set;
if (is_single) {
// jump if cmp1 >= cmp2
// not jump (i.e. move src to dst) if cmp1 < cmp2 or either is NaN
// fallthrough (i.e. move src to dst) if cmp1 < cmp2 or either is NaN
float_bge(cmp1, cmp2, no_set);
} else {
double_bge(cmp1, cmp2, no_set);
@@ -1391,6 +1401,30 @@ void MacroAssembler::cmov_cmp_fp_lt(FloatRegister cmp1, FloatRegister cmp2, Regi
bind(no_set);
}
void MacroAssembler::cmov_cmp_fp_gt(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single) {
if (UseZicond) {
if (is_single) {
flt_s(t0, cmp2, cmp1);
} else {
flt_d(t0, cmp2, cmp1);
}
czero_nez(dst, dst, t0);
czero_eqz(t0 , src, t0);
orr(dst, dst, t0);
return;
}
Label no_set;
if (is_single) {
// jump if cmp1 <= cmp2 or either is NaN
// fallthrough (i.e. move src to dst) if cmp1 > cmp2
float_ble(cmp1, cmp2, no_set, false, true);
} else {
double_ble(cmp1, cmp2, no_set, false, true);
}
mv(dst, src);
bind(no_set);
}
// Float compare branch instructions
#define INSN(NAME, FLOATCMP, BRANCH) \

View File

@@ -660,7 +660,9 @@ class MacroAssembler: public Assembler {
void cmov_cmp_fp_eq(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
void cmov_cmp_fp_ne(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
void cmov_cmp_fp_le(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
void cmov_cmp_fp_ge(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
void cmov_cmp_fp_lt(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
void cmov_cmp_fp_gt(FloatRegister cmp1, FloatRegister cmp2, Register dst, Register src, bool is_single);
public:
// We try to follow risc-v asm mnemonics.

View File

@@ -15681,6 +15681,8 @@ void Assembler::pusha_uncached() { // 64bit
// Push pair of original stack pointer along with remaining registers
// at 16B aligned boundary.
push2p(rax, r31);
// Restore the original contents of RAX register.
movq(rax, Address(rax));
push2p(r30, r29);
push2p(r28, r27);
push2p(r26, r25);

View File

@@ -4655,6 +4655,7 @@ static void convertF2I_slowpath(C2_MacroAssembler& masm, C2GeneralStub<Register,
__ subptr(rsp, 8);
__ movdbl(Address(rsp), src);
__ call(RuntimeAddress(target));
// APX REX2 encoding for pop(dst) increases the stub size by 1 byte.
__ pop(dst);
__ jmp(stub.continuation());
#undef __
@@ -4687,7 +4688,9 @@ void C2_MacroAssembler::convertF2I(BasicType dst_bt, BasicType src_bt, Register
}
}
auto stub = C2CodeStub::make<Register, XMMRegister, address>(dst, src, slowpath_target, 23, convertF2I_slowpath);
// Using the APX extended general purpose registers increases the instruction encoding size by 1 byte.
int max_size = 23 + (UseAPX ? 1 : 0);
auto stub = C2CodeStub::make<Register, XMMRegister, address>(dst, src, slowpath_target, max_size, convertF2I_slowpath);
jcc(Assembler::equal, stub->entry());
bind(stub->continuation());
}

View File

@@ -353,7 +353,7 @@ void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler* masm,
// The rest is saved with the optimized path
uint num_saved_regs = 4 + (dst != rax ? 1 : 0) + 4;
uint num_saved_regs = 4 + (dst != rax ? 1 : 0) + 4 + (UseAPX ? 16 : 0);
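// Editor's note: the count above is 4 + 4 base save slots, one extra slot when dst is
// not rax, and 16 additional slots for the APX extended registers r16-r31 that are
// saved below when UseAPX is set and restored in reverse order further down.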
__ subptr(rsp, num_saved_regs * wordSize);
uint slot = num_saved_regs;
if (dst != rax) {
@@ -367,6 +367,25 @@ void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler* masm,
__ movptr(Address(rsp, (--slot) * wordSize), r9);
__ movptr(Address(rsp, (--slot) * wordSize), r10);
__ movptr(Address(rsp, (--slot) * wordSize), r11);
// Save APX extended registers r16-r31 if enabled
if (UseAPX) {
__ movptr(Address(rsp, (--slot) * wordSize), r16);
__ movptr(Address(rsp, (--slot) * wordSize), r17);
__ movptr(Address(rsp, (--slot) * wordSize), r18);
__ movptr(Address(rsp, (--slot) * wordSize), r19);
__ movptr(Address(rsp, (--slot) * wordSize), r20);
__ movptr(Address(rsp, (--slot) * wordSize), r21);
__ movptr(Address(rsp, (--slot) * wordSize), r22);
__ movptr(Address(rsp, (--slot) * wordSize), r23);
__ movptr(Address(rsp, (--slot) * wordSize), r24);
__ movptr(Address(rsp, (--slot) * wordSize), r25);
__ movptr(Address(rsp, (--slot) * wordSize), r26);
__ movptr(Address(rsp, (--slot) * wordSize), r27);
__ movptr(Address(rsp, (--slot) * wordSize), r28);
__ movptr(Address(rsp, (--slot) * wordSize), r29);
__ movptr(Address(rsp, (--slot) * wordSize), r30);
__ movptr(Address(rsp, (--slot) * wordSize), r31);
}
// r12-r15 are callee saved in all calling conventions
assert(slot == 0, "must use all slots");
@@ -398,6 +417,25 @@ void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler* masm,
__ super_call_VM_leaf(CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_phantom), arg0, arg1);
}
// Restore APX extended registers r31-r16 if previously saved
if (UseAPX) {
__ movptr(r31, Address(rsp, (slot++) * wordSize));
__ movptr(r30, Address(rsp, (slot++) * wordSize));
__ movptr(r29, Address(rsp, (slot++) * wordSize));
__ movptr(r28, Address(rsp, (slot++) * wordSize));
__ movptr(r27, Address(rsp, (slot++) * wordSize));
__ movptr(r26, Address(rsp, (slot++) * wordSize));
__ movptr(r25, Address(rsp, (slot++) * wordSize));
__ movptr(r24, Address(rsp, (slot++) * wordSize));
__ movptr(r23, Address(rsp, (slot++) * wordSize));
__ movptr(r22, Address(rsp, (slot++) * wordSize));
__ movptr(r21, Address(rsp, (slot++) * wordSize));
__ movptr(r20, Address(rsp, (slot++) * wordSize));
__ movptr(r19, Address(rsp, (slot++) * wordSize));
__ movptr(r18, Address(rsp, (slot++) * wordSize));
__ movptr(r17, Address(rsp, (slot++) * wordSize));
__ movptr(r16, Address(rsp, (slot++) * wordSize));
}
__ movptr(r11, Address(rsp, (slot++) * wordSize));
__ movptr(r10, Address(rsp, (slot++) * wordSize));
__ movptr(r9, Address(rsp, (slot++) * wordSize));

View File

@@ -239,7 +239,7 @@
do_arch_blob, \
do_arch_entry, \
do_arch_entry_init) \
do_arch_blob(final, 31000 \
do_arch_blob(final, 33000 \
WINDOWS_ONLY(+22000) ZGC_ONLY(+20000)) \
#endif // CPU_X86_STUBDECLARATIONS_HPP

View File

@@ -46,6 +46,12 @@
//
/******************************************************************************/
/* Represents 0x7FFFFFFFFFFFFFFF double precision in lower 64 bits*/
ATTRIBUTE_ALIGNED(16) static const juint _ABS_MASK[] =
{
4294967295, 2147483647, 0, 0
};
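// Editor's note: 4294967295 = 0xFFFFFFFF and 2147483647 = 0x7FFFFFFF, so in the juint
// little-endian layout the low 64 bits form 0x7FFFFFFFFFFFFFFF, i.e. every bit of a
// double except the sign bit; andpd against this mask (used below) computes |x|.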
ATTRIBUTE_ALIGNED(4) static const juint _SIG_MASK[] =
{
0, 1032192
@@ -188,10 +194,10 @@ address StubGenerator::generate_libmCbrt() {
StubCodeMark mark(this, stub_id);
address start = __ pc();
Label L_2TAG_PACKET_0_0_1, L_2TAG_PACKET_1_0_1, L_2TAG_PACKET_2_0_1, L_2TAG_PACKET_3_0_1;
Label L_2TAG_PACKET_4_0_1, L_2TAG_PACKET_5_0_1, L_2TAG_PACKET_6_0_1;
Label L_2TAG_PACKET_0_0_1, L_2TAG_PACKET_1_0_1, L_2TAG_PACKET_2_0_1;
Label B1_1, B1_2, B1_4;
address ABS_MASK = (address)_ABS_MASK;
address SIG_MASK = (address)_SIG_MASK;
address EXP_MASK = (address)_EXP_MASK;
address EXP_MSK2 = (address)_EXP_MSK2;
@@ -208,8 +214,12 @@ address StubGenerator::generate_libmCbrt() {
__ enter(); // required for proper stackwalking of RuntimeStub frame
__ bind(B1_1);
__ subq(rsp, 24);
__ movsd(Address(rsp), xmm0);
__ ucomisd(xmm0, ExternalAddress(ZERON), r11 /*rscratch*/);
__ jcc(Assembler::equal, L_2TAG_PACKET_1_0_1); // Branch only if x is +/- zero or NaN
__ movq(xmm1, xmm0);
__ andpd(xmm1, ExternalAddress(ABS_MASK), r11 /*rscratch*/);
__ ucomisd(xmm1, ExternalAddress(INF), r11 /*rscratch*/);
__ jcc(Assembler::equal, B1_4); // Branch only if x is +/- INF
__ bind(B1_2);
__ movq(xmm7, xmm0);
@@ -228,8 +238,6 @@ address StubGenerator::generate_libmCbrt() {
__ andl(rdx, rax);
__ cmpl(rdx, 0);
__ jcc(Assembler::equal, L_2TAG_PACKET_0_0_1); // Branch only if |x| is denormalized
__ cmpl(rdx, 524032);
__ jcc(Assembler::equal, L_2TAG_PACKET_1_0_1); // Branch only if |x| is INF or NaN
__ shrl(rdx, 8);
__ shrq(r9, 8);
__ andpd(xmm2, xmm0);
@@ -297,8 +305,6 @@ address StubGenerator::generate_libmCbrt() {
__ andl(rdx, rax);
__ shrl(rdx, 8);
__ shrq(r9, 8);
__ cmpl(rdx, 0);
__ jcc(Assembler::equal, L_2TAG_PACKET_3_0_1); // Branch only if |x| is zero
__ andpd(xmm2, xmm0);
__ andpd(xmm0, xmm5);
__ orpd(xmm3, xmm2);
@@ -322,41 +328,10 @@ address StubGenerator::generate_libmCbrt() {
__ psllq(xmm7, 52);
__ jmp(L_2TAG_PACKET_2_0_1);
__ bind(L_2TAG_PACKET_3_0_1);
__ cmpq(r9, 0);
__ jcc(Assembler::notEqual, L_2TAG_PACKET_4_0_1); // Branch only if x is negative zero
__ xorpd(xmm0, xmm0);
__ jmp(B1_4);
__ bind(L_2TAG_PACKET_4_0_1);
__ movsd(xmm0, ExternalAddress(ZERON), r11 /*rscratch*/);
__ jmp(B1_4);
__ bind(L_2TAG_PACKET_1_0_1);
__ movl(rax, Address(rsp, 4));
__ movl(rdx, Address(rsp));
__ movl(rcx, rax);
__ andl(rcx, 2147483647);
__ cmpl(rcx, 2146435072);
__ jcc(Assembler::above, L_2TAG_PACKET_5_0_1); // Branch only if |x| is NaN
__ cmpl(rdx, 0);
__ jcc(Assembler::notEqual, L_2TAG_PACKET_5_0_1); // Branch only if |x| is NaN
__ cmpl(rax, 2146435072);
__ jcc(Assembler::notEqual, L_2TAG_PACKET_6_0_1); // Branch only if x is negative INF
__ movsd(xmm0, ExternalAddress(INF), r11 /*rscratch*/);
__ jmp(B1_4);
__ bind(L_2TAG_PACKET_6_0_1);
__ movsd(xmm0, ExternalAddress(NEG_INF), r11 /*rscratch*/);
__ jmp(B1_4);
__ bind(L_2TAG_PACKET_5_0_1);
__ movsd(xmm0, Address(rsp));
__ addsd(xmm0, xmm0);
__ movq(Address(rsp, 8), xmm0);
__ bind(B1_4);
__ addq(rsp, 24);
__ leave(); // required for proper stackwalking of RuntimeStub frame
__ ret(0);

View File

@@ -440,7 +440,6 @@ class VM_Version_StubGenerator: public StubCodeGenerator {
__ andl(rax, Address(rbp, in_bytes(VM_Version::xem_xcr0_offset()))); // xcr0 bits apx_f
__ jcc(Assembler::equal, vector_save_restore);
#ifndef PRODUCT
bool save_apx = UseAPX;
VM_Version::set_apx_cpuFeatures();
UseAPX = true;
@@ -457,7 +456,6 @@ class VM_Version_StubGenerator: public StubCodeGenerator {
__ movq(Address(rsi, 8), r31);
UseAPX = save_apx;
#endif
__ bind(vector_save_restore);
//
// Check if OS has enabled XGETBV instruction to access XCR0
@@ -1022,8 +1020,6 @@ void VM_Version::get_processor_features() {
if (UseAPX && !apx_supported) {
warning("UseAPX is not supported on this CPU, setting it to false");
FLAG_SET_DEFAULT(UseAPX, false);
} else if (FLAG_IS_DEFAULT(UseAPX)) {
FLAG_SET_DEFAULT(UseAPX, apx_supported ? true : false);
}
if (!UseAPX) {
@@ -2111,7 +2107,7 @@ bool VM_Version::is_intel_cascade_lake() {
// has improved implementation of 64-byte load/stores and so the default
// threshold is set to 0 for these platforms.
int VM_Version::avx3_threshold() {
return (is_intel_family_core() &&
return (is_intel_server_family() &&
supports_serialize() &&
FLAG_IS_DEFAULT(AVX3Threshold)) ? 0 : AVX3Threshold;
}
@@ -3151,17 +3147,11 @@ bool VM_Version::os_supports_apx_egprs() {
if (!supports_apx_f()) {
return false;
}
// Enable APX support for product builds after
// completion of planned features listed in JDK-8329030.
#if !defined(PRODUCT)
if (_cpuid_info.apx_save[0] != egpr_test_value() ||
_cpuid_info.apx_save[1] != egpr_test_value()) {
return false;
}
return true;
#else
return false;
#endif
}
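// Editor's note: removing the !PRODUCT guard above makes the EGPR save/restore check
// run in all builds, consistent with "8360776: Disable Intel APX by default and enable
// it with -XX:+UnlockExperimentalVMOptions -XX:+UseAPX in all builds" in the commit list.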
uint VM_Version::cores_per_cpu() {

View File

@@ -10527,7 +10527,8 @@ instruct xorI_rReg_im1_ndd(rRegI dst, rRegI src, immI_M1 imm)
// Xor Register with Immediate
instruct xorI_rReg_imm(rRegI dst, immI src, rFlagsReg cr)
%{
predicate(!UseAPX);
// Strict predicate check to make selection of xorI_rReg_im1 cost agnostic if immI src is -1.
predicate(!UseAPX && n->in(2)->bottom_type()->is_int()->get_con() != -1);
match(Set dst (XorI dst src));
effect(KILL cr);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
@@ -10541,7 +10542,8 @@ instruct xorI_rReg_imm(rRegI dst, immI src, rFlagsReg cr)
instruct xorI_rReg_rReg_imm_ndd(rRegI dst, rRegI src1, immI src2, rFlagsReg cr)
%{
predicate(UseAPX);
// Strict predicate check to make selection of xorI_rReg_im1_ndd cost agnostic if immI src2 is -1.
predicate(UseAPX && n->in(2)->bottom_type()->is_int()->get_con() != -1);
match(Set dst (XorI src1 src2));
effect(KILL cr);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
@@ -10559,6 +10561,7 @@ instruct xorI_rReg_mem_imm_ndd(rRegI dst, memory src1, immI src2, rFlagsReg cr)
predicate(UseAPX);
match(Set dst (XorI (LoadI src1) src2));
effect(KILL cr);
ins_cost(150);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
format %{ "exorl $dst, $src1, $src2\t# int ndd" %}
@@ -11201,7 +11204,8 @@ instruct xorL_rReg_im1_ndd(rRegL dst,rRegL src, immL_M1 imm)
// Xor Register with Immediate
instruct xorL_rReg_imm(rRegL dst, immL32 src, rFlagsReg cr)
%{
predicate(!UseAPX);
// Strict predicate check to make selection of xorL_rReg_im1 cost agnostic if immL32 src is -1.
predicate(!UseAPX && n->in(2)->bottom_type()->is_long()->get_con() != -1L);
match(Set dst (XorL dst src));
effect(KILL cr);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
@@ -11215,7 +11219,8 @@ instruct xorL_rReg_imm(rRegL dst, immL32 src, rFlagsReg cr)
instruct xorL_rReg_rReg_imm(rRegL dst, rRegL src1, immL32 src2, rFlagsReg cr)
%{
predicate(UseAPX);
// Strict predicate check to make selection of xorL_rReg_im1_ndd cost agnostic if immL32 src2 is -1.
predicate(UseAPX && n->in(2)->bottom_type()->is_long()->get_con() != -1L);
match(Set dst (XorL src1 src2));
effect(KILL cr);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
@@ -11234,6 +11239,7 @@ instruct xorL_rReg_mem_imm(rRegL dst, memory src1, immL32 src2, rFlagsReg cr)
match(Set dst (XorL (LoadL src1) src2));
effect(KILL cr);
flag(PD::Flag_sets_sign_flag, PD::Flag_sets_zero_flag, PD::Flag_sets_parity_flag, PD::Flag_clears_overflow_flag, PD::Flag_clears_carry_flag);
ins_cost(150);
format %{ "exorq $dst, $src1, $src2\t# long ndd" %}
ins_encode %{

View File

@@ -2623,7 +2623,6 @@ LONG WINAPI topLevelExceptionFilter(struct _EXCEPTION_POINTERS* exceptionInfo) {
return Handle_Exception(exceptionInfo, VM_Version::cpuinfo_cont_addr());
}
#if !defined(PRODUCT)
if ((exception_code == EXCEPTION_ACCESS_VIOLATION) &&
VM_Version::is_cpuinfo_segv_addr_apx(pc)) {
// Verify that OS save/restore APX registers.
@@ -2631,7 +2630,6 @@ LONG WINAPI topLevelExceptionFilter(struct _EXCEPTION_POINTERS* exceptionInfo) {
return Handle_Exception(exceptionInfo, VM_Version::cpuinfo_cont_addr_apx());
}
#endif
#endif
#ifdef CAN_SHOW_REGISTERS_ON_ASSERT
if (VMError::was_assert_poison_crash(exception_record)) {

View File

@@ -429,13 +429,11 @@ bool PosixSignals::pd_hotspot_signal_handler(int sig, siginfo_t* info,
stub = VM_Version::cpuinfo_cont_addr();
}
#if !defined(PRODUCT) && defined(_LP64)
if ((sig == SIGSEGV || sig == SIGBUS) && VM_Version::is_cpuinfo_segv_addr_apx(pc)) {
// Verify that OS save/restore APX registers.
stub = VM_Version::cpuinfo_cont_addr_apx();
VM_Version::clear_apx_test_state();
}
#endif
// We test if stub is already set (by the stack overflow code
// above) so it is not overwritten by the code that follows. This

View File

@@ -255,13 +255,11 @@ bool PosixSignals::pd_hotspot_signal_handler(int sig, siginfo_t* info,
stub = VM_Version::cpuinfo_cont_addr();
}
#if !defined(PRODUCT) && defined(_LP64)
if ((sig == SIGSEGV) && VM_Version::is_cpuinfo_segv_addr_apx(pc)) {
// Verify that OS save/restore APX registers.
stub = VM_Version::cpuinfo_cont_addr_apx();
VM_Version::clear_apx_test_state();
}
#endif
if (thread->thread_state() == _thread_in_Java) {
// Java thread running in Java code => find exception handler if any

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 1999, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1999, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -187,7 +187,13 @@ class ValueNumberingVisitor: public InstructionVisitor {
void do_Convert (Convert* x) { /* nothing to do */ }
void do_NullCheck (NullCheck* x) { /* nothing to do */ }
void do_TypeCast (TypeCast* x) { /* nothing to do */ }
void do_NewInstance (NewInstance* x) { /* nothing to do */ }
void do_NewInstance (NewInstance* x) {
ciInstanceKlass* c = x->klass();
if (c != nullptr && !c->is_initialized() &&
(!c->is_loaded() || c->has_class_initializer())) {
kill_memory();
}
}
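// Editor's note: a plausible reading of the change above is that allocating an instance
// of a not-yet-initialized class may trigger its <clinit>, which can store to arbitrary
// fields, so value numbering conservatively kills cached memory state (cf. "8357782:
// JVM JIT Causes Static Initialization Order Issue" in the commit list).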
void do_NewTypeArray (NewTypeArray* x) { /* nothing to do */ }
void do_NewObjectArray (NewObjectArray* x) { /* nothing to do */ }
void do_NewMultiArray (NewMultiArray* x) { /* nothing to do */ }

View File

@@ -147,7 +147,7 @@
product(bool, AOTVerifyTrainingData, trueInDebug, DIAGNOSTIC, \
"Verify archived training data") \
\
product(bool, AOTCompileEagerly, false, DIAGNOSTIC, \
product(bool, AOTCompileEagerly, false, EXPERIMENTAL, \
"Compile methods as soon as possible") \
\
/* AOT Code flags */ \
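// Editor's note: as an EXPERIMENTAL flag, AOTCompileEagerly now requires
// -XX:+UnlockExperimentalVMOptions rather than -XX:+UnlockDiagnosticVMOptions,
// matching "8359436: AOTCompileEagerly should not be diagnostic" in the commit list.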

View File

@@ -837,11 +837,10 @@ void MetaspaceShared::preload_and_dump(TRAPS) {
struct stat st;
if (os::stat(AOTCache, &st) != 0) {
tty->print_cr("AOTCache creation failed: %s", AOTCache);
vm_exit(0);
} else {
tty->print_cr("AOTCache creation is complete: %s " INT64_FORMAT " bytes", AOTCache, (int64_t)(st.st_size));
vm_exit(0);
}
vm_direct_exit(0);
}
}
}

View File

@@ -549,6 +549,11 @@ bool ciInstanceKlass::compute_has_trusted_loader() {
return java_lang_ClassLoader::is_trusted_loader(loader_oop);
}
bool ciInstanceKlass::has_class_initializer() {
VM_ENTRY_MARK;
return get_instanceKlass()->class_initializer() != nullptr;
}
// ------------------------------------------------------------------
// ciInstanceKlass::find_method
//

View File

@@ -231,6 +231,8 @@ public:
ciInstanceKlass* unique_concrete_subklass();
bool has_finalizable_subclass();
bool has_class_initializer();
bool contains_field_offset(int offset);
// Get the instance of java.lang.Class corresponding to

View File

@@ -3738,6 +3738,7 @@ void ClassFileParser::apply_parsed_class_metadata(
_cp->set_pool_holder(this_klass);
this_klass->set_constants(_cp);
this_klass->set_fieldinfo_stream(_fieldinfo_stream);
this_klass->set_fieldinfo_search_table(_fieldinfo_search_table);
this_klass->set_fields_status(_fields_status);
this_klass->set_methods(_methods);
this_klass->set_inner_classes(_inner_classes);
@@ -3747,6 +3748,8 @@ void ClassFileParser::apply_parsed_class_metadata(
this_klass->set_permitted_subclasses(_permitted_subclasses);
this_klass->set_record_components(_record_components);
DEBUG_ONLY(FieldInfoStream::validate_search_table(_cp, _fieldinfo_stream, _fieldinfo_search_table));
// Delay the setting of _local_interfaces and _transitive_interfaces until after
// initialize_supers() in fill_instance_klass(). It is because the _local_interfaces could
// be shared with _transitive_interfaces and _transitive_interfaces may be shared with
@@ -5054,6 +5057,7 @@ void ClassFileParser::fill_instance_klass(InstanceKlass* ik,
// note that is not safe to use the fields in the parser from this point on
assert(nullptr == _cp, "invariant");
assert(nullptr == _fieldinfo_stream, "invariant");
assert(nullptr == _fieldinfo_search_table, "invariant");
assert(nullptr == _fields_status, "invariant");
assert(nullptr == _methods, "invariant");
assert(nullptr == _inner_classes, "invariant");
@@ -5274,6 +5278,7 @@ ClassFileParser::ClassFileParser(ClassFileStream* stream,
_super_klass(),
_cp(nullptr),
_fieldinfo_stream(nullptr),
_fieldinfo_search_table(nullptr),
_fields_status(nullptr),
_methods(nullptr),
_inner_classes(nullptr),
@@ -5350,6 +5355,7 @@ void ClassFileParser::clear_class_metadata() {
// deallocated if classfile parsing returns an error.
_cp = nullptr;
_fieldinfo_stream = nullptr;
_fieldinfo_search_table = nullptr;
_fields_status = nullptr;
_methods = nullptr;
_inner_classes = nullptr;
@@ -5372,6 +5378,7 @@ ClassFileParser::~ClassFileParser() {
if (_fieldinfo_stream != nullptr) {
MetadataFactory::free_array<u1>(_loader_data, _fieldinfo_stream);
}
MetadataFactory::free_array<u1>(_loader_data, _fieldinfo_search_table);
if (_fields_status != nullptr) {
MetadataFactory::free_array<FieldStatus>(_loader_data, _fields_status);
@@ -5772,6 +5779,7 @@ void ClassFileParser::post_process_parsed_stream(const ClassFileStream* const st
_fieldinfo_stream =
FieldInfoStream::create_FieldInfoStream(_temp_field_info, _java_fields_count,
injected_fields_count, loader_data(), CHECK);
_fieldinfo_search_table = FieldInfoStream::create_search_table(_cp, _fieldinfo_stream, _loader_data, CHECK);
_fields_status =
MetadataFactory::new_array<FieldStatus>(_loader_data, _temp_field_info->length(),
FieldStatus(0), CHECK);

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -123,6 +123,7 @@ class ClassFileParser {
const InstanceKlass* _super_klass;
ConstantPool* _cp;
Array<u1>* _fieldinfo_stream;
Array<u1>* _fieldinfo_search_table;
Array<FieldStatus>* _fields_status;
Array<Method*>* _methods;
Array<u2>* _inner_classes;

View File

@@ -301,7 +301,7 @@ void FieldLayout::reconstruct_layout(const InstanceKlass* ik, bool& has_instance
BasicType last_type;
int last_offset = -1;
while (ik != nullptr) {
for (AllFieldStream fs(ik->fieldinfo_stream(), ik->constants()); !fs.done(); fs.next()) {
for (AllFieldStream fs(ik); !fs.done(); fs.next()) {
BasicType type = Signature::basic_type(fs.signature());
// distinction between static and non-static fields is missing
if (fs.access_flags().is_static()) continue;
@@ -461,7 +461,7 @@ void FieldLayout::print(outputStream* output, bool is_static, const InstanceKlas
bool found = false;
const InstanceKlass* ik = super;
while (!found && ik != nullptr) {
for (AllFieldStream fs(ik->fieldinfo_stream(), ik->constants()); !fs.done(); fs.next()) {
for (AllFieldStream fs(ik); !fs.done(); fs.next()) {
if (fs.offset() == b->offset()) {
output->print_cr(" @%d \"%s\" %s %d/%d %s",
b->offset(),

View File

@@ -967,6 +967,13 @@ void java_lang_Class::fixup_mirror(Klass* k, TRAPS) {
Array<u1>* new_fis = FieldInfoStream::create_FieldInfoStream(fields, java_fields, injected_fields, k->class_loader_data(), CHECK);
ik->set_fieldinfo_stream(new_fis);
MetadataFactory::free_array<u1>(k->class_loader_data(), old_stream);
Array<u1>* old_table = ik->fieldinfo_search_table();
Array<u1>* search_table = FieldInfoStream::create_search_table(ik->constants(), new_fis, k->class_loader_data(), CHECK);
ik->set_fieldinfo_search_table(search_table);
MetadataFactory::free_array<u1>(k->class_loader_data(), old_table);
DEBUG_ONLY(FieldInfoStream::validate_search_table(ik->constants(), new_fis, search_table));
}
}

View File

@@ -32,6 +32,7 @@
#include "classfile/javaClasses.inline.hpp"
#include "classfile/stringTable.hpp"
#include "classfile/vmClasses.hpp"
#include "compiler/compileBroker.hpp"
#include "gc/shared/collectedHeap.hpp"
#include "gc/shared/oopStorage.inline.hpp"
#include "gc/shared/oopStorageSet.hpp"
@@ -115,6 +116,7 @@ OopStorage* StringTable::_oop_storage;
static size_t _current_size = 0;
static volatile size_t _items_count = 0;
DEBUG_ONLY(static bool _disable_interning_during_cds_dump = false);
volatile bool _alt_hash = false;
@@ -346,6 +348,10 @@ bool StringTable::has_work() {
return Atomic::load_acquire(&_has_work);
}
size_t StringTable::items_count_acquire() {
return Atomic::load_acquire(&_items_count);
}
void StringTable::trigger_concurrent_work() {
// Avoid churn on ServiceThread
if (!has_work()) {
@@ -504,6 +510,9 @@ oop StringTable::intern(const char* utf8_string, TRAPS) {
}
oop StringTable::intern(const StringWrapper& name, TRAPS) {
assert(!Atomic::load_acquire(&_disable_interning_during_cds_dump),
"All threads that may intern strings should have been stopped before CDS starts copying the interned string table");
// shared table always uses java_lang_String::hash_code
unsigned int hash = hash_wrapped_string(name);
oop found_string = lookup_shared(name, hash);
@@ -793,7 +802,7 @@ void StringTable::verify() {
}
// Verification and comparison of interned strings
class VerifyCompStrings : StackObj {
class StringTable::VerifyCompStrings : StackObj {
static unsigned string_hash(oop const& str) {
return java_lang_String::hash_code_noupdate(str);
}
@@ -805,7 +814,7 @@ class VerifyCompStrings : StackObj {
string_hash, string_equals> _table;
public:
size_t _errors;
VerifyCompStrings() : _table(unsigned(_items_count / 8) + 1, 0 /* do not resize */), _errors(0) {}
VerifyCompStrings() : _table(unsigned(items_count_acquire() / 8) + 1, 0 /* do not resize */), _errors(0) {}
bool operator()(WeakHandle* val) {
oop s = val->resolve();
if (s == nullptr) {
@@ -939,20 +948,31 @@ oop StringTable::lookup_shared(const jchar* name, int len) {
return _shared_table.lookup(wrapped_name, java_lang_String::hash_code(name, len), 0);
}
// This is called BEFORE we enter the CDS safepoint. We can allocate heap objects.
// This should be called when we know no more strings will be added (which will be easy
// to guarantee because CDS runs with a single Java thread. See JDK-8253495.)
// This is called BEFORE we enter the CDS safepoint. We can still allocate Java object arrays to
// be used by the shared strings table.
void StringTable::allocate_shared_strings_array(TRAPS) {
if (!CDSConfig::is_dumping_heap()) {
return;
}
assert(CDSConfig::allow_only_single_java_thread(), "No more interned strings can be added");
if (_items_count > (size_t)max_jint) {
fatal("Too many strings to be archived: %zu", _items_count);
CompileBroker::wait_for_no_active_tasks();
precond(CDSConfig::allow_only_single_java_thread());
// At this point, no more strings will be added:
// - There's only a single Java thread (this thread). It no longer executes Java bytecodes
// so JIT compilation will eventually stop.
// - CompileBroker has no more active tasks, so all JIT requests have been processed.
// This flag will be cleared after intern table dumping has completed, so we can run the
// compiler again (for future AOT method compilation, etc).
DEBUG_ONLY(Atomic::release_store(&_disable_interning_during_cds_dump, true));
if (items_count_acquire() > (size_t)max_jint) {
fatal("Too many strings to be archived: %zu", items_count_acquire());
}
int total = (int)_items_count;
int total = (int)items_count_acquire();
size_t single_array_size = objArrayOopDesc::object_size(total);
log_info(aot)("allocated string table for %d strings", total);
@@ -972,7 +992,7 @@ void StringTable::allocate_shared_strings_array(TRAPS) {
// This can only happen if you have an extremely large number of classes that
// refer to more than 16384 * 16384 = 2^28 (~268M) interned strings! Not a practical
// concern, but bail out for safety.
log_error(aot)("Too many strings to be archived: %zu", _items_count);
log_error(aot)("Too many strings to be archived: %zu", items_count_acquire());
MetaspaceShared::unrecoverable_writing_error();
}
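For scale: assuming the two-level layout the comment above implies (up to 16384 segments of 16384 strings each), the cap works out to 2^14 * 2^14 = 2^28 = 268,435,456 (~268M) interned strings, well below the earlier max_jint guard of about 2.1 billion.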
@@ -1070,7 +1090,7 @@ oop StringTable::init_shared_strings_array() {
void StringTable::write_shared_table() {
_shared_table.reset();
CompactHashtableWriter writer((int)_items_count, ArchiveBuilder::string_stats());
CompactHashtableWriter writer((int)items_count_acquire(), ArchiveBuilder::string_stats());
int index = 0;
auto copy_into_shared_table = [&] (WeakHandle* val) {
@@ -1084,6 +1104,8 @@ void StringTable::write_shared_table() {
};
_local_table->do_safepoint_scan(copy_into_shared_table);
writer.dump(&_shared_table, "string");
DEBUG_ONLY(Atomic::release_store(&_disable_interning_during_cds_dump, false));
}
void StringTable::set_shared_strings_array_index(int root_index) {

View File

@@ -40,7 +40,7 @@ class StringTableConfig;
class StringTable : AllStatic {
friend class StringTableConfig;
class VerifyCompStrings;
static volatile bool _has_work;
// Set if one bucket is out of balance due to hash algorithm deficiency
@@ -74,6 +74,7 @@ private:
static void item_added();
static void item_removed();
static size_t items_count_acquire();
static oop intern(const StringWrapper& name, TRAPS);
static oop do_intern(const StringWrapper& name, uintx hash, TRAPS);

View File

@@ -344,6 +344,7 @@ AOTCodeCache::~AOTCodeCache() {
_store_buffer = nullptr;
}
if (_table != nullptr) {
MutexLocker ml(AOTCodeCStrings_lock, Mutex::_no_safepoint_check_flag);
delete _table;
_table = nullptr;
}
@@ -774,6 +775,9 @@ bool AOTCodeCache::store_code_blob(CodeBlob& blob, AOTCodeEntry::Kind entry_kind
// We need to take a lock to prevent a race between compiler threads generating AOT code
// and the main thread generating adapters.
MutexLocker ml(Compile_lock);
if (!is_on()) {
return false; // AOT code cache was already dumped and closed.
}
if (!cache->align_write()) {
return false;
}
@@ -1434,6 +1438,9 @@ AOTCodeAddressTable::~AOTCodeAddressTable() {
if (_extrs_addr != nullptr) {
FREE_C_HEAP_ARRAY(address, _extrs_addr);
}
if (_stubs_addr != nullptr) {
FREE_C_HEAP_ARRAY(address, _stubs_addr);
}
if (_shared_blobs_addr != nullptr) {
FREE_C_HEAP_ARRAY(address, _shared_blobs_addr);
}
@@ -1485,6 +1492,7 @@ void AOTCodeCache::load_strings() {
int AOTCodeCache::store_strings() {
if (_C_strings_used > 0) {
MutexLocker ml(AOTCodeCStrings_lock, Mutex::_no_safepoint_check_flag);
uint offset = _write_position;
uint length = 0;
uint* lengths = (uint *)reserve_bytes(sizeof(uint) * _C_strings_used);
@@ -1510,15 +1518,17 @@ int AOTCodeCache::store_strings() {
const char* AOTCodeCache::add_C_string(const char* str) {
if (is_on_for_dump() && str != nullptr) {
return _cache->_table->add_C_string(str);
MutexLocker ml(AOTCodeCStrings_lock, Mutex::_no_safepoint_check_flag);
AOTCodeAddressTable* table = addr_table();
if (table != nullptr) {
return table->add_C_string(str);
}
}
return str;
}
const char* AOTCodeAddressTable::add_C_string(const char* str) {
if (_extrs_complete) {
LogStreamHandle(Trace, aot, codecache, stringtable) log; // ctor outside lock
MutexLocker ml(AOTCodeCStrings_lock, Mutex::_no_safepoint_check_flag);
// Check previous strings address
for (int i = 0; i < _C_strings_count; i++) {
if (_C_strings_in[i] == str) {
@@ -1535,9 +1545,7 @@ const char* AOTCodeAddressTable::add_C_string(const char* str) {
_C_strings_in[_C_strings_count] = str;
const char* dup = os::strdup(str);
_C_strings[_C_strings_count++] = dup;
if (log.is_enabled()) {
log.print_cr("add_C_string: [%d] " INTPTR_FORMAT " '%s'", _C_strings_count, p2i(dup), dup);
}
log_trace(aot, codecache, stringtable)("add_C_string: [%d] " INTPTR_FORMAT " '%s'", _C_strings_count, p2i(dup), dup);
return dup;
} else {
assert(false, "Number of C strings >= MAX_STR_COUNT");

View File

@@ -136,6 +136,7 @@ private:
public:
AOTCodeAddressTable() :
_extrs_addr(nullptr),
_stubs_addr(nullptr),
_shared_blobs_addr(nullptr),
_C1_blobs_addr(nullptr),
_extrs_length(0),

View File

@@ -160,7 +160,7 @@ CodeBlob::CodeBlob(const char* name, CodeBlobKind kind, CodeBuffer* cb, int size
}
} else {
// We need a unique, valid, non-null address
assert(_mutable_data = blob_end(), "sanity");
assert(_mutable_data == blob_end(), "sanity");
}
set_oop_maps(oop_maps);
@@ -177,6 +177,7 @@ CodeBlob::CodeBlob(const char* name, CodeBlobKind kind, int size, uint16_t heade
_code_offset(_content_offset),
_data_offset(size),
_frame_size(0),
_mutable_data_size(0),
S390_ONLY(_ctable_offset(0) COMMA)
_header_size(header_size),
_frame_complete_offset(CodeOffsets::frame_never_safe),
@@ -185,7 +186,7 @@ CodeBlob::CodeBlob(const char* name, CodeBlobKind kind, int size, uint16_t heade
{
assert(is_aligned(size, oopSize), "unaligned size");
assert(is_aligned(header_size, oopSize), "unaligned size");
assert(_mutable_data = blob_end(), "sanity");
assert(_mutable_data == blob_end(), "sanity");
}
void CodeBlob::restore_mutable_data(address reloc_data) {
@@ -195,8 +196,11 @@ void CodeBlob::restore_mutable_data(address reloc_data) {
if (_mutable_data == nullptr) {
vm_exit_out_of_memory(_mutable_data_size, OOM_MALLOC_ERROR, "codebuffer: no space for mutable data");
}
} else {
_mutable_data = blob_end(); // default value
}
if (_relocation_size > 0) {
assert(_mutable_data_size > 0, "relocation is part of mutable data section");
memcpy((address)relocation_begin(), reloc_data, relocation_size());
}
}
@@ -206,6 +210,8 @@ void CodeBlob::purge() {
if (_mutable_data != blob_end()) {
os::free(_mutable_data);
_mutable_data = blob_end(); // valid, non-null address
_mutable_data_size = 0;
_relocation_size = 0;
}
if (_oop_maps != nullptr) {
delete _oop_maps;

View File

@@ -247,7 +247,7 @@ public:
// Sizes
int size() const { return _size; }
int header_size() const { return _header_size; }
int relocation_size() const { return pointer_delta_as_int((address) relocation_end(), (address) relocation_begin()); }
int relocation_size() const { return _relocation_size; }
int content_size() const { return pointer_delta_as_int(content_end(), content_begin()); }
int code_size() const { return pointer_delta_as_int(code_end(), code_begin()); }

View File

@@ -28,7 +28,6 @@
#include "code/dependencies.hpp"
#include "code/nativeInst.hpp"
#include "code/nmethod.inline.hpp"
#include "code/relocInfo.hpp"
#include "code/scopeDesc.hpp"
#include "compiler/abstractCompiler.hpp"
#include "compiler/compilationLog.hpp"
@@ -1653,10 +1652,6 @@ void nmethod::maybe_print_nmethod(const DirectiveSet* directive) {
}
void nmethod::print_nmethod(bool printmethod) {
// Enter a critical section to prevent a race with deopts that patch code and updates the relocation info.
// Unfortunately, we have to lock the NMethodState_lock before the tty lock due to the deadlock rules and
// cannot lock in a more finely grained manner.
ConditionalMutexLocker ml(NMethodState_lock, !NMethodState_lock->owned_by_self(), Mutex::_no_safepoint_check_flag);
ttyLocker ttyl; // keep the following output all in one block
if (xtty != nullptr) {
xtty->begin_head("print_nmethod");
@@ -2046,17 +2041,6 @@ bool nmethod::make_not_entrant(const char* reason) {
// cache call.
NativeJump::patch_verified_entry(entry_point(), verified_entry_point(),
SharedRuntime::get_handle_wrong_method_stub());
// Update the relocation info for the patched entry.
// First, get the old relocation info...
RelocIterator iter(this, verified_entry_point(), verified_entry_point() + 8);
if (iter.next() && iter.addr() == verified_entry_point()) {
Relocation* old_reloc = iter.reloc();
// ...then reset the iterator to update it.
RelocIterator iter(this, verified_entry_point(), verified_entry_point() + 8);
relocInfo::change_reloc_info_for_address(&iter, verified_entry_point(), old_reloc->type(),
relocInfo::relocType::runtime_call_type);
}
}
if (update_recompile_counts()) {
@@ -2182,6 +2166,7 @@ void nmethod::purge(bool unregister_nmethod) {
}
CodeCache::unregister_old_nmethod(this);
JVMCI_ONLY( _metadata_size = 0; )
CodeBlob::purge();
}

View File

@@ -1750,6 +1750,10 @@ void CompileBroker::wait_for_completion(CompileTask* task) {
}
}
void CompileBroker::wait_for_no_active_tasks() {
CompileTask::wait_for_no_active_tasks();
}
/**
* Initialize compiler thread(s) + compiler object(s). The postcondition
* of this function is that the compiler runtimes are initialized and that

View File

@@ -383,6 +383,9 @@ public:
static bool is_compilation_disabled_forever() {
return _should_compile_new_jobs == shutdown_compilation;
}
static void wait_for_no_active_tasks();
static void handle_full_code_cache(CodeBlobType code_blob_type);
// Ensures that warning is only printed once.
static bool should_print_compiler_warning() {

View File

@@ -37,12 +37,13 @@
#include "runtime/mutexLocker.hpp"
CompileTask* CompileTask::_task_free_list = nullptr;
int CompileTask::_active_tasks = 0;
/**
* Allocate a CompileTask, from the free list if possible.
*/
CompileTask* CompileTask::allocate() {
MutexLocker locker(CompileTaskAlloc_lock);
MonitorLocker locker(CompileTaskAlloc_lock);
CompileTask* task = nullptr;
if (_task_free_list != nullptr) {
@@ -56,6 +57,7 @@ CompileTask* CompileTask::allocate() {
}
assert(task->is_free(), "Task must be free.");
task->set_is_free(false);
_active_tasks++;
return task;
}
@@ -63,7 +65,7 @@ CompileTask* CompileTask::allocate() {
* Add a task to the free list.
*/
void CompileTask::free(CompileTask* task) {
MutexLocker locker(CompileTaskAlloc_lock);
MonitorLocker locker(CompileTaskAlloc_lock);
if (!task->is_free()) {
if ((task->_method_holder != nullptr && JNIHandles::is_weak_global_handle(task->_method_holder))) {
JNIHandles::destroy_weak_global(task->_method_holder);
@@ -79,6 +81,17 @@ void CompileTask::free(CompileTask* task) {
task->set_is_free(true);
task->set_next(_task_free_list);
_task_free_list = task;
_active_tasks--;
if (_active_tasks == 0) {
locker.notify_all();
}
}
}
void CompileTask::wait_for_no_active_tasks() {
MonitorLocker locker(CompileTaskAlloc_lock);
while (_active_tasks > 0) {
locker.wait();
}
}
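The counter-plus-monitor pattern above is self-contained enough to restate outside HotSpot; a minimal sketch of the same quiesce idiom using std::condition_variable in place of HotSpot's MonitorLocker (names here are illustrative, not HotSpot APIs):
#include <condition_variable>
#include <mutex>

class TaskGate {
  std::mutex _lock;
  std::condition_variable _cv;
  int _active = 0;
public:
  void on_allocate() {                 // mirrors CompileTask::allocate(): count up under the lock
    std::lock_guard<std::mutex> g(_lock);
    ++_active;
  }
  void on_free() {                     // mirrors CompileTask::free(): count down, wake waiters at zero
    std::lock_guard<std::mutex> g(_lock);
    if (--_active == 0) _cv.notify_all();
  }
  void wait_for_no_active_tasks() {    // mirrors the new wait loop above
    std::unique_lock<std::mutex> g(_lock);
    _cv.wait(g, [this] { return _active == 0; });
  }
};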

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 1998, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1998, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -83,6 +83,7 @@ class CompileTask : public CHeapObj<mtCompiler> {
private:
static CompileTask* _task_free_list;
static int _active_tasks;
int _compile_id;
Method* _method;
jobject _method_holder;
@@ -123,6 +124,7 @@ class CompileTask : public CHeapObj<mtCompiler> {
static CompileTask* allocate();
static void free(CompileTask* task);
static void wait_for_no_active_tasks();
int compile_id() const { return _compile_id; }
Method* method() const { return _method; }

View File

@@ -625,6 +625,34 @@ void ShenandoahBarrierC2Support::verify(RootNode* root) {
}
#endif
bool ShenandoahBarrierC2Support::is_anti_dependent_load_at_control(PhaseIdealLoop* phase, Node* maybe_load, Node* store,
Node* control) {
return maybe_load->is_Load() && phase->C->can_alias(store->adr_type(), phase->C->get_alias_index(maybe_load->adr_type())) &&
phase->ctrl_or_self(maybe_load) == control;
}
void ShenandoahBarrierC2Support::maybe_push_anti_dependent_loads(PhaseIdealLoop* phase, Node* maybe_store, Node* control, Unique_Node_List &wq) {
if (!maybe_store->is_Store() && !maybe_store->is_LoadStore()) {
return;
}
Node* mem = maybe_store->in(MemNode::Memory);
for (DUIterator_Fast imax, i = mem->fast_outs(imax); i < imax; i++) {
Node* u = mem->fast_out(i);
if (is_anti_dependent_load_at_control(phase, u, maybe_store, control)) {
wq.push(u);
}
}
}
void ShenandoahBarrierC2Support::push_data_inputs_at_control(PhaseIdealLoop* phase, Node* n, Node* ctrl, Unique_Node_List &wq) {
for (uint i = 0; i < n->req(); i++) {
Node* in = n->in(i);
if (in != nullptr && phase->has_ctrl(in) && phase->get_ctrl(in) == ctrl) {
wq.push(in);
}
}
}
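The worklist walk in is_dominator_same_ctrl below boils down to a reachability check restricted to one control block; a stripped-down illustration under simplifying assumptions (plain ints stand in for C2 nodes, adjacency lists for Node::in(i), and the traversal direction is condensed into a generic path query):
#include <vector>

// Illustrative only: same_ctrl[] marks nodes pinned at the common control block.
static bool path_exists(int from, int to,
                        const std::vector<std::vector<int>>& inputs,
                        const std::vector<bool>& same_ctrl) {
  std::vector<int> wq{from};
  std::vector<bool> seen(inputs.size(), false);
  seen[from] = true;
  while (!wq.empty()) {
    int m = wq.back(); wq.pop_back();
    if (m == to) return true;                 // a path exists within the block
    for (int in : inputs[m]) {
      if (same_ctrl[in] && !seen[in]) { seen[in] = true; wq.push_back(in); }
    }
  }
  return false;                               // no path found: domination can be concluded
}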
bool ShenandoahBarrierC2Support::is_dominator_same_ctrl(Node* c, Node* d, Node* n, PhaseIdealLoop* phase) {
// That both nodes have the same control is not sufficient to prove
// domination; verify that there's no path from d to n
@@ -639,22 +667,9 @@ bool ShenandoahBarrierC2Support::is_dominator_same_ctrl(Node* c, Node* d, Node*
if (m->is_Phi() && m->in(0)->is_Loop()) {
assert(phase->ctrl_or_self(m->in(LoopNode::EntryControl)) != c, "following loop entry should lead to new control");
} else {
if (m->is_Store() || m->is_LoadStore()) {
// Take anti-dependencies into account
Node* mem = m->in(MemNode::Memory);
for (DUIterator_Fast imax, i = mem->fast_outs(imax); i < imax; i++) {
Node* u = mem->fast_out(i);
if (u->is_Load() && phase->C->can_alias(m->adr_type(), phase->C->get_alias_index(u->adr_type())) &&
phase->ctrl_or_self(u) == c) {
wq.push(u);
}
}
}
for (uint i = 0; i < m->req(); i++) {
if (m->in(i) != nullptr && phase->ctrl_or_self(m->in(i)) == c) {
wq.push(m->in(i));
}
}
// Take anti-dependencies into account
maybe_push_anti_dependent_loads(phase, m, c, wq);
push_data_inputs_at_control(phase, m, c, wq);
}
}
return true;
@@ -1006,7 +1021,20 @@ void ShenandoahBarrierC2Support::call_lrb_stub(Node*& ctrl, Node*& val, Node* lo
phase->register_new_node(val, ctrl);
}
void ShenandoahBarrierC2Support::fix_ctrl(Node* barrier, Node* region, const MemoryGraphFixer& fixer, Unique_Node_List& uses, Unique_Node_List& uses_to_ignore, uint last, PhaseIdealLoop* phase) {
void ShenandoahBarrierC2Support::collect_nodes_above_barrier(Unique_Node_List &nodes_above_barrier, PhaseIdealLoop* phase, Node* ctrl, Node* init_raw_mem) {
nodes_above_barrier.clear();
if (phase->has_ctrl(init_raw_mem) && phase->get_ctrl(init_raw_mem) == ctrl && !init_raw_mem->is_Phi()) {
nodes_above_barrier.push(init_raw_mem);
}
for (uint next = 0; next < nodes_above_barrier.size(); next++) {
Node* n = nodes_above_barrier.at(next);
// Take anti-dependencies into account
maybe_push_anti_dependent_loads(phase, n, ctrl, nodes_above_barrier);
push_data_inputs_at_control(phase, n, ctrl, nodes_above_barrier);
}
}
void ShenandoahBarrierC2Support::fix_ctrl(Node* barrier, Node* region, const MemoryGraphFixer& fixer, Unique_Node_List& uses, Unique_Node_List& nodes_above_barrier, uint last, PhaseIdealLoop* phase) {
Node* ctrl = phase->get_ctrl(barrier);
Node* init_raw_mem = fixer.find_mem(ctrl, barrier);
@@ -1017,30 +1045,17 @@ void ShenandoahBarrierC2Support::fix_ctrl(Node* barrier, Node* region, const Mem
// control will be after the expanded barrier. The raw memory (if
// its memory is control dependent on the barrier's input control)
// must stay above the barrier.
uses_to_ignore.clear();
if (phase->has_ctrl(init_raw_mem) && phase->get_ctrl(init_raw_mem) == ctrl && !init_raw_mem->is_Phi()) {
uses_to_ignore.push(init_raw_mem);
}
for (uint next = 0; next < uses_to_ignore.size(); next++) {
Node *n = uses_to_ignore.at(next);
for (uint i = 0; i < n->req(); i++) {
Node* in = n->in(i);
if (in != nullptr && phase->has_ctrl(in) && phase->get_ctrl(in) == ctrl) {
uses_to_ignore.push(in);
}
}
}
collect_nodes_above_barrier(nodes_above_barrier, phase, ctrl, init_raw_mem);
for (DUIterator_Fast imax, i = ctrl->fast_outs(imax); i < imax; i++) {
Node* u = ctrl->fast_out(i);
if (u->_idx < last &&
u != barrier &&
!u->depends_only_on_test() && // preserve dependency on test
!uses_to_ignore.member(u) &&
!nodes_above_barrier.member(u) &&
(u->in(0) != ctrl || (!u->is_Region() && !u->is_Phi())) &&
(ctrl->Opcode() != Op_CatchProj || u->Opcode() != Op_CreateEx)) {
Node* old_c = phase->ctrl_or_self(u);
Node* c = old_c;
if (c != ctrl ||
if (old_c != ctrl ||
is_dominator_same_ctrl(old_c, barrier, u, phase) ||
ShenandoahBarrierSetC2::is_shenandoah_state_load(u)) {
phase->igvn().rehash_node_delayed(u);
@@ -1315,7 +1330,7 @@ void ShenandoahBarrierC2Support::pin_and_expand(PhaseIdealLoop* phase) {
// Expand load-reference-barriers
MemoryGraphFixer fixer(Compile::AliasIdxRaw, true, phase);
Unique_Node_List uses_to_ignore;
Unique_Node_List nodes_above_barriers;
for (int i = state->load_reference_barriers_count() - 1; i >= 0; i--) {
ShenandoahLoadReferenceBarrierNode* lrb = state->load_reference_barrier(i);
uint last = phase->C->unique();
@@ -1410,7 +1425,7 @@ void ShenandoahBarrierC2Support::pin_and_expand(PhaseIdealLoop* phase) {
Node* out_val = val_phi;
phase->register_new_node(val_phi, region);
fix_ctrl(lrb, region, fixer, uses, uses_to_ignore, last, phase);
fix_ctrl(lrb, region, fixer, uses, nodes_above_barriers, last, phase);
ctrl = orig_ctrl;

View File

@@ -62,8 +62,12 @@ private:
PhaseIdealLoop* phase, int flags);
static void call_lrb_stub(Node*& ctrl, Node*& val, Node* load_addr,
DecoratorSet decorators, PhaseIdealLoop* phase);
static void collect_nodes_above_barrier(Unique_Node_List &nodes_above_barrier, PhaseIdealLoop* phase, Node* ctrl,
Node* init_raw_mem);
static void test_in_cset(Node*& ctrl, Node*& not_cset_ctrl, Node* val, Node* raw_mem, PhaseIdealLoop* phase);
static void fix_ctrl(Node* barrier, Node* region, const MemoryGraphFixer& fixer, Unique_Node_List& uses, Unique_Node_List& uses_to_ignore, uint last, PhaseIdealLoop* phase);
static void fix_ctrl(Node* barrier, Node* region, const MemoryGraphFixer& fixer, Unique_Node_List& uses, Unique_Node_List& nodes_above_barrier, uint last, PhaseIdealLoop* phase);
static Node* get_load_addr(PhaseIdealLoop* phase, VectorSet& visited, Node* lrb);
public:
@@ -76,6 +80,11 @@ public:
static bool expand(Compile* C, PhaseIterGVN& igvn);
static void pin_and_expand(PhaseIdealLoop* phase);
static void push_data_inputs_at_control(PhaseIdealLoop* phase, Node* n, Node* ctrl,
Unique_Node_List &wq);
static bool is_anti_dependent_load_at_control(PhaseIdealLoop* phase, Node* maybe_load, Node* store, Node* control);
static void maybe_push_anti_dependent_loads(PhaseIdealLoop* phase, Node* maybe_store, Node* control, Unique_Node_List &wq);
#ifdef ASSERT
static void verify(RootNode* root);
#endif

View File

@@ -415,10 +415,6 @@ void ShenandoahConcurrentGC::entry_reset() {
msg);
op_reset();
}
if (heap->mode()->is_generational()) {
heap->old_generation()->card_scan()->mark_read_table_as_clean();
}
}
void ShenandoahConcurrentGC::entry_scan_remembered_set() {
@@ -644,6 +640,10 @@ void ShenandoahConcurrentGC::op_reset() {
} else {
_generation->prepare_gc();
}
if (heap->mode()->is_generational()) {
heap->old_generation()->card_scan()->mark_read_table_as_clean();
}
}
class ShenandoahInitMarkUpdateRegionStateClosure : public ShenandoahHeapRegionClosure {

View File

@@ -136,9 +136,15 @@ void ShenandoahDegenGC::op_degenerated() {
heap->set_unload_classes(_generation->heuristics()->can_unload_classes() &&
(!heap->mode()->is_generational() || _generation->is_global()));
if (heap->mode()->is_generational() && _generation->is_young()) {
// Swap remembered sets for young
_generation->swap_card_tables();
if (heap->mode()->is_generational()) {
// Clean the read table before swapping it. The end goal here is to have a clean
// write table, and to have the read table updated with the previous write table.
heap->old_generation()->card_scan()->mark_read_table_as_clean();
if (_generation->is_young()) {
// Swap remembered sets for young
_generation->swap_card_tables();
}
}
case _degenerated_roots:

View File

@@ -183,6 +183,29 @@ void ShenandoahGenerationalHeap::stop() {
regulator_thread()->stop();
}
bool ShenandoahGenerationalHeap::requires_barriers(stackChunkOop obj) const {
if (is_idle()) {
return false;
}
if (is_concurrent_young_mark_in_progress() && is_in_young(obj) && !marking_context()->allocated_after_mark_start(obj)) {
// We are marking young, this object is in young, and it is below the TAMS
return true;
}
if (is_in_old(obj)) {
// Card marking barriers are required for objects in the old generation
return true;
}
if (has_forwarded_objects()) {
// Object may have pointers that need to be updated
return true;
}
return false;
}
void ShenandoahGenerationalHeap::evacuate_collection_set(bool concurrent) {
ShenandoahRegionIterator regions;
ShenandoahGenerationalEvacuationTask task(this, &regions, concurrent, false /* only promote regions */);

View File

@@ -128,6 +128,8 @@ public:
void stop() override;
bool requires_barriers(stackChunkOop obj) const override;
// Used for logging the result of a region transfer outside the heap lock
struct TransferResult {
bool success;

View File

@@ -1452,27 +1452,23 @@ void ShenandoahHeap::print_heap_regions_on(outputStream* st) const {
}
}
size_t ShenandoahHeap::trash_humongous_region_at(ShenandoahHeapRegion* start) {
size_t ShenandoahHeap::trash_humongous_region_at(ShenandoahHeapRegion* start) const {
assert(start->is_humongous_start(), "reclaim regions starting with the first one");
oop humongous_obj = cast_to_oop(start->bottom());
size_t size = humongous_obj->size();
size_t required_regions = ShenandoahHeapRegion::required_regions(size * HeapWordSize);
size_t index = start->index() + required_regions - 1;
assert(!start->has_live(), "liveness must be zero");
for(size_t i = 0; i < required_regions; i++) {
// Reclaim from tail. Otherwise, assertion fails when printing region to trace log,
// as it expects that every region belongs to a humongous region starting with a humongous start region.
ShenandoahHeapRegion* region = get_region(index --);
assert(region->is_humongous(), "expect correct humongous start or continuation");
// Do not try to get the size of this humongous object. STW collections will
// have already unloaded classes, so an unmarked object may have a bad klass pointer.
ShenandoahHeapRegion* region = start;
size_t index = region->index();
do {
assert(region->is_humongous(), "Expect correct humongous start or continuation");
assert(!region->is_cset(), "Humongous region should not be in collection set");
region->make_trash_immediate();
}
return required_regions;
region = get_region(++index);
} while (region != nullptr && region->is_humongous_continuation());
// Return number of regions trashed
return index - start->index();
}
class ShenandoahCheckCleanGCLABClosure : public ThreadClosure {

View File

@@ -828,7 +828,7 @@ public:
static inline void atomic_clear_oop(narrowOop* addr, oop compare);
static inline void atomic_clear_oop(narrowOop* addr, narrowOop compare);
size_t trash_humongous_region_at(ShenandoahHeapRegion *r);
size_t trash_humongous_region_at(ShenandoahHeapRegion *r) const;
static inline void increase_object_age(oop obj, uint additional_age);

View File

@@ -624,7 +624,7 @@ void ShenandoahDirectCardMarkRememberedSet::swap_card_tables() {
#ifdef ASSERT
CardValue* start_bp = &(_card_table->write_byte_map())[0];
CardValue* end_bp = &(new_ptr)[_card_table->last_valid_index()];
CardValue* end_bp = &(start_bp[_card_table->last_valid_index()]);
while (start_bp <= end_bp) {
assert(*start_bp == CardTable::clean_card_val(), "Should be clean: " PTR_FORMAT, p2i(start_bp));

View File

@@ -170,9 +170,15 @@ NO_TRANSITION(jboolean, jfr_set_throttle(JNIEnv* env, jclass jvm, jlong event_ty
return JNI_TRUE;
NO_TRANSITION_END
JVM_ENTRY_NO_ENV(void, jfr_set_cpu_throttle(JNIEnv* env, jclass jvm, jdouble rate, jboolean auto_adapt))
JVM_ENTRY_NO_ENV(void, jfr_set_cpu_rate(JNIEnv* env, jclass jvm, jdouble rate))
JfrEventSetting::set_enabled(JfrCPUTimeSampleEvent, rate > 0);
JfrCPUTimeThreadSampling::set_rate(rate, auto_adapt == JNI_TRUE);
JfrCPUTimeThreadSampling::set_rate(rate);
JVM_END
JVM_ENTRY_NO_ENV(void, jfr_set_cpu_period(JNIEnv* env, jclass jvm, jlong period_nanos))
assert(period_nanos >= 0, "invariant");
JfrEventSetting::set_enabled(JfrCPUTimeSampleEvent, period_nanos > 0);
JfrCPUTimeThreadSampling::set_period(period_nanos);
JVM_END
NO_TRANSITION(void, jfr_set_miscellaneous(JNIEnv* env, jclass jvm, jlong event_type_id, jlong value))

View File

@@ -129,7 +129,9 @@ jlong JNICALL jfr_get_unloaded_event_classes_count(JNIEnv* env, jclass jvm);
jboolean JNICALL jfr_set_throttle(JNIEnv* env, jclass jvm, jlong event_type_id, jlong event_sample_size, jlong period_ms);
void JNICALL jfr_set_cpu_throttle(JNIEnv* env, jclass jvm, jdouble rate, jboolean auto_adapt);
void JNICALL jfr_set_cpu_rate(JNIEnv* env, jclass jvm, jdouble rate);
void JNICALL jfr_set_cpu_period(JNIEnv* env, jclass jvm, jlong period_nanos);
void JNICALL jfr_set_miscellaneous(JNIEnv* env, jclass jvm, jlong id, jlong value);

View File

@@ -83,7 +83,8 @@ JfrJniMethodRegistration::JfrJniMethodRegistration(JNIEnv* env) {
(char*)"getUnloadedEventClassCount", (char*)"()J", (void*)jfr_get_unloaded_event_classes_count,
(char*)"setMiscellaneous", (char*)"(JJ)V", (void*)jfr_set_miscellaneous,
(char*)"setThrottle", (char*)"(JJJ)Z", (void*)jfr_set_throttle,
(char*)"setCPUThrottle", (char*)"(DZ)V", (void*)jfr_set_cpu_throttle,
(char*)"setCPURate", (char*)"(D)V", (void*)jfr_set_cpu_rate,
(char*)"setCPUPeriod", (char*)"(J)V", (void*)jfr_set_cpu_period,
(char*)"emitOldObjectSamples", (char*)"(JZZ)V", (void*)jfr_emit_old_object_samples,
(char*)"shouldRotateDisk", (char*)"()Z", (void*)jfr_should_rotate_disk,
(char*)"exclude", (char*)"(Ljava/lang/Thread;)V", (void*)jfr_exclude_thread,

View File

@@ -948,22 +948,24 @@
<Field type="long" contentType="bytes" name="freeSize" label="Free Size" description="Free swap space" />
</Event>
<Event name="ExecutionSample" category="Java Virtual Machine, Profiling" label="Method Profiling Sample" description="Snapshot of a threads state"
<Event name="ExecutionSample" category="Java Virtual Machine, Profiling" label="Java Execution Sample"
description="Snapshot of a thread executing Java code. Threads that are not executing Java code, including those waiting or executing native code, are not included."
period="everyChunk">
<Field type="Thread" name="sampledThread" label="Thread" />
<Field type="StackTrace" name="stackTrace" label="Stack Trace" />
<Field type="ThreadState" name="state" label="Thread State" />
</Event>
<Event name="NativeMethodSample" category="Java Virtual Machine, Profiling" label="Method Profiling Sample Native" description="Snapshot of a threads state when in native"
<Event name="NativeMethodSample" category="Java Virtual Machine, Profiling" label="Native Sample"
description="Snapshot of a thread in native code, executing or waiting. Threads that are executing Java code are not included."
period="everyChunk">
<Field type="Thread" name="sampledThread" label="Thread" />
<Field type="StackTrace" name="stackTrace" label="Stack Trace" />
<Field type="ThreadState" name="state" label="Thread State" />
</Event>
<Event name="CPUTimeSample" category="Java Virtual Machine, Profiling" label="CPU Time Method Sample"
description="Snapshot of a threads state from the CPU time sampler. The throttle can be either an upper bound for the event emission rate, e.g. 100/s, or the cpu-time period, e.g. 10ms, with s, ms, us and ns supported as time units."
<Event name="CPUTimeSample" category="Java Virtual Machine, Profiling" label="CPU Time Sample"
description="Snapshot of a threads state from the CPU time sampler, both threads executing native and Java code are included. The throttle setting can be either an upper bound for the event emission rate, e.g. 100/s, or the cpu-time period, e.g. 10ms, with s, ms, us and ns supported as time units."
throttle="true" thread="false" experimental="true" startTime="false">
<Field type="StackTrace" name="stackTrace" label="Stack Trace" />
<Field type="Thread" name="eventThread" label="Thread" />
@@ -972,7 +974,7 @@
<Field type="boolean" name="biased" label="Biased" description="The sample is safepoint-biased" />
</Event>
<Event name="CPUTimeSamplesLost" category="Java Virtual Machine, Profiling" label="CPU Time Method Profiling Lost Samples" description="Records that the CPU time sampler lost samples"
<Event name="CPUTimeSamplesLost" category="Java Virtual Machine, Profiling" label="CPU Time Samples Lost" description="Records that the CPU time sampler lost samples"
thread="false" stackTrace="false" startTime="false" experimental="true">
<Field type="int" name="lostSamples" label="Lost Samples" />
<Field type="Thread" name="eventThread" label="Thread" />

View File

@@ -45,7 +45,7 @@
#include "signals_posix.hpp"
static const int64_t AUTOADAPT_INTERVAL_MS = 100;
static const int64_t RECOMPUTE_INTERVAL_MS = 100;
static bool is_excluded(JavaThread* jt) {
return jt->is_hidden_from_external_view() ||
@@ -163,20 +163,42 @@ void JfrCPUTimeTraceQueue::clear() {
Atomic::release_store(&_head, (u4)0);
}
static int64_t compute_sampling_period(double rate) {
if (rate == 0) {
return 0;
// A throttle is either a rate or a fixed period
class JfrCPUSamplerThrottle {
union {
double _rate;
u8 _period_nanos;
};
bool _is_rate;
public:
JfrCPUSamplerThrottle(double rate) : _rate(rate), _is_rate(true) {
assert(rate >= 0, "invariant");
}
return os::active_processor_count() * 1000000000.0 / rate;
}
JfrCPUSamplerThrottle(u8 period_nanos) : _period_nanos(period_nanos), _is_rate(false) {}
bool enabled() const { return _is_rate ? _rate > 0 : _period_nanos > 0; }
int64_t compute_sampling_period() const {
if (_is_rate) {
if (_rate == 0) {
return 0;
}
return os::active_processor_count() * 1000000000.0 / _rate;
}
return _period_nanos;
}
};
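For a concrete reading of compute_sampling_period(): with a rate throttle of 100 samples/s on a machine where os::active_processor_count() returns 8, the per-thread CPU-time period is 8 * 1,000,000,000 / 100 = 80,000,000 ns = 80 ms, so the per-thread timers collectively approximate the requested global rate; a fixed-period throttle of 10 ms is used as-is (10,000,000 ns).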
class JfrCPUSamplerThread : public NonJavaThread {
friend class JfrCPUTimeThreadSampling;
private:
Semaphore _sample;
NonJavaThread* _sampler_thread;
double _rate;
bool _auto_adapt;
JfrCPUSamplerThrottle _throttle;
volatile int64_t _current_sampling_period_ns;
volatile bool _disenrolled;
// top bit is used to indicate that no signal handler should proceed
@@ -187,7 +209,7 @@ class JfrCPUSamplerThread : public NonJavaThread {
static const u4 STOP_SIGNAL_BIT = 0x80000000;
JfrCPUSamplerThread(double rate, bool auto_adapt);
JfrCPUSamplerThread(JfrCPUSamplerThrottle& throttle);
void start_thread();
@@ -195,9 +217,9 @@ class JfrCPUSamplerThread : public NonJavaThread {
void disenroll();
void update_all_thread_timers();
void auto_adapt_period_if_needed();
void recompute_period_if_needed();
void set_rate(double rate, bool auto_adapt);
void set_throttle(JfrCPUSamplerThrottle& throttle);
int64_t get_sampling_period() const { return Atomic::load(&_current_sampling_period_ns); };
void sample_thread(JfrSampleRequest& request, void* ucontext, JavaThread* jt, JfrThreadLocal* tl, JfrTicks& now);
@@ -231,18 +253,16 @@ public:
void trigger_async_processing_of_cpu_time_jfr_requests();
};
JfrCPUSamplerThread::JfrCPUSamplerThread(double rate, bool auto_adapt) :
JfrCPUSamplerThread::JfrCPUSamplerThread(JfrCPUSamplerThrottle& throttle) :
_sample(),
_sampler_thread(nullptr),
_rate(rate),
_auto_adapt(auto_adapt),
_current_sampling_period_ns(compute_sampling_period(rate)),
_throttle(throttle),
_current_sampling_period_ns(throttle.compute_sampling_period()),
_disenrolled(true),
_active_signal_handlers(STOP_SIGNAL_BIT),
_is_async_processing_of_cpu_time_jfr_requests_triggered(false),
_warned_about_timer_creation_failure(false),
_signal_handler_installed(false) {
assert(rate >= 0, "invariant");
}
void JfrCPUSamplerThread::trigger_async_processing_of_cpu_time_jfr_requests() {
@@ -321,7 +341,7 @@ void JfrCPUSamplerThread::disenroll() {
void JfrCPUSamplerThread::run() {
assert(_sampler_thread == nullptr, "invariant");
_sampler_thread = this;
int64_t last_auto_adapt_check = os::javaTimeNanos();
int64_t last_recompute_check = os::javaTimeNanos();
while (true) {
if (!_sample.trywait()) {
// disenrolled
@@ -329,9 +349,9 @@ void JfrCPUSamplerThread::run() {
}
_sample.signal();
if (os::javaTimeNanos() - last_auto_adapt_check > AUTOADAPT_INTERVAL_MS * 1000000) {
auto_adapt_period_if_needed();
last_auto_adapt_check = os::javaTimeNanos();
if (os::javaTimeNanos() - last_recompute_check > RECOMPUTE_INTERVAL_MS * 1000000) {
recompute_period_if_needed();
last_recompute_check = os::javaTimeNanos();
}
if (Atomic::cmpxchg(&_is_async_processing_of_cpu_time_jfr_requests_triggered, true, false)) {
@@ -442,42 +462,50 @@ JfrCPUTimeThreadSampling::~JfrCPUTimeThreadSampling() {
}
}
void JfrCPUTimeThreadSampling::create_sampler(double rate, bool auto_adapt) {
void JfrCPUTimeThreadSampling::create_sampler(JfrCPUSamplerThrottle& throttle) {
assert(_sampler == nullptr, "invariant");
_sampler = new JfrCPUSamplerThread(rate, auto_adapt);
_sampler = new JfrCPUSamplerThread(throttle);
_sampler->start_thread();
_sampler->enroll();
}
void JfrCPUTimeThreadSampling::update_run_state(double rate, bool auto_adapt) {
if (rate != 0) {
void JfrCPUTimeThreadSampling::update_run_state(JfrCPUSamplerThrottle& throttle) {
if (throttle.enabled()) {
if (_sampler == nullptr) {
create_sampler(rate, auto_adapt);
create_sampler(throttle);
} else {
_sampler->set_rate(rate, auto_adapt);
_sampler->set_throttle(throttle);
_sampler->enroll();
}
return;
}
if (_sampler != nullptr) {
_sampler->set_rate(rate /* 0 */, auto_adapt);
_sampler->set_throttle(throttle);
_sampler->disenroll();
}
}
void JfrCPUTimeThreadSampling::set_rate(double rate, bool auto_adapt) {
assert(rate >= 0, "invariant");
void JfrCPUTimeThreadSampling::set_rate(double rate) {
if (_instance == nullptr) {
return;
}
instance().set_rate_value(rate, auto_adapt);
JfrCPUSamplerThrottle throttle(rate);
instance().set_throttle_value(throttle);
}
void JfrCPUTimeThreadSampling::set_rate_value(double rate, bool auto_adapt) {
if (_sampler != nullptr) {
_sampler->set_rate(rate, auto_adapt);
void JfrCPUTimeThreadSampling::set_period(u8 nanos) {
if (_instance == nullptr) {
return;
}
update_run_state(rate, auto_adapt);
JfrCPUSamplerThrottle throttle(nanos);
instance().set_throttle_value(throttle);
}
void JfrCPUTimeThreadSampling::set_throttle_value(JfrCPUSamplerThrottle& throttle) {
if (_sampler != nullptr) {
_sampler->set_throttle(throttle);
}
update_run_state(throttle);
}
void JfrCPUTimeThreadSampling::on_javathread_create(JavaThread *thread) {
@@ -704,24 +732,21 @@ void JfrCPUSamplerThread::stop_timer() {
VMThread::execute(&op);
}
void JfrCPUSamplerThread::auto_adapt_period_if_needed() {
void JfrCPUSamplerThread::recompute_period_if_needed() {
int64_t current_period = get_sampling_period();
if (_auto_adapt || current_period == -1) {
int64_t period = compute_sampling_period(_rate);
if (period != current_period) {
Atomic::store(&_current_sampling_period_ns, period);
update_all_thread_timers();
}
int64_t period = _throttle.compute_sampling_period();
if (period != current_period) {
Atomic::store(&_current_sampling_period_ns, period);
update_all_thread_timers();
}
}
void JfrCPUSamplerThread::set_rate(double rate, bool auto_adapt) {
_rate = rate;
_auto_adapt = auto_adapt;
if (_rate > 0 && Atomic::load_acquire(&_disenrolled) == false) {
auto_adapt_period_if_needed();
void JfrCPUSamplerThread::set_throttle(JfrCPUSamplerThrottle& throttle) {
_throttle = throttle;
if (_throttle.enabled() && Atomic::load_acquire(&_disenrolled) == false) {
recompute_period_if_needed();
} else {
Atomic::store(&_current_sampling_period_ns, compute_sampling_period(rate));
Atomic::store(&_current_sampling_period_ns, _throttle.compute_sampling_period());
}
}
@@ -765,12 +790,18 @@ void JfrCPUTimeThreadSampling::destroy() {
_instance = nullptr;
}
void JfrCPUTimeThreadSampling::set_rate(double rate, bool auto_adapt) {
void JfrCPUTimeThreadSampling::set_rate(double rate) {
if (rate != 0) {
warn();
}
}
void JfrCPUTimeThreadSampling::set_period(u8 period_nanos) {
if (period_nanos != 0) {
warn();
}
}
void JfrCPUTimeThreadSampling::on_javathread_create(JavaThread* thread) {
}

View File

@@ -95,14 +95,16 @@ public:
class JfrCPUSamplerThread;
class JfrCPUSamplerThrottle;
class JfrCPUTimeThreadSampling : public JfrCHeapObj {
friend class JfrRecorder;
private:
JfrCPUSamplerThread* _sampler;
void create_sampler(double rate, bool auto_adapt);
void set_rate_value(double rate, bool auto_adapt);
void create_sampler(JfrCPUSamplerThrottle& throttle);
void set_throttle_value(JfrCPUSamplerThrottle& throttle);
JfrCPUTimeThreadSampling();
~JfrCPUTimeThreadSampling();
@@ -111,10 +113,13 @@ class JfrCPUTimeThreadSampling : public JfrCHeapObj {
static JfrCPUTimeThreadSampling* create();
static void destroy();
void update_run_state(double rate, bool auto_adapt);
void update_run_state(JfrCPUSamplerThrottle& throttle);
static void set_rate(JfrCPUSamplerThrottle& throttle);
public:
static void set_rate(double rate, bool auto_adapt);
static void set_rate(double rate);
static void set_period(u8 nanos);
static void on_javathread_create(JavaThread* thread);
static void on_javathread_terminate(JavaThread* thread);
@@ -140,7 +145,8 @@ private:
static void destroy();
public:
static void set_rate(double rate, bool auto_adapt);
static void set_rate(double rate);
static void set_period(u8 nanos);
static void on_javathread_create(JavaThread* thread);
static void on_javathread_terminate(JavaThread* thread);

View File

@@ -36,14 +36,6 @@
#include "utilities/preserveException.hpp"
#include "utilities/macros.hpp"
class JfrRecorderThread : public JavaThread {
public:
JfrRecorderThread(ThreadFunction entry_point) : JavaThread(entry_point) {}
virtual ~JfrRecorderThread() {}
virtual bool is_JfrRecorder_thread() const { return true; }
};
static Thread* start_thread(instanceHandle thread_oop, ThreadFunction proc, TRAPS) {
assert(thread_oop.not_null(), "invariant");
assert(proc != nullptr, "invariant");

View File

@@ -26,9 +26,9 @@
#define SHARE_JFR_RECORDER_SERVICE_JFRRECORDERTHREAD_HPP
#include "memory/allStatic.hpp"
#include "runtime/javaThread.hpp"
#include "utilities/debug.hpp"
class JavaThread;
class JfrCheckpointManager;
class JfrPostBox;
class Thread;
@@ -42,4 +42,12 @@ class JfrRecorderThreadEntry : AllStatic {
static bool start(JfrCheckpointManager* cp_manager, JfrPostBox* post_box, TRAPS);
};
class JfrRecorderThread : public JavaThread {
public:
JfrRecorderThread(ThreadFunction entry_point) : JavaThread(entry_point) {}
virtual ~JfrRecorderThread() {}
virtual bool is_JfrRecorder_thread() const { return true; }
};
#endif // SHARE_JFR_RECORDER_SERVICE_JFRRECORDERTHREAD_HPP

View File

@@ -45,6 +45,7 @@
#include "runtime/os.hpp"
#include "runtime/threadIdentifier.hpp"
#include "utilities/sizes.hpp"
#include "utilities/spinYield.hpp"
JfrThreadLocal::JfrThreadLocal() :
_sample_request(),
@@ -79,7 +80,8 @@ JfrThreadLocal::JfrThreadLocal() :
_enqueued_requests(false),
_vthread(false),
_notified(false),
_dead(false)
_dead(false),
_sampling_critical_section(false)
#ifdef LINUX
,_cpu_timer(nullptr),
_cpu_time_jfr_locked(UNLOCKED),
@@ -599,7 +601,10 @@ bool JfrThreadLocal::try_acquire_cpu_time_jfr_dequeue_lock() {
}
void JfrThreadLocal::acquire_cpu_time_jfr_dequeue_lock() {
while (Atomic::cmpxchg(&_cpu_time_jfr_locked, UNLOCKED, DEQUEUE) != UNLOCKED);
SpinYield s;
while (Atomic::cmpxchg(&_cpu_time_jfr_locked, UNLOCKED, DEQUEUE) != UNLOCKED) {
s.wait();
}
}
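SpinYield backs off progressively instead of burning a core on the raw cmpxchg loop; a rough standalone equivalent of the idiom (illustrative only, not HotSpot's SpinYield class):
#include <atomic>
#include <thread>

enum LockState { UNLOCKED, DEQUEUE };

static void acquire_dequeue_lock(std::atomic<int>& lock_word) {
  int spins = 0;
  int expected = UNLOCKED;
  while (!lock_word.compare_exchange_weak(expected, DEQUEUE)) {
    expected = UNLOCKED;               // compare_exchange_weak rewrites 'expected' on failure
    if (++spins < 100) {
      // brief busy spin first; cheap if the owner releases quickly
    } else {
      std::this_thread::yield();       // then yield the CPU, as SpinYield::wait() escalates
    }
  }
}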
void JfrThreadLocal::release_cpu_time_jfr_queue_lock() {

View File

@@ -47,7 +47,6 @@ void VectorSet::init(Arena* arena) {
// Expand the existing set to a bigger size
void VectorSet::grow(uint new_word_capacity) {
_nesting.check(_set_arena); // Check if a potential reallocation in the arena is safe
assert(new_word_capacity >= _size, "Should have been checked before, use maybe_grow?");
assert(new_word_capacity < (1U << 30), "");
uint x = next_power_of_2(new_word_capacity);

View File

@@ -52,6 +52,7 @@ private:
// Grow vector to required word capacity
void maybe_grow(uint new_word_capacity) {
_nesting.check(_set_arena); // Check if a potential reallocation in the arena is safe
if (new_word_capacity >= _size) {
grow(new_word_capacity);
}

View File

@@ -25,11 +25,12 @@
#include "nmt/memTag.hpp"
#include "runtime/os.hpp"
void* GuardedMemory::wrap_copy(const void* ptr, const size_t len, const void* tag) {
void* GuardedMemory::wrap_copy(const void* ptr, const size_t len,
const void* tag, const void* tag2) {
size_t total_sz = GuardedMemory::get_total_size(len);
void* outerp = os::malloc(total_sz, mtInternal);
if (outerp != nullptr) {
GuardedMemory guarded(outerp, len, tag);
GuardedMemory guarded(outerp, len, tag, tag2);
void* innerp = guarded.get_user_ptr();
if (ptr != nullptr) {
memcpy(innerp, ptr, len);
@@ -58,8 +59,8 @@ void GuardedMemory::print_on(outputStream* st) const {
return;
}
st->print_cr("GuardedMemory(" PTR_FORMAT ") base_addr=" PTR_FORMAT
" tag=" PTR_FORMAT " user_size=%zu user_data=" PTR_FORMAT,
p2i(this), p2i(_base_addr), p2i(get_tag()), get_user_size(), p2i(get_user_ptr()));
" tag=" PTR_FORMAT " tag2=" PTR_FORMAT " user_size=%zu user_data=" PTR_FORMAT,
p2i(this), p2i(_base_addr), p2i(get_tag()), p2i(get_tag2()), get_user_size(), p2i(get_user_ptr()));
Guard* guard = get_head_guard();
st->print_cr(" Header guard @" PTR_FORMAT " is %s", p2i(guard), (guard->verify() ? "OK" : "BROKEN"));

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2014, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -26,6 +26,7 @@
#define SHARE_MEMORY_GUARDEDMEMORY_HPP
#include "memory/allocation.hpp"
#include "runtime/os.hpp"
#include "utilities/globalDefinitions.hpp"
/**
@@ -43,13 +44,14 @@
* |base_addr | 0xABABABABABABABAB | Head guard |
* |+16 | <size_t:user_size> | User data size |
* |+sizeof(uintptr_t) | <tag> | Tag word |
* |+sizeof(uintptr_t) | <tag2> | Tag word |
* |+sizeof(void*) | 0xF1 <user_data> ( | User data |
* |+user_size | 0xABABABABABABABAB | Tail guard |
* -------------------------------------------------------------
*
* Where:
* - guard padding uses "badResourceValue" (0xAB)
* - tag word is general purpose
* - tag word and tag2 word are general purpose
* - user data
* -- initially padded with "uninitBlockPad" (0xF1),
* -- to "freeBlockPad" (0xBA), when freed
@@ -111,6 +113,10 @@ protected:
}
bool verify() const {
// We may not be able to dereference directly.
if (!os::is_readable_range((const void*) _guard, (const void*) (_guard + GUARD_SIZE))) {
return false;
}
u_char* c = (u_char*) _guard;
u_char* end = c + GUARD_SIZE;
while (c < end) {
@@ -137,6 +143,7 @@ protected:
size_t _user_size;
};
void* _tag;
void* _tag2;
public:
void set_user_size(const size_t usz) { _user_size = usz; }
size_t get_user_size() const { return _user_size; }
@@ -144,6 +151,8 @@ protected:
void set_tag(const void* tag) { _tag = (void*) tag; }
void* get_tag() const { return _tag; }
void set_tag2(const void* tag2) { _tag2 = (void*) tag2; }
void* get_tag2() const { return _tag2; }
}; // GuardedMemory::GuardHeader
// Guarded Memory...
@@ -162,9 +171,11 @@ protected:
* @param base_ptr allocation wishing to be wrapped, must be at least "GuardedMemory::get_total_size()" bytes.
* @param user_size the size of the user data to be wrapped.
* @param tag optional general purpose tag.
* @param tag2 optional second general purpose tag.
*/
GuardedMemory(void* base_ptr, const size_t user_size, const void* tag = nullptr) {
wrap_with_guards(base_ptr, user_size, tag);
GuardedMemory(void* base_ptr, const size_t user_size,
const void* tag = nullptr, const void* tag2 = nullptr) {
wrap_with_guards(base_ptr, user_size, tag, tag2);
}
/**
@@ -189,16 +200,19 @@ protected:
* @param base_ptr allocation wishing to be wrapped, must be at least "GuardedMemory::get_total_size()" bytes.
* @param user_size the size of the user data to be wrapped.
* @param tag optional general purpose tag.
* @param tag2 optional second general purpose tag.
*
* @return user data pointer (inner pointer to supplied "base_ptr").
*/
void* wrap_with_guards(void* base_ptr, size_t user_size, const void* tag = nullptr) {
void* wrap_with_guards(void* base_ptr, size_t user_size,
const void* tag = nullptr, const void* tag2 = nullptr) {
assert(base_ptr != nullptr, "Attempt to wrap null with memory guard");
_base_addr = (u_char*)base_ptr;
get_head_guard()->build();
get_head_guard()->set_user_size(user_size);
get_tail_guard()->build();
set_tag(tag);
set_tag2(tag2);
set_user_bytes(uninitBlockPad);
assert(verify_guards(), "Expected valid memory guards");
return get_user_ptr();
@@ -230,6 +244,20 @@ protected:
*/
void* get_tag() const { return get_head_guard()->get_tag(); }
/**
* Set the second general purpose tag.
*
* @param tag general purpose tag.
*/
void set_tag2(const void* tag) { get_head_guard()->set_tag2(tag); }
/**
* Return the second general purpose tag.
*
* @return the second general purpose tag, defaults to null.
*/
void* get_tag2() const { return get_head_guard()->get_tag2(); }
/**
* Return the size of the user data.
*
@@ -302,10 +330,12 @@ protected:
* @param ptr the memory to be copied
* @param len the length of the copy
* @param tag optional general purpose tag (see GuardedMemory::get_tag())
* @param tag2 optional general purpose tag (see GuardedMemory::get_tag2())
*
* @return guarded wrapped memory pointer to the user area, or null if OOM.
*/
static void* wrap_copy(const void* p, const size_t len, const void* tag = nullptr);
static void* wrap_copy(const void* p, const size_t len,
const void* tag = nullptr, const void* tag2 = nullptr);
/**
* Free wrapped copy.

View File

@@ -22,8 +22,11 @@
*
*/
#include "memory/resourceArea.hpp"
#include "cds/cdsConfig.hpp"
#include "oops/fieldInfo.inline.hpp"
#include "runtime/atomic.hpp"
#include "utilities/packedTable.hpp"
void FieldInfo::print(outputStream* os, ConstantPool* cp) {
os->print_cr("index=%d name_index=%d name=%s signature_index=%d signature=%s offset=%d "
@@ -37,8 +40,10 @@ void FieldInfo::print(outputStream* os, ConstantPool* cp) {
field_flags().as_uint(),
initializer_index(),
generic_signature_index(),
_field_flags.is_injected() ? lookup_symbol(generic_signature_index())->as_utf8() : cp->symbol_at(generic_signature_index())->as_utf8(),
contended_group());
_field_flags.is_generic() ? (_field_flags.is_injected() ?
lookup_symbol(generic_signature_index())->as_utf8() : cp->symbol_at(generic_signature_index())->as_utf8()
) : "",
is_contended() ? contended_group() : 0);
}
void FieldInfo::print_from_growable_array(outputStream* os, GrowableArray<FieldInfo>* array, ConstantPool* cp) {
@@ -62,13 +67,17 @@ Array<u1>* FieldInfoStream::create_FieldInfoStream(GrowableArray<FieldInfo>* fie
StreamSizer s;
StreamFieldSizer sizer(&s);
assert(fields->length() == java_fields + injected_fields, "must be");
sizer.consumer()->accept_uint(java_fields);
sizer.consumer()->accept_uint(injected_fields);
for (int i = 0; i < fields->length(); i++) {
FieldInfo* fi = fields->adr_at(i);
sizer.map_field_info(*fi);
}
int storage_size = sizer.consumer()->position() + 1;
// Originally the stream was terminated by an extra zero byte;
// now we bounds-check against the stream length instead.
int storage_size = sizer.consumer()->position();
Array<u1>* const fis = MetadataFactory::new_array<u1>(loader_data, storage_size, CHECK_NULL);
using StreamWriter = UNSIGNED5::Writer<Array<u1>*, int, ArrayHelper<Array<u1>*, int>>;
@@ -79,15 +88,14 @@ Array<u1>* FieldInfoStream::create_FieldInfoStream(GrowableArray<FieldInfo>* fie
writer.consumer()->accept_uint(java_fields);
writer.consumer()->accept_uint(injected_fields);
for (int i = 0; i < fields->length(); i++) {
FieldInfo* fi = fields->adr_at(i);
writer.map_field_info(*fi);
writer.map_field_info(fields->at(i));
}
#ifdef ASSERT
FieldInfoReader r(fis);
int jfc = r.next_uint();
int jfc, ifc;
r.read_field_counts(&jfc, &ifc);
assert(jfc == java_fields, "Must be");
int ifc = r.next_uint();
assert(ifc == injected_fields, "Must be");
for (int i = 0; i < jfc + ifc; i++) {
FieldInfo fi;
@@ -113,30 +121,221 @@ Array<u1>* FieldInfoStream::create_FieldInfoStream(GrowableArray<FieldInfo>* fie
return fis;
}
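The terminator-vs-limits remark above concerns the variable-length integer encoding of the stream; a generic LEB128-style varint (not HotSpot's UNSIGNED5 format, shown only to illustrate why an explicit length bound can replace a zero terminator):
#include <cstdint>
#include <vector>

static void put_varint(std::vector<uint8_t>& buf, uint32_t v) {
  while (v >= 0x80) { buf.push_back(uint8_t(v) | 0x80); v >>= 7; }  // 7 data bits + continuation bit
  buf.push_back(uint8_t(v));
}

struct VarintReader {
  const std::vector<uint8_t>& buf;
  size_t pos = 0;
  bool has_next() const { return pos < buf.size(); }  // bounds check, no sentinel byte needed
  uint32_t next() {
    uint32_t v = 0; int shift = 0;
    uint8_t b;
    do { b = buf[pos++]; v |= uint32_t(b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    return v;
  }
};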
GrowableArray<FieldInfo>* FieldInfoStream::create_FieldInfoArray(const Array<u1>* fis, int* java_fields_count, int* injected_fields_count) {
int length = FieldInfoStream::num_total_fields(fis);
GrowableArray<FieldInfo>* array = new GrowableArray<FieldInfo>(length);
int FieldInfoStream::compare_name_and_sig(const Symbol* n1, const Symbol* s1, const Symbol* n2, const Symbol* s2) {
int cmp = n1->fast_compare(n2);
return cmp != 0 ? cmp : s1->fast_compare(s2);
}
// We use both name and signature during the comparison; while the JLS requires unique
// field names, the JVMS requires only a unique name + signature combination (so a
// class file may legally declare, e.g., both an int and a long field named x).
struct field_pos {
Symbol* _name;
Symbol* _signature;
int _index;
int _position;
};
class FieldInfoSupplier: public PackedTableBuilder::Supplier {
const field_pos* _positions;
size_t _elements;
public:
FieldInfoSupplier(const field_pos* positions, size_t elements): _positions(positions), _elements(elements) {}
bool next(uint32_t* key, uint32_t* value) override {
if (_elements == 0) {
return false;
}
*key = _positions->_position;
*value = _positions->_index;
++_positions;
--_elements;
return true;
}
};
Array<u1>* FieldInfoStream::create_search_table(ConstantPool* cp, const Array<u1>* fis, ClassLoaderData* loader_data, TRAPS) {
if (CDSConfig::is_dumping_dynamic_archive()) {
// We cannot use a search table here: for dynamic archives it would have to be sorted by
// "requested" addresses, but the Symbol* addresses come from _constants, which holds
// "buffered" addresses. For background, see the comments inside allocate_node_impl in symbolTable.cpp.
return nullptr;
}
FieldInfoReader r(fis);
*java_fields_count = r.next_uint();
*injected_fields_count = r.next_uint();
int java_fields;
int injected_fields;
r.read_field_counts(&java_fields, &injected_fields);
assert(java_fields >= 0, "must be");
if (java_fields == 0 || fis->length() == 0 || static_cast<uint>(java_fields) < BinarySearchThreshold) {
return nullptr;
}
ResourceMark rm;
field_pos* positions = NEW_RESOURCE_ARRAY(field_pos, java_fields);
for (int i = 0; i < java_fields; ++i) {
assert(r.has_next(), "number of fields must match");
positions[i]._position = r.position();
FieldInfo fi;
r.read_field_info(fi);
positions[i]._name = fi.name(cp);
positions[i]._signature = fi.signature(cp);
positions[i]._index = i;
}
auto compare_pair = [](const void* v1, const void* v2) {
const field_pos* p1 = reinterpret_cast<const field_pos*>(v1);
const field_pos* p2 = reinterpret_cast<const field_pos*>(v2);
return compare_name_and_sig(p1->_name, p1->_signature, p2->_name, p2->_signature);
};
qsort(positions, java_fields, sizeof(field_pos), compare_pair);
PackedTableBuilder builder(fis->length() - 1, java_fields - 1);
Array<u1>* table = MetadataFactory::new_array<u1>(loader_data, java_fields * builder.element_bytes(), CHECK_NULL);
FieldInfoSupplier supplier(positions, java_fields);
builder.fill(table->data(), static_cast<size_t>(table->length()), supplier);
return table;
}
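The table built above is consumed by a binary search keyed on (name, signature); a simplified stand-in using std::vector entries instead of the packed fixed-width byte encoding (lookup logic only, with hypothetical types):
#include <string>
#include <vector>

struct Entry { std::string name, sig; int index; };  // entries sorted by (name, sig)

static int lookup(const std::vector<Entry>& table,
                  const std::string& name, const std::string& sig) {
  size_t lo = 0, hi = table.size();
  while (lo < hi) {                       // classic half-open binary search
    size_t mid = lo + (hi - lo) / 2;
    const Entry& e = table[mid];
    int cmp = (name != e.name) ? name.compare(e.name) : sig.compare(e.sig);
    if (cmp == 0) return e.index;         // field's index in original stream order
    if (cmp < 0) hi = mid; else lo = mid + 1;
  }
  return -1;                              // not found
}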
GrowableArray<FieldInfo>* FieldInfoStream::create_FieldInfoArray(const Array<u1>* fis, int* java_fields_count, int* injected_fields_count) {
FieldInfoReader r(fis);
r.read_field_counts(java_fields_count, injected_fields_count);
int length = *java_fields_count + *injected_fields_count;
GrowableArray<FieldInfo>* array = new GrowableArray<FieldInfo>(length);
while (r.has_next()) {
FieldInfo fi;
r.read_field_info(fi);
array->append(fi);
}
assert(array->length() == length, "Must be");
assert(array->length() == *java_fields_count + *injected_fields_count, "Must be");
return array;
}
void FieldInfoStream::print_from_fieldinfo_stream(Array<u1>* fis, outputStream* os, ConstantPool* cp) {
int length = FieldInfoStream::num_total_fields(fis);
FieldInfoReader r(fis);
int java_field_count = r.next_uint();
int injected_fields_count = r.next_uint();
int java_fields_count;
int injected_fields_count;
r.read_field_counts(&java_fields_count, &injected_fields_count);
while (r.has_next()) {
FieldInfo fi;
r.read_field_info(fi);
fi.print(os, cp);
}
}
class FieldInfoComparator: public PackedTableLookup::Comparator {
const FieldInfoReader* _reader;
ConstantPool* _cp;
const Symbol* _name;
const Symbol* _signature;
public:
FieldInfoComparator(const FieldInfoReader* reader, ConstantPool* cp, const Symbol* name, const Symbol* signature):
_reader(reader), _cp(cp), _name(name), _signature(signature) {}
int compare_to(uint32_t position) override {
FieldInfoReader r2(*_reader);
r2.set_position_and_next_index(position, -1);
u2 name_index, sig_index;
r2.read_name_and_signature(&name_index, &sig_index);
Symbol* mid_name = _cp->symbol_at(name_index);
Symbol* mid_sig = _cp->symbol_at(sig_index);
return FieldInfoStream::compare_name_and_sig(_name, _signature, mid_name, mid_sig);
}
#ifdef ASSERT
void reset(uint32_t position) override {
FieldInfoReader r2(*_reader);
r2.set_position_and_next_index(position, -1);
u2 name_index, signature_index;
r2.read_name_and_signature(&name_index, &signature_index);
_name = _cp->symbol_at(name_index);
_signature = _cp->symbol_at(signature_index);
}
#endif // ASSERT
};
#ifdef ASSERT
void FieldInfoStream::validate_search_table(ConstantPool* cp, const Array<u1>* fis, const Array<u1>* search_table) {
if (search_table == nullptr) {
return;
}
FieldInfoReader reader(fis);
int java_fields, injected_fields;
reader.read_field_counts(&java_fields, &injected_fields);
assert(java_fields > 0, "must be");
PackedTableLookup lookup(fis->length() - 1, java_fields - 1, search_table);
assert(lookup.element_bytes() * java_fields == static_cast<unsigned int>(search_table->length()), "size does not match");
FieldInfoComparator comparator(&reader, cp, nullptr, nullptr);
// Check 1: assert that elements have the correct order based on the comparison function
lookup.validate_order(comparator);
// Check 2: Iterate through the original stream (not just search_table) and verify that lookup works as expected
reader.set_position_and_next_index(0, 0);
reader.read_field_counts(&java_fields, &injected_fields);
while (reader.has_next()) {
int field_start = reader.position();
FieldInfo fi;
reader.read_field_info(fi);
if (fi.field_flags().is_injected()) {
// checking only java fields that precede injected ones
break;
}
FieldInfoReader r2(fis);
int index = r2.search_table_lookup(search_table, fi.name(cp), fi.signature(cp), cp, java_fields);
assert(index == static_cast<int>(fi.index()), "wrong index: %d != %u", index, fi.index());
assert(index == r2.next_index(), "index should match");
assert(field_start == r2.position(), "must find the same position");
}
}
#endif // ASSERT
void FieldInfoStream::print_search_table(outputStream* st, ConstantPool* cp, const Array<u1>* fis, const Array<u1>* search_table) {
if (search_table == nullptr) {
return;
}
FieldInfoReader reader(fis);
int java_fields, injected_fields;
reader.read_field_counts(&java_fields, &injected_fields);
assert(java_fields > 0, "must be");
PackedTableLookup lookup(fis->length() - 1, java_fields - 1, search_table);
auto printer = [&] (size_t offset, uint32_t position, uint32_t index) {
reader.set_position_and_next_index(position, -1);
u2 name_index, sig_index;
reader.read_name_and_signature(&name_index, &sig_index);
Symbol* name = cp->symbol_at(name_index);
Symbol* sig = cp->symbol_at(sig_index);
st->print(" [%zu] #%d,#%d = ", offset, name_index, sig_index);
name->print_symbol_on(st);
st->print(":");
sig->print_symbol_on(st);
st->print(" @ %p,%p", name, sig);
st->cr();
};
lookup.iterate(printer);
}
int FieldInfoReader::search_table_lookup(const Array<u1>* search_table, const Symbol* name, const Symbol* signature, ConstantPool* cp, int java_fields) {
assert(java_fields >= 0, "must be");
if (java_fields == 0) {
return -1;
}
FieldInfoComparator comp(this, cp, name, signature);
PackedTableLookup lookup(_r.limit() - 1, java_fields - 1, search_table);
uint32_t position;
static_assert(sizeof(uint32_t) == sizeof(_next_index), "field size assert");
if (lookup.search(comp, &position, reinterpret_cast<uint32_t*>(&_next_index))) {
_r.set_position(static_cast<int>(position));
return _next_index;
} else {
return -1;
}
}

View File

@@ -222,29 +222,28 @@ public:
void map_field_info(const FieldInfo& fi);
};
// Gadget for decoding and reading the stream of field records.
class FieldInfoReader {
friend class FieldInfoStream;
friend class ClassFileParser;
friend class FieldStreamBase;
friend class FieldInfo;
UNSIGNED5::Reader<const u1*, int> _r;
int _next_index;
public:
FieldInfoReader(const Array<u1>* fi);
private:
inline uint32_t next_uint() { return _r.next_uint(); }
void skip(int n) { int s = _r.try_skip(n); assert(s == n,""); }
public:
int has_next() const { return _r.position() < _r.limit(); }
int position() const { return _r.position(); }
int next_index() const { return _next_index; }
void read_field_counts(int* java_fields, int* injected_fields);
void read_name_and_signature(u2* name_index, u2* signature_index);
void read_field_info(FieldInfo& fi);
int search_table_lookup(const Array<u1>* search_table, const Symbol* name, const Symbol* signature, ConstantPool* cp, int java_fields);
// skip a whole field record, both required and optional bits
FieldInfoReader& skip_field_info();
@@ -271,6 +270,11 @@ class FieldInfoStream : AllStatic {
friend class JavaFieldStream;
friend class FieldStreamBase;
friend class ClassFileParser;
friend class FieldInfoReader;
friend class FieldInfoComparator;
private:
static int compare_name_and_sig(const Symbol* n1, const Symbol* s1, const Symbol* n2, const Symbol* s2);
public:
static int num_java_fields(const Array<u1>* fis);
@@ -278,9 +282,14 @@ class FieldInfoStream : AllStatic {
static int num_total_fields(const Array<u1>* fis);
static Array<u1>* create_FieldInfoStream(GrowableArray<FieldInfo>* fields, int java_fields, int injected_fields,
ClassLoaderData* loader_data, TRAPS);
static Array<u1>* create_search_table(ConstantPool* cp, const Array<u1>* fis, ClassLoaderData* loader_data, TRAPS);
static GrowableArray<FieldInfo>* create_FieldInfoArray(const Array<u1>* fis, int* java_fields_count, int* injected_fields_count);
static void print_from_fieldinfo_stream(Array<u1>* fis, outputStream* os, ConstantPool* cp);
DEBUG_ONLY(static void validate_search_table(ConstantPool* cp, const Array<u1>* fis, const Array<u1>* search_table);)
static void print_search_table(outputStream* st, ConstantPool* cp, const Array<u1>* fis, const Array<u1>* search_table);
};
class FieldStatus {

View File

@@ -56,16 +56,27 @@ inline Symbol* FieldInfo::lookup_symbol(int symbol_index) const {
inline int FieldInfoStream::num_injected_java_fields(const Array<u1>* fis) {
FieldInfoReader fir(fis);
int java_fields_count;
int injected_fields_count;
fir.read_field_counts(&java_fields_count, &injected_fields_count);
return injected_fields_count;
}
inline int FieldInfoStream::num_total_fields(const Array<u1>* fis) {
FieldInfoReader fir(fis);
int java_fields_count;
int injected_fields_count;
fir.read_field_counts(&java_fields_count, &injected_fields_count);
return java_fields_count + injected_fields_count;
}
inline int FieldInfoStream::num_java_fields(const Array<u1>* fis) {
FieldInfoReader fir(fis);
int java_fields_count;
int injected_fields_count;
fir.read_field_counts(&java_fields_count, &injected_fields_count);
return java_fields_count;
}
template<typename CON>
inline void Mapper<CON>::map_field_info(const FieldInfo& fi) {
@@ -94,13 +105,22 @@ inline void Mapper<CON>::map_field_info(const FieldInfo& fi) {
inline FieldInfoReader::FieldInfoReader(const Array<u1>* fi)
: _r(fi->data(), fi->length()),
_next_index(0) { }
inline void FieldInfoReader::read_field_counts(int* java_fields, int* injected_fields) {
*java_fields = next_uint();
*injected_fields = next_uint();
}
inline void FieldInfoReader::read_name_and_signature(u2* name_index, u2* signature_index) {
*name_index = checked_cast<u2>(next_uint());
*signature_index = checked_cast<u2>(next_uint());
}
inline void FieldInfoReader::read_field_info(FieldInfo& fi) {
fi._index = _next_index++;
read_name_and_signature(&fi._name_index, &fi._signature_index);
fi._offset = next_uint();
fi._access_flags = AccessFlags(checked_cast<u2>(next_uint()));
fi._field_flags = FieldInfo::FieldFlags(next_uint());

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2011, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -56,17 +56,23 @@ class FieldStreamBase : public StackObj {
inline FieldStreamBase(const Array<u1>* fieldinfo_stream, ConstantPool* constants, int start, int limit);
inline FieldStreamBase(const Array<u1>* fieldinfo_stream, ConstantPool* constants);
private:
void initialize() {
int java_fields_count;
int injected_fields_count;
_reader.read_field_counts(&java_fields_count, &injected_fields_count);
if (_limit < _index) {
_limit = java_fields_count + injected_fields_count;
} else {
assert( _limit <= java_fields_count + injected_fields_count, "Safety check");
}
if (_limit != 0) {
_reader.read_field_info(_fi_buf);
}
}
public:
inline FieldStreamBase(InstanceKlass* klass);
@@ -138,8 +144,11 @@ class FieldStreamBase : public StackObj {
// Iterate over only the Java fields
class JavaFieldStream : public FieldStreamBase {
Array<u1>* _search_table;
public:
JavaFieldStream(const InstanceKlass* k): FieldStreamBase(k->fieldinfo_stream(), k->constants(), 0, k->java_fields_count()),
_search_table(k->fieldinfo_search_table()) {}
u2 name_index() const {
assert(!field()->field_flags().is_injected(), "regular only");
@@ -149,7 +158,6 @@ class JavaFieldStream : public FieldStreamBase {
u2 signature_index() const {
assert(!field()->field_flags().is_injected(), "regular only");
return field()->signature_index();
}
u2 generic_signature_index() const {
@@ -164,6 +172,10 @@ class JavaFieldStream : public FieldStreamBase {
assert(!field()->field_flags().is_injected(), "regular only");
return field()->initializer_index();
}
// Performs either a linear search or binary search through the stream
// looking for a matching name/signature combo
bool lookup(const Symbol* name, const Symbol* signature);
};
@@ -176,7 +188,6 @@ class InternalFieldStream : public FieldStreamBase {
class AllFieldStream : public FieldStreamBase {
public:
AllFieldStream(const InstanceKlass* k): FieldStreamBase(k->fieldinfo_stream(), k->constants()) {}
};

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -33,22 +33,18 @@
FieldStreamBase::FieldStreamBase(const Array<u1>* fieldinfo_stream, ConstantPool* constants, int start, int limit) :
_fieldinfo_stream(fieldinfo_stream),
_reader(FieldInfoReader(_fieldinfo_stream)),
_constants(constantPoolHandle(Thread::current(), constants)),
_index(start),
_limit(limit) {
initialize();
}
FieldStreamBase::FieldStreamBase(const Array<u1>* fieldinfo_stream, ConstantPool* constants) :
_fieldinfo_stream(fieldinfo_stream),
_reader(FieldInfoReader(_fieldinfo_stream)),
_constants(constantPoolHandle(Thread::current(), constants)),
_index(0),
_limit(-1) {
initialize();
}
@@ -57,9 +53,28 @@ FieldStreamBase::FieldStreamBase(InstanceKlass* klass) :
_reader(FieldInfoReader(_fieldinfo_stream)),
_constants(constantPoolHandle(Thread::current(), klass->constants())),
_index(0),
_limit(-1) {
assert(klass == field_holder(), "");
initialize();
}
inline bool JavaFieldStream::lookup(const Symbol* name, const Symbol* signature) {
if (_search_table != nullptr) {
int index = _reader.search_table_lookup(_search_table, name, signature, _constants(), _limit);
if (index >= 0) {
assert(index < _limit, "must be");
_index = index;
_reader.read_field_info(_fi_buf);
return true;
}
} else {
for (; !done(); next()) {
if (this->name() == name && this->signature() == signature) {
return true;
}
}
}
return false;
}
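// Typical use (see InstanceKlass::find_local_field below): construct a JavaFieldStream over the
// klass and call lookup(); on success the stream is positioned at the match, so name(), signature()
// and to_FieldInfo() describe the found field.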
#endif // SHARE_OOPS_FIELDSTREAMS_INLINE_HPP

View File

@@ -686,6 +686,11 @@ void InstanceKlass::deallocate_contents(ClassLoaderData* loader_data) {
}
set_fieldinfo_stream(nullptr);
if (fieldinfo_search_table() != nullptr && !fieldinfo_search_table()->is_shared()) {
MetadataFactory::free_array<u1>(loader_data, fieldinfo_search_table());
}
set_fieldinfo_search_table(nullptr);
if (fields_status() != nullptr && !fields_status()->is_shared()) {
MetadataFactory::free_array<FieldStatus>(loader_data, fields_status());
}
@@ -1786,13 +1791,12 @@ FieldInfo InstanceKlass::field(int index) const {
}
bool InstanceKlass::find_local_field(Symbol* name, Symbol* sig, fieldDescriptor* fd) const {
JavaFieldStream fs(this);
if (fs.lookup(name, sig)) {
assert(fs.name() == name, "name must match");
assert(fs.signature() == sig, "signature must match");
fd->reinitialize(const_cast<InstanceKlass*>(this), fs.to_FieldInfo());
return true;
}
return false;
}
@@ -2610,6 +2614,7 @@ void InstanceKlass::metaspace_pointers_do(MetaspaceClosure* it) {
}
it->push(&_fieldinfo_stream);
it->push(&_fieldinfo_search_table);
// _fields_status might be written into by Rewriter::scan_method() -> fd.set_has_initialized_final_update()
it->push(&_fields_status, MetaspaceClosure::_writable);
@@ -2710,6 +2715,8 @@ void InstanceKlass::remove_unshareable_info() {
DEBUG_ONLY(_shared_class_load_count = 0);
remove_unshareable_flags();
DEBUG_ONLY(FieldInfoStream::validate_search_table(_constants, _fieldinfo_stream, _fieldinfo_search_table));
}
void InstanceKlass::remove_unshareable_flags() {
@@ -2816,6 +2823,8 @@ void InstanceKlass::restore_unshareable_info(ClassLoaderData* loader_data, Handl
if (DiagnoseSyncOnValueBasedClasses && has_value_based_class_annotation() && !is_value_based()) {
set_is_value_based();
}
DEBUG_ONLY(FieldInfoStream::validate_search_table(_constants, _fieldinfo_stream, _fieldinfo_search_table));
}
// Check if a class or any of its supertypes has a version older than 50.
@@ -3760,6 +3769,11 @@ void InstanceKlass::print_on(outputStream* st) const {
map++;
}
st->cr();
if (fieldinfo_search_table() != nullptr) {
st->print_cr(BULLET"---- field info search table:");
FieldInfoStream::print_search_table(st, _constants, _fieldinfo_stream, _fieldinfo_search_table);
}
}
void InstanceKlass::print_value_on(outputStream* st) const {

View File

@@ -276,6 +276,7 @@ class InstanceKlass: public Klass {
// Fields information is stored in an UNSIGNED5 encoded stream (see fieldInfo.hpp)
Array<u1>* _fieldinfo_stream;
Array<u1>* _fieldinfo_search_table;
Array<FieldStatus>* _fields_status;
// embedded Java vtable follows here
@@ -398,6 +399,9 @@ class InstanceKlass: public Klass {
Array<u1>* fieldinfo_stream() const { return _fieldinfo_stream; }
void set_fieldinfo_stream(Array<u1>* fis) { _fieldinfo_stream = fis; }
Array<u1>* fieldinfo_search_table() const { return _fieldinfo_search_table; }
void set_fieldinfo_search_table(Array<u1>* table) { _fieldinfo_search_table = table; }
Array<FieldStatus>* fields_status() const {return _fields_status; }
void set_fields_status(Array<FieldStatus>* array) { _fields_status = array; }

View File

@@ -1180,38 +1180,42 @@ void klassItable::check_constraints(GrowableArray<Method*>* supers, TRAPS) {
Method* interface_method = supers->at(i); // method overridden
if (target != nullptr && interface_method != nullptr) {
// Do not check loader constraints for overpass methods because overpass
// methods are created by the jvm to throw exceptions.
if (!target->is_overpass()) {
InstanceKlass* method_holder = target->method_holder();
InstanceKlass* interf = interface_method->method_holder();
HandleMark hm(THREAD);
Handle method_holder_loader(THREAD, method_holder->class_loader());
Handle interface_loader(THREAD, interf->class_loader());
if (method_holder_loader() != interface_loader()) {
ResourceMark rm(THREAD);
Symbol* failed_type_symbol =
SystemDictionary::check_signature_loaders(target->signature(),
_klass,
method_holder_loader,
interface_loader,
true);
if (failed_type_symbol != nullptr) {
stringStream ss;
ss.print("loader constraint violation in interface itable"
" initialization for class %s: when selecting method '",
_klass->external_name());
interface_method->print_external_name(&ss),
ss.print("' the class loader %s for super interface %s, and the class"
" loader %s of the selected method's %s, %s have"
" different Class objects for the type %s used in the signature (%s; %s)",
interf->class_loader_data()->loader_name_and_id(),
interf->external_name(),
method_holder->class_loader_data()->loader_name_and_id(),
method_holder->external_kind(),
method_holder->external_name(),
failed_type_symbol->as_klass_external_name(),
interf->class_in_module_of_loader(false, true),
method_holder->class_in_module_of_loader(false, true));
THROW_MSG(vmSymbols::java_lang_LinkageError(), ss.as_string());
}
}
}
}
@@ -1333,11 +1337,9 @@ void klassItable::initialize_itable_for_interface(int method_table_offset, Insta
target = LinkResolver::lookup_instance_method_in_klasses(_klass, m->name(), m->signature(),
Klass::PrivateLookupMode::skip);
}
assert(target == nullptr || !target->is_overpass() || target->is_public(),
"Non-public overpass method!");
if (target == nullptr || !target->is_public() || target->is_abstract()) {
// Entry does not resolve. Leave it empty for AbstractMethodError or other error.
if (target != nullptr && !target->is_public()) {
// Stuff an IllegalAccessError throwing method in there instead.
itableOffsetEntry::method_entry(_klass, method_table_offset)[m->itable_index()].
initialize(_klass, Universe::throw_illegal_access_error());

View File

@@ -1860,7 +1860,7 @@ public:
};
Mutex* MethodData::extra_data_lock() {
Mutex* lock = Atomic::load_acquire(&_extra_data_lock);
if (lock == nullptr) {
// This lock could be acquired while we are holding DumpTimeTable_lock/nosafepoint
lock = new Mutex(Mutex::nosafepoint-1, "MDOExtraData_lock");

View File

@@ -34,6 +34,7 @@
#include "memory/metaspaceClosure.hpp"
#include "oops/instanceKlass.hpp"
#include "oops/method.hpp"
#include "oops/objArrayKlass.hpp"
#include "runtime/handles.hpp"
#include "runtime/mutexLocker.hpp"
#include "utilities/resizeableResourceHash.hpp"
@@ -286,7 +287,12 @@ private:
static bool is_klass_loaded(Klass* k) {
if (have_data()) {
// If we're running in AOT mode some classes may not be loaded yet
if (k->is_objArray_klass()) {
k = ObjArrayKlass::cast(k)->bottom_klass();
}
if (k->is_instance_klass()) {
return InstanceKlass::cast(k)->is_loaded();
}
}
return true;
}

View File

@@ -37,11 +37,8 @@
#include "utilities/copy.hpp"
#include "utilities/powerOfTwo.hpp"
void Block_Array::grow(uint i) {
assert(i >= Max(), "Should have been checked before, use maybe_grow?");
DEBUG_ONLY(_limit = i+1);
if( i < _size ) return;
if( !_size ) {

View File

@@ -53,7 +53,13 @@ class Block_Array : public ArenaObj {
ReallocMark _nesting; // Safety checks for arena reallocation
protected:
Block **_blocks;
void maybe_grow(uint i) {
_nesting.check(_arena); // Check if a potential reallocation in the arena is safe
if (i >= Max()) {
grow(i);
}
}
void grow(uint i); // Grow array node to fit
public:
Block_Array(Arena *a) : _size(OptoBlockListSize), _arena(a) {
@@ -68,7 +74,7 @@ public:
Block *operator[] ( uint i ) const // Lookup, or assert for not mapped
{ assert( i < Max(), "oob" ); return _blocks[i]; }
// Extend the mapping: index i maps to Block *n.
void map( uint i, Block *n ) { maybe_grow(i); _blocks[i] = n; }
uint Max() const { DEBUG_ONLY(return _limit); return _size; }
};

View File

@@ -298,19 +298,52 @@ IdealLoopTree* PhaseIdealLoop::insert_outer_loop(IdealLoopTree* loop, LoopNode*
return outer_ilt;
}
// Create a skeleton strip mined outer loop: an OuterStripMinedLoop head before the inner strip mined CountedLoop, a
// SafePoint on exit of the inner CountedLoopEnd and an OuterStripMinedLoopEnd test that can't constant fold until loop
// optimizations are over. The inner strip mined loop is left as it is. Only once loop optimizations are over, do we
// adjust the inner loop exit condition to limit its number of iterations, set the outer loop exit condition and add
// Phis to the outer loop head. Some loop optimizations that operate on the inner strip mined loop need to be aware of
// the outer strip mined loop: loop unswitching needs to clone the outer loop as well as the inner, unrolling needs to
// only clone the inner loop etc. No optimizations need to change the outer strip mined loop as it is only a skeleton.
//
// Schematically:
//
// OuterStripMinedLoop -------|
// | |
// CountedLoop ----------- | |
// \- Phi (iv) -| | |
// / \ | | |
// CmpI AddI --| | |
// \ | |
// Bool | |
// \ | |
// CountedLoopEnd | |
// / \ | |
// IfFalse IfTrue--------| |
// | |
// SafePoint |
// | |
// OuterStripMinedLoopEnd |
// / \ |
// IfFalse IfTrue-----------|
// |
//
//
// As loop optimizations transform the inner loop, the outer strip mined loop stays mostly unchanged. The only exception
// is nodes referenced from the SafePoint and sunk from the inner loop: they end up in the outer strip mined loop.
//
// Not adding Phis to the outer loop head from the beginning, and only adding them after loop optimizations does not
// conform to C2's IR rules: any variable or memory slice that is mutated in a loop should have a Phi. The main
// motivation for such a design that doesn't conform to C2's IR rules is to allow existing loop optimizations to be
// mostly unaffected by the outer strip mined loop: the only extra step needed in most cases is to step over the
// OuterStripMinedLoop. The main drawback is that once loop optimizations are over, an extra step is needed to finish
// constructing the outer loop. This is handled by OuterStripMinedLoopNode::adjust_strip_mined_loop().
//
// Adding Phis to the outer loop is largely straightforward: there needs to be one Phi in the outer loop for every Phi
// in the inner loop. Things may be more complicated for sunk Store nodes: there may not be any inner loop Phi left
// after sinking for a particular memory slice but the outer loop needs a Phi. See
// OuterStripMinedLoopNode::handle_sunk_stores_when_finishing_construction()
IdealLoopTree* PhaseIdealLoop::create_outer_strip_mined_loop(Node* init_control,
IdealLoopTree* loop, float cl_prob, float le_fcnt,
Node*& entry_control, Node*& iffalse) {
Node* outer_test = intcon(0);
@@ -2255,9 +2288,8 @@ bool PhaseIdealLoop::is_counted_loop(Node* x, IdealLoopTree*&loop, BasicType iv_
is_deleteable_safept(sfpt);
IdealLoopTree* outer_ilt = nullptr;
if (strip_mine_loop) {
outer_ilt = create_outer_strip_mined_loop(init_control, loop, cl_prob, le->_fcnt,
entry_control, iffalse);
}
// Now setup a new CountedLoopNode to replace the existing LoopNode
@@ -2870,10 +2902,11 @@ BaseCountedLoopNode* BaseCountedLoopNode::make(Node* entry, Node* backedge, Basi
return new LongCountedLoopNode(entry, backedge);
}
void OuterStripMinedLoopNode::fix_sunk_stores_when_back_to_counted_loop(PhaseIterGVN* igvn,
PhaseIdealLoop* iloop) const {
CountedLoopNode* inner_cl = inner_counted_loop();
IfFalseNode* cle_out = inner_loop_exit();
if (cle_out->outcnt() > 1) {
// Look for chains of stores that were sunk
// out of the inner loop and are in the outer loop
@@ -2988,11 +3021,90 @@ void OuterStripMinedLoopNode::fix_sunk_stores(CountedLoopEndNode* inner_cle, Loo
}
}
// The outer strip mined loop is initially only partially constructed. In particular Phis are omitted.
// See comment above: PhaseIdealLoop::create_outer_strip_mined_loop()
// We're now in the process of finishing the construction of the outer loop. For each Phi in the inner loop, a Phi in
// the outer loop was just now created. However, Sunk Stores cause an extra challenge:
// 1) If all Stores in the inner loop were sunk for a particular memory slice, there's no Phi left for that memory slice
// in the inner loop anymore, and hence we did not yet add a Phi for the outer loop. So an extra Phi must now be
// added for each chain of sunk Stores for a particular memory slice.
// 2) If some Stores were sunk and some left in the inner loop, a Phi was already created in the outer loop but
// its backedge input wasn't wired correctly to the last Store of the chain: the backedge input was set to the
// backedge of the inner loop Phi instead, but it needs to be the last Store of the chain in the outer loop. We now
// have to fix that too.
void OuterStripMinedLoopNode::handle_sunk_stores_when_finishing_construction(PhaseIterGVN* igvn) {
IfFalseNode* cle_exit_proj = inner_loop_exit();
// Find Sunk stores: Sunk stores are pinned on the loop exit projection of the inner loop. Indeed, because Sunk Stores
// modify the memory state captured by the SafePoint in the outer strip mined loop, they must be above it. The
// SafePoint's control input is the loop exit projection. It's also the only control out of the inner loop above the
// SafePoint.
#ifdef ASSERT
int stores_in_outer_loop_cnt = 0;
for (DUIterator_Fast imax, i = cle_exit_proj->fast_outs(imax); i < imax; i++) {
Node* u = cle_exit_proj->fast_out(i);
if (u->is_Store()) {
stores_in_outer_loop_cnt++;
}
}
#endif
// Sunk stores are reachable from the memory state of the outer loop safepoint
SafePointNode* safepoint = outer_safepoint();
MergeMemNode* mm = safepoint->in(TypeFunc::Memory)->isa_MergeMem();
if (mm == nullptr) {
// There is no MergeMem, which should only happen if there was no memory node
// sunk out of the loop.
assert(stores_in_outer_loop_cnt == 0, "inconsistent");
return;
}
DEBUG_ONLY(int stores_in_outer_loop_cnt2 = 0);
for (MergeMemStream mms(mm); mms.next_non_empty();) {
Node* mem = mms.memory();
// Traverse up the chain of stores to find the first store pinned
// at the loop exit projection.
Node* last = mem;
Node* first = nullptr;
while (mem->is_Store() && mem->in(0) == cle_exit_proj) {
DEBUG_ONLY(stores_in_outer_loop_cnt2++);
first = mem;
mem = mem->in(MemNode::Memory);
}
if (first != nullptr) {
// Found a chain of Stores that were sunk
// Do we already have a memory Phi for that slice on the outer loop? If that is the case, that Phi was created
// by cloning an inner loop Phi. The inner loop Phi should have mem, the memory state of the first Store out of
// the inner loop, as input on the backedge. So does the outer loop Phi given it's a clone.
Node* phi = nullptr;
for (DUIterator_Fast imax, i = mem->fast_outs(imax); i < imax; i++) {
Node* u = mem->fast_out(i);
if (u->is_Phi() && u->in(0) == this && u->in(LoopBackControl) == mem) {
assert(phi == nullptr, "there should be only one");
phi = u;
PRODUCT_ONLY(break);
}
}
if (phi == nullptr) {
// No outer loop Phi? create one
phi = PhiNode::make(this, last);
phi->set_req(EntryControl, mem);
phi = igvn->transform(phi);
igvn->replace_input_of(first, MemNode::Memory, phi);
} else {
// Fix memory state along the backedge: it should be the last sunk Store of the chain
igvn->replace_input_of(phi, LoopBackControl, last);
}
}
}
assert(stores_in_outer_loop_cnt == stores_in_outer_loop_cnt2, "inconsistent");
}
void OuterStripMinedLoopNode::adjust_strip_mined_loop(PhaseIterGVN* igvn) {
verify_strip_mined(1);
// Look for the outer & inner strip mined loop, reduce number of
// iterations of the inner loop, set exit condition of outer loop,
// construct required phi nodes for outer loop.
CountedLoopNode* inner_cl = inner_counted_loop();
assert(inner_cl->is_strip_mined(), "inner loop should be strip mined");
if (LoopStripMiningIter == 0) {
remove_outer_loop_and_safepoint(igvn);
@@ -3010,7 +3122,7 @@ void OuterStripMinedLoopNode::adjust_strip_mined_loop(PhaseIterGVN* igvn) {
inner_cl->clear_strip_mined();
return;
}
CountedLoopEndNode* inner_cle = inner_counted_loop_end();
int stride = inner_cl->stride_con();
// For a min int stride, LoopStripMiningIter * stride overflows the int range for all values of LoopStripMiningIter
@@ -3091,8 +3203,9 @@ void OuterStripMinedLoopNode::adjust_strip_mined_loop(PhaseIterGVN* igvn) {
}
Node* iv_phi = nullptr;
// Make a clone of each phi in the inner loop for the outer loop
// When Stores were Sunk, after this step, a Phi may still be missing or its backedge incorrectly wired. See
// handle_sunk_stores_when_finishing_construction()
for (uint i = 0; i < inner_cl->outcnt(); i++) {
Node* u = inner_cl->raw_out(i);
if (u->is_Phi()) {
@@ -3111,6 +3224,8 @@ void OuterStripMinedLoopNode::adjust_strip_mined_loop(PhaseIterGVN* igvn) {
}
}
handle_sunk_stores_when_finishing_construction(igvn);
if (iv_phi != nullptr) {
// Now adjust the inner loop's exit condition
Node* limit = inner_cl->limit();
@@ -3166,7 +3281,7 @@ void OuterStripMinedLoopNode::transform_to_counted_loop(PhaseIterGVN* igvn, Phas
CountedLoopEndNode* inner_cle = inner_cl->loopexit();
Node* safepoint = outer_safepoint();
fix_sunk_stores_when_back_to_counted_loop(igvn, iloop);
// make counted loop exit test always fail
ConINode* zero = igvn->intcon(0);

View File

@@ -573,7 +573,8 @@ class LoopLimitNode : public Node {
// Support for strip mining
class OuterStripMinedLoopNode : public LoopNode {
private:
void fix_sunk_stores_when_back_to_counted_loop(PhaseIterGVN* igvn, PhaseIdealLoop* iloop) const;
void handle_sunk_stores_when_finishing_construction(PhaseIterGVN* igvn);
public:
OuterStripMinedLoopNode(Compile* C, Node *entry, Node *backedge)
@@ -589,6 +590,10 @@ public:
virtual OuterStripMinedLoopEndNode* outer_loop_end() const;
virtual IfFalseNode* outer_loop_exit() const;
virtual SafePointNode* outer_safepoint() const;
CountedLoopNode* inner_counted_loop() const { return unique_ctrl_out()->as_CountedLoop(); }
CountedLoopEndNode* inner_counted_loop_end() const { return inner_counted_loop()->loopexit(); }
IfFalseNode* inner_loop_exit() const { return inner_counted_loop_end()->proj_out(false)->as_IfFalse(); }
void adjust_strip_mined_loop(PhaseIterGVN* igvn);
void remove_outer_loop_and_safepoint(PhaseIterGVN* igvn) const;
@@ -1293,7 +1298,7 @@ public:
void add_parse_predicate(Deoptimization::DeoptReason reason, Node* inner_head, IdealLoopTree* loop, SafePointNode* sfpt);
SafePointNode* find_safepoint(Node* back_control, Node* x, IdealLoopTree* loop);
IdealLoopTree* insert_outer_loop(IdealLoopTree* loop, LoopNode* outer_l, Node* outer_ift);
IdealLoopTree* create_outer_strip_mined_loop(Node* init_control,
IdealLoopTree* loop, float cl_prob, float le_fcnt,
Node*& entry_control, Node*& iffalse);

View File

@@ -65,13 +65,8 @@ public:
Node_Stack::push(n, (uint)ns);
}
void push(Node *n, Node_State ns, Node *parent, int indx) {
Node_Stack::push(parent, (uint)indx);
Node_Stack::push(n, (uint)ns);
}
Node *parent() {
pop();

View File

@@ -2798,7 +2798,6 @@ const RegMask &Node::in_RegMask(uint) const {
}
void Node_Array::grow(uint i) {
assert(i >= _max, "Should have been checked before, use maybe_grow?");
assert(_max > 0, "invariant");
uint old = _max;
@@ -3038,10 +3037,6 @@ void Unique_Node_List::remove_useless_nodes(VectorSet &useful) {
//=============================================================================
void Node_Stack::grow() {
size_t old_top = pointer_delta(_inode_top,_inodes,sizeof(INode)); // save _top
size_t old_max = pointer_delta(_inode_max,_inodes,sizeof(INode));
size_t max = old_max << 1; // max * 2

View File

@@ -1633,6 +1633,7 @@ protected:
// Grow array to required capacity
void maybe_grow(uint i) {
_nesting.check(_a); // Check if a potential reallocation in the arena is safe
if (i >= _max) {
grow(i);
}
@@ -1884,7 +1885,15 @@ protected:
INode *_inodes; // Array storage for the stack
Arena *_a; // Arena to allocate in
ReallocMark _nesting; // Safety checks for arena reallocation
void maybe_grow() {
_nesting.check(_a); // Check if a potential reallocation in the arena is safe
if (_inode_top >= _inode_max) {
grow();
}
}
void grow();
public:
Node_Stack(int size) {
size_t max = (size > OptoNodeListSize) ? size : OptoNodeListSize;
@@ -1907,7 +1916,7 @@ public:
}
void push(Node *n, uint i) {
++_inode_top;
grow();
maybe_grow();
INode *top = _inode_top; // optimization
top->node = n;
top->indx = i;

View File

@@ -1954,6 +1954,7 @@ void PhaseCCP::push_more_uses(Unique_Node_List& worklist, Node* parent, const No
push_and(worklist, parent, use);
push_cast_ii(worklist, parent, use);
push_opaque_zero_trip_guard(worklist, use);
push_bool_with_cmpu_and_mask(worklist, use);
}
@@ -2000,6 +2001,57 @@ void PhaseCCP::push_cmpu(Unique_Node_List& worklist, const Node* use) const {
}
}
// Look for the following shape, which can be optimized by BoolNode::Value_cmpu_and_mask() (i.e. corresponds to case
// (1b): "(m & x) <u (m + 1)").
// If any of the inputs on the level (%%) change, we need to revisit Bool because we could have prematurely found that
// the Bool is constant (i.e. case (1b) can be applied) which could become invalid with new type information during CCP.
//
// m x m 1 (%%)
// \ / \ /
// AndI AddI
// \ /
// CmpU
// |
// Bool
//
void PhaseCCP::push_bool_with_cmpu_and_mask(Unique_Node_List& worklist, const Node* use) const {
uint use_op = use->Opcode();
if (use_op != Op_AndI && (use_op != Op_AddI || use->in(2)->find_int_con(0) != 1)) {
// Not "m & x" or "m + 1"
return;
}
for (DUIterator_Fast imax, i = use->fast_outs(imax); i < imax; i++) {
Node* cmpu = use->fast_out(i);
if (cmpu->Opcode() == Op_CmpU) {
push_bool_matching_case1b(worklist, cmpu);
}
}
}
// Push any Bool below 'cmpu' that matches case (1b) of BoolNode::Value_cmpu_and_mask().
void PhaseCCP::push_bool_matching_case1b(Unique_Node_List& worklist, const Node* cmpu) const {
assert(cmpu->Opcode() == Op_CmpU, "must be");
for (DUIterator_Fast imax, i = cmpu->fast_outs(imax); i < imax; i++) {
Node* bol = cmpu->fast_out(i);
if (!bol->is_Bool() || bol->as_Bool()->_test._test != BoolTest::lt) {
// Not a Bool with "<u"
continue;
}
Node* andI = cmpu->in(1);
Node* addI = cmpu->in(2);
if (andI->Opcode() != Op_AndI || addI->Opcode() != Op_AddI || addI->in(2)->find_int_con(0) != 1) {
// Not "m & x" and "m + 1"
continue;
}
Node* m = addI->in(1);
if (m == andI->in(1) || m == andI->in(2)) {
// Is "m" shared? Matched (1b) and thus we revisit Bool.
push_if_not_bottom_type(worklist, bol);
}
}
}
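// Worked instance of case (1b): suppose CCP has narrowed m to [0, 7]. Clearing bits never
// increases an unsigned value, so (m & x) <=u m <=u 7, while (m + 1) is in [1, 8]; hence
// "(m & x) <u (m + 1)" is always true and the Bool folds to a constant. If a later type update
// allowed m == -1 again, (m + 1) would wrap to 0 unsigned and the fold would no longer hold,
// which is why the Bool must be revisited whenever these inputs change.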
// If n is used in a counted loop exit condition, then the type of the counted loop's Phi depends on the type of 'n'.
// See PhiNode::Value().
void PhaseCCP::push_counted_loop_phi(Unique_Node_List& worklist, Node* parent, const Node* use) {

View File

@@ -627,6 +627,8 @@ class PhaseCCP : public PhaseIterGVN {
void push_and(Unique_Node_List& worklist, const Node* parent, const Node* use) const;
void push_cast_ii(Unique_Node_List& worklist, const Node* parent, const Node* use) const;
void push_opaque_zero_trip_guard(Unique_Node_List& worklist, const Node* use) const;
void push_bool_with_cmpu_and_mask(Unique_Node_List& worklist, const Node* use) const;
void push_bool_matching_case1b(Unique_Node_List& worklist, const Node* cmpu) const;
public:
PhaseCCP( PhaseIterGVN *igvn ); // Compute conditional constants

View File

@@ -1885,7 +1885,7 @@ const Type* BoolNode::Value_cmpu_and_mask(PhaseValues* phase) const {
// (1b) "(x & m) <u m + 1" and "(m & x) <u m + 1", cmp2 = m + 1
Node* rhs_m = cmp2->in(1);
const TypeInt* rhs_m_type = phase->type(rhs_m)->isa_int();
if (rhs_m_type != nullptr && (rhs_m_type->_lo > -1 || rhs_m_type->_hi < -1)) {
// Exclude any case where m == -1 is possible.
m = rhs_m;
}
@@ -1903,12 +1903,16 @@ const Type* BoolNode::Value_cmpu_and_mask(PhaseValues* phase) const {
// Simplify a Bool (convert condition codes to boolean (1 or 0)) node,
// based on local information. If the input is constant, do it.
const Type* BoolNode::Value(PhaseGVN* phase) const {
const Type* input_type = phase->type(in(1));
if (input_type == Type::TOP) {
return Type::TOP;
}
const Type* t = Value_cmpu_and_mask(phase);
if (t != nullptr) {
return t;
}
return _test.cc2logical(input_type);
}
#ifndef PRODUCT
@@ -2023,10 +2027,12 @@ const Type* SqrtHFNode::Value(PhaseGVN* phase) const {
static const Type* reverse_bytes(int opcode, const Type* con) {
switch (opcode) {
// It is valid in bytecode to load any int and pass it to a method that expects a smaller type (i.e., short, char).
// Let's cast the value to match the Java behavior.
case Op_ReverseBytesS: return TypeInt::make(byteswap(static_cast<jshort>(con->is_int()->get_con())));
case Op_ReverseBytesUS: return TypeInt::make(byteswap(static_cast<jchar>(con->is_int()->get_con())));
case Op_ReverseBytesI: return TypeInt::make(byteswap(con->is_int()->get_con()));
case Op_ReverseBytesL: return TypeLong::make(byteswap(con->is_long()->get_con()));
default: ShouldNotReachHere();
}
}

View File

@@ -2535,6 +2535,82 @@ VStatus VLoopBody::construct() {
return VStatus::make_success();
}
// Returns true if the given operation can be vectorized with "truncation" where the upper bits in the integer do not
// contribute to the result. This is true for most arithmetic operations, but false for operations such as
// leading/trailing zero count.
static bool can_subword_truncate(Node* in, const Type* type) {
if (in->is_Load() || in->is_Store() || in->is_Convert() || in->is_Phi()) {
return true;
}
int opc = in->Opcode();
// If the node's base type is a subword type, check an additional set of nodes.
if (type == TypeInt::SHORT || type == TypeInt::CHAR) {
switch (opc) {
case Op_ReverseBytesS:
case Op_ReverseBytesUS:
return true;
}
}
// Can be truncated:
switch (opc) {
case Op_AddI:
case Op_SubI:
case Op_MulI:
case Op_AndI:
case Op_OrI:
case Op_XorI:
return true;
}
#ifdef ASSERT
// While shifts have subword vectorized forms, they require knowing the precise type of input loads so they are
// considered non-truncating.
if (VectorNode::is_shift_opcode(opc)) {
return false;
}
// Vector nodes should not truncate.
if (type->isa_vect() != nullptr || type->isa_vectmask() != nullptr || in->is_Reduction()) {
return false;
}
// Cannot be truncated:
switch (opc) {
case Op_AbsI:
case Op_DivI:
case Op_ModI:
case Op_MinI:
case Op_MaxI:
case Op_CMoveI:
case Op_Conv2B:
case Op_RotateRight:
case Op_RotateLeft:
case Op_PopCountI:
case Op_ReverseBytesI:
case Op_ReverseI:
case Op_CountLeadingZerosI:
case Op_CountTrailingZerosI:
case Op_IsInfiniteF:
case Op_IsInfiniteD:
case Op_ExtractS:
case Op_ExtractC:
case Op_ExtractB:
return false;
default:
// If this assert is hit, that means that we need to determine if the node can be safely truncated,
// and then add it to the list of truncating nodes or the list of non-truncating ones just above.
// In product, we just return false, which is always correct.
assert(false, "Unexpected node in SuperWord truncation: %s", NodeClassNames[in->Opcode()]);
}
#endif
// Default to disallowing vector truncation
return false;
}
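// Illustration: for a subword store such as s[i] = (short)(a[i] + b[i]), only the low 16 bits of
// the AddI reach memory, so the addition can safely run in a 16-bit vector lane. By contrast, a
// CountLeadingZerosI depends on the bits above the subword, so narrowing its input would change
// the result.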
void VLoopTypes::compute_vector_element_type() {
#ifndef PRODUCT
if (_vloop.is_trace_vector_element_type()) {
@@ -2589,18 +2665,19 @@ void VLoopTypes::compute_vector_element_type() {
// be vectorized if the higher order bits info is imprecise.
const Type* vt = vtn;
int op = in->Opcode();
if (!can_subword_truncate(in, vt)) {
Node* load = in->in(1);
// For certain operations such as shifts and abs(), use the size of the load if it exists
if ((VectorNode::is_shift_opcode(op) || op == Op_AbsI) && load->is_Load() &&
_vloop.in_bb(load) &&
(velt_type(load)->basic_type() == T_INT)) {
// Only Load nodes distinguish signed (LoadS/LoadB) and unsigned
// (LoadUS/LoadUB) values. Store nodes only have one version.
vt = velt_type(load);
} else if (op != Op_LShiftI) {
// Widen type to the node type to avoid the creation of vector nodes. Note
// that left shifts work regardless of the signedness.
vt = container_type(in);
}
}
set_velt_type(in, vt);

View File

@@ -350,24 +350,33 @@ check_is_obj_array(JavaThread* thr, jarray jArray) {
}
}
// Arbitrary (but well-known) tag for GetStringChars
const void* STRING_TAG = (void*)0x47114711;
// Arbitrary (but well-known) tag for GetStringUTFChars
const void* STRING_UTF_TAG = (void*) 0x48124812;
// Arbitrary (but well-known) tag for GetPrimitiveArrayCritical
const void* CRITICAL_TAG = (void*)0x49134913;
/*
* Copy and wrap array elements for bounds checking.
* Remember the original elements (GuardedMemory::get_tag())
*/
static void* check_jni_wrap_copy_array(JavaThread* thr, jarray array,
void* orig_elements, jboolean is_critical = JNI_FALSE) {
void* result;
IN_VM(
oop a = JNIHandles::resolve_non_null(array);
size_t len = arrayOop(a)->length() <<
TypeArrayKlass::cast(a->klass())->log2_element_size();
result = GuardedMemory::wrap_copy(orig_elements, len, orig_elements, is_critical ? CRITICAL_TAG : nullptr);
)
return result;
}
static void* check_wrapped_array(JavaThread* thr, const char* fn_name,
void* obj, void* carray, size_t* rsz, jboolean is_critical) {
if (carray == nullptr) {
tty->print_cr("%s: elements vector null" PTR_FORMAT, fn_name, p2i(obj));
NativeReportJNIFatalError(thr, "Elements vector null");
@@ -386,6 +395,29 @@ static void* check_wrapped_array(JavaThread* thr, const char* fn_name,
DEBUG_ONLY(guarded.print_on(tty);) // This may crash.
NativeReportJNIFatalError(thr, err_msg("%s: unrecognized elements", fn_name));
}
if (orig_result == STRING_TAG || orig_result == STRING_UTF_TAG) {
bool was_utf = orig_result == STRING_UTF_TAG;
tty->print_cr("%s: called on something allocated by %s",
fn_name, was_utf ? "GetStringUTFChars" : "GetStringChars");
DEBUG_ONLY(guarded.print_on(tty);) // This may crash.
NativeReportJNIFatalError(thr, err_msg("%s called on something allocated by %s",
fn_name, was_utf ? "GetStringUTFChars" : "GetStringChars"));
}
if (is_critical && (guarded.get_tag2() != CRITICAL_TAG)) {
tty->print_cr("%s: called on something not allocated by GetPrimitiveArrayCritical", fn_name);
DEBUG_ONLY(guarded.print_on(tty);) // This may crash.
NativeReportJNIFatalError(thr, err_msg("%s called on something not allocated by GetPrimitiveArrayCritical",
fn_name));
}
if (!is_critical && (guarded.get_tag2() == CRITICAL_TAG)) {
tty->print_cr("%s: called on something allocated by GetPrimitiveArrayCritical", fn_name);
DEBUG_ONLY(guarded.print_on(tty);) // This may crash.
NativeReportJNIFatalError(thr, err_msg("%s called on something allocated by GetPrimitiveArrayCritical",
fn_name));
}
if (rsz != nullptr) {
*rsz = guarded.get_user_size();
}
@@ -395,7 +427,7 @@ static void* check_wrapped_array(JavaThread* thr, const char* fn_name,
static void* check_wrapped_array_release(JavaThread* thr, const char* fn_name,
void* obj, void* carray, jint mode, jboolean is_critical) {
size_t sz;
void* orig_result = check_wrapped_array(thr, fn_name, obj, carray, &sz, is_critical);
switch (mode) {
case 0:
memcpy(orig_result, carray, sz);
@@ -1430,9 +1462,6 @@ JNI_ENTRY_CHECKED(jsize,
return result;
JNI_END
JNI_ENTRY_CHECKED(const jchar *,
checked_jni_GetStringChars(JNIEnv *env,
jstring str,
@@ -1535,9 +1564,6 @@ JNI_ENTRY_CHECKED(jlong,
return result;
JNI_END
JNI_ENTRY_CHECKED(const char *,
checked_jni_GetStringUTFChars(JNIEnv *env,
jstring str,
@@ -1859,7 +1885,7 @@ JNI_ENTRY_CHECKED(void *,
)
void *result = UNCHECKED()->GetPrimitiveArrayCritical(env, array, isCopy);
if (result != nullptr) {
result = check_jni_wrap_copy_array(thr, array, result, JNI_TRUE);
}
functionExit(thr);
return result;

View File

@@ -2966,7 +2966,7 @@ JVM_ENTRY(jobject, JVM_CreateThreadSnapshot(JNIEnv* env, jobject jthread))
oop snapshot = ThreadSnapshotFactory::get_thread_snapshot(jthread, THREAD);
return JNIHandles::make_local(THREAD, snapshot);
#else
THROW_NULL(vmSymbols::java_lang_UnsupportedOperationException());
#endif
JVM_END

View File

@@ -3550,6 +3550,13 @@ void VM_RedefineClasses::set_new_constant_pool(
Array<u1>* new_fis = FieldInfoStream::create_FieldInfoStream(fields, java_fields, injected_fields, scratch_class->class_loader_data(), CHECK);
scratch_class->set_fieldinfo_stream(new_fis);
MetadataFactory::free_array<u1>(scratch_class->class_loader_data(), old_stream);
Array<u1>* old_table = scratch_class->fieldinfo_search_table();
Array<u1>* search_table = FieldInfoStream::create_search_table(scratch_class->constants(), new_fis, scratch_class->class_loader_data(), CHECK);
scratch_class->set_fieldinfo_search_table(search_table);
MetadataFactory::free_array<u1>(scratch_class->class_loader_data(), old_table);
DEBUG_ONLY(FieldInfoStream::validate_search_table(scratch_class->constants(), new_fis, search_table));
}
// Update constant pool indices in the inner classes info to use

View File

@@ -2005,6 +2005,10 @@ const int ObjectAlignmentInBytes = 8;
product(bool, UseThreadsLockThrottleLock, true, DIAGNOSTIC, \
"Use an extra lock during Thread start and exit to alleviate" \
"contention on Threads_lock.") \
\
develop(uint, BinarySearchThreshold, 16, \
"Minimal number of elements in a sorted collection to prefer" \
"binary search over simple linear search." ) \
// end of RUNTIME_FLAGS

View File

@@ -83,7 +83,7 @@ Monitor* CompileTaskWait_lock = nullptr;
Monitor* MethodCompileQueue_lock = nullptr;
Monitor* CompileThread_lock = nullptr;
Monitor* Compilation_lock = nullptr;
Monitor* CompileTaskAlloc_lock = nullptr;
Mutex* CompileStatistics_lock = nullptr;
Mutex* DirectivesStack_lock = nullptr;
Monitor* Terminator_lock = nullptr;
@@ -343,7 +343,7 @@ void mutex_init() {
MUTEX_DEFL(G1RareEvent_lock , PaddedMutex , Threads_lock, true);
}
MUTEX_DEFL(CompileTaskAlloc_lock , PaddedMonitor, MethodCompileQueue_lock);
MUTEX_DEFL(CompileTaskWait_lock , PaddedMonitor, MethodCompileQueue_lock);
#if INCLUDE_PARALLELGC

View File

@@ -85,7 +85,7 @@ extern Monitor* CompileThread_lock; // a lock held by compile threa
extern Monitor* Compilation_lock; // a lock used to pause compilation
extern Mutex* TrainingData_lock; // a lock used when accessing training records
extern Monitor* TrainingReplayQueue_lock; // a lock held when class are added/removed to the training replay queue
extern Monitor* CompileTaskAlloc_lock; // a lock held when CompileTasks are allocated
extern Monitor* CompileTaskWait_lock; // a lock held when CompileTasks are waited/notified
extern Mutex* CompileStatistics_lock; // a lock held when updating compilation statistics
extern Mutex* DirectivesStack_lock; // a lock held when mutating the dirstack and ref counting directives

View File

@@ -1168,9 +1168,10 @@ void ThreadsSMRSupport::print_info_on(const Thread* thread, outputStream* st) {
// The count is only interesting if we have a _threads_list_ptr.
st->print(", _nested_threads_hazard_ptr_cnt=%u", thread->_nested_threads_hazard_ptr_cnt);
}
if ((SafepointSynchronize::is_at_safepoint() && thread->is_Java_thread()) ||
Thread::current() == thread) {
// It is only safe to walk the list if we're at a safepoint and processing a JavaThread,
// or the calling thread is walking its own list.
SafeThreadsListPtr* current = thread->_threads_list_ptr;
if (current != nullptr) {
// Skip the top nesting level as it is always printed above.

View File

@@ -771,8 +771,8 @@ jint Threads::create_vm(JavaVMInitArgs* args, bool* canTryAgain) {
#endif
if (CDSConfig::is_using_aot_linked_classes()) {
SystemDictionary::restore_archived_method_handle_intrinsics();
AOTLinkedClassBulkLoader::finish_loading_javabase_classes(CHECK_JNI_ERR);
}
// Start string deduplication thread if requested.

View File

@@ -44,6 +44,7 @@
#include "gc/shared/vmStructs_gc.hpp"
#include "interpreter/bytecodes.hpp"
#include "interpreter/interpreter.hpp"
#include "jfr/recorder/service/jfrRecorderThread.hpp"
#include "logging/logAsyncWriter.hpp"
#include "memory/allocation.hpp"
#include "memory/allocation.inline.hpp"
@@ -1027,6 +1028,7 @@
declare_type(TrainingReplayThread, JavaThread) \
declare_type(StringDedupThread, JavaThread) \
declare_type(AttachListenerThread, JavaThread) \
declare_type(JfrRecorderThread, JavaThread) \
DEBUG_ONLY(COMPILER2_OR_JVMCI_PRESENT( \
declare_type(DeoptimizeObjectsALotThread, JavaThread))) \
declare_toplevel_type(OSThread) \

View File

@@ -1439,7 +1439,17 @@ oop ThreadSnapshotFactory::get_thread_snapshot(jobject jthread, TRAPS) {
ResourceMark rm(THREAD);
HandleMark hm(THREAD);
JavaThread* java_thread = nullptr;
oop thread_oop;
bool has_javathread = tlh.cv_internal_thread_to_JavaThread(jthread, &java_thread, &thread_oop);
assert((has_javathread && thread_oop != nullptr) || !has_javathread, "Missing Thread oop");
Handle thread_h(THREAD, thread_oop);
bool is_virtual = java_lang_VirtualThread::is_instance(thread_h()); // Deals with null
if (!has_javathread && !is_virtual) {
return nullptr; // thread terminated so not of interest
}
// wrapper to auto delete JvmtiVTMSTransitionDisabler
class TransitionDisabler {
@@ -1460,8 +1470,6 @@ oop ThreadSnapshotFactory::get_thread_snapshot(jobject jthread, TRAPS) {
}
} transition_disabler;
Handle carrier_thread;
if (is_virtual) {
// 1st need to disable mount/unmount transitions

View File

@@ -0,0 +1,113 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/

#include <cstring>

#include "utilities/align.hpp"
#include "utilities/count_leading_zeros.hpp"
#include "utilities/packedTable.hpp"

// The thresholds are inclusive, and in practice the limits are rounded
// up to the nearest power-of-two minus 1.
// Based on max_key and max_value we figure out the number of bits required to store
// the key and the value; imagine these only as bits (not aligned to a byte boundary... yet).
// Then we concatenate the bits for key and value, and 'add' 1-7 padding zeroes
// (high-order bits) to align on bytes.
// In the end each element in the table consumes 1-8 bytes (the case with 0 bits for
// the key is ruled out).
PackedTableBase::PackedTableBase(uint32_t max_key, uint32_t max_value) {
  unsigned int key_bits = max_key == 0 ? 0 : 32 - count_leading_zeros(max_key);
  unsigned int value_bits = max_value == 0 ? 0 : 32 - count_leading_zeros(max_value);
  _element_bytes = align_up(key_bits + value_bits, 8) / 8;
  // Shifting left by 32 is undefined behaviour, and in practice returns 1.
  _key_mask = key_bits >= 32 ? -1 : (1U << key_bits) - 1;
  _value_shift = key_bits;
  _value_mask = value_bits >= 32 ? -1 : (1U << value_bits) - 1;
  guarantee(_element_bytes > 0, "wouldn't work");
  assert(_element_bytes <= sizeof(uint64_t), "shouldn't happen");
}

// Note: we require the supplier to provide the elements in the final order, as we can't
// easily sort within this method - qsort() accepts only a pure function as the comparator.
void PackedTableBuilder::fill(u1* table, size_t table_length, Supplier &supplier) const {
  uint32_t key, value;
  size_t offset = 0;
  for (; offset <= table_length && supplier.next(&key, &value); offset += _element_bytes) {
    assert((key & ~_key_mask) == 0, "key out of bounds");
    assert((value & ~_value_mask) == 0, "value out of bounds: %x vs. %x (%x)", value, _value_mask, ~_value_mask);
    uint64_t element = static_cast<uint64_t>(key) | (static_cast<uint64_t>(value) << _value_shift);
    for (unsigned int i = 0; i < _element_bytes; ++i) {
      table[offset + i] = static_cast<u1>(0xFF & element);
      element >>= 8;
    }
  }
  assert(offset == table_length, "Did not fill whole array");
  assert(!supplier.next(&key, &value), "Supplier has more elements");
}

uint64_t PackedTableLookup::read_element(size_t offset) const {
  uint64_t element = 0;
  for (unsigned int i = 0; i < _element_bytes; ++i) {
    element |= static_cast<uint64_t>(_table[offset + i]) << (8 * i);
  }
  assert((element & ~((uint64_t) _key_mask | ((uint64_t) _value_mask << _value_shift))) == 0, "read too much");
  return element;
}

bool PackedTableLookup::search(Comparator& comparator, uint32_t* found_key, uint32_t* found_value) const {
  unsigned int low = 0, high = checked_cast<unsigned int>(_table_length / _element_bytes);
  assert(low < high, "must be");
  while (low < high) {
    unsigned int mid = low + (high - low) / 2;
    assert(mid >= low && mid < high, "integer overflow?");
    uint64_t element = read_element(_element_bytes * mid);
    // Ignoring the high 32 bits of the element on purpose.
    uint32_t key = static_cast<uint32_t>(element) & _key_mask;
    int cmp = comparator.compare_to(key);
    if (cmp == 0) {
      *found_key = key;
      // Since read_element does not read bytes outside the element,
      // anything above _value_mask << _value_shift is already zero.
      *found_value = checked_cast<uint32_t>(element >> _value_shift) & _value_mask;
      return true;
    } else if (cmp < 0) {
      high = mid;
    } else {
      low = mid + 1;
    }
  }
  return false;
}

#ifdef ASSERT
void PackedTableLookup::validate_order(Comparator &comparator) const {
  auto validator = [&] (size_t offset, uint32_t key, uint32_t value) {
    if (offset != 0) {
      assert(comparator.compare_to(key) < 0, "not sorted");
    }
    comparator.reset(key);
  };
  iterate(validator);
}
#endif
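
To make the packing arithmetic above concrete: with max_key = 1000 and max_value = 100000, the key needs 10 bits and the value needs 17 bits, so the 27 bits are padded up to 32, i.e. 4 bytes per element. Below is a minimal, self-contained sketch of the same pack/unpack steps in plain C++ (standard headers stand in for HotSpot's u1, align_up and count_leading_zeros; the numbers are illustrative and not taken from this change):

#include <cassert>
#include <cstdint>
#include <cstdio>

// bits_for(max) plays the role of 32 - count_leading_zeros(max) above.
static unsigned bits_for(uint32_t max) {
    unsigned bits = 0;
    while (max != 0) { ++bits; max >>= 1; }
    return bits;
}

int main() {
    const uint32_t max_key = 1000, max_value = 100000;
    unsigned key_bits = bits_for(max_key);                    // 10
    unsigned value_bits = bits_for(max_value);                // 17
    unsigned element_bytes = (key_bits + value_bits + 7) / 8; // 27 bits -> 4 bytes

    // Pack key=617, value=99991 the way PackedTableBuilder::fill does:
    // the low key_bits hold the key, the value starts at bit key_bits.
    uint64_t element = 617u | (uint64_t)99991u << key_bits;
    uint8_t table[8] = {0};
    for (unsigned i = 0; i < element_bytes; ++i) {
        table[i] = (uint8_t)(element & 0xFF); // little-endian byte order
        element >>= 8;
    }

    // Unpack the way PackedTableLookup::read_element and search do.
    uint64_t read = 0;
    for (unsigned i = 0; i < element_bytes; ++i) {
        read |= (uint64_t)table[i] << (8 * i);
    }
    uint32_t key = (uint32_t)read & ((1u << key_bits) - 1);
    uint32_t value = (uint32_t)(read >> key_bits) & ((1u << value_bits) - 1);
    assert(key == 617 && value == 99991);
    printf("element_bytes=%u key=%u value=%u\n", element_bytes, key, value);
    return 0;
}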


@@ -0,0 +1,123 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "oops/array.hpp"
#include "utilities/globalDefinitions.hpp"
// Base for space-optimized structure supporting binary search. Each element
// consists of up to 32-bit key, and up to 32-bit value; these are packed
// into a bit-record with 1-byte alignment.
// The keys are ordered according to a custom comparator.
class PackedTableBase {
protected:
unsigned int _element_bytes;
uint32_t _key_mask;
unsigned int _value_shift;
uint32_t _value_mask;
public:
PackedTableBase(uint32_t max_key, uint32_t max_value);
// Returns number of bytes each element will occupy.
inline unsigned int element_bytes(void) const { return _element_bytes; }
};
// Helper class for constructing a packed table in the provided array.
class PackedTableBuilder: public PackedTableBase {
public:
class Supplier {
public:
// Returns elements with already ordered keys.
// This function should return true when the key and value was set,
// and false when there's no more elements.
// Packed table does NOT support duplicate keys.
virtual bool next(uint32_t* key, uint32_t* value) = 0;
};
// The thresholds are inclusive, and in practice the limits are rounded
// to the nearest power-of-two - 1.
// See PackedTableBase constructor for details.
PackedTableBuilder(uint32_t max_key, uint32_t max_value): PackedTableBase(max_key, max_value) {}
// Constructs a packed table in the provided array, filling it with elements
// from the supplier. Note that no comparator is requied by this method -
// the supplier must return elements with already ordered keys.
// The table_length (in bytes) should match number of elements provided
// by the supplier (when Supplier::next() returns false the whole array should
// be filled).
void fill(u1* table, size_t table_length, Supplier &supplier) const;
};
// Helper class for lookup in a packed table.
class PackedTableLookup: public PackedTableBase {
const u1* const _table;
const size_t _table_length;
uint64_t read_element(size_t offset) const;
public:
// The comparator implementation does not have to store a key (uint32_t);
// the idea is that key can point into a different structure that hosts data
// suitable for the actual comparison. That's why PackedTableLookup::search(...)
// returns the key it found as well as the value.
class Comparator {
public:
// Returns negative/0/positive if the target referred to by this comparator
// is lower/equal/higher than the target referred to by the key.
virtual int compare_to(uint32_t key) = 0;
// Changes the target this comparator refers to.
DEBUG_ONLY(virtual void reset(uint32_t key) = 0);
};
// The thresholds are inclusive, and in practice the limits are rounded
// to the nearest power-of-two - 1.
// See PackedTableBase constructor for details.
PackedTableLookup(uint32_t max_key, uint32_t max_value, const u1 *table, size_t table_length):
PackedTableBase(max_key, max_value), _table(table), _table_length(table_length) {}
PackedTableLookup(uint32_t max_key, uint32_t max_value, const Array<u1> *table):
PackedTableLookup(max_key, max_value, table->data(), static_cast<size_t>(table->length())) {}
// Performs a binary search in the packed table, looking for an element with key
// referring to a target equal according to the comparator.
// When the element is found, found_key and found_value are updated from the element
// and the function returns true.
// When the element is not found, found_key and found_value are not changed and
// the function returns false.
bool search(Comparator& comparator, uint32_t* found_key, uint32_t* found_value) const;
// Asserts that elements in the packed table follow the order defined by the comparator.
DEBUG_ONLY(void validate_order(Comparator &comparator) const);
template<typename Function>
void iterate(Function func) const {
for (size_t offset = 0; offset < _table_length; offset += _element_bytes) {
uint64_t element = read_element(offset);
uint32_t key = static_cast<uint32_t>(element) & _key_mask;
uint32_t value = checked_cast<uint32_t>(element >> _value_shift) & _value_mask;
func(offset, key, value);
}
}
};
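
One way to read the Comparator contract above: the packed table stores only integer keys, and the comparator dereferences a key into whatever side structure actually holds the comparable data, which is also why search() reports the found key back to the caller. A self-contained sketch of that idea (NameComparator and the name table are hypothetical illustrations, not part of this change):

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

// Keys index into a separate name table; the comparator compares the search
// target against the name a key refers to, mirroring Comparator::compare_to.
struct NameComparator {
    const std::vector<std::string>& names;
    const char* target;
    // Negative/0/positive if target is lower/equal/higher than names[key].
    int compare_to(uint32_t key) const {
        return std::strcmp(target, names[key].c_str());
    }
};

int main() {
    // The keys stored in the (conceptual) packed table must already be
    // ordered by this comparison - that is the Supplier's job in fill().
    std::vector<std::string> names = {"alpha", "bravo", "charlie", "delta"};
    std::vector<uint32_t> packed_keys = {0, 1, 2, 3}; // stand-in for the packed table

    NameComparator cmp{names, "charlie"};
    uint32_t low = 0, high = (uint32_t)packed_keys.size();
    while (low < high) { // same shape as PackedTableLookup::search
        uint32_t mid = low + (high - low) / 2;
        int c = cmp.compare_to(packed_keys[mid]);
        if (c == 0) { printf("found key %u\n", (unsigned)packed_keys[mid]); return 0; }
        if (c < 0) high = mid; else low = mid + 1;
    }
    printf("not found\n");
    return 1;
}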


@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -261,7 +261,7 @@ class UNSIGNED5 : AllStatic {
ARR _array;
OFF _limit;
OFF _position;
- int next_length() {
+ int next_length() const {
return UNSIGNED5::check_length(_array, _position, _limit, GET());
}
public:
@@ -270,7 +270,7 @@ class UNSIGNED5 : AllStatic {
uint32_t next_uint() {
return UNSIGNED5::read_uint(_array, _position, _limit, GET());
}
- bool has_next() {
+ bool has_next() const {
return next_length() != 0;
}
// tries to skip count logical entries; returns actual number skipped
@@ -284,8 +284,9 @@ class UNSIGNED5 : AllStatic {
return actual;
}
ARR array() { return _array; }
- OFF limit() { return _limit; }
- OFF position() { return _position; }
+ OFF limit() const { return _limit; }
+ OFF position() const { return _position; }
void set_limit(OFF limit) { _limit = limit; }
void set_position(OFF position) { _position = position; }
// For debugging, even in product builds (see debug.cpp).


@@ -1,6 +1,6 @@
/*
* Copyright (c) 2008, 2025, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2013 SAP SE. All rights reserved.
+ * Copyright (c) 2013, 2025 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -52,6 +52,15 @@ class AixFileSystemProvider extends UnixFileSystemProvider {
return new AixFileStore(path);
}
+ private static boolean supportsUserDefinedFileAttributeView(UnixPath file) {
+     try {
+         FileStore store = new AixFileStore(file);
+         return store.supportsFileAttributeView(UserDefinedFileAttributeView.class);
+     } catch (IOException e) {
+         return false;
+     }
+ }
@Override
@SuppressWarnings("unchecked")
public <V extends FileAttributeView> V getFileAttributeView(Path obj,
@@ -59,8 +68,10 @@ class AixFileSystemProvider extends UnixFileSystemProvider {
LinkOption... options)
{
if (type == UserDefinedFileAttributeView.class) {
- return (V) new AixUserDefinedFileAttributeView(UnixPath.toUnixPath(obj),
-     Util.followLinks(options));
+ UnixPath file = UnixPath.toUnixPath(obj);
+ return supportsUserDefinedFileAttributeView(file) ?
+     (V) new AixUserDefinedFileAttributeView(file, Util.followLinks(options))
+     : null;
}
return super.getFileAttributeView(obj, type, options);
}
@@ -71,8 +82,10 @@ class AixFileSystemProvider extends UnixFileSystemProvider {
LinkOption... options)
{
if (name.equals("user")) {
- return new AixUserDefinedFileAttributeView(UnixPath.toUnixPath(obj),
-     Util.followLinks(options));
+ UnixPath file = UnixPath.toUnixPath(obj);
+ return supportsUserDefinedFileAttributeView(file) ?
+     new AixUserDefinedFileAttributeView(file, Util.followLinks(options))
+     : null;
}
return super.getFileAttributeView(obj, name, options);
}


@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1998, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1998, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -23,6 +23,7 @@
* questions.
*/
+ #include <stdbool.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
@@ -229,33 +230,50 @@ void setOSNameAndVersion(java_props_t *sprops) {
NSString *nsVerStr = NULL;
char* osVersionCStr = NULL;
NSOperatingSystemVersion osVer = [[NSProcessInfo processInfo] operatingSystemVersion];
- // Copy out the char* if running on version other than 10.16 Mac OS (10.16 == 11.x)
- // or explicitly requesting version compatibility
- if (!((long)osVer.majorVersion == 10 && (long)osVer.minorVersion >= 16) ||
-     (getenv("SYSTEM_VERSION_COMPAT") != NULL)) {
-     if (osVer.patchVersion == 0) { // Omit trailing ".0"
+ // Some macOS versions require special handling. For example,
+ // when the NSOperatingSystemVersion reports 10.16 as the version
+ // then it should be treated as 11. Similarly, when it reports 16.0
+ // as the version then it should be treated as 26.
+ // If the SYSTEM_VERSION_COMPAT environment variable (a macOS construct)
+ // is set to 1, then we don't do any special handling for any versions
+ // and just literally use the value that NSOperatingSystemVersion reports.
+ const char* envVal = getenv("SYSTEM_VERSION_COMPAT");
+ const bool versionCompatEnabled = envVal != NULL
+     && strncmp(envVal, "1", 1) == 0;
+ const bool requiresSpecialHandling =
+     ((long) osVer.majorVersion == 10 && (long) osVer.minorVersion >= 16)
+     || ((long) osVer.majorVersion == 16 && (long) osVer.minorVersion >= 0);
+ if (!requiresSpecialHandling || versionCompatEnabled) {
+     // no special handling - just use the version reported
+     // by NSOperatingSystemVersion
+     if (osVer.patchVersion == 0) {
+         // Omit trailing ".0"
nsVerStr = [NSString stringWithFormat:@"%ld.%ld",
(long)osVer.majorVersion, (long)osVer.minorVersion];
} else {
nsVerStr = [NSString stringWithFormat:@"%ld.%ld.%ld",
- (long)osVer.majorVersion, (long)osVer.minorVersion, (long)osVer.patchVersion];
+ (long)osVer.majorVersion, (long)osVer.minorVersion,
+ (long)osVer.patchVersion];
}
} else {
- // Version 10.16, without explicit env setting of SYSTEM_VERSION_COMPAT
- // AKA 11+ Read the *real* ProductVersion from the hidden link to avoid SYSTEM_VERSION_COMPAT
- // If not found, fallback below to the SystemVersion.plist
- NSDictionary *version = [NSDictionary dictionaryWithContentsOfFile :
-     @"/System/Library/CoreServices/.SystemVersionPlatform.plist"];
+ // Requires special handling. We ignore the version reported
+ // by the NSOperatingSystemVersion API and instead read the
+ // *real* ProductVersion from
+ // /System/Library/CoreServices/.SystemVersionPlatform.plist.
+ // If not found there, then as a last resort we fallback to
+ // /System/Library/CoreServices/SystemVersion.plist
+ NSDictionary *version = [NSDictionary dictionaryWithContentsOfFile:
+     @"/System/Library/CoreServices/.SystemVersionPlatform.plist"];
if (version != NULL) {
- nsVerStr = [version objectForKey : @"ProductVersion"];
+ nsVerStr = [version objectForKey: @"ProductVersion"];
}
}
- // Fallback to reading the SystemVersion.plist
+ // Last resort - fallback to reading the SystemVersion.plist
if (nsVerStr == NULL) {
- NSDictionary *version = [NSDictionary dictionaryWithContentsOfFile :
-     @"/System/Library/CoreServices/SystemVersion.plist"];
+ NSDictionary *version = [NSDictionary dictionaryWithContentsOfFile:
+     @"/System/Library/CoreServices/SystemVersion.plist"];
if (version != NULL) {
- nsVerStr = [version objectForKey : @"ProductVersion"];
+ nsVerStr = [version objectForKey: @"ProductVersion"];
}
}


@@ -1134,9 +1134,9 @@ public class File
if (ss == null) return null;
int n = ss.length;
File[] fs = new File[n];
- for (int i = 0; i < n; i++) {
-     fs[i] = new File(ss[i], this);
- }
+ boolean isEmpty = path.isEmpty();
+ for (int i = 0; i < n; i++)
+     fs[i] = isEmpty ? new File(ss[i]) : new File(ss[i], this);
return fs;
}
@@ -1169,9 +1169,10 @@ public class File
String[] ss = normalizedList();
if (ss == null) return null;
ArrayList<File> files = new ArrayList<>();
+ boolean isEmpty = path.isEmpty();
for (String s : ss)
if ((filter == null) || filter.accept(this, s))
- files.add(new File(s, this));
+ files.add(isEmpty ? new File(s) : new File(s, this));
return files.toArray(new File[files.size()]);
}
@@ -1202,8 +1203,9 @@ public class File
String[] ss = normalizedList();
if (ss == null) return null;
ArrayList<File> files = new ArrayList<>();
+ boolean isEmpty = path.isEmpty();
for (String s : ss) {
- File f = new File(s, this);
+ File f = isEmpty ? new File(s) : new File(s, this);
if ((filter == null) || filter.accept(f))
files.add(f);
}
