Compare commits


256 Commits

Author SHA1 Message Date
Thomas Schatzl
02fe095d29 8364934: G1: Rename members of G1CollectionSet
Reviewed-by: ayang, kbarrett
2025-08-21 11:53:57 +00:00
Thomas Schatzl
a3fd4248b7 8365115: G1: Refactor rem set statistics gather code for group
Reviewed-by: kbarrett, ayang
2025-08-21 09:46:02 +00:00
Thomas Schatzl
f61b247fe3 8364962: G1: Inline G1CollectionSet::finalize_incremental_building
Reviewed-by: ayang, kbarrett
2025-08-21 09:44:41 +00:00
Thomas Schatzl
ed260e8cae 8365026: G1: Initialization should start a "full" new collection set
Reviewed-by: ayang, kbarrett
2025-08-21 09:37:34 +00:00
Thomas Schatzl
f0e706698d 8364414: G1: Use simpler data structure for holding collection set candidates during calculation
Reviewed-by: ayang, iwalulya
2025-08-21 09:36:16 +00:00
Thomas Schatzl
9439d76309 8364532: G1: In liveness tracing, print more significant digits for the liveness value
Reviewed-by: ayang, iwalulya
2025-08-21 09:35:57 +00:00
Thomas Schatzl
b735ef99b2 8364925: G1: Improve program flow around incremental collection set building
Reviewed-by: ayang, iwalulya
2025-08-21 09:19:14 +00:00
Thomas Schatzl
5ede5b47d4 8364650: G1: Use InvalidCSetIndex instead of UINT_MAX for "invalid" sentinel value of young_index_in_cset
Reviewed-by: ayang, iwalulya
2025-08-21 09:18:58 +00:00
Manuel Hässig
5febc4e3bb 8365910: [BACKOUT] Add a compilation timeout flag to catch long running compilations
Reviewed-by: chagedorn, dholmes
2025-08-21 08:23:32 +00:00
Fredrik Bredberg
a7c0f4b845 8365146: Remove LockingMode related code from ppc64
Reviewed-by: aboldtch, mdoerr
2025-08-21 07:47:26 +00:00
Manuel Hässig
c74c60fb8b 8308094: Add a compilation timeout flag to catch long running compilations
Co-authored-by: Dean Long <dlong@openjdk.org>
Reviewed-by: dlong, chagedorn
2025-08-21 07:09:25 +00:00
Amit Kumar
78d50c0215 8358756: [s390x] Test StartupOutput.java crash due to CodeCache size
Reviewed-by: lucy, dfenacci
2025-08-21 03:53:30 +00:00
Dingli Zhang
2e06a91765 8365841: RISC-V: Several IR verification tests fail after JDK-8350960 without Zvfh
Reviewed-by: fyang, fjiang, mli
2025-08-21 01:20:16 +00:00
Francesco Andreuzzi
ecab52c09b 8365610: Sort share/jfr includes
Reviewed-by: shade, mgronlun
2025-08-20 17:21:22 +00:00
Francesco Andreuzzi
ed7d5fe840 8360304: Redundant condition in LibraryCallKit::inline_vector_nary_operation
Reviewed-by: shade, vlivanov
2025-08-20 17:16:38 +00:00
Alan Bateman
be6c15ecb4 8365671: Typo in Joiner.allUntil example
Reviewed-by: liach
2025-08-20 16:07:38 +00:00
Chris Plummer
9041f4c47f 8309400: JDI spec needs to clarify when OpaqueFrameException and NativeMethodException are thrown
Reviewed-by: sspitsyn, alanb, amenkov
2025-08-20 15:32:17 +00:00
Archie Cobbs
3e60ab51fe 8348611: Eliminate DeferredLintHandler and emit warnings after attribution
8224228: No way to locally suppress lint warnings in parser/tokenizer or preview features
8353758: Missing calls to Log.useSource() in JavacTrees

Reviewed-by: mcimadamore, vromero, jlahoda
2025-08-20 15:04:48 +00:00
Hannes Wallnöfer
5ca8d7c2a7 8284499: Add the ability to right-click and open in new tab JavaDoc Search results
Reviewed-by: liach
2025-08-20 14:52:04 +00:00
Patricio Chilano Mateo
ebf5ae8435 8359222: [asan] jvmti/vthread/ToggleNotifyJvmtiTest/ToggleNotifyJvmtiTest triggers stack-buffer-overflow error
Reviewed-by: dholmes, fbredberg, coleenp
2025-08-20 14:49:16 +00:00
Afshin Zafari
e912977a66 8353444: NMT: rename 'category' to 'MemTag' in malloc tracker
Reviewed-by: jsjolen
2025-08-20 13:40:13 +00:00
Volkan Yazici
1383b8ef87 8362243: Devkit creation for Fedora base OS is broken
Reviewed-by: ihse, erikj, shade
2025-08-20 13:14:04 +00:00
Fei Gao
51d710e3cc 8364184: [REDO] AArch64: [VectorAPI] sve vector math operations are not supported after JDK-8353217
Reviewed-by: ihse, aph
2025-08-20 11:35:31 +00:00
Hannes Wallnöfer
908f3c9697 8356411: Comment tree not reporting correct position for label
Reviewed-by: liach
2025-08-20 08:38:06 +00:00
Fredrik Bredberg
169d145e99 8365188: Remove LockingMode related code from s390
Reviewed-by: ayang, aboldtch, amitkumar
2025-08-20 08:25:01 +00:00
Anton Artemov
70f3469310 8365556: ObjectMonitor::try_lock_or_add_to_entry_list() returns true with the wrong state of the node
Reviewed-by: pchilanomate, dholmes, fbredberg
2025-08-20 08:13:07 +00:00
Ivan Walulya
9c338f6f87 8365780: G1: Remset for young regions are cleared too early during Full GC
Reviewed-by: sjohanss, ayang
2025-08-20 07:51:47 +00:00
Anton Artemov
4ffd2a8aa4 8364819: Post-integration cleanups for JDK-8359820
Reviewed-by: dholmes, ayang, shade
2025-08-20 07:28:36 +00:00
Daniel Gredler
c220a6c7bb 8359955: Regressions ~7% in several J2DBench in 25-b26
Reviewed-by: prr, serb
2025-08-20 07:26:02 +00:00
Yagmur Eren
40bc083267 8358748: Large page size initialization fails with assert "page_size must be a power of 2"
Reviewed-by: ayang, dholmes
2025-08-20 07:16:36 +00:00
Matthias Baesken
320235ccb8 8365700: Jar --validate without any --file option leaves around a temporary file /tmp/tmpJar<number>.jar
Reviewed-by: jpai, asteiner
2025-08-20 06:47:36 +00:00
Jaikiran Pai
b453eb63c6 8365811: test/jdk/java/net/CookieHandler/B6644726.java failure - "Should have 5 cookies. Got only 4, expires probably didn't parse correctly"
Reviewed-by: syan, alanb
2025-08-20 06:07:20 +00:00
Koichi Sakata
506625b768 8356324: JVM crash (SIGSEGV at ClassListParser::resolve_indy_impl) during -Xshare:dump starting from 21.0.5
Reviewed-by: coleenp, matsaave
2025-08-20 04:47:04 +00:00
Valerie Peng
640b71da48 8365168: Use 64-bit aligned addresses for CK_ULONG access in PKCS11 native key code
Reviewed-by: coffeys
2025-08-20 04:20:22 +00:00
Weijun Wang
eca2032c06 8365559: jarsigner shows files non-existent if signed with a weak algorithm
Reviewed-by: abarashev, wetmore
2025-08-20 00:04:38 +00:00
Samuel Chee
95577ca97f 8361890: Aarch64: Removal of redundant dmb from C1 AtomicLong methods
Reviewed-by: aph, dlong
2025-08-19 23:48:57 +00:00
Roger Riggs
55e7494dee 8365703: Refactor ZipCoder to use common JLA.uncheckedNewStringNoRepl
Reviewed-by: lancea, vyazici
2025-08-19 23:33:40 +00:00
Brett Okken
3bbaa772b0 8364320: String encodeUTF8 latin1 with negatives
Reviewed-by: liach, rriggs
2025-08-19 20:31:17 +00:00
Phil Race
0858743dee 8277585: Remove the terminally deprecated finalize() method from javax.imageio.stream APIs
Reviewed-by: achung, azvegint, serb
2025-08-19 20:03:52 +00:00
Roger Riggs
884076f6e2 8365719: Refactor uses of JLA.uncheckedNewStringNoRepl
Reviewed-by: liach, vyazici
2025-08-19 19:06:20 +00:00
Erik Gahlin
024292ac4d 8365614: JFR: Improve PrettyWriter::printValue
Reviewed-by: mgronlun
2025-08-19 16:08:12 +00:00
Hannes Wallnöfer
0755477c9a 8342705: Add dark mode for docs
Reviewed-by: liach
2025-08-19 16:01:12 +00:00
Chris Plummer
4ed268ff9a 8362304: Fix JDWP spec w.r.t. OPAQUE_FRAME and INVALID_SLOT errors
Reviewed-by: sspitsyn, alanb, amenkov
2025-08-19 15:05:25 +00:00
Erik Gahlin
0b2d0817f1 8365636: JFR: Minor cleanup
Reviewed-by: shade
2025-08-19 14:45:37 +00:00
Fei Gao
999761d0f6 8365312: GCC 12 cannot compile SVE on aarch64 with auto-var-init pattern
Reviewed-by: kbarrett, ihse, erikj
2025-08-19 08:22:40 +00:00
Manjunath Matti
812434c420 8359114: [s390x] Add z17 detection code
Reviewed-by: amitkumar, aph
2025-08-19 07:57:00 +00:00
Manuel Hässig
626bea80ab 8356176: C2 MemorySegment: missing RCE with byteSize() in Loop Exit Check inside the for Expression
Co-authored-by: Quan Anh Mai <qamai@openjdk.org>
Co-authored-by: Emanuel Peter <epeter@openjdk.org>
Co-authored-by: Christian Hagedorn <chagedorn@openjdk.org>
Co-authored-by: Tobias Hartmann <thartmann@openjdk.org>
Reviewed-by: epeter, qamai
2025-08-19 06:37:52 +00:00
Sergey Bylokhov
4c80780f6a 8359380: Rework deferral profile logic after JDK-8346465
Reviewed-by: prr
2025-08-19 06:33:12 +00:00
Volkan Yazici
655dc516c2 8361842: Move input validation checks to Java for java.lang.StringCoding intrinsics
Reviewed-by: rriggs, liach, dfenacci, thartmann, redestad, jrose
2025-08-19 05:06:50 +00:00
Boris Ulasevich
f2f7a490c0 8365071: ARM32: JFR intrinsic jvm_commit triggers C2 regalloc assert
Reviewed-by: mgronlun
2025-08-19 04:40:45 +00:00
Shawn M Emery
e04a310375 8364806: Test sun/security/krb5/config/IncludeRandom.java times out on Windows
Reviewed-by: mbaesken
2025-08-18 23:54:06 +00:00
Mikhail Yankelevich
ec7361e082 8365660: test/jdk/sun/security/pkcs11/KeyAgreement/ tests skipped without SkipExceprion
Reviewed-by: rhalade
2025-08-18 23:07:57 +00:00
Justin Lu
a0053012a4 8364780: Unicode extension clarifications for NumberFormat/DecimalFormatSymbols
Reviewed-by: naoto
2025-08-18 22:10:20 +00:00
David Alayachew
bad38a0f92 8365643: JShell EditPad out of bounds on Windows
Reviewed-by: liach, aivanov, cstein, jlahoda
2025-08-18 20:47:02 +00:00
Raffaello Giulietti
285adff24e 8362448: Make use of the Double.toString(double) algorithm in java.text.DecimalFormat
Reviewed-by: naoto, jlu
2025-08-18 16:12:34 +00:00
Aleksey Shipilev
c9ecedd226 8365594: Strengthen Universe klasses asserts to catch bootstrapping errors earlier
Reviewed-by: coleenp, ayang
2025-08-18 15:51:08 +00:00
Erik Gahlin
2a16cc890b 8365550: JFR: The active-settings view should not use LAST_BATCH
Reviewed-by: shade, mgronlun
2025-08-18 15:42:31 +00:00
Jaikiran Pai
81c6ed3882 8365533: Remove outdated jdk.internal.javac package export to several modules from java.base
Reviewed-by: alanb, liach
2025-08-18 13:40:42 +00:00
Matthew Donovan
c1198bba0e 8357277: Update OpenSSL library for interop tests
Reviewed-by: rhalade
2025-08-18 11:08:36 +00:00
Erik Gahlin
a42ba1ff1a 8365638: JFR: Add --exact for debugging out-of-order events
Reviewed-by: shade
2025-08-18 10:36:35 +00:00
Pasam Soujanya
6e91ccd1c3 8365305: The ARIA role ‘contentinfo’ is not valid for the element <footer>
Reviewed-by: hannesw
2025-08-18 09:37:58 +00:00
Saranya Natarajan
2b756ab1e8 8358781: C2 fails with assert "bad profile data type" when TypeProfileCasts is disabled
Reviewed-by: mhaessig, kvn, dfenacci
2025-08-18 08:16:32 +00:00
Aleksey Shipilev
ca753ebad6 8365165: Zap C-heap memory at delete/free
Reviewed-by: kvn, kbarrett
2025-08-18 08:12:20 +00:00
Volkan Yazici
190e113031 8364263: HttpClient: Improve encapsulation of ProxyServer
Reviewed-by: dfuchs, jpai
2025-08-18 08:11:19 +00:00
Matthias Baesken
166ea12d73 8365543: UnixNativeDispatcher.init should lookup open64at and stat64at on AIX
Co-authored-by: Joachim Kern <jkern@openjdk.org>
Reviewed-by: jkern, stuefe, goetz, alanb
2025-08-18 07:14:09 +00:00
David Beaumont
e7ca8c7d55 8365436: ImageReaderTest fails when jmods directory not present
Reviewed-by: sgehwolf, alanb
2025-08-18 07:08:19 +00:00
Per Minborg
f364fcab79 8359119: Change Charset to use StableValue
Reviewed-by: alanb, rriggs
2025-08-18 05:32:03 +00:00
Kim Barrett
bd65d483df 8365245: Move size reducing operations to GrowableArrayWithAllocator
Reviewed-by: jsjolen, stefank
2025-08-17 12:56:42 +00:00
Alexey Semenyuk
57210af9bc 8365555: Cleanup redundancies in jpackage implementation
Reviewed-by: almatvee
2025-08-16 04:41:25 +00:00
Leonid Mesnik
a70521c62e 8364973: Add JVMTI stress testing mode
Reviewed-by: erikj, ihse, sspitsyn
2025-08-15 22:45:01 +00:00
Andrew Dinn
b023fea062 8365558: Fix stub entry init and blob creation on Zero
Reviewed-by: asmehra, kvn
2025-08-15 22:12:57 +00:00
Phil Race
b69a3849b2 8365198: Remove unnecessary mention of finalize in ImageIO reader/writer docs
Reviewed-by: bchristi, azvegint
2025-08-15 20:02:43 +00:00
William Kemper
6e760b9b74 8365622: Shenandoah: Fix Shenandoah simple bit map test
Reviewed-by: ysr
2025-08-15 20:00:01 +00:00
Dean Long
39a3652968 8278874: tighten VerifyStack constraints
Co-authored-by: Tom Rodriguez <never@openjdk.org>
Reviewed-by: mhaessig, never
2025-08-15 18:52:45 +00:00
William Kemper
08db4b9962 8365571: GenShen: PLAB promotions may remain disabled for evacuation threads
Reviewed-by: kdnilsen, ysr, shade
2025-08-15 17:56:47 +00:00
Francesco Andreuzzi
dbae90c950 8364723: Sort share/interpreter includes
Reviewed-by: shade, ayang
2025-08-15 10:45:00 +00:00
Volkan Yazici
059b49b955 8365244: Some test control variables are undocumented in doc/testing.md
Reviewed-by: erikj
2025-08-15 10:37:26 +00:00
Guanqiang Han
b6d5f49b8d 8365296: Build failure with Clang due to -Wformat warning after JDK-8364611
Reviewed-by: ayang, mbaesken
2025-08-15 09:41:17 +00:00
Markus Grönlund
5856dc34c8 8365199: Use a set instead of a list as the intermediary Klass* storage to reduce typeset processing
Reviewed-by: egahlin
2025-08-15 09:32:51 +00:00
Manuel Hässig
fa2eb61648 8365491: VSCode IDE: add basic configuration for the Oracle Java extension
Reviewed-by: ihse, jlahoda
2025-08-15 08:55:11 +00:00
Doug Simon
e3aeebec17 8365468: EagerJVMCI should only apply to the CompilerBroker JVMCI runtime
Reviewed-by: never
2025-08-15 07:35:52 +00:00
Chen Liang
6fb6f3d39b 8361638: java.lang.classfile.CodeBuilder.CatchBuilder should not throw IllegalArgumentException for representable exception handlers
Reviewed-by: asotona
2025-08-15 04:25:37 +00:00
David Beaumont
44b19c01ac 8365532: java/lang/module/ModuleReader/ModuleReaderTest.testImage fails
Reviewed-by: alanb
2025-08-15 02:53:42 +00:00
Vladimir Kozlov
a65f200220 8365512: Replace -Xcomp with -Xmixed for AOT assembly phase
Reviewed-by: shade
2025-08-14 23:59:34 +00:00
Chen Liang
8c363b3e3e 8364319: Move java.lang.constant.AsTypeMethodHandleDesc to jdk.internal
Reviewed-by: redestad
2025-08-14 21:41:14 +00:00
Chen Liang
c5cbcac828 8361730: The CodeBuilder.trying(BlockCodeBuilder,CatchBuilder) method generates corrupted bytecode in certain cases
Reviewed-by: asotona
2025-08-14 20:27:08 +00:00
William Kemper
dccca0fb7a 8365572: Shenandoah: Remove unused thread local _paced_time field
Reviewed-by: shade
2025-08-14 19:58:54 +00:00
David Beaumont
ba23105231 8365048: idea.sh script does not correctly detect/handle git worktrees
Reviewed-by: shade, vyazici, erikj, mcimadamore, ihse
2025-08-14 17:02:05 +00:00
Igor Veresov
26ccb3cef1 8362530: VM crash with -XX:+PrintTieredEvents when collecting AOT profiling
Reviewed-by: chagedorn, kvn
2025-08-14 16:59:05 +00:00
Phil Race
b0f98df75a 8365416: java.desktop no longer needs preview feature access
Reviewed-by: alanb, jpai
2025-08-14 15:20:47 +00:00
Albert Mingkun Yang
dd113c8df0 8364628: Serial: Refactor SerialHeap::mem_allocate_work
Reviewed-by: phh, kbarrett
2025-08-14 14:50:56 +00:00
Roman Marchenko
41520998aa 8365098: make/RunTests.gmk generates a wrong path to test artifacts on Alpine
Reviewed-by: erikj, ihse
2025-08-14 12:31:20 +00:00
Matthias Baesken
98f54d90ea 8365487: [asan] some oops (mode) related tests fail
Reviewed-by: kbarrett, syan
2025-08-14 11:11:47 +00:00
Erik Gahlin
7698c373a6 8364556: JFR: Disable SymbolTableStatistics and StringTableStatistics in default.jfc
Reviewed-by: mgronlun
2025-08-14 10:43:21 +00:00
Yudi Zheng
e320162815 8365218: [JVMCI] AArch64 CPU features are not computed correctly after 8364128
Reviewed-by: dnsimon
2025-08-14 07:39:49 +00:00
Joel Sikström
3e3298509f 8365317: ZGC: Setting ZYoungGCThreads lower than ZOldGCThreads may result in a crash
Reviewed-by: tschatzl, eosterlund
2025-08-14 07:37:10 +00:00
Jan Lahoda
a6be228642 8365314: javac fails with an exception for erroneous source
Reviewed-by: vromero
2025-08-14 07:04:40 +00:00
Jan Lahoda
c22e01d776 8341342: Elements.getAllModuleElements() does not work properly before JavacTask.analyze()
Reviewed-by: vromero, liach
2025-08-14 07:02:08 +00:00
Prasanta Sadhukhan
9dcc502cc8 8365375: Method SU3.setAcceleratorSelectionForeground assigns to acceleratorForeground
Reviewed-by: aivanov, prr, kizune
2025-08-14 04:55:02 +00:00
Aleksey Shipilev
9c266ae83c 8365229: ARM32: c2i_no_clinit_check_entry assert failed after JDK-8364269
Reviewed-by: kvn, adinn, bulasevich, phh
2025-08-13 20:49:16 +00:00
Justin Lu
9660320041 8364781: Re-examine DigitList digits resizing during parsing
Reviewed-by: liach, naoto
2025-08-13 20:43:46 +00:00
Johan Sjölen
4680dc9831 8365264: Rename ResourceHashtable to HashTable
Reviewed-by: iklam, ayang
2025-08-13 18:41:57 +00:00
Alex Menkov
ecbdd3405a 8361103: java_lang_Thread::async_get_stack_trace does not properly protect JavaThread
Reviewed-by: sspitsyn, dholmes
2025-08-13 18:24:56 +00:00
Srinivas Vamsi Parasa
38a261415d 8365265: x86 short forward jump exceeds 8-bit offset in methodHandles_x86.cpp when using Intel APX
Reviewed-by: shade, jbhateja, aph
2025-08-13 17:53:05 +00:00
Nikita Gubarkov
899e13f40a 8364434: Inconsistent BufferedContext state after GC
Reviewed-by: jdv, azvegint, avu
2025-08-13 17:36:07 +00:00
Boris Ulasevich
001aaa1e49 8365166: ARM32: missing os::fetch_bcp_from_context implementation
Reviewed-by: shade
2025-08-13 12:45:48 +00:00
Guanqiang Han
f3b34d32d6 8359235: C1 compilation fails with "assert(is_single_stack() && !is_virtual()) failed: type check"
Reviewed-by: thartmann, dlong
2025-08-13 10:52:54 +00:00
Fredrik Bredberg
e77cdd93ea 8364570: Remove LockingMode related code from riscv64
Reviewed-by: fyang, fjiang
2025-08-13 08:47:08 +00:00
Jan Lahoda
72e22b4de5 8362885: A more formal way to mark javac's Flags that belong to a specific Symbol type only
Reviewed-by: ihse, liach, vromero, mcimadamore, erikj
2025-08-13 08:07:45 +00:00
Ramkumar Sunderbabu
25480f0011 8365184: sun/tools/jhsdb/HeapDumpTestWithActiveProcess.java Re-enable SerialGC flag on debuggee process
Reviewed-by: lmesnik, cjplummer, sspitsyn
2025-08-13 01:45:49 +00:00
Dingli Zhang
636c61a386 8365302: RISC-V: compiler/loopopts/superword/TestAlignVector.java fails when vlen=128
Reviewed-by: fyang, fjiang
2025-08-13 01:24:39 +00:00
Erik Gahlin
87d734012e 8364756: JFR: Improve slow tests
Reviewed-by: mgronlun
2025-08-12 17:44:34 +00:00
Brian Burkhalter
d023982600 8361209: (bf) Use CharSequence::getChars for StringCharBuffer bulk get methods
Reviewed-by: rriggs, alanb
2025-08-12 17:39:14 +00:00
Coleen Phillimore
4c03e5938d 8364750: Remove unused declaration in jvm.h
Reviewed-by: shade
2025-08-12 16:30:09 +00:00
Ioi Lam
ad0fd13f20 8364454: ProblemList runtime/cds/DeterministicDump.java on macos for JDK-8363986
Reviewed-by: ccheung
2025-08-12 16:20:00 +00:00
Erik Gahlin
a382996bb4 8364993: JFR: Disable jdk.ModuleExport in default.jfc
Reviewed-by: mgronlun
2025-08-12 13:42:53 +00:00
Matthias Baesken
391ea15118 8365307: AIX make fails after JDK-8364611
Reviewed-by: clanger, asteiner
2025-08-12 13:16:54 +00:00
Albert Mingkun Yang
19a76a45e9 8365316: Remove unnecessary default arg value in gcVMOperations
Reviewed-by: tschatzl
2025-08-12 11:58:37 +00:00
Albert Mingkun Yang
95b7a8b3e3 8365237: Remove unused SoftRefPolicy::_all_soft_refs_clear
Reviewed-by: tschatzl, kbarrett
2025-08-12 11:29:43 +00:00
Thomas Schatzl
16e461ef31 8365122: G1: Minor clean up of G1SurvivorRegions
Reviewed-by: sangheki, kbarrett
2025-08-12 08:52:37 +00:00
Fredrik Bredberg
3c0eed8e47 8364406: Remove LockingMode related code from aarch64
Reviewed-by: aph, dholmes
2025-08-12 08:45:36 +00:00
Fredrik Bredberg
f155f7d6e5 8364141: Remove LockingMode related code from x86
Reviewed-by: aboldtch, dholmes, coleenp
2025-08-12 08:45:02 +00:00
David Beaumont
b81f4faed7 8360037: Refactor ImageReader in preparation for Valhalla support
Reviewed-by: alanb, rriggs, jpai
2025-08-12 08:34:26 +00:00
Johny Jose
5a442197d2 7191877: TEST_BUG: java/rmi/transport/checkLeaseInfoLeak/CheckLeaseLeak.java failing intermittently
Reviewed-by: smarks, coffeys
2025-08-12 08:26:42 +00:00
Afshin Zafari
db12f1934a 8364280: NMTCommittedVirtualMemoryTracker.test_committed_virtualmemory_region_vm fails with assertion "negative distance"
Reviewed-by: gziemski, jsjolen
2025-08-12 08:03:18 +00:00
Matthias Baesken
d78fa5a9f6 8365240: [asan] exclude some tests when using asan enabled binaries
Reviewed-by: lmesnik, sspitsyn
2025-08-12 07:16:57 +00:00
Alexey Semenyuk
72d3a2a977 8308349: missing working directory option for launcher when invoked from shortcuts
Reviewed-by: almatvee
2025-08-12 03:15:49 +00:00
Dingli Zhang
6927fc3904 8365200: RISC-V: compiler/loopopts/superword/TestGeneralizedReductions.java fails with Zvbb and vlen=128
Reviewed-by: fyang, fjiang
2025-08-12 01:25:35 +00:00
Joe Darcy
9593730a23 8362376: Use @Stable annotation in Java FDLIBM implementation
Reviewed-by: liach, rgiulietti
2025-08-11 23:45:24 +00:00
Brian Burkhalter
8cd79752c6 8364761: (aio) AsynchronousChannelGroup.execute doesn't check null command
Reviewed-by: alanb, vyazici
2025-08-11 18:50:39 +00:00
Aleksey Shipilev
958383d69c 8364501: Compiler shutdown crashes on access to deleted CompileTask
Reviewed-by: kvn, mhaessig
2025-08-11 18:49:37 +00:00
Francesco Andreuzzi
e9e331b2a9 8365238: 'jfr' feature requires 'services' with 'custom' build variant
Reviewed-by: erikj, shade, ihse
2025-08-11 17:10:10 +00:00
Thomas Stuefe
bdb1646a1e 8364611: (process) Child process SIGPIPE signal disposition should be default
Reviewed-by: erikj, rriggs
2025-08-11 15:37:31 +00:00
Magnus Ihse Bursie
23985c29b4 8357979: Compile jdk.internal.vm.ci targeting the Boot JDK version
Reviewed-by: erikj, dnsimon
2025-08-11 14:12:55 +00:00
Casper Norrbin
0ad919c1e5 8352067: Remove the NMT treap and replace its uses with the utilities red-black tree
Reviewed-by: jsjolen, ayang
2025-08-11 12:22:52 +00:00
Darragh Clarke
43cfd80c1c 8352502: Response message is null if expect 100 assertion fails with non 100
Reviewed-by: dfuchs
2025-08-11 11:57:08 +00:00
Benoît Maillard
a60e523f88 8349191: Test compiler/ciReplay/TestIncrementalInlining.java failed
Reviewed-by: mhaessig, dfenacci, chagedorn
2025-08-11 11:15:34 +00:00
Albert Mingkun Yang
fd766b27b9 8364541: Parallel: Support allocation in old generation when heap is almost full
Reviewed-by: phh, tschatzl
2025-08-11 10:49:47 +00:00
Jan Lahoda
8b5bb01355 8364987: javac fails with an exception when looking for diamond creation
Reviewed-by: vromero
2025-08-11 10:28:59 +00:00
Magnus Ihse Bursie
1fc0b01601 8361142: Improve custom hooks for makefiles
Reviewed-by: erikj
2025-08-11 09:44:49 +00:00
Albert Mingkun Yang
0c39228ec1 8364767: G1: Remove use of CollectedHeap::_soft_ref_policy
Reviewed-by: tschatzl, sangheki
2025-08-11 09:42:12 +00:00
Dmitry Cherepanov
10762d408b 8365044: Missing copyright header in Contextual.java
Reviewed-by: egahlin
2025-08-11 08:19:02 +00:00
Joel Sikström
f28126ebc2 8365050: Too verbose warning in os::commit_memory_limit() on Windows
Reviewed-by: dholmes, mbaesken
2025-08-11 08:18:28 +00:00
Volkan Yazici
c31f4861fb 8364365: HKSCS encoder does not properly set the replacement character
Reviewed-by: sherman
2025-08-11 07:10:38 +00:00
Matthias Baesken
15e8609a2c 8364996: java/awt/font/FontNames/LocaleFamilyNames.java times out on Windows
Reviewed-by: clanger, prr, asteiner
2025-08-11 07:08:03 +00:00
Jaikiran Pai
022e29a775 8365086: CookieStore.getURIs() and get(URI) should return an immutable List
Reviewed-by: liach, vyazici, dfuchs
2025-08-10 04:22:10 +00:00
Chen Liang
e13b4c8de9 8358535: Changes in ClassValue (JDK-8351996) caused a 1-9% regression in Renaissance-PageRank
Reviewed-by: jrose, shade
2025-08-09 23:44:21 +00:00
Jaikiran Pai
f83454cd61 8364786: Test java/net/vthread/HttpALot.java intermittently fails - 24999 handled, expected 25000
Reviewed-by: dfuchs, alanb, vyazici
2025-08-09 02:00:58 +00:00
Alexey Semenyuk
8ad1fcc48a 8364564: Shortcut configuration is not recorded in .jpackage.xml file
Reviewed-by: almatvee
2025-08-08 22:11:52 +00:00
Alexey Semenyuk
c1c0155604 8364129: Rename libwixhelper
Reviewed-by: erikj, almatvee
2025-08-08 21:41:44 +00:00
Chen Liang
cd50d78d44 8361300: Document exceptions for Unsafe offset methods
Reviewed-by: jrose, vyazici
2025-08-08 17:17:21 +00:00
Andrew Dinn
241808e13f 8364269: Simplify code cache API by storing adapter entry offsets in blob
Reviewed-by: kvn, shade, asmehra
2025-08-08 09:12:08 +00:00
Afshin Zafari
1b3e23110b 8360048: NMT crash in gtest/NMTGtests.java: fatal error: NMT corruption: Block at 0x0000017748307120: header canary broken
Reviewed-by: jsjolen, gziemski
2025-08-08 09:06:43 +00:00
Thomas Schatzl
a26a6f3152 8364649: G1: Move collection set related full gc reset code into abandon_collection_set() method
Reviewed-by: ayang, sangheki
2025-08-08 08:06:56 +00:00
Thomas Schatzl
47017e3864 8364760: G1: Remove obsolete code in G1MergeCardSetClosure
Reviewed-by: ayang, sangheki
2025-08-08 07:57:06 +00:00
Thomas Schatzl
bcca5cee2d 8364642: G1: Remove parameter in G1CollectedHeap::abandon_collection_set()
Reviewed-by: ayang
2025-08-08 07:56:29 +00:00
Thomas Schatzl
198782c957 8364877: G1: Inline G1CollectedHeap::set_region_short_lived_locked
Reviewed-by: ayang, sangheki
2025-08-08 07:54:23 +00:00
Andrey Turbanov
d0624f8b62 8364808: Make BasicDesktopPaneUI.Actions.MOVE_RESIZE_INCREMENT static
Reviewed-by: tr, azvegint, kizune, aivanov
2025-08-08 05:03:55 +00:00
John Jiang
4c9eaddaef 8364597: Replace THL A29 Limited with Tencent
Reviewed-by: jiefu
2025-08-08 02:27:30 +00:00
Harshitha Onkar
c71be802b5 8361748: Enforce limits on the size of an XBM image
Reviewed-by: prr, jdv
2025-08-07 21:19:47 +00:00
Ayush Rigal
b8acbc3ed8 8364315: Remove unused xml files from test/jaxp/javax/xml/jaxp/functional/javax/xml/transform/xmlfiles
Reviewed-by: jpai, joehw
2025-08-07 21:11:26 +00:00
Alexey Semenyuk
244e6293c3 8364984: Many jpackage tests are failing on Linux after JDK-8334238
Reviewed-by: almatvee
2025-08-07 19:55:41 +00:00
Liam Miller-Cushon
c0e6ffabc2 8364954: (bf) CleaningThread should be InnocuousThread
Reviewed-by: rriggs, alanb
2025-08-07 19:43:45 +00:00
Brett Okken
5116d9e5fe 8364213: (bf) Improve java/nio/Buffer/CharBufferAsCharSequenceTest test comments
8364345: Test java/nio/Buffer/CharBufferAsCharSequenceTest.java failed

Reviewed-by: bpb, rriggs
2025-08-07 19:27:28 +00:00
Phil Race
78117eff56 8364230: javax/swing/text/StringContent can be migrated away from using finalize
Reviewed-by: psadhukhan, abhiscxk, kizune
2025-08-07 18:58:28 +00:00
Brian Burkhalter
02e187119d 8364277: (fs) BasicFileAttributes.isDirectory and isOther return true for NTFS directory junctions when links not followed
Reviewed-by: alanb
2025-08-07 18:24:22 +00:00
Andrew Dinn
90ea42f716 8364558: Failure to generate compiler stubs from compiler thread should not crash VM when compilation disabled due to full CodeCache
Reviewed-by: kvn, shade
2025-08-07 16:23:32 +00:00
Prasanta Sadhukhan
e29346dbd6 8348760: RadioButton is not shown if JRadioButtonMenuItem is rendered with ImageIcon in WindowsLookAndFeel
Reviewed-by: prr, kizune, abhiscxk
2025-08-07 16:03:12 +00:00
Francesco Andreuzzi
e606278fc8 8358598: PhaseIterGVN::PhaseIterGVN(PhaseGVN* gvn) doesn't use its parameter
Reviewed-by: galder, mhaessig, shade
2025-08-07 15:43:36 +00:00
Guanqiang Han
83953c458e 8364822: Comment cleanup, stale references to closeDescriptors and UNIXProcess.c
Reviewed-by: kevinw, rriggs
2025-08-07 14:11:46 +00:00
Ashutosh Mehra
bc3d865640 8364128: Improve gathering of cpu feature names using stringStream
Co-authored-by: Johan Sjölen <jsjolen@openjdk.org>
Reviewed-by: kvn, jsjolen
2025-08-07 13:26:33 +00:00
Jeremy Wood
8d73fe91bc 8358813: JPasswordField identifies spaces in password via delete shortcuts
Reviewed-by: aivanov, dnguyen
2025-08-07 10:21:54 +00:00
Thomas Schatzl
c56fb0b6ef 8364503: gc/g1/TestCodeCacheUnloadDuringConcCycle.java fails because of race printing to stdout
Reviewed-by: ayang, dholmes
2025-08-07 08:40:42 +00:00
Johannes Bechberger
487cc3c5be 8359690: New test TestCPUTimeSampleThrottling still fails intermittently
Reviewed-by: mbaesken
2025-08-07 07:52:48 +00:00
David Holmes
078d0d4968 8364235: Fix for JDK-8361447 breaks the alignment requirements for GuardedMemory
Co-authored-by: Johan Sjölen <jsjolen@openjdk.org>
Reviewed-by: dcubed, jsjolen, aboldtch
2025-08-07 04:37:21 +00:00
Alexey Semenyuk
7e484e2a63 8334238: Enhance AddLShortcutTest jpackage test
Reviewed-by: almatvee
2025-08-07 02:02:36 +00:00
Guanqiang Han
f95af744b0 8364312: debug agent should set FD_CLOEXEC flag rather than explicitly closing every open file
Reviewed-by: cjplummer, kevinw
2025-08-06 15:37:31 +00:00
Albert Mingkun Yang
72d1066ae3 8364722: Parallel: Move CLDG mark clearing to the end of full GC
Reviewed-by: tschatzl, zgu
2025-08-06 12:21:16 +00:00
David Beaumont
0ceb366dc2 8356645: Javac should utilize new ZIP file system read-only access mode
Reviewed-by: jlahoda
2025-08-06 08:55:47 +00:00
Per Minborg
9dffbc9c4c 8364540: Apply @Stable to Shared Secrets
Reviewed-by: rriggs
2025-08-06 08:52:14 +00:00
Aleksey Shipilev
e304d37996 8361211: C2: Final graph reshaping generates unencodeable klass constants
Reviewed-by: kvn, qamai, thartmann, mdoerr
2025-08-06 08:32:25 +00:00
Joel Sikström
8d529bc4f3 8364518: Support for Job Objects in os::commit_memory_limit() on Windows
Reviewed-by: ayang, dholmes
2025-08-06 07:54:44 +00:00
Koushik Thirupattur
ca41644538 8355379: Annotate lazy fields in java.security @Stable
Reviewed-by: pminborg
2025-08-06 06:40:40 +00:00
Anton Artemov
6656e767db 8359820: Improve handshake/safepoint timeout diagnostic messages
Reviewed-by: dholmes, stuefe
2025-08-06 04:45:35 +00:00
Aleksey Shipilev
68a35511eb 8364212: Shenandoah: Rework archived objects loading
Reviewed-by: wkemper, kdnilsen
2025-08-05 18:34:07 +00:00
Thomas Schatzl
d906e45026 8364531: G1: Factor out liveness tracing code
Reviewed-by: ayang, sangheki
2025-08-05 16:13:53 +00:00
Erik Gahlin
8a571ee7f2 8364667: JFR: Throttle doesn't work with dynamic events
Reviewed-by: mgronlun
2025-08-05 14:33:30 +00:00
Albert Mingkun Yang
ba0ae4cb28 8364254: Serial: Remove soft ref policy update in WhiteBox FullGC
Reviewed-by: tschatzl, sangheki
2025-08-05 10:43:30 +00:00
Francesco Andreuzzi
df736eb582 8364618: Sort share/code includes
Reviewed-by: shade, mhaessig
2025-08-05 10:23:54 +00:00
Saranya Natarajan
d25b9befe0 8325482: Test that distinct seeds produce distinct traces for compiler stress flags
Reviewed-by: chagedorn, dfenacci
2025-08-05 08:39:47 +00:00
Matthias Baesken
67ba8b45dd 8364514: [asan] runtime/jni/checked/TestCharArrayReleasing.java heap-buffer-overflow
Reviewed-by: dholmes
2025-08-05 08:02:54 +00:00
Joel Sikström
febd4b26b2 8360515: PROPERFMTARGS should always use size_t template specialization for unit
Reviewed-by: dholmes, stuefe
2025-08-05 07:41:11 +00:00
Alexey Semenyuk
c0c7d39b59 8364587: Update jpackage internal javadoc
Reviewed-by: almatvee
2025-08-05 01:42:45 +00:00
Alexey Semenyuk
6b360ac99a 8359756: Bug in RuntimePackageTest.testName test
Reviewed-by: almatvee
2025-08-05 01:09:56 +00:00
Alexey Semenyuk
0f4c3dc944 8362352: Fix references to non-existing resource strings
Reviewed-by: almatvee
2025-08-05 01:04:38 +00:00
David Holmes
84a4a3647c 8364314: java_lang_Thread::get_thread_status fails assert(base != nullptr) failed: Invalid base
Reviewed-by: amenkov, shade, dcubed, pchilanomate, sspitsyn
2025-08-04 21:48:38 +00:00
Mohamed Issa
f96b6bcd4d 8364666: Tier1 builds broken by JDK-8360559
Reviewed-by: sviswanathan
2025-08-04 21:31:35 +00:00
Phil Race
dc4d9b4849 8362898: Remove finalize() methods from javax.imageio TIFF classes
Reviewed-by: azvegint, jdv
2025-08-04 20:25:41 +00:00
Coleen Phillimore
da3a5da81b 8343218: Add option to disable allocating interface and abstract classes in non-class metaspace
Reviewed-by: shade, kvn, yzheng, stuefe, dholmes
2025-08-04 20:13:03 +00:00
Phil Race
0d0d93e8f6 8210765: Remove finalize method in CStrike.java
Reviewed-by: psadhukhan, achung, azvegint
2025-08-04 19:29:03 +00:00
Phil Race
d1e362e9a8 8363889: Update sun.print.PrintJob2D to use Disposer
Reviewed-by: azvegint, psadhukhan
2025-08-04 19:27:23 +00:00
Mohamed Issa
05f8a6fca8 8360559: Optimize Math.sinh for x86 64 bit platforms
Reviewed-by: sviswanathan, sparasa
2025-08-04 18:47:57 +00:00
Kevin Driver
b5f450a599 8364226: Better ECDSASignature Memory Management
Reviewed-by: ascarpino, hchao
2025-08-04 15:59:57 +00:00
Artur Barashev
6c52b73465 8209992: Align SSLSocket and SSLEngine Javadocs
Reviewed-by: wetmore
2025-08-04 13:55:58 +00:00
Galder Zamarreño
567c0c9335 8354244: Use random data in MinMaxRed_Long data arrays
Reviewed-by: chagedorn, mhaessig
2025-08-04 13:51:14 +00:00
Albert Mingkun Yang
fc4755535d 8364516: Serial: Move class unloading logic inside SerialFullGC::invoke_at_safepoint
Reviewed-by: tschatzl, sangheki
2025-08-04 12:59:26 +00:00
Ao Qi
a9f3d3a290 8364177: JDK fails to build due to undefined symbol in libpng on LoongArch64
Reviewed-by: prr, aivanov, erikj
2025-08-04 12:37:11 +00:00
Jasmine Karthikeyan
500462fb69 8364580: Test compiler/vectorization/TestSubwordTruncation.java fails on platforms without RoundF/RoundD
Reviewed-by: chagedorn, shade
2025-08-04 12:11:10 +00:00
Erik Gahlin
68a4396dbc 8364316: JFR: Incorrect validation of mirror fields
Reviewed-by: shade, mgronlun
2025-08-04 10:53:40 +00:00
Erik Gahlin
da0d9598d0 8364190: JFR: RemoteRecordingStream withers don't work
Reviewed-by: mgronlun
2025-08-04 10:41:21 +00:00
Erik Gahlin
b96b9c3d5b 8364461: JFR: Default constructor may not be first in setting control
Reviewed-by: mgronlun
2025-08-04 10:25:14 +00:00
Markus Grönlund
3bc449797e 8364258: ThreadGroup constant pool serialization is not normalized
Reviewed-by: egahlin
2025-08-04 09:42:05 +00:00
Erik Gahlin
cf5a25538e 8364427: JFR: Possible resource leak in Recording::getStream
Reviewed-by: mgronlun
2025-08-04 09:12:12 +00:00
Erik Gahlin
ea7e943874 8364257: JFR: User-defined events and settings with a one-letter name cannot be configured
Reviewed-by: mgronlun
2025-08-04 08:50:35 +00:00
Francesco Andreuzzi
3387b3195c 8364519: Sort share/classfile includes
Reviewed-by: shade, ayang
2025-08-04 08:20:22 +00:00
Andrey Turbanov
8269fdc78e 8362067: Remove unnecessary List.contains key from SpringLayout.Constraints.pushConstraint
Reviewed-by: aivanov
2025-08-04 08:15:09 +00:00
Abhishek Kumar
57553ca1db 8361298: SwingUtilities/bug4967768.java fails where character P is not underline
Reviewed-by: dnguyen, psadhukhan, achung, azvegint
2025-08-04 04:17:16 +00:00
David Holmes
158e59ab91 8364106: Include java.runtime.version in thread dump output
Reviewed-by: alanb, coffeys
2025-08-03 22:28:12 +00:00
Chen Liang
1a206d2a6c 8364545: tools/javac/launcher/SourceLauncherTest.java fails frequently
Reviewed-by: cstein, jpai
2025-08-03 13:23:43 +00:00
DarraghConway
a5e0c9d0c5 8363720: Follow up to JDK-8360411 with post review comments
Reviewed-by: bpb, rriggs
2025-08-03 11:03:15 +00:00
Thomas Stuefe
819de07117 8363998: Implement Compressed Class Pointers for 32-bit
Reviewed-by: rkennke, coleenp
2025-08-03 06:43:31 +00:00
erfang
f40381e41d 8356760: VectorAPI: Optimize VectorMask.fromLong for all-true/all-false cases
Reviewed-by: xgong, jbhateja
2025-08-02 07:54:42 +00:00
Serguei Spitsyn
e801e51311 8306324: StopThread results in thread being marked as interrupted, leading to unexpected InterruptedException
Reviewed-by: pchilanomate, alanb
2025-08-02 04:21:42 +00:00
Volkan Yazici
7ea08d3928 8362244: Devkit's Oracle Linux base OS keyword is incorrectly documented
Reviewed-by: erikj
2025-08-01 20:36:17 +00:00
Justin Lu
8e921aee5a 8364370: java.text.DecimalFormat specification indentation correction
Reviewed-by: liach, naoto
2025-08-01 18:43:02 +00:00
Mikhail Yankelevich
6d0bbc8a18 8357470: src/java.base/share/classes/sun/security/util/Debug.java implement the test for args.toLowerCase
Reviewed-by: coffeys
2025-08-01 18:42:41 +00:00
Coleen Phillimore
ee3665bca0 8364187: Make getClassAccessFlagsRaw non-native
Reviewed-by: thartmann, rriggs, liach
2025-08-01 15:21:45 +00:00
Bhavana Kilambi
2ba8a06f0c 8348868: AArch64: Add backend support for SelectFromTwoVector
Co-authored-by: Jatin Bhateja <jbhateja@openjdk.org>
Reviewed-by: haosun, aph, sviswanathan, xgong
2025-08-01 13:11:21 +00:00
Christian Stein
8ac4a88f3c 8362237: IllegalArgumentException in the launcher when exception without stack trace is thrown
Reviewed-by: kcr, vromero
2025-08-01 11:01:56 +00:00
Oli Gillespie
6c5804722b 8364296: Set IntelJccErratumMitigation flag ergonomically
Reviewed-by: shade, jbhateja
2025-08-01 10:27:08 +00:00
Matthias Baesken
812bd8e94d 8364199: Enhance list of environment variables printed in hserr/hsinfo file
Reviewed-by: lucy, clanger
2025-08-01 10:24:11 +00:00
Prasanta Sadhukhan
7fbeede14c 4938801: The popup does not go when the component is removed
Co-authored-by: Alexey Ivanov <aivanov@openjdk.org>
Reviewed-by: dnguyen, abhiscxk
2025-08-01 09:15:52 +00:00
Hannes Wallnöfer
d80b5c8728 8361316: javadoc tool fails with an exception for an inheritdoc on throws clause of a constructor
Reviewed-by: nbenalla, liach
2025-08-01 08:39:29 +00:00
Hannes Wallnöfer
7d63c9fa4d 8294074: Make other specs more discoverable from the API docs
Reviewed-by: mr
2025-08-01 08:35:10 +00:00
Thomas Schatzl
beda14e3cb 8364423: G1: Refactor G1UpdateRegionLivenessAndSelectForRebuildTask
Reviewed-by: sangheki, ayang
2025-08-01 08:22:04 +00:00
Joel Sikström
ae11d8f446 8364248: Separate commit and reservation limit detection
Reviewed-by: stuefe, ayang
2025-08-01 07:42:45 +00:00
Joel Sikström
e82d7f5810 8364351: ZGC: Replace usages of ZPageAgeRange() with ZPageAgeRangeAll
Reviewed-by: stefank, aboldtch
2025-08-01 07:11:11 +00:00
Aleksey Shipilev
577ac0610a 8358340: Support CDS heap archive with Generational Shenandoah
Reviewed-by: xpeng, wkemper
2025-08-01 06:28:29 +00:00
Francesco Andreuzzi
c9b8bd6ff4 8364359: Sort share/cds includes
Reviewed-by: shade, iklam
2025-08-01 06:27:02 +00:00
Albert Mingkun Yang
913d318c97 8364504: [BACKOUT] JDK-8364176 Serial: Group all class unloading logic at the end of marking phase
Reviewed-by: dholmes
2025-08-01 05:59:33 +00:00
Artur Barashev
724e8c076e 8364484: misc tests fail with Received fatal alert: handshake_failure
Reviewed-by: ascarpino
2025-07-31 21:24:09 +00:00
Albert Mingkun Yang
e0e82066fe 8364166: Parallel: Remove the use of soft_ref_policy in Full GC
Reviewed-by: tschatzl, sangheki
2025-07-31 18:53:07 +00:00
Albert Mingkun Yang
443afdc77f 8364176: Serial: Group all class unloading logic at the end of marking phase
Reviewed-by: tschatzl, sangheki
2025-07-31 18:52:44 +00:00
Chen Liang
fe09e93b8f 8364317: Explicitly document some assumptions of StringUTF16
Reviewed-by: rgiulietti, rriggs, vyazici
2025-07-31 18:26:28 +00:00
Johannes Graham
d19442399c 8358880: Performance of parsing with DecimalFormat can be improved
Reviewed-by: jlu, liach, rgiulietti
2025-07-31 17:50:18 +00:00
Anton Artemov
c4fbfa2103 8363949: Incorrect jtreg header in MonitorWithDeadObjectTest.java
Reviewed-by: stefank, coleenp, ayang
2025-07-31 15:39:38 +00:00
Aleksey Shipilev
1b9efaa11e 8364183: Shenandoah: Improve commit/uncommit handling
Reviewed-by: wkemper, xpeng
2025-07-31 15:17:51 +00:00
Weijun Wang
b2b56cfc00 8359395: XML signature generation does not support user provided SecureRandom
Reviewed-by: mullan
2025-07-31 14:45:31 +00:00
Francesco Andreuzzi
53d152e7db 8364087: Amend comment in globalDefinitions.hpp on "classfile_constants.h" include
Reviewed-by: stefank, ayang
2025-07-31 14:43:10 +00:00
DarraghConway
d4705947d8 8360408: [TEST] Use @requires tag instead of exiting based on "os.name" property value for sun/net/www/protocol/file/FileURLTest.java
Reviewed-by: vyazici, rriggs
2025-07-31 14:41:13 +00:00
Thomas Schatzl
5f357fa27d 8364197: G1: Sort G1 mutex locks by name and group them together
Reviewed-by: coleenp, ayang
2025-07-31 14:08:40 +00:00
Artur Barashev
e544cd9920 8359956: Support algorithm constraints and certificate checks in SunX509 key manager
Reviewed-by: mullan
2025-07-31 13:57:19 +00:00
Lei Zhu
458f033d4d 8362533: Tests sun/management/jmxremote/bootstrap/* duplicate VM flags
Reviewed-by: lmesnik, sspitsyn, kevinw
2025-07-31 13:11:59 +00:00
Axel Boldt-Christmas
3f21c8bd1f 8361897: gc/z/TestUncommit.java fails with Uncommitted too slow
Reviewed-by: stefank, jsikstro
2025-07-31 13:08:29 +00:00
Manuel Hässig
ddb64836e5 8364409: [BACKOUT] Consolidate Identity of self-inverse operations
Reviewed-by: thartmann, bmaillard, hgreule
2025-07-31 12:12:15 +00:00
Yasumasa Suenaga
8ed214f3b1 8364090: Dump JFR recording on CrashOnOutOfMemoryError
Reviewed-by: egahlin, stuefe
2025-07-31 12:10:43 +00:00
1068 changed files with 26466 additions and 14403 deletions

View File

@@ -125,7 +125,8 @@ if [ -d "$TOPLEVEL_DIR/.hg" ] ; then
VCS_TYPE="hg4idea"
fi
if [ -d "$TOPLEVEL_DIR/.git" ] ; then
# Git worktrees use a '.git' file rather than directory, so test both.
if [ -d "$TOPLEVEL_DIR/.git" -o -f "$TOPLEVEL_DIR/.git" ] ; then
VCS_TYPE="Git"
fi

View File

@@ -1451,10 +1451,10 @@ of a cross-compiling toolchain and a sysroot environment which can
easily be used together with the <code>--with-devkit</code> configure
option to cross compile the JDK. On Linux/x86_64, the following
command:</p>
<pre><code>bash configure --with-devkit=&lt;devkit-path&gt; --openjdk-target=ppc64-linux-gnu &amp;&amp; make</code></pre>
<p>will configure and build the JDK for Linux/ppc64 assuming that
<code>&lt;devkit-path&gt;</code> points to a Linux/x86_64 to Linux/ppc64
devkit.</p>
<pre><code>bash configure --with-devkit=&lt;devkit-path&gt; --openjdk-target=ppc64le-linux-gnu &amp;&amp; make</code></pre>
<p>will configure and build the JDK for Linux/ppc64le assuming that
<code>&lt;devkit-path&gt;</code> points to a Linux/x86_64 to
Linux/ppc64le devkit.</p>
<p>Devkits can be created from the <code>make/devkit</code> directory by
executing:</p>
<pre><code>make [ TARGETS=&quot;&lt;TARGET_TRIPLET&gt;+&quot; ] [ BASE_OS=&lt;OS&gt; ] [ BASE_OS_VERSION=&lt;VER&gt; ]</code></pre>
@@ -1481,22 +1481,22 @@ following targets are known to work:</p>
<td>arm-linux-gnueabihf</td>
</tr>
<tr class="even">
<td>ppc64-linux-gnu</td>
<td>ppc64le-linux-gnu</td>
</tr>
<tr class="odd">
<td>ppc64le-linux-gnu</td>
<td>riscv64-linux-gnu</td>
</tr>
<tr class="even">
<td>s390x-linux-gnu</td>
</tr>
</tbody>
</table>
<p><code>BASE_OS</code> must be one of "OEL6" for Oracle Enterprise
Linux 6 or "Fedora" (if not specified "OEL6" will be the default). If
the base OS is "Fedora" the corresponding Fedora release can be
specified with the help of the <code>BASE_OS_VERSION</code> option (with
"27" as default version). If the build is successful, the new devkits
can be found in the <code>build/devkit/result</code> subdirectory:</p>
<p><code>BASE_OS</code> must be one of <code>OL</code> for Oracle
Enterprise Linux or <code>Fedora</code>. If the base OS is
<code>Fedora</code> the corresponding Fedora release can be specified
with the help of the <code>BASE_OS_VERSION</code> option. If the build
is successful, the new devkits can be found in the
<code>build/devkit/result</code> subdirectory:</p>
<pre><code>cd make/devkit
make TARGETS=&quot;ppc64le-linux-gnu aarch64-linux-gnu&quot; BASE_OS=Fedora BASE_OS_VERSION=21
ls -1 ../../build/devkit/result/

View File

@@ -1258,11 +1258,11 @@ toolchain and a sysroot environment which can easily be used together with the
following command:
```
bash configure --with-devkit=<devkit-path> --openjdk-target=ppc64-linux-gnu && make
bash configure --with-devkit=<devkit-path> --openjdk-target=ppc64le-linux-gnu && make
```
will configure and build the JDK for Linux/ppc64 assuming that `<devkit-path>`
points to a Linux/x86_64 to Linux/ppc64 devkit.
will configure and build the JDK for Linux/ppc64le assuming that `<devkit-path>`
points to a Linux/x86_64 to Linux/ppc64le devkit.
Devkits can be created from the `make/devkit` directory by executing:
@@ -1281,16 +1281,14 @@ at least the following targets are known to work:
| x86_64-linux-gnu |
| aarch64-linux-gnu |
| arm-linux-gnueabihf |
| ppc64-linux-gnu |
| ppc64le-linux-gnu |
| riscv64-linux-gnu |
| s390x-linux-gnu |
`BASE_OS` must be one of "OEL6" for Oracle Enterprise Linux 6 or "Fedora" (if
not specified "OEL6" will be the default). If the base OS is "Fedora" the
corresponding Fedora release can be specified with the help of the
`BASE_OS_VERSION` option (with "27" as default version). If the build is
successful, the new devkits can be found in the `build/devkit/result`
subdirectory:
`BASE_OS` must be one of `OL` for Oracle Enterprise Linux or `Fedora`. If the
base OS is `Fedora` the corresponding Fedora release can be specified with the
help of the `BASE_OS_VERSION` option. If the build is successful, the new
devkits can be found in the `build/devkit/result` subdirectory:
```
cd make/devkit

View File

@@ -11,11 +11,8 @@
div.columns{display: flex; gap: min(4vw, 1.5em);}
div.column{flex: auto; overflow-x: auto;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
/* The extra [class] is a hack that increases specificity enough to
override a similar rule in reveal.js */
ul.task-list[class]{list-style: none;}
ul.task-list{list-style: none;}
ul.task-list li input[type="checkbox"] {
font-size: inherit;
width: 0.8em;
margin: 0 0.8em 0.2em -1.6em;
vertical-align: middle;

View File

@@ -72,11 +72,9 @@ id="toc-notes-for-specific-tests">Notes for Specific Tests</a>
<li><a href="#non-us-locale" id="toc-non-us-locale">Non-US
locale</a></li>
<li><a href="#pkcs11-tests" id="toc-pkcs11-tests">PKCS11 Tests</a></li>
</ul></li>
<li><a href="#testing-ahead-of-time-optimizations"
id="toc-testing-ahead-of-time-optimizations">### Testing Ahead-of-time
Optimizations</a>
<ul>
id="toc-testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</a></li>
<li><a href="#testing-with-alternative-security-providers"
id="toc-testing-with-alternative-security-providers">Testing with
alternative security providers</a></li>
@@ -435,6 +433,9 @@ the diff between the specified revision and the repository tip.</p>
<p>The report is stored in
<code>build/$BUILD/test-results/jcov-output/diff_coverage_report</code>
file.</p>
<h4 id="aot_jdk">AOT_JDK</h4>
<p>See <a href="#testing-ahead-of-time-optimizations">Testing
Ahead-of-time optimizations</a>.</p>
<h3 id="jtreg-keywords">JTReg keywords</h3>
<h4 id="jobs-1">JOBS</h4>
<p>The test concurrency (<code>-concurrency</code>).</p>
@@ -457,6 +458,12 @@ class, named Virtual, is currently part of the JDK build in the
<code>test/jtreg_test_thread_factory/</code> directory. This class gets
compiled during the test image build. The implementation of the Virtual
class creates a new virtual thread for executing each test class.</p>
<h4 id="jvmti_stress_agent">JVMTI_STRESS_AGENT</h4>
<p>Executes JTReg tests with JVM TI stress agent. The stress agent is
the part of test library and located in
<code>test/lib/jdk/test/lib/jvmti/libJvmtiStressAgent.cpp</code>. The
value of this argument is set as JVM TI agent options. This mode uses
ProblemList-jvmti-stress-agent.txt as an additional exclude list.</p>
<h4 id="test_mode">TEST_MODE</h4>
<p>The test mode (<code>agentvm</code> or <code>othervm</code>).</p>
<p>Defaults to <code>agentvm</code>.</p>
@@ -556,6 +563,12 @@ each fork. Same as specifying <code>-wi &lt;num&gt;</code>.</p>
same values as <code>-rff</code>, i.e., <code>text</code>,
<code>csv</code>, <code>scsv</code>, <code>json</code>, or
<code>latex</code>.</p>
<h4 id="test_jdk">TEST_JDK</h4>
<p>The path to the JDK that will be used to run the benchmarks.</p>
<p>Defaults to <code>build/&lt;CONF-NAME&gt;/jdk</code>.</p>
<h4 id="benchmarks_jar">BENCHMARKS_JAR</h4>
<p>The path to the JAR containing the benchmarks.</p>
<p>Defaults to <code>test/micro/benchmarks.jar</code>.</p>
<h4 id="vm_options-2">VM_OPTIONS</h4>
<p>Additional VM arguments to provide to forked off VMs. Same as
<code>-jvmArgs &lt;args&gt;</code></p>
@@ -601,8 +614,8 @@ element of the appropriate <code>@Artifact</code> class. (See
JTREG=&quot;JAVA_OPTIONS=-Djdk.test.lib.artifacts.nsslib-linux_aarch64=/path/to/NSS-libs&quot;</code></pre>
<p>For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.</p>
<h2 id="testing-ahead-of-time-optimizations">### Testing Ahead-of-time
Optimizations</h2>
<h3 id="testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</h3>
<p>One way to improve test coverage of ahead-of-time (AOT) optimizations
in the JDK is to run existing jtreg test cases in a special "AOT_JDK"
mode. Example:</p>

View File

@@ -367,6 +367,10 @@ between the specified revision and the repository tip.
The report is stored in
`build/$BUILD/test-results/jcov-output/diff_coverage_report` file.
#### AOT_JDK
See [Testing Ahead-of-time optimizations](#testing-ahead-of-time-optimizations).
### JTReg keywords
#### JOBS
@@ -397,6 +401,13 @@ the `test/jtreg_test_thread_factory/` directory. This class gets compiled
during the test image build. The implementation of the Virtual class creates a
new virtual thread for executing each test class.
#### JVMTI_STRESS_AGENT
Executes JTReg tests with JVM TI stress agent. The stress agent is the part of
test library and located in `test/lib/jdk/test/lib/jvmti/libJvmtiStressAgent.cpp`.
The value of this argument is set as JVM TI agent options.
This mode uses ProblemList-jvmti-stress-agent.txt as an additional exclude list.
#### TEST_MODE
The test mode (`agentvm` or `othervm`).
@@ -545,6 +556,18 @@ Amount of time to spend in each warmup iteration. Same as specifying `-w
Specify to have the test run save a log of the values. Accepts the same values
as `-rff`, i.e., `text`, `csv`, `scsv`, `json`, or `latex`.
#### TEST_JDK
The path to the JDK that will be used to run the benchmarks.
Defaults to `build/<CONF-NAME>/jdk`.
#### BENCHMARKS_JAR
The path to the JAR containing the benchmarks.
Defaults to `test/micro/benchmarks.jar`.
#### VM_OPTIONS
Additional VM arguments to provide to forked off VMs. Same as `-jvmArgs <args>`
@@ -612,7 +635,7 @@ For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.
### Testing Ahead-of-time Optimizations
-------------------------------------------------------------------------------
One way to improve test coverage of ahead-of-time (AOT) optimizations in
the JDK is to run existing jtreg test cases in a special "AOT_JDK" mode.
Example:

View File

@@ -85,7 +85,7 @@ CreateHkTargets = \
################################################################################
# Include module specific build settings
THIS_SNIPPET := modules/$(MODULE)/Java.gmk
THIS_SNIPPET := $(call GetModuleSnippetName, Java)
ifneq ($(wildcard $(THIS_SNIPPET)), )
include MakeSnippetStart.gmk
@@ -115,6 +115,7 @@ $(eval $(call SetupJavaCompilation, $(MODULE), \
EXCLUDE_FILES := $(EXCLUDE_FILES), \
EXCLUDE_PATTERNS := -files, \
KEEP_ALL_TRANSLATIONS := $(KEEP_ALL_TRANSLATIONS), \
TARGET_RELEASE := $(TARGET_RELEASE), \
JAVAC_FLAGS := \
$(DOCLINT) \
$(JAVAC_FLAGS) \

View File

@@ -184,7 +184,7 @@ endif
################################################################################
# Include module specific build settings
THIS_SNIPPET := modules/$(MODULE)/Jmod.gmk
THIS_SNIPPET := $(call GetModuleSnippetName, Jmod)
ifneq ($(wildcard $(THIS_SNIPPET)), )
include MakeSnippetStart.gmk

View File

@@ -236,8 +236,8 @@ define create_overview_file
#
ifneq ($$($1_GROUPS), )
$1_OVERVIEW_TEXT += \
<p>This document is divided into \
$$(subst 2,two,$$(subst 3,three,$$(words $$($1_GROUPS)))) sections:</p> \
<p>This document has \
$$(subst 2,two,$$(subst 3,three,$$(words $$($1_GROUPS)))) major sections:</p> \
<blockquote><dl> \
#
$1_OVERVIEW_TEXT += $$(foreach g, $$($1_GROUPS), \
@@ -246,7 +246,10 @@ define create_overview_file
)
$1_OVERVIEW_TEXT += \
</dl></blockquote> \
#
<p><a href="../specs/index.html">Related documents</a> specify the Java \
programming language, the Java Virtual Machine, various protocols and file \
formats pertaining to the Java platform, and tools included in the JDK.</p> \
#
endif
$1_OVERVIEW_TEXT += \
</body></html> \

View File

@@ -270,6 +270,7 @@ endif
# Since debug symbols are not included in the jmod files, they need to be copied
# in manually after generating the images.
# These variables are read by SetupCopyDebuginfo
ALL_JDK_MODULES := $(JDK_MODULES)
ALL_JRE_MODULES := $(sort $(JRE_MODULES), $(foreach m, $(JRE_MODULES), \
$(call FindTransitiveDepsForModule, $m)))

View File

@@ -1407,7 +1407,7 @@ CLEAN_SUPPORT_DIRS += demos
CLEAN_SUPPORT_DIR_TARGETS := $(addprefix clean-, $(CLEAN_SUPPORT_DIRS))
CLEAN_TESTS += hotspot-jtreg-native jdk-jtreg-native lib
CLEAN_TEST_TARGETS += $(addprefix clean-test-, $(CLEAN_TESTS))
CLEAN_PHASES := gensrc java native include
CLEAN_PHASES += gensrc java native include
CLEAN_PHASE_TARGETS := $(addprefix clean-, $(CLEAN_PHASES))
CLEAN_MODULE_TARGETS := $(addprefix clean-, $(ALL_MODULES))
# Construct targets of the form clean-$module-$phase

View File

@@ -149,7 +149,7 @@ endef
################################################################################
PHASE_MAKEDIRS := $(TOPDIR)/make
PHASE_MAKEDIRS += $(TOPDIR)/make
# Helper macro for DeclareRecipesForPhase
# Declare a recipe for calling the module and phase specific makefile.

View File

@@ -34,18 +34,23 @@ include MakeFileStart.gmk
################################################################################
include CopyFiles.gmk
include Modules.gmk
MODULE_SRC := $(TOPDIR)/src/$(MODULE)
# Define the snippet for MakeSnippetStart/End
THIS_SNIPPET := modules/$(MODULE)/$(MAKEFILE_PREFIX).gmk
################################################################################
# Include module specific build settings
include MakeSnippetStart.gmk
THIS_SNIPPET := $(call GetModuleSnippetName, $(MAKEFILE_PREFIX))
# Include the file being wrapped.
include $(THIS_SNIPPET)
ifneq ($(wildcard $(THIS_SNIPPET)), )
include MakeSnippetStart.gmk
include MakeSnippetEnd.gmk
# Include the file being wrapped.
include $(THIS_SNIPPET)
include MakeSnippetEnd.gmk
endif
ifeq ($(MAKEFILE_PREFIX), Lib)
# We need to keep track of what libraries are generated/needed by this

View File

@@ -204,8 +204,9 @@ $(eval $(call SetTestOpt,AOT_JDK,JTREG))
$(eval $(call ParseKeywordVariable, JTREG, \
SINGLE_KEYWORDS := JOBS TIMEOUT_FACTOR FAILURE_HANDLER_TIMEOUT \
TEST_MODE ASSERT VERBOSE RETAIN TEST_THREAD_FACTORY MAX_MEM RUN_PROBLEM_LISTS \
RETRY_COUNT REPEAT_COUNT MAX_OUTPUT REPORT AOT_JDK $(CUSTOM_JTREG_SINGLE_KEYWORDS), \
TEST_MODE ASSERT VERBOSE RETAIN TEST_THREAD_FACTORY JVMTI_STRESS_AGENT \
MAX_MEM RUN_PROBLEM_LISTS RETRY_COUNT REPEAT_COUNT MAX_OUTPUT REPORT \
AOT_JDK $(CUSTOM_JTREG_SINGLE_KEYWORDS), \
STRING_KEYWORDS := OPTIONS JAVA_OPTIONS VM_OPTIONS KEYWORDS \
EXTRA_PROBLEM_LISTS LAUNCHER_OPTIONS \
$(CUSTOM_JTREG_STRING_KEYWORDS), \
@@ -876,6 +877,15 @@ define SetupRunJtregTestBody
))
endif
ifneq ($$(JTREG_JVMTI_STRESS_AGENT), )
AGENT := $$(LIBRARY_PREFIX)JvmtiStressAgent$$(SHARED_LIBRARY_SUFFIX)=$$(JTREG_JVMTI_STRESS_AGENT)
$1_JTREG_BASIC_OPTIONS += -javaoption:'-agentpath:$(TEST_IMAGE_DIR)/hotspot/jtreg/native/$$(AGENT)'
$1_JTREG_BASIC_OPTIONS += $$(addprefix $$(JTREG_PROBLEM_LIST_PREFIX), $$(wildcard \
$$(addprefix $$($1_TEST_ROOT)/, ProblemList-jvmti-stress-agent.txt) \
))
endif
ifneq ($$(JTREG_LAUNCHER_OPTIONS), )
$1_JTREG_LAUNCHER_OPTIONS += $$(JTREG_LAUNCHER_OPTIONS)
endif
@@ -1243,7 +1253,7 @@ UseSpecialTestHandler = \
# Now process each test to run and setup a proper make rule
$(foreach test, $(TESTS_TO_RUN), \
$(eval TEST_ID := $(shell $(ECHO) $(strip $(test)) | \
$(TR) -cs '[a-z][A-Z][0-9]\n' '[_*1000]')) \
$(TR) -cs '[a-z][A-Z][0-9]\n' '_')) \
$(eval ALL_TEST_IDS += $(TEST_ID)) \
$(if $(call UseCustomTestHandler, $(test)), \
$(eval $(call SetupRunCustomTest, $(TEST_ID), \
@@ -1323,9 +1333,9 @@ run-test-report: post-run-test
TEST TOTAL PASS FAIL ERROR SKIP " "
$(foreach test, $(TESTS_TO_RUN), \
$(eval TEST_ID := $(shell $(ECHO) $(strip $(test)) | \
$(TR) -cs '[a-z][A-Z][0-9]\n' '[_*1000]')) \
$(TR) -cs '[a-z][A-Z][0-9]\n' '_')) \
$(ECHO) >> $(TEST_LAST_IDS) $(TEST_ID) $(NEWLINE) \
$(eval NAME_PATTERN := $(shell $(ECHO) $(test) | $(TR) -c '\n' '[_*1000]')) \
$(eval NAME_PATTERN := $(shell $(ECHO) $(test) | $(TR) -c '\n' '_')) \
$(if $(filter __________________________________________________%, $(NAME_PATTERN)), \
$(eval TEST_NAME := ) \
$(PRINTF) >> $(TEST_SUMMARY) "%2s %-49s\n" " " "$(test)" $(NEWLINE) \
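The `tr` change above simplifies how test names are sanitized into make-safe IDs: every character outside `[a-z][A-Z][0-9]` (and newline) is squeezed into a single `_`, replacing the older SysV-style `[_*1000]` repeated-replacement form. A sketch of the new invocation on a sample test name (the name is illustrative, not from the source):

```shell
# Squeeze (-s) every character in the complement (-c) of
# a-z, A-Z, 0-9 and newline into a single underscore.
echo 'jtreg:test/hotspot/jtreg:tier1' | tr -cs '[a-z][A-Z][0-9]\n' '_'
# jtreg_test_hotspot_jtreg_tier1
```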

View File

@@ -176,3 +176,19 @@ ULIMIT := ulimit
ifeq ($(OPENJDK_BUILD_OS), windows)
PATHTOOL := cygpath
endif
# These settings are needed to run testing with jvmti agent
ifeq ($(OPENJDK_BUILD_OS), linux)
LIBRARY_PREFIX := lib
SHARED_LIBRARY_SUFFIX := .so
endif
ifeq ($(OPENJDK_BUILD_OS), windows)
LIBRARY_PREFIX :=
SHARED_LIBRARY_SUFFIX := .dll
endif
ifeq ($(OPENJDK_BUILD_OS), macosx)
LIBRARY_PREFIX := lib
SHARED_LIBRARY_SUFFIX := .dylib
endif
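The hunk above defines per-OS `LIBRARY_PREFIX` / `SHARED_LIBRARY_SUFFIX` values; an earlier hunk composes them into the agent file name via `AGENT := $$(LIBRARY_PREFIX)JvmtiStressAgent$$(SHARED_LIBRARY_SUFFIX)`. A sketch of that composition in shell, with a hypothetical `agent_file_name` helper (OS names follow the `OPENJDK_BUILD_OS` values used above):

```shell
# Compose the platform-specific shared-library file name for the
# JVM TI stress agent, mirroring the make variables in the hunk above.
agent_file_name() {
  case "$1" in
    linux)   prefix=lib; suffix=.so ;;
    windows) prefix=;    suffix=.dll ;;
    macosx)  prefix=lib; suffix=.dylib ;;
    *)       return 1 ;;
  esac
  echo "${prefix}JvmtiStressAgent${suffix}"
}

agent_file_name linux    # libJvmtiStressAgent.so
agent_file_name windows  # JvmtiStressAgent.dll
agent_file_name macosx   # libJvmtiStressAgent.dylib
```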

View File

@@ -36,7 +36,7 @@ $(eval $(call SetupJavaCompilation, BUILD_TOOLS_LANGTOOLS, \
COMPILER := bootjdk, \
TARGET_RELEASE := $(TARGET_RELEASE_BOOTJDK), \
SRC := $(TOPDIR)/make/langtools/tools, \
INCLUDES := compileproperties propertiesparser, \
INCLUDES := compileproperties flagsgenerator propertiesparser, \
COPY := .properties, \
BIN := $(BUILDTOOLS_OUTPUTDIR)/langtools_tools_classes, \
))

View File

@@ -395,11 +395,9 @@ AC_DEFUN_ONCE([BOOTJDK_SETUP_BOOT_JDK],
# When compiling code to be executed by the Boot JDK, force compatibility with the
# oldest supported bootjdk.
OLDEST_BOOT_JDK=`$ECHO $DEFAULT_ACCEPTABLE_BOOT_VERSIONS \
OLDEST_BOOT_JDK_VERSION=`$ECHO $DEFAULT_ACCEPTABLE_BOOT_VERSIONS \
| $TR " " "\n" | $SORT -n | $HEAD -n1`
# -Xlint:-options is added to avoid "warning: [options] system modules path not set in conjunction with -source"
BOOT_JDK_SOURCETARGET="-source $OLDEST_BOOT_JDK -target $OLDEST_BOOT_JDK -Xlint:-options"
AC_SUBST(BOOT_JDK_SOURCETARGET)
AC_SUBST(OLDEST_BOOT_JDK_VERSION)
# Check if the boot jdk is 32 or 64 bit
if $JAVA -version 2>&1 | $GREP -q "64-Bit"; then

View File

@@ -940,7 +940,7 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_CPU_DEP],
# ACLE and this flag are required to build the aarch64 SVE related functions in
# libvectormath. Apple Silicon does not support SVE; use macOS as a proxy for
# that check.
if test "x$OPENJDK_TARGET_CPU" = "xaarch64" && test "x$OPENJDK_TARGET_CPU" = "xlinux"; then
if test "x$OPENJDK_TARGET_CPU" = "xaarch64" && test "x$OPENJDK_TARGET_OS" = "xlinux"; then
if test "x$TOOLCHAIN_TYPE" = xgcc || test "x$TOOLCHAIN_TYPE" = xclang; then
AC_LANG_PUSH(C)
OLD_CFLAGS="$CFLAGS"
@@ -954,6 +954,17 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_CPU_DEP],
[
AC_MSG_RESULT([yes])
$2SVE_CFLAGS="-march=armv8-a+sve"
# Switching the initialization mode with gcc from 'pattern' to 'zero'
# avoids the use of unsupported `__builtin_clear_padding` for variable
# length aggregates
if test "x$DEBUG_LEVEL" != xrelease && test "x$TOOLCHAIN_TYPE" = xgcc ; then
INIT_ZERO_FLAG="-ftrivial-auto-var-init=zero"
FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [$INIT_ZERO_FLAG],
IF_TRUE: [
$2SVE_CFLAGS="${$2SVE_CFLAGS} $INIT_ZERO_FLAG"
]
)
fi
],
[
AC_MSG_RESULT([no])

View File

@@ -513,6 +513,10 @@ AC_DEFUN([JVM_FEATURES_VERIFY],
[
variant=$1
if JVM_FEATURES_IS_ACTIVE(jfr) && ! JVM_FEATURES_IS_ACTIVE(services); then
AC_MSG_ERROR([Specified JVM feature 'jfr' requires feature 'services' for variant '$variant'])
fi
if JVM_FEATURES_IS_ACTIVE(jvmci) && ! (JVM_FEATURES_IS_ACTIVE(compiler1) || \
JVM_FEATURES_IS_ACTIVE(compiler2)); then
AC_MSG_ERROR([Specified JVM feature 'jvmci' requires feature 'compiler2' or 'compiler1' for variant '$variant'])

View File

@@ -393,9 +393,8 @@ EXTERNAL_BUILDJDK := @EXTERNAL_BUILDJDK@
# Whether the boot jdk jar supports --date=TIMESTAMP
BOOT_JDK_JAR_SUPPORTS_DATE := @BOOT_JDK_JAR_SUPPORTS_DATE@
# When compiling Java source to be run by the boot jdk
# use these extra flags, eg -source 6 -target 6
BOOT_JDK_SOURCETARGET := @BOOT_JDK_SOURCETARGET@
# The oldest supported boot jdk version
OLDEST_BOOT_JDK_VERSION := @OLDEST_BOOT_JDK_VERSION@
# Information about the build system
NUM_CORES := @NUM_CORES@

View File

@@ -38,10 +38,15 @@ include JarArchive.gmk
###
# Create classes that can run on the bootjdk
TARGET_RELEASE_BOOTJDK := $(BOOT_JDK_SOURCETARGET)
# -Xlint:-options is added to avoid the warning
# "system modules path not set in conjunction with -source"
TARGET_RELEASE_BOOTJDK := -source $(OLDEST_BOOT_JDK_VERSION) \
-target $(OLDEST_BOOT_JDK_VERSION) -Xlint:-options
# Create classes that can be used in (or be a part of) the new jdk we're building
TARGET_RELEASE_NEWJDK := -source $(JDK_SOURCE_TARGET_VERSION) -target $(JDK_SOURCE_TARGET_VERSION)
# Create classes that can be used in (or be a part of) the new jdk we're
# building
TARGET_RELEASE_NEWJDK := -source $(JDK_SOURCE_TARGET_VERSION) \
-target $(JDK_SOURCE_TARGET_VERSION)
# Create classes that can be used in JDK 8, for legacy support
TARGET_RELEASE_JDK8 := --release 8
@@ -178,6 +183,10 @@ define SetupJavaCompilationBody
$1_SAFE_NAME := $$(strip $$(subst /,_, $1))
ifeq ($$($1_LOG_ACTION), )
$1_LOG_ACTION := Compiling
endif
ifeq ($$($1_SMALL_JAVA), )
# If unspecified, default to true
$1_SMALL_JAVA := true
@@ -472,7 +481,7 @@ define SetupJavaCompilationBody
# list of files.
$$($1_FILELIST): $$($1_SRCS) $$($1_VARDEPS_FILE)
$$(call MakeDir, $$(@D))
$$(call LogWarn, Compiling up to $$(words $$($1_SRCS)) files for $1)
$$(call LogWarn, $$($1_LOG_ACTION) up to $$(words $$($1_SRCS)) files for $1)
$$(eval $$(call ListPathsSafely, $1_SRCS, $$($1_FILELIST)))
# Create a $$($1_MODFILELIST) file with significant modified dependencies

View File

@@ -33,7 +33,7 @@ include $(TOPDIR)/make/conf/module-loader-map.conf
# Append platform-specific and upgradeable modules
PLATFORM_MODULES += $(PLATFORM_MODULES_$(OPENJDK_TARGET_OS)) \
$(UPGRADEABLE_PLATFORM_MODULES)
$(UPGRADEABLE_PLATFORM_MODULES) $(CUSTOM_UPGRADEABLE_PLATFORM_MODULES)
################################################################################
# Setup module sets for docs
@@ -216,7 +216,7 @@ endif
# Find dependencies ("requires") for a given module.
# Param 1: Module to find dependencies for.
FindDepsForModule = \
$(DEPS_$(strip $1))
$(filter-out $(IMPORT_MODULES), $(DEPS_$(strip $1)))
# Find dependencies ("requires") transitively in 3 levels for a given module.
# Param 1: Module to find dependencies for.
@@ -254,7 +254,8 @@ FindTransitiveIndirectDepsForModules = \
# Upgradeable modules are those that are either defined as upgradeable or that
# require an upradeable module.
FindAllUpgradeableModules = \
$(sort $(filter-out $(MODULES_FILTER), $(UPGRADEABLE_PLATFORM_MODULES)))
$(sort $(filter-out $(MODULES_FILTER), \
$(UPGRADEABLE_PLATFORM_MODULES) $(CUSTOM_UPGRADEABLE_PLATFORM_MODULES)))
################################################################################
@@ -316,6 +317,19 @@ define ReadImportMetaData
$$(eval $$(call ReadSingleImportMetaData, $$m)))
endef
################################################################################
# Get a full snippet path for the current module and a given base name.
#
# Param 1 - The base name of the snippet file to include
GetModuleSnippetName = \
$(if $(CUSTOM_MODULE_MAKE_ROOT), \
$(if $(wildcard $(CUSTOM_MODULE_MAKE_ROOT)/$(MODULE)/$(strip $1).gmk), \
$(CUSTOM_MODULE_MAKE_ROOT)/$(MODULE)/$(strip $1).gmk, \
$(wildcard modules/$(MODULE)/$(strip $1).gmk) \
), \
$(wildcard modules/$(MODULE)/$(strip $1).gmk) \
)
################################################################################
endif # include guard

View File

@@ -39,7 +39,7 @@
#
# make TARGETS="aarch64-linux-gnu" BASE_OS=Fedora
# or
# make TARGETS="arm-linux-gnueabihf ppc64-linux-gnu" BASE_OS=Fedora BASE_OS_VERSION=17
# make TARGETS="arm-linux-gnueabihf ppc64le-linux-gnu" BASE_OS=Fedora BASE_OS_VERSION=17
#
# to build several devkits for a specific OS version at once.
# You can find the final results under ../../build/devkit/result/<host>-to-<target>
@@ -50,7 +50,7 @@
# makefile again for cross compilation. Ex:
#
# PATH=$PWD/../../build/devkit/result/x86_64-linux-gnu-to-x86_64-linux-gnu/bin:$PATH \
# make TARGETS="arm-linux-gnueabihf,ppc64-linux-gnu" BASE_OS=Fedora
# make TARGETS="arm-linux-gnueabihf ppc64le-linux-gnu" BASE_OS=Fedora
#
# This is the makefile which iterates over all host and target platforms.
#

View File

@@ -69,15 +69,26 @@ else ifeq ($(BASE_OS), Fedora)
ifeq ($(BASE_OS_VERSION), )
BASE_OS_VERSION := $(DEFAULT_OS_VERSION)
endif
ifeq ($(filter aarch64 armhfp ppc64le riscv64 s390x x86_64, $(ARCH)), )
$(error Only "aarch64 armhfp ppc64le riscv64 s390x x86_64" architectures are supported for Fedora, but "$(ARCH)" was requested)
endif
ifeq ($(ARCH), riscv64)
ifeq ($(filter 38 39 40 41, $(BASE_OS_VERSION)), )
$(error Only Fedora 38-41 are supported for "$(ARCH)", but Fedora $(BASE_OS_VERSION) was requested)
endif
BASE_URL := http://fedora.riscv.rocks/repos-dist/f$(BASE_OS_VERSION)/latest/$(ARCH)/Packages/
else
LATEST_ARCHIVED_OS_VERSION := 35
ifeq ($(filter x86_64 armhfp, $(ARCH)), )
LATEST_ARCHIVED_OS_VERSION := 36
ifeq ($(filter aarch64 armhfp x86_64, $(ARCH)), )
FEDORA_TYPE := fedora-secondary
else
FEDORA_TYPE := fedora/linux
endif
ifeq ($(ARCH), armhfp)
ifneq ($(BASE_OS_VERSION), 36)
$(error Fedora 36 is the last release supporting "armhfp", but Fedora $(BASE_OS_VERSION) was requested)
endif
endif
NOT_ARCHIVED := $(shell [ $(BASE_OS_VERSION) -gt $(LATEST_ARCHIVED_OS_VERSION) ] && echo true)
ifeq ($(NOT_ARCHIVED),true)
BASE_URL := https://dl.fedoraproject.org/pub/$(FEDORA_TYPE)/releases/$(BASE_OS_VERSION)/Everything/$(ARCH)/os/Packages/
@@ -464,7 +475,7 @@ ifeq ($(ARCH), armhfp)
$(BUILDDIR)/$(gcc_ver)/Makefile : CONFIG += --with-float=hard
endif
ifneq ($(filter riscv64 ppc64 ppc64le s390x, $(ARCH)), )
ifneq ($(filter riscv64 ppc64le s390x, $(ARCH)), )
# We only support 64-bit on these platforms anyway
CONFIG += --disable-multilib
endif

View File

@@ -12,12 +12,17 @@
],
"extensions": {
"recommendations": [
"oracle.oracle-java",
// {{INDEXER_EXTENSIONS}}
]
},
"settings": {
// {{INDEXER_SETTINGS}}
// Java extension
"jdk.project.jdkhome": "{{OUTPUTDIR}}/jdk",
"jdk.java.onSave.organizeImports": false, // prevents unnecessary changes
// Additional conventions
"files.associations": {
"*.gmk": "makefile"

View File

@@ -0,0 +1,161 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package flagsgenerator;
import com.sun.source.tree.CompilationUnitTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.TreePath;
import com.sun.source.util.Trees;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.TreeMap;
import java.util.stream.Collectors;
import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.TypeElement;
import javax.lang.model.element.VariableElement;
import javax.lang.model.util.ElementFilter;
import javax.tools.ToolProvider;
public class FlagsGenerator {
public static void main(String... args) throws IOException {
var compiler = ToolProvider.getSystemJavaCompiler();
try (var fm = compiler.getStandardFileManager(null, null, null)) {
JavacTask task = (JavacTask) compiler.getTask(null, null, d -> {}, null, null, fm.getJavaFileObjects(args[0]));
Trees trees = Trees.instance(task);
CompilationUnitTree cut = task.parse().iterator().next();
task.analyze();
TypeElement clazz = (TypeElement) trees.getElement(new TreePath(new TreePath(cut), cut.getTypeDecls().get(0)));
Map<Integer, List<String>> flag2Names = new TreeMap<>();
Map<FlagTarget, Map<Integer, List<String>>> target2FlagBit2Fields = new EnumMap<>(FlagTarget.class);
Map<String, String> customToString = new HashMap<>();
Set<String> noToString = new HashSet<>();
for (VariableElement field : ElementFilter.fieldsIn(clazz.getEnclosedElements())) {
String flagName = field.getSimpleName().toString();
for (AnnotationMirror am : field.getAnnotationMirrors()) {
switch (am.getAnnotationType().toString()) {
case "com.sun.tools.javac.code.Flags.Use" -> {
long flagValue = ((Number) field.getConstantValue()).longValue();
int flagBit = 63 - Long.numberOfLeadingZeros(flagValue);
flag2Names.computeIfAbsent(flagBit, _ -> new ArrayList<>())
.add(flagName);
List<?> originalTargets = (List<?>) valueOfValueAttribute(am);
originalTargets.stream()
.map(value -> FlagTarget.valueOf(value.toString()))
.forEach(target -> target2FlagBit2Fields.computeIfAbsent(target, _ -> new HashMap<>())
.computeIfAbsent(flagBit, _ -> new ArrayList<>())
.add(flagName));
}
case "com.sun.tools.javac.code.Flags.CustomToStringValue" -> {
customToString.put(flagName, (String) valueOfValueAttribute(am));
}
case "com.sun.tools.javac.code.Flags.NoToStringValue" -> {
noToString.add(flagName);
}
}
}
}
//verify there are no flag overlaps:
for (Entry<FlagTarget, Map<Integer, List<String>>> targetAndFlag : target2FlagBit2Fields.entrySet()) {
for (Entry<Integer, List<String>> flagAndFields : targetAndFlag.getValue().entrySet()) {
if (flagAndFields.getValue().size() > 1) {
throw new AssertionError("duplicate flag for target: " + targetAndFlag.getKey() +
", flag: " + flagAndFields.getKey() +
", flags fields: " + flagAndFields.getValue());
}
}
}
try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get(args[1])))) {
out.println("""
package com.sun.tools.javac.code;
public enum FlagsEnum {
""");
for (Entry<Integer, List<String>> e : flag2Names.entrySet()) {
String constantName = e.getValue().stream().collect(Collectors.joining("_OR_"));
String toString = e.getValue()
.stream()
.filter(n -> !noToString.contains(n))
.map(n -> customToString.getOrDefault(n, n.toLowerCase(Locale.US)))
.collect(Collectors.joining(" or "));
out.println(" " + constantName + "(1L<<" + e.getKey() + ", \"" + toString + "\"),");
}
out.println("""
;
private final long value;
private final String toString;
private FlagsEnum(long value, String toString) {
this.value = value;
this.toString = toString;
}
public long value() {
return value;
}
public String toString() {
return toString;
}
}
""");
}
}
}
private static Object valueOfValueAttribute(AnnotationMirror am) {
return am.getElementValues()
.values()
.iterator()
.next()
.getValue();
}
private enum FlagTarget {
BLOCK,
CLASS,
METHOD,
MODULE,
PACKAGE,
TYPE_VAR,
VARIABLE;
}
}
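The generator above keys each annotated flag constant on its bit index via `63 - Long.numberOfLeadingZeros(value)`, and the emitted `FlagsEnum` constants reconstruct the value as `1L << bit`. A minimal standalone sketch of that round-trip (the flag values here are hypothetical, for illustration only, not actual javac flags):

```java
public class FlagBitDemo {
    // Same bit-index computation FlagsGenerator applies to each flag constant.
    static int flagBit(long flagValue) {
        return 63 - Long.numberOfLeadingZeros(flagValue);
    }

    public static void main(String[] args) {
        long first = 1L;          // hypothetical flag occupying bit 0
        long synthetic = 1L << 12; // hypothetical flag occupying bit 12
        System.out.println(flagBit(first));      // 0
        System.out.println(flagBit(synthetic));  // 12
        // The generated FlagsEnum constant reconstructs the value as 1L << bit.
        System.out.println((1L << flagBit(synthetic)) == synthetic);  // true
    }
}
```

Note this mapping assumes each flag constant has exactly one bit set; the generator's duplicate-flag check above enforces that no two flags of the same target share a bit.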

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2014, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -76,7 +76,7 @@ public interface MessageType {
ANNOTATION("annotation", "Compound", "com.sun.tools.javac.code.Attribute"),
BOOLEAN("boolean", "boolean", null),
COLLECTION("collection", "Collection", "java.util"),
FLAG("flag", "Flag", "com.sun.tools.javac.code.Flags"),
FLAG("flag", "FlagsEnum", "com.sun.tools.javac.code"),
FRAGMENT("fragment", "Fragment", null),
DIAGNOSTIC("diagnostic", "JCDiagnostic", "com.sun.tools.javac.util"),
MODIFIER("modifier", "Modifier", "javax.lang.model.element"),

View File

@@ -177,7 +177,8 @@ ifeq ($(ENABLE_HEADLESS_ONLY), false)
endif
LIBSPLASHSCREEN_CFLAGS += -DSPLASHSCREEN -DPNG_NO_MMX_CODE \
-DPNG_ARM_NEON_OPT=0 -DPNG_ARM_NEON_IMPLEMENTATION=0
-DPNG_ARM_NEON_OPT=0 -DPNG_ARM_NEON_IMPLEMENTATION=0 \
-DPNG_LOONGARCH_LSX_OPT=0
ifeq ($(call isTargetOs, linux)+$(call isTargetCpuArch, ppc), true+true)
LIBSPLASHSCREEN_CFLAGS += -DPNG_POWERPC_VSX_OPT=0

View File

@@ -41,17 +41,17 @@ $(eval $(call SetupCompileProperties, COMPILE_PROPERTIES, \
TARGETS += $(COMPILE_PROPERTIES)
################################################################################
#
# Compile properties files into enum-like classes using the propertiesparser tool
#
# To avoid reevaluating the compilation setup for the tools each time this file
# is included, the following trick is used to be able to declare a dependency on
# the built tools.
BUILD_TOOLS_LANGTOOLS := $(call SetupJavaCompilationCompileTarget, \
BUILD_TOOLS_LANGTOOLS, $(BUILDTOOLS_OUTPUTDIR)/langtools_tools_classes)
################################################################################
#
# Compile properties files into enum-like classes using the propertiesparser tool
#
TOOL_PARSEPROPERTIES_CMD := $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/langtools_tools_classes \
propertiesparser.PropertiesParser
@@ -76,3 +76,26 @@ $(eval $(call SetupExecute, PARSEPROPERTIES, \
TARGETS += $(PARSEPROPERTIES)
################################################################################
#
# Generate FlagsEnum from Flags constants
#
TOOL_FLAGSGENERATOR_CMD := $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/langtools_tools_classes \
flagsgenerator.FlagsGenerator
FLAGS_SRC := \
$(MODULE_SRC)/share/classes/com/sun/tools/javac/code/Flags.java
FLAGS_OUT := \
$(SUPPORT_OUTPUTDIR)/gensrc/$(MODULE)/com/sun/tools/javac/code/FlagsEnum.java
$(eval $(call SetupExecute, FLAGSGENERATOR, \
WARN := Generating FlagsEnum, \
DEPS := $(FLAGS_SRC) $(BUILD_TOOLS_LANGTOOLS), \
OUTPUT_FILE := $(FLAGS_OUT), \
COMMAND := $(TOOL_FLAGSGENERATOR_CMD) $(FLAGS_SRC) $(FLAGS_OUT), \
))
TARGETS += $(FLAGSGENERATOR)
################################################################################

View File

@@ -33,4 +33,6 @@ DISABLED_WARNINGS_java += dangling-doc-comments this-escape
JAVAC_FLAGS += -parameters -XDstringConcat=inline
TARGET_RELEASE := $(TARGET_RELEASE_BOOTJDK)
################################################################################

View File

@@ -121,15 +121,15 @@ ifeq ($(call isTargetOs, windows), true)
TARGETS += $(BUILD_LIBJPACKAGE)
##############################################################################
## Build libwixhelper
## Build libmsica
##############################################################################
# Build Wix custom action helper
# Build MSI custom action library
# Output library in resources dir, and symbols in the object dir
$(eval $(call SetupJdkLibrary, BUILD_LIBWIXHELPER, \
NAME := wixhelper, \
$(eval $(call SetupJdkLibrary, BUILD_LIBMSICA, \
NAME := msica, \
OUTPUT_DIR := $(JPACKAGE_OUTPUT_DIR), \
SYMBOLS_DIR := $(SUPPORT_OUTPUTDIR)/native/$(MODULE)/libwixhelper, \
SYMBOLS_DIR := $(SUPPORT_OUTPUTDIR)/native/$(MODULE)/libmsica, \
ONLY_EXPORTED := true, \
OPTIMIZATION := LOW, \
EXTRA_SRC := common, \
@@ -139,7 +139,7 @@ ifeq ($(call isTargetOs, windows), true)
LIBS_windows := msi.lib ole32.lib shell32.lib shlwapi.lib user32.lib, \
))
TARGETS += $(BUILD_LIBWIXHELPER)
TARGETS += $(BUILD_LIBMSICA)
##############################################################################
## Build msiwrapper

View File

@@ -62,7 +62,8 @@ BUILD_JDK_JTREG_LIBRARIES_JDK_LIBS_libGetXSpace := java.base:libjava
ifeq ($(call isTargetOs, windows), true)
BUILD_JDK_JTREG_EXCLUDE += libDirectIO.c libInheritedChannel.c \
libExplicitAttach.c libImplicitAttach.c \
exelauncher.c libFDLeaker.c exeFDLeakTester.c
exelauncher.c libFDLeaker.c exeFDLeakTester.c \
libChangeSignalDisposition.c exePrintSignalDisposition.c
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerTest := $(LIBCXX)
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exerevokeall := advapi32.lib

View File

@@ -881,6 +881,46 @@ reg_class vectorx_reg(
V31, V31_H, V31_J, V31_K
);
// Class for vector register V10
reg_class v10_veca_reg(
V10, V10_H, V10_J, V10_K
);
// Class for vector register V11
reg_class v11_veca_reg(
V11, V11_H, V11_J, V11_K
);
// Class for vector register V12
reg_class v12_veca_reg(
V12, V12_H, V12_J, V12_K
);
// Class for vector register V13
reg_class v13_veca_reg(
V13, V13_H, V13_J, V13_K
);
// Class for vector register V17
reg_class v17_veca_reg(
V17, V17_H, V17_J, V17_K
);
// Class for vector register V18
reg_class v18_veca_reg(
V18, V18_H, V18_J, V18_K
);
// Class for vector register V23
reg_class v23_veca_reg(
V23, V23_H, V23_J, V23_K
);
// Class for vector register V24
reg_class v24_veca_reg(
V24, V24_H, V24_J, V24_K
);
// Class for 128 bit register v0
reg_class v0_reg(
V0, V0_H
@@ -4969,6 +5009,86 @@ operand vReg()
interface(REG_INTER);
%}
operand vReg_V10()
%{
constraint(ALLOC_IN_RC(v10_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V11()
%{
constraint(ALLOC_IN_RC(v11_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V12()
%{
constraint(ALLOC_IN_RC(v12_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V13()
%{
constraint(ALLOC_IN_RC(v13_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V17()
%{
constraint(ALLOC_IN_RC(v17_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V18()
%{
constraint(ALLOC_IN_RC(v18_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V23()
%{
constraint(ALLOC_IN_RC(v23_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vReg_V24()
%{
constraint(ALLOC_IN_RC(v24_veca_reg));
match(vReg);
op_cost(0);
format %{ %}
interface(REG_INTER);
%}
operand vecA()
%{
constraint(ALLOC_IN_RC(vectora_reg));
@@ -16161,41 +16281,8 @@ instruct branchLoopEnd(cmpOp cmp, rFlagsReg cr, label lbl)
// ============================================================================
// inlined locking and unlocking
instruct cmpFastLock(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp tmp, iRegPNoSp tmp2, iRegPNoSp tmp3)
%{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set cr (FastLock object box));
effect(TEMP tmp, TEMP tmp2, TEMP tmp3);
ins_cost(5 * INSN_COST);
format %{ "fastlock $object,$box\t! kills $tmp,$tmp2,$tmp3" %}
ins_encode %{
__ fast_lock($object$$Register, $box$$Register, $tmp$$Register, $tmp2$$Register, $tmp3$$Register);
%}
ins_pipe(pipe_serial);
%}
instruct cmpFastUnlock(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp tmp, iRegPNoSp tmp2)
%{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set cr (FastUnlock object box));
effect(TEMP tmp, TEMP tmp2);
ins_cost(5 * INSN_COST);
format %{ "fastunlock $object,$box\t! kills $tmp, $tmp2" %}
ins_encode %{
__ fast_unlock($object$$Register, $box$$Register, $tmp$$Register, $tmp2$$Register);
%}
ins_pipe(pipe_serial);
%}
instruct cmpFastLockLightweight(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp tmp, iRegPNoSp tmp2, iRegPNoSp tmp3)
%{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set cr (FastLock object box));
effect(TEMP tmp, TEMP tmp2, TEMP tmp3);
@@ -16211,7 +16298,6 @@ instruct cmpFastLockLightweight(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp
instruct cmpFastUnlockLightweight(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp tmp, iRegPNoSp tmp2, iRegPNoSp tmp3)
%{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set cr (FastUnlock object box));
effect(TEMP tmp, TEMP tmp2, TEMP tmp3);

View File

@@ -257,6 +257,28 @@ source %{
return false;
}
break;
case Op_SelectFromTwoVector:
// The "tbl" instruction for a two-vector table lookup is supported only in Neon and SVE2.
// Return false if the vector length is > 16B but the supported SVE version is < 2.
// For a vector length of 16B, generate the SVE2 "tbl" instruction if SVE2 is supported;
// otherwise generate the Neon "tbl" instruction to select from two vectors.
// This operation is disabled for doubles and longs on machines with SVE < 2; the default
// VectorRearrange + VectorBlend sequence is generated instead, because the default
// implementation performed at least as well as the SelectFromTwoVector implementation.
if (UseSVE < 2 && (type2aelembytes(bt) == 8 || length_in_bytes > 16)) {
return false;
}
// Because the SVE2 "tbl" instruction is unpredicated and partial operations cannot be
// generated using masks, we disable this operation on machines where
// length_in_bytes < MaxVectorSize, with the sole exception of an 8B vector length. This
// is because, at the time of writing, no SVE2 machine is available with
// length_in_bytes > 8 and length_in_bytes < MaxVectorSize to test this operation on (for
// example, there is no SVE2 machine with MaxVectorSize = 32 to test length_in_bytes = 16).
if (UseSVE == 2 && length_in_bytes > 8 && length_in_bytes < MaxVectorSize) {
return false;
}
break;
default:
break;
}
@@ -7172,3 +7194,71 @@ instruct vexpandBits(vReg dst, vReg src1, vReg src2) %{
%}
ins_pipe(pipe_slow);
%}
// ------------------------------------- SelectFromTwoVector ------------------------------------
// The Neon and SVE2 "tbl" instructions for a two-vector lookup require the two source
// vectors to be consecutive. The match rules for SelectFromTwoVector therefore reserve
// two consecutive vector registers for src1 and src2.
// Four register pairs for vselect_from_two_vectors are chosen at random (two from the
// volatile and two from the non-volatile set), which gives the register allocator more
// freedom to choose the best pair of source registers at that point.
instruct vselect_from_two_vectors_10_11(vReg dst, vReg_V10 src1, vReg_V11 src2,
vReg index, vReg tmp) %{
effect(TEMP_DEF dst, TEMP tmp);
match(Set dst (SelectFromTwoVector (Binary index src1) src2));
format %{ "vselect_from_two_vectors_10_11 $dst, $src1, $src2, $index\t# KILL $tmp" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
__ select_from_two_vectors($dst$$FloatRegister, $src1$$FloatRegister,
$src2$$FloatRegister, $index$$FloatRegister,
$tmp$$FloatRegister, bt, length_in_bytes);
%}
ins_pipe(pipe_slow);
%}
instruct vselect_from_two_vectors_12_13(vReg dst, vReg_V12 src1, vReg_V13 src2,
vReg index, vReg tmp) %{
effect(TEMP_DEF dst, TEMP tmp);
match(Set dst (SelectFromTwoVector (Binary index src1) src2));
format %{ "vselect_from_two_vectors_12_13 $dst, $src1, $src2, $index\t# KILL $tmp" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
__ select_from_two_vectors($dst$$FloatRegister, $src1$$FloatRegister,
$src2$$FloatRegister, $index$$FloatRegister,
$tmp$$FloatRegister, bt, length_in_bytes);
%}
ins_pipe(pipe_slow);
%}
instruct vselect_from_two_vectors_17_18(vReg dst, vReg_V17 src1, vReg_V18 src2,
vReg index, vReg tmp) %{
effect(TEMP_DEF dst, TEMP tmp);
match(Set dst (SelectFromTwoVector (Binary index src1) src2));
format %{ "vselect_from_two_vectors_17_18 $dst, $src1, $src2, $index\t# KILL $tmp" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
__ select_from_two_vectors($dst$$FloatRegister, $src1$$FloatRegister,
$src2$$FloatRegister, $index$$FloatRegister,
$tmp$$FloatRegister, bt, length_in_bytes);
%}
ins_pipe(pipe_slow);
%}
instruct vselect_from_two_vectors_23_24(vReg dst, vReg_V23 src1, vReg_V24 src2,
vReg index, vReg tmp) %{
effect(TEMP_DEF dst, TEMP tmp);
match(Set dst (SelectFromTwoVector (Binary index src1) src2));
format %{ "vselect_from_two_vectors_23_24 $dst, $src1, $src2, $index\t# KILL $tmp" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
__ select_from_two_vectors($dst$$FloatRegister, $src1$$FloatRegister,
$src2$$FloatRegister, $index$$FloatRegister,
$tmp$$FloatRegister, bt, length_in_bytes);
%}
ins_pipe(pipe_slow);
%}
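The two `Op_SelectFromTwoVector` bail-out checks added in this file amount to a small predicate over the SVE level, element size, and vector length. A hedged sketch in Java (names and signature are illustrative, not HotSpot API):

```java
public class SelectFromTwoVectorPredicate {
    // Mirrors the two bail-out conditions in the Op_SelectFromTwoVector case above.
    static boolean supported(int useSVE, int elemBytes, int lengthInBytes, int maxVectorSize) {
        // Without SVE2, only the Neon "tbl" is available: it handles at most 16B
        // and the 8-byte-element (long/double) case is disabled.
        if (useSVE < 2 && (elemBytes == 8 || lengthInBytes > 16)) {
            return false;
        }
        // The SVE2 "tbl" is unpredicated: reject partial vectors, except 8B.
        if (useSVE == 2 && lengthInBytes > 8 && lengthInBytes < maxVectorSize) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(supported(0, 4, 16, 16));  // true: Neon, 16B of ints
        System.out.println(supported(0, 8, 16, 16));  // false: longs need SVE2
        System.out.println(supported(2, 4, 16, 32));  // false: partial SVE2 vector
        System.out.println(supported(2, 4, 8, 32));   // true: the 8B exception
    }
}
```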

View File

@@ -247,6 +247,28 @@ source %{
return false;
}
break;
case Op_SelectFromTwoVector:
// The "tbl" instruction for a two-vector table lookup is supported only in Neon and SVE2.
// Return false if the vector length is > 16B but the supported SVE version is < 2.
// For a vector length of 16B, generate the SVE2 "tbl" instruction if SVE2 is supported;
// otherwise generate the Neon "tbl" instruction to select from two vectors.
// This operation is disabled for doubles and longs on machines with SVE < 2; the default
// VectorRearrange + VectorBlend sequence is generated instead, because the default
// implementation performed at least as well as the SelectFromTwoVector implementation.
if (UseSVE < 2 && (type2aelembytes(bt) == 8 || length_in_bytes > 16)) {
return false;
}
// Because the SVE2 "tbl" instruction is unpredicated and partial operations cannot be
// generated using masks, we disable this operation on machines where
// length_in_bytes < MaxVectorSize, with the sole exception of an 8B vector length. This
// is because, at the time of writing, no SVE2 machine is available with
// length_in_bytes > 8 and length_in_bytes < MaxVectorSize to test this operation on (for
// example, there is no SVE2 machine with MaxVectorSize = 32 to test length_in_bytes = 16).
if (UseSVE == 2 && length_in_bytes > 8 && length_in_bytes < MaxVectorSize) {
return false;
}
break;
default:
break;
}
@@ -5154,3 +5176,34 @@ BITPERM(vcompressBits, CompressBitsV, sve_bext)
// ----------------------------------- ExpandBitsV ---------------------------------
BITPERM(vexpandBits, ExpandBitsV, sve_bdep)
// ------------------------------------- SelectFromTwoVector ------------------------------------
// The Neon and SVE2 "tbl" instructions for a two-vector lookup require the two source
// vectors to be consecutive. The match rules for SelectFromTwoVector therefore reserve
// two consecutive vector registers for src1 and src2.
// Four register pairs for vselect_from_two_vectors are chosen at random (two from the
// volatile and two from the non-volatile set), which gives the register allocator more
// freedom to choose the best pair of source registers at that point.
dnl
dnl SELECT_FROM_TWO_VECTORS($1, $2 )
dnl SELECT_FROM_TWO_VECTORS(first_reg, second_reg)
define(`SELECT_FROM_TWO_VECTORS', `
instruct vselect_from_two_vectors_$1_$2(vReg dst, vReg_V$1 src1, vReg_V$2 src2,
vReg index, vReg tmp) %{
effect(TEMP_DEF dst, TEMP tmp);
match(Set dst (SelectFromTwoVector (Binary index src1) src2));
format %{ "vselect_from_two_vectors_$1_$2 $dst, $src1, $src2, $index\t# KILL $tmp" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
__ select_from_two_vectors($dst$$FloatRegister, $src1$$FloatRegister,
$src2$$FloatRegister, $index$$FloatRegister,
$tmp$$FloatRegister, bt, length_in_bytes);
%}
ins_pipe(pipe_slow);
%}')dnl
dnl
SELECT_FROM_TWO_VECTORS(10, 11)
SELECT_FROM_TWO_VECTORS(12, 13)
SELECT_FROM_TWO_VECTORS(17, 18)
SELECT_FROM_TWO_VECTORS(23, 24)

View File

@@ -4231,12 +4231,29 @@ public:
sf(imm1, 9, 5), rf(Zd, 0);
}
// SVE programmable table lookup/permute using vector of element indices
void sve_tbl(FloatRegister Zd, SIMD_RegVariant T, FloatRegister Zn, FloatRegister Zm) {
private:
void _sve_tbl(FloatRegister Zd, SIMD_RegVariant T, FloatRegister Zn, unsigned reg_count, FloatRegister Zm) {
starti;
assert(T != Q, "invalid size");
// Only one- or two-vector lookups are supported. One-vector lookup was introduced in
// SVE1, two-vector lookup in SVE2.
assert(0 < reg_count && reg_count <= 2, "invalid number of registers");
int op11 = (reg_count == 1) ? 0b10 : 0b01;
f(0b00000101, 31, 24), f(T, 23, 22), f(0b1, 21), rf(Zm, 16);
f(0b001100, 15, 10), rf(Zn, 5), rf(Zd, 0);
f(0b001, 15, 13), f(op11, 12, 11), f(0b0, 10), rf(Zn, 5), rf(Zd, 0);
}
public:
// SVE/SVE2 Programmable table lookup in one or two vector table (zeroing)
void sve_tbl(FloatRegister Zd, SIMD_RegVariant T, FloatRegister Zn, FloatRegister Zm) {
_sve_tbl(Zd, T, Zn, 1, Zm);
}
void sve_tbl(FloatRegister Zd, SIMD_RegVariant T, FloatRegister Zn1, FloatRegister Zn2, FloatRegister Zm) {
assert(Zn1->successor() == Zn2, "invalid order of registers");
_sve_tbl(Zd, T, Zn1, 2, Zm);
}
// Shuffle active elements of vector to the right and fill with zero
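The refactored `_sve_tbl` encoder above folds the one- and two-register forms into a single bit layout, with `op<12:11>` selecting the form. This sketch (plain integers standing in for register numbers, not real assembler state) reproduces the field placement and checks that the one-register form still yields the original `0b001100` pattern in bits 15..10:

```java
public class SveTblEncoding {
    // Field layout from _sve_tbl: 00000101 | T | 1 | Zm | 001 | op | 0 | Zn | Zd
    static int encode(int t, int zm, int regCount, int zn, int zd) {
        int op11 = (regCount == 1) ? 0b10 : 0b01;  // one- vs two-vector lookup
        return (0b00000101 << 24) | (t << 22) | (1 << 21) | (zm << 16)
                | (0b001 << 13) | (op11 << 11) | (zn << 5) | zd;
    }

    public static void main(String[] args) {
        // One-register form must reproduce the pre-refactoring constant
        // f(0b001100, 15, 10) in bits 15..10.
        System.out.println(((encode(0b11, 5, 1, 7, 3) >> 10) & 0x3F) == 0b001100);  // true
        // Two-register form flips op<12:11> from 0b10 to 0b01.
        System.out.println(((encode(0b11, 5, 2, 7, 3) >> 10) & 0x3F) == 0b001010);  // true
    }
}
```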

View File

@@ -410,11 +410,7 @@ int LIR_Assembler::emit_unwind_handler() {
if (method()->is_synchronized()) {
monitor_address(0, FrameMap::r0_opr);
stub = new MonitorExitStub(FrameMap::r0_opr, true, 0);
if (LockingMode == LM_MONITOR) {
__ b(*stub->entry());
} else {
__ unlock_object(r5, r4, r0, r6, *stub->entry());
}
__ unlock_object(r5, r4, r0, r6, *stub->entry());
__ bind(*stub->continuation());
}
@@ -2484,13 +2480,7 @@ void LIR_Assembler::emit_lock(LIR_OpLock* op) {
Register hdr = op->hdr_opr()->as_register();
Register lock = op->lock_opr()->as_register();
Register temp = op->scratch_opr()->as_register();
if (LockingMode == LM_MONITOR) {
if (op->info() != nullptr) {
add_debug_info_for_null_check_here(op->info());
__ null_check(obj, -1);
}
__ b(*op->stub()->entry());
} else if (op->code() == lir_lock) {
if (op->code() == lir_lock) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
// add debug info for NullPointerException only if one is possible
int null_check_offset = __ lock_object(hdr, obj, lock, temp, *op->stub()->entry());
@@ -2823,7 +2813,7 @@ void LIR_Assembler::leal(LIR_Opr addr, LIR_Opr dest, LIR_PatchCode patch_code, C
return;
}
__ lea(dest->as_register_lo(), as_Address(addr->as_address_ptr()));
__ lea(dest->as_pointer_register(), as_Address(addr->as_address_ptr()));
}
@@ -3133,7 +3123,9 @@ void LIR_Assembler::atomic_op(LIR_Code code, LIR_Opr src, LIR_Opr data, LIR_Opr
default:
ShouldNotReachHere();
}
__ membar(__ AnyAny);
if (!UseLSE) {
__ membar(__ AnyAny);
}
}
#undef __

View File

@@ -981,7 +981,7 @@ void LIRGenerator::do_update_CRC32(Intrinsic* x) {
CallingConvention* cc = frame_map()->c_calling_convention(&signature);
const LIR_Opr result_reg = result_register_for(x->type());
LIR_Opr addr = new_pointer_register();
LIR_Opr addr = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(a), addr);
crc.load_item_force(cc->at(0));
@@ -1058,7 +1058,7 @@ void LIRGenerator::do_update_CRC32C(Intrinsic* x) {
CallingConvention* cc = frame_map()->c_calling_convention(&signature);
const LIR_Opr result_reg = result_register_for(x->type());
LIR_Opr addr = new_pointer_register();
LIR_Opr addr = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(a), addr);
crc.load_item_force(cc->at(0));

View File

@@ -60,8 +60,6 @@ void C1_MacroAssembler::float_cmp(bool is_float, int unordered_result,
}
int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr, Register temp, Label& slow_case) {
const int aligned_mask = BytesPerWord -1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert_different_registers(hdr, obj, disp_hdr, temp, rscratch2);
int null_check_offset = -1;
@@ -72,95 +70,20 @@ int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr
null_check_offset = offset();
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(disp_hdr, obj, hdr, temp, rscratch2, slow_case);
} else if (LockingMode == LM_LEGACY) {
lightweight_lock(disp_hdr, obj, hdr, temp, rscratch2, slow_case);
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(hdr, obj);
ldrb(hdr, Address(hdr, Klass::misc_flags_offset()));
tst(hdr, KlassFlags::_misc_is_value_based_class);
br(Assembler::NE, slow_case);
}
Label done;
// Load object header
ldr(hdr, Address(obj, hdr_offset));
// and mark it as unlocked
orr(hdr, hdr, markWord::unlocked_value);
// save unlocked object header into the displaced header location on the stack
str(hdr, Address(disp_hdr, 0));
// test if object header is still the same (i.e. unlocked), and if so, store the
// displaced header address in the object header - if it is not the same, get the
// object header instead
lea(rscratch2, Address(obj, hdr_offset));
cmpxchgptr(hdr, disp_hdr, rscratch2, rscratch1, done, /*fallthough*/nullptr);
// if the object header was the same, we're done
// if the object header was not the same, it is now in the hdr register
// => test if it is a stack pointer into the same stack (recursive locking), i.e.:
//
// 1) (hdr & aligned_mask) == 0
// 2) sp <= hdr
// 3) hdr <= sp + page_size
//
// these 3 tests can be done by evaluating the following expression:
//
// (hdr - sp) & (aligned_mask - page_size)
//
// assuming both the stack pointer and page_size have their least
// significant 2 bits cleared and page_size is a power of 2
mov(rscratch1, sp);
sub(hdr, hdr, rscratch1);
ands(hdr, hdr, aligned_mask - (int)os::vm_page_size());
// for recursive locking, the result is zero => save it in the displaced header
// location (null in the displaced hdr location indicates recursive locking)
str(hdr, Address(disp_hdr, 0));
// otherwise we don't care about the result and handle locking via runtime call
cbnz(hdr, slow_case);
// done
bind(done);
inc_held_monitor_count(rscratch1);
}
return null_check_offset;
}
void C1_MacroAssembler::unlock_object(Register hdr, Register obj, Register disp_hdr, Register temp, Label& slow_case) {
const int aligned_mask = BytesPerWord -1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert_different_registers(hdr, obj, disp_hdr, temp, rscratch2);
Label done;
if (LockingMode != LM_LIGHTWEIGHT) {
// load displaced header
ldr(hdr, Address(disp_hdr, 0));
// if the loaded hdr is null we had recursive locking
// if we had recursive locking, we are done
cbz(hdr, done);
}
// load object
ldr(obj, Address(disp_hdr, BasicObjectLock::obj_offset()));
verify_oop(obj);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj, hdr, temp, rscratch2, slow_case);
} else if (LockingMode == LM_LEGACY) {
// test if object header is pointing to the displaced header, and if so, restore
// the displaced header in the object - if the object header is not pointing to
// the displaced header, get the object header instead
// if the object header was not pointing to the displaced header,
// we do unlocking via runtime call
if (hdr_offset) {
lea(rscratch1, Address(obj, hdr_offset));
cmpxchgptr(disp_hdr, hdr, rscratch1, rscratch2, done, &slow_case);
} else {
cmpxchgptr(disp_hdr, hdr, obj, rscratch2, done, &slow_case);
}
// done
bind(done);
dec_held_monitor_count(rscratch1);
}
lightweight_unlock(obj, hdr, temp, rscratch2, slow_case);
}
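The legacy stack-lock recursion test removed in this file, `(hdr - sp) & (aligned_mask - page_size)`, is zero exactly when the saved header is word-aligned and lies within the current stack page, i.e. when this is a recursive stack lock. A small sketch of that arithmetic (constants assumed for illustration: 8-byte words, 4 KiB pages):

```java
public class RecursiveLockCheck {
    static final long ALIGNED_MASK = 8 - 1;  // BytesPerWord - 1
    static final long PAGE_SIZE = 4096;      // assumed power-of-two page size

    // True iff hdr is word-aligned and points into [sp, sp + PAGE_SIZE):
    // the mask keeps the low alignment bits and every bit >= log2(PAGE_SIZE),
    // so the AND is zero only for small, aligned, non-negative differences.
    static boolean isRecursive(long hdr, long sp) {
        return ((hdr - sp) & (ALIGNED_MASK - PAGE_SIZE)) == 0;
    }

    public static void main(String[] args) {
        long sp = 0x7000_0000L;
        System.out.println(isRecursive(sp + 64, sp));        // true: on this page
        System.out.println(isRecursive(sp + PAGE_SIZE, sp)); // false: past the page
        System.out.println(isRecursive(sp - 8, sp));         // false: below sp
        System.out.println(isRecursive(sp + 3, sp));         // false: unaligned
    }
}
```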

View File

@@ -147,215 +147,8 @@ address C2_MacroAssembler::arrays_hashcode(Register ary, Register cnt, Register
return pc();
}
void C2_MacroAssembler::fast_lock(Register objectReg, Register boxReg, Register tmpReg,
Register tmp2Reg, Register tmp3Reg) {
Register oop = objectReg;
Register box = boxReg;
Register disp_hdr = tmpReg;
Register tmp = tmp2Reg;
Label cont;
Label object_has_monitor;
Label count, no_count;
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_lock_lightweight");
assert_different_registers(oop, box, tmp, disp_hdr, rscratch2);
// Load markWord from object into displaced_header.
ldr(disp_hdr, Address(oop, oopDesc::mark_offset_in_bytes()));
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, oop);
ldrb(tmp, Address(tmp, Klass::misc_flags_offset()));
tst(tmp, KlassFlags::_misc_is_value_based_class);
br(Assembler::NE, cont);
}
// Check for existing monitor
tbnz(disp_hdr, exact_log2(markWord::monitor_value), object_has_monitor);
if (LockingMode == LM_MONITOR) {
tst(oop, oop); // Set NE to indicate 'failure' -> take slow-path. We know that oop != 0.
b(cont);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Set tmp to be (markWord of object | UNLOCK_VALUE).
orr(tmp, disp_hdr, markWord::unlocked_value);
// Initialize the box. (Must happen before we update the object mark!)
str(tmp, Address(box, BasicLock::displaced_header_offset_in_bytes()));
// Compare object markWord with an unlocked value (tmp) and if
// equal exchange the stack address of our box with object markWord.
// On failure disp_hdr contains the possibly locked markWord.
cmpxchg(oop, tmp, box, Assembler::xword, /*acquire*/ true,
/*release*/ true, /*weak*/ false, disp_hdr);
br(Assembler::EQ, cont);
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
// If the compare-and-exchange succeeded, then we found an unlocked
// object, have now locked it, and will continue at label cont
// Check if the owner is self by comparing the value in the
// markWord of object (disp_hdr) with the stack pointer.
mov(rscratch1, sp);
sub(disp_hdr, disp_hdr, rscratch1);
mov(tmp, (address) (~(os::vm_page_size()-1) | markWord::lock_mask_in_place));
// If the condition holds we branch to cont and hence can store 0 as the
// displaced header in the box, which indicates that it is a recursive lock.
ands(tmp/*==0?*/, disp_hdr, tmp); // Sets flags for result
str(tmp/*==0, perhaps*/, Address(box, BasicLock::displaced_header_offset_in_bytes()));
b(cont);
}
// Handle existing monitor.
bind(object_has_monitor);
// Try to CAS owner (no owner => current thread's _monitor_owner_id).
ldr(rscratch2, Address(rthread, JavaThread::monitor_owner_id_offset()));
add(tmp, disp_hdr, (in_bytes(ObjectMonitor::owner_offset())-markWord::monitor_value));
cmpxchg(tmp, zr, rscratch2, Assembler::xword, /*acquire*/ true,
/*release*/ true, /*weak*/ false, tmp3Reg); // Sets flags for result
// Store a non-null value into the box to avoid looking like a re-entrant
// lock. The fast-path monitor unlock code checks for
// markWord::monitor_value so use markWord::unused_mark which has the
// relevant bit set, and also matches ObjectSynchronizer::enter.
mov(tmp, (address)markWord::unused_mark().value());
str(tmp, Address(box, BasicLock::displaced_header_offset_in_bytes()));
br(Assembler::EQ, cont); // CAS success means locking succeeded
cmp(tmp3Reg, rscratch2);
br(Assembler::NE, cont); // Check for recursive locking
// Recursive lock case
increment(Address(disp_hdr, in_bytes(ObjectMonitor::recursions_offset()) - markWord::monitor_value), 1);
// flag == EQ still from the cmp above, checking if this is a reentrant lock
bind(cont);
// flag == EQ indicates success
// flag == NE indicates failure
br(Assembler::NE, no_count);
bind(count);
if (LockingMode == LM_LEGACY) {
inc_held_monitor_count(rscratch1);
}
bind(no_count);
}
void C2_MacroAssembler::fast_unlock(Register objectReg, Register boxReg, Register tmpReg,
Register tmp2Reg) {
Register oop = objectReg;
Register box = boxReg;
Register disp_hdr = tmpReg;
Register owner_addr = tmpReg;
Register tmp = tmp2Reg;
Label cont;
Label object_has_monitor;
Label count, no_count;
Label unlocked;
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_unlock_lightweight");
assert_different_registers(oop, box, tmp, disp_hdr);
if (LockingMode == LM_LEGACY) {
// Find the lock address and load the displaced header from the stack.
ldr(disp_hdr, Address(box, BasicLock::displaced_header_offset_in_bytes()));
// If the displaced header is 0, we have a recursive unlock.
cmp(disp_hdr, zr);
br(Assembler::EQ, cont);
}
// Handle existing monitor.
ldr(tmp, Address(oop, oopDesc::mark_offset_in_bytes()));
tbnz(tmp, exact_log2(markWord::monitor_value), object_has_monitor);
if (LockingMode == LM_MONITOR) {
tst(oop, oop); // Set NE to indicate 'failure' -> take slow-path. We know that oop != 0.
b(cont);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Check if it is still a lightweight lock; this is true if we
// see the stack address of the basicLock in the markWord of the
// object.
cmpxchg(oop, box, disp_hdr, Assembler::xword, /*acquire*/ false,
/*release*/ true, /*weak*/ false, tmp);
b(cont);
}
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
// Handle existing monitor.
bind(object_has_monitor);
STATIC_ASSERT(markWord::monitor_value <= INT_MAX);
add(tmp, tmp, -(int)markWord::monitor_value); // monitor
ldr(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset()));
Label notRecursive;
cbz(disp_hdr, notRecursive);
// Recursive lock
sub(disp_hdr, disp_hdr, 1u);
str(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset()));
cmp(disp_hdr, disp_hdr); // Sets flags for result
b(cont);
bind(notRecursive);
// Compute owner address.
lea(owner_addr, Address(tmp, ObjectMonitor::owner_offset()));
// Set owner to null.
// Release to satisfy the JMM
stlr(zr, owner_addr);
// We need a full fence after clearing owner to avoid stranding.
// StoreLoad achieves this.
membar(StoreLoad);
// Check if the entry_list is empty.
ldr(rscratch1, Address(tmp, ObjectMonitor::entry_list_offset()));
cmp(rscratch1, zr);
br(Assembler::EQ, cont); // If so we are done.
// Check if there is a successor.
ldr(rscratch1, Address(tmp, ObjectMonitor::succ_offset()));
cmp(rscratch1, zr);
br(Assembler::NE, unlocked); // If so we are done.
// Save the monitor pointer in the current thread, so we can try to
// reacquire the lock in SharedRuntime::monitor_exit_helper().
str(tmp, Address(rthread, JavaThread::unlocked_inflated_monitor_offset()));
cmp(zr, rthread); // Set Flag to NE => slow path
b(cont);
bind(unlocked);
cmp(zr, zr); // Set Flag to EQ => fast path
// Intentional fall-through
bind(cont);
// flag == EQ indicates success
// flag == NE indicates failure
br(Assembler::NE, no_count);
bind(count);
if (LockingMode == LM_LEGACY) {
dec_held_monitor_count(rscratch1);
}
bind(no_count);
}
void C2_MacroAssembler::fast_lock_lightweight(Register obj, Register box, Register t1,
Register t2, Register t3) {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert_different_registers(obj, box, t1, t2, t3, rscratch2);
// Handle inflated monitor.
@@ -512,7 +305,6 @@ void C2_MacroAssembler::fast_lock_lightweight(Register obj, Register box, Regist
void C2_MacroAssembler::fast_unlock_lightweight(Register obj, Register box, Register t1,
Register t2, Register t3) {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert_different_registers(obj, box, t1, t2, t3);
// Handle inflated monitor.
@@ -2858,3 +2650,124 @@ void C2_MacroAssembler::reconstruct_frame_pointer(Register rtmp) {
add(rfp, sp, framesize - 2 * wordSize);
}
}
// Selects elements from two source vectors (src1, src2) based on index values in the index register
// using Neon instructions and places it in the destination vector element corresponding to the
// index vector element. Each index in the index register must be in the range - [0, 2 * NUM_ELEM),
// where NUM_ELEM is the number of BasicType elements per vector.
// If idx < NUM_ELEM --> selects src1[idx] (idx is an element of the index register)
// Otherwise, selects src2[idx - NUM_ELEM]
void C2_MacroAssembler::select_from_two_vectors_neon(FloatRegister dst, FloatRegister src1,
FloatRegister src2, FloatRegister index,
FloatRegister tmp, unsigned vector_length_in_bytes) {
assert_different_registers(dst, src1, src2, tmp);
SIMD_Arrangement size = vector_length_in_bytes == 16 ? T16B : T8B;
if (vector_length_in_bytes == 16) {
assert(UseSVE <= 1, "sve must be <= 1");
assert(src1->successor() == src2, "Source registers must be ordered");
// If the vector length is 16B, then use the Neon "tbl" instruction with two vector table
tbl(dst, size, src1, 2, index);
} else { // vector length == 8
assert(UseSVE == 0, "must be Neon only");
// We need to fit both the source vectors (src1, src2) in a 128-bit register because the
// Neon "tbl" instruction supports only looking up 16B vectors. We then use the Neon "tbl"
// instruction with one vector lookup
ins(tmp, D, src1, 0, 0);
ins(tmp, D, src2, 1, 0);
tbl(dst, size, tmp, 1, index);
}
}
// Selects elements from two source vectors (src1, src2) based on index values in the index register
// using SVE/SVE2 instructions and places it in the destination vector element corresponding to the
// index vector element. Each index in the index register must be in the range - [0, 2 * NUM_ELEM),
// where NUM_ELEM is the number of BasicType elements per vector.
// If idx < NUM_ELEM --> selects src1[idx] (idx is an element of the index register)
// Otherwise, selects src2[idx - NUM_ELEM]
void C2_MacroAssembler::select_from_two_vectors_sve(FloatRegister dst, FloatRegister src1,
FloatRegister src2, FloatRegister index,
FloatRegister tmp, SIMD_RegVariant T,
unsigned vector_length_in_bytes) {
assert_different_registers(dst, src1, src2, index, tmp);
if (vector_length_in_bytes == 8) {
// We need to fit both the source vectors (src1, src2) in a single vector register because the
// SVE "tbl" instruction is unpredicated and works on the entire vector which can lead to
// incorrect results if each source vector is only partially filled. We then use the SVE "tbl"
// instruction with one vector lookup
assert(UseSVE >= 1, "sve must be >= 1");
ins(tmp, D, src1, 0, 0);
ins(tmp, D, src2, 1, 0);
sve_tbl(dst, T, tmp, index);
} else { // UseSVE == 2 and vector_length_in_bytes > 8
// If the vector length is > 8, then use the SVE2 "tbl" instruction with the two vector table.
// The assertion - vector_length_in_bytes == MaxVectorSize ensures that this operation
// is not executed on machines where vector_length_in_bytes < MaxVectorSize
// with the only exception of 8B vector length.
assert(UseSVE == 2 && vector_length_in_bytes == MaxVectorSize, "must be");
assert(src1->successor() == src2, "Source registers must be ordered");
sve_tbl(dst, T, src1, src2, index);
}
}
void C2_MacroAssembler::select_from_two_vectors(FloatRegister dst, FloatRegister src1,
FloatRegister src2, FloatRegister index,
FloatRegister tmp, BasicType bt,
unsigned vector_length_in_bytes) {
assert_different_registers(dst, src1, src2, index, tmp);
// The cases that can reach this method are -
// - UseSVE = 0, vector_length_in_bytes = 8 or 16
// - UseSVE = 1, vector_length_in_bytes = 8 or 16
// - UseSVE = 2, vector_length_in_bytes >= 8
//
// SVE/SVE2 tbl instructions are generated when UseSVE = 1 with vector_length_in_bytes = 8
// and UseSVE = 2 with vector_length_in_bytes >= 8
//
// Neon instructions are generated when UseSVE = 0 with vector_length_in_bytes = 8 or 16 and
// UseSVE = 1 with vector_length_in_bytes = 16
if ((UseSVE == 1 && vector_length_in_bytes == 8) || UseSVE == 2) {
SIMD_RegVariant T = elemType_to_regVariant(bt);
select_from_two_vectors_sve(dst, src1, src2, index, tmp, T, vector_length_in_bytes);
return;
}
// The only BasicTypes that can reach here are T_SHORT, T_BYTE, T_INT and T_FLOAT
assert(bt != T_DOUBLE && bt != T_LONG, "unsupported basic type");
assert(vector_length_in_bytes <= 16, "length_in_bytes must be <= 16");
bool isQ = vector_length_in_bytes == 16;
SIMD_Arrangement size1 = isQ ? T16B : T8B;
SIMD_Arrangement size2 = esize2arrangement((uint)type2aelembytes(bt), isQ);
// Neon "tbl" instruction only supports byte tables, so we need to look at chunks of
// 2B for selecting shorts or chunks of 4B for selecting ints/floats from the table.
// The index values in "index" register are in the range of [0, 2 * NUM_ELEM) where NUM_ELEM
// is the number of elements that can fit in a vector. For ex. for T_SHORT with 64-bit vector length,
// the indices can range from [0, 8).
// As an example with 64-bit vector length and T_SHORT type - let index = [2, 5, 1, 0]
// Move a constant 0x02 in every byte of tmp - tmp = [0x0202, 0x0202, 0x0202, 0x0202]
// Multiply index vector with tmp to yield - dst = [0x0404, 0x0a0a, 0x0202, 0x0000]
// Move a constant 0x0100 in every 2B of tmp - tmp = [0x0100, 0x0100, 0x0100, 0x0100]
// Add the multiplied result to the vector in tmp to obtain the byte level
// offsets - dst = [0x0504, 0x0b0a, 0x0302, 0x0100]
// Use these offsets in the "tbl" instruction to select chunks of 2B.
if (bt == T_BYTE) {
select_from_two_vectors_neon(dst, src1, src2, index, tmp, vector_length_in_bytes);
} else {
int elem_size = (bt == T_SHORT) ? 2 : 4;
uint64_t tbl_offset = (bt == T_SHORT) ? 0x0100u : 0x03020100u;
mov(tmp, size1, elem_size);
mulv(dst, size2, index, tmp);
mov(tmp, size2, tbl_offset);
addv(dst, size1, dst, tmp); // "dst" now contains the processed index elements
// to select a set of 2B/4B
select_from_two_vectors_neon(dst, src1, src2, dst, tmp, vector_length_in_bytes);
}
}


@@ -34,6 +34,15 @@
void neon_reduce_logical_helper(int opc, bool sf, Register Rd, Register Rn, Register Rm,
enum shift_kind kind = Assembler::LSL, unsigned shift = 0);
void select_from_two_vectors_neon(FloatRegister dst, FloatRegister src1,
FloatRegister src2, FloatRegister index,
FloatRegister tmp, unsigned vector_length_in_bytes);
void select_from_two_vectors_sve(FloatRegister dst, FloatRegister src1,
FloatRegister src2, FloatRegister index,
FloatRegister tmp, SIMD_RegVariant T,
unsigned vector_length_in_bytes);
public:
// jdk.internal.util.ArraysSupport.vectorizedHashCode
address arrays_hashcode(Register ary, Register cnt, Register result, FloatRegister vdata0,
@@ -42,9 +51,6 @@
FloatRegister vmul3, FloatRegister vpow, FloatRegister vpowm,
BasicType eltype);
// Code used by cmpFastLock and cmpFastUnlock mach instructions in .ad file.
void fast_lock(Register object, Register box, Register tmp, Register tmp2, Register tmp3);
void fast_unlock(Register object, Register box, Register tmp, Register tmp2);
// Code used by cmpFastLockLightweight and cmpFastUnlockLightweight mach instructions in .ad file.
void fast_lock_lightweight(Register object, Register box, Register t1, Register t2, Register t3);
void fast_unlock_lightweight(Register object, Register box, Register t1, Register t2, Register t3);
@@ -193,4 +199,9 @@
void reconstruct_frame_pointer(Register rtmp);
// Select from a table of two vectors
void select_from_two_vectors(FloatRegister dst, FloatRegister src1, FloatRegister src2,
FloatRegister index, FloatRegister tmp, BasicType bt,
unsigned vector_length_in_bytes);
#endif // CPU_AARCH64_C2_MACROASSEMBLER_AARCH64_HPP


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -101,9 +101,12 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
*hf.addr_at(frame::interpreter_frame_locals_offset) = locals_offset;
return hf;
} else {
// We need to re-read fp out of the frame because it may be an oop and we might have
// had a safepoint in finalize_freeze, after constructing f.
fp = *(intptr_t**)(f.sp() - frame::sender_sp_offset);
// For a compiled frame we need to re-read fp out of the frame because it may be an
// oop and we might have had a safepoint in finalize_freeze, after constructing f.
// For stub/native frames the value is not used while frozen, and will be constructed again
// when thawing the frame (see ThawBase::new_stack_frame). We use a special bad address to
// help with debugging, particularly when inspecting frames and identifying invalid accesses.
fp = FKind::compiled ? *(intptr_t**)(f.sp() - frame::sender_sp_offset) : (intptr_t*)badAddressVal;
int fsize = FKind::size(f);
sp = caller.unextended_sp() - fsize;
@@ -192,6 +195,11 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
}
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
intptr_t* fp_addr = sp - frame::sender_sp_offset;
*fp_addr = badAddressVal;
}
//////// Thaw
// Fast path


@@ -691,104 +691,27 @@ void InterpreterMacroAssembler::leave_jfr_critical_section() {
void InterpreterMacroAssembler::lock_object(Register lock_reg)
{
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be c_rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
} else {
Label count, done;
const Register swap_reg = r0;
const Register tmp = c_rarg2;
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp2 = c_rarg4;
const Register tmp3 = c_rarg5;
const Register tmp = c_rarg2;
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp2 = c_rarg4;
const Register tmp3 = c_rarg5;
const int obj_offset = in_bytes(BasicObjectLock::obj_offset());
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset +
BasicLock::displaced_header_offset_in_bytes();
// Load object pointer into obj_reg %c_rarg3
ldr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
Label slow_case;
Label slow_case, done;
lightweight_lock(lock_reg, obj_reg, tmp, tmp2, tmp3, slow_case);
b(done);
// Load object pointer into obj_reg %c_rarg3
ldr(obj_reg, Address(lock_reg, obj_offset));
bind(slow_case);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(lock_reg, obj_reg, tmp, tmp2, tmp3, slow_case);
b(done);
} else if (LockingMode == LM_LEGACY) {
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, obj_reg);
ldrb(tmp, Address(tmp, Klass::misc_flags_offset()));
tst(tmp, KlassFlags::_misc_is_value_based_class);
br(Assembler::NE, slow_case);
}
// Load (object->mark() | 1) into swap_reg
ldr(rscratch1, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
orr(swap_reg, rscratch1, 1);
// Save (object->mark() | 1) into BasicLock's displaced header
str(swap_reg, Address(lock_reg, mark_offset));
assert(lock_offset == 0,
"displaced header must be first word in BasicObjectLock");
Label fail;
cmpxchg_obj_header(swap_reg, lock_reg, obj_reg, rscratch1, count, /*fallthrough*/nullptr);
// Fast check for recursive lock.
//
// Can apply the optimization only if this is a stack lock
// allocated in this thread. For efficiency, we can focus on
// recently allocated stack locks (instead of reading the stack
// base and checking whether 'mark' points inside the current
// thread stack):
// 1) (mark & 7) == 0, and
// 2) sp <= mark < mark + os::pagesize()
//
// Warning: sp + os::pagesize can overflow the stack base. We must
// neither apply the optimization for an inflated lock allocated
// just above the thread stack (this is why condition 1 matters)
// nor apply the optimization if the stack lock is inside the stack
// of another thread. The latter is avoided even in case of overflow
// because we have guard pages at the end of all stacks. Hence, if
// we go over the stack base and hit the stack of another thread,
// this should not be in a writeable area that could contain a
// stack lock allocated by that thread. As a consequence, a stack
// lock less than page size away from sp is guaranteed to be
// owned by the current thread.
//
// These 3 tests can be done by evaluating the following
// expression: ((mark - sp) & (7 - os::vm_page_size())),
// assuming both stack pointer and pagesize have their
// least significant 3 bits clear.
// NOTE: the mark is in swap_reg %r0 as the result of cmpxchg
// NOTE2: aarch64 does not like to subtract sp from rn so take a
// copy
mov(rscratch1, sp);
sub(swap_reg, swap_reg, rscratch1);
ands(swap_reg, swap_reg, (uint64_t)(7 - (int)os::vm_page_size()));
// Save the test result, for recursive case, the result is zero
str(swap_reg, Address(lock_reg, mark_offset));
br(Assembler::NE, slow_case);
bind(count);
inc_held_monitor_count(rscratch1);
b(done);
}
bind(slow_case);
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
bind(done);
}
bind(done);
}
@@ -807,57 +730,29 @@ void InterpreterMacroAssembler::unlock_object(Register lock_reg)
{
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
} else {
Label count, done;
const Register swap_reg = r0;
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp_reg = c_rarg4; // Temporary used by lightweight_unlock
const Register swap_reg = r0;
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp_reg = c_rarg4; // Temporary used by lightweight_unlock
save_bcp(); // Save in case of exception
save_bcp(); // Save in case of exception
// Load oop into obj_reg(%c_rarg3)
ldr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
if (LockingMode != LM_LIGHTWEIGHT) {
// Convert from BasicObjectLock structure to object and BasicLock
// structure. Store the BasicLock address into %r0
lea(swap_reg, Address(lock_reg, BasicObjectLock::lock_offset()));
}
// Free entry
str(zr, Address(lock_reg, BasicObjectLock::obj_offset()));
// Load oop into obj_reg(%c_rarg3)
ldr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
Label slow_case, done;
lightweight_unlock(obj_reg, header_reg, swap_reg, tmp_reg, slow_case);
b(done);
// Free entry
str(zr, Address(lock_reg, BasicObjectLock::obj_offset()));
Label slow_case;
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj_reg, header_reg, swap_reg, tmp_reg, slow_case);
b(done);
} else if (LockingMode == LM_LEGACY) {
// Load the old header from BasicLock structure
ldr(header_reg, Address(swap_reg,
BasicLock::displaced_header_offset_in_bytes()));
// Test for recursion
cbz(header_reg, count);
// Atomic swap back the old header
cmpxchg_obj_header(swap_reg, header_reg, obj_reg, rscratch1, count, &slow_case);
bind(count);
dec_held_monitor_count(rscratch1);
b(done);
}
bind(slow_case);
// Call the runtime routine for slow case.
str(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset())); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
bind(done);
restore_bcp();
}
bind(slow_case);
// Call the runtime routine for slow case.
str(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset())); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
bind(done);
restore_bcp();
}
void InterpreterMacroAssembler::test_method_data_pointer(Register mdp,


@@ -6421,10 +6421,14 @@ void MacroAssembler::fill_words(Register base, Register cnt, Register value)
// Intrinsic for
//
// - sun/nio/cs/ISO_8859_1$Encoder.implEncodeISOArray
// return the number of characters copied.
// - java/lang/StringUTF16.compress
// return index of non-latin1 character if copy fails, otherwise 'len'.
// - sun.nio.cs.ISO_8859_1.Encoder#encodeISOArray0(byte[] sa, int sp, byte[] da, int dp, int len)
// Encodes char[] to byte[] in ISO-8859-1
//
// - java.lang.StringCoding#encodeISOArray0(byte[] sa, int sp, byte[] da, int dp, int len)
// Encodes byte[] (containing UTF-16) to byte[] in ISO-8859-1
//
// - java.lang.StringCoding#encodeAsciiArray0(char[] sa, int sp, byte[] da, int dp, int len)
// Encodes char[] to byte[] in ASCII
//
// This version always returns the number of characters copied, and does not
// clobber the 'len' register. A successful copy will complete with the post-
@@ -7097,7 +7101,6 @@ void MacroAssembler::double_move(VMRegPair src, VMRegPair dst, Register tmp) {
// - t1, t2, t3: temporary registers, will be destroyed
// - slow: branched to if locking fails; absolute offset may be larger than 32KB (imm14 encoding).
void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Register t1, Register t2, Register t3, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(basic_lock, obj, t1, t2, t3, rscratch1);
Label push;
@@ -7157,7 +7160,6 @@ void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Registe
// - t1, t2, t3: temporary registers
// - slow: branched to if unlocking fails; absolute offset may be larger than 32KB (imm14 encoding).
void MacroAssembler::lightweight_unlock(Register obj, Register t1, Register t2, Register t3, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
// cmpxchg clobbers rscratch1.
assert_different_registers(obj, t1, t2, t3, rscratch1);


@@ -1721,7 +1721,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// We use the same pc/oopMap repeatedly when we call out.
Label native_return;
if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
if (method->is_object_wait0()) {
// For convenience we use the pc we want to resume to in case of preemption on Object.wait.
__ set_last_Java_frame(sp, noreg, native_return, rscratch1);
} else {
@@ -1776,44 +1776,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// Load the oop from the handle
__ ldr(obj_reg, Address(oop_handle_reg, 0));
if (LockingMode == LM_MONITOR) {
__ b(slow_path_lock);
} else if (LockingMode == LM_LEGACY) {
// Load (object->mark() | 1) into swap_reg %r0
__ ldr(rscratch1, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
__ orr(swap_reg, rscratch1, 1);
// Save (object->mark() | 1) into BasicLock's displaced header
__ str(swap_reg, Address(lock_reg, mark_word_offset));
// src -> dest iff dest == r0 else r0 <- dest
__ cmpxchg_obj_header(r0, lock_reg, obj_reg, rscratch1, count, /*fallthrough*/nullptr);
// Hmm should this move to the slow path code area???
// Test if the oopMark is an obvious stack pointer, i.e.,
// 1) (mark & 3) == 0, and
// 2) sp <= mark < mark + os::pagesize()
// These 3 tests can be done by evaluating the following
// expression: ((mark - sp) & (3 - os::vm_page_size())),
// assuming both stack pointer and pagesize have their
// least significant 2 bits clear.
// NOTE: the oopMark is in swap_reg %r0 as the result of cmpxchg
__ sub(swap_reg, sp, swap_reg);
__ neg(swap_reg, swap_reg);
__ ands(swap_reg, swap_reg, 3 - (int)os::vm_page_size());
// Save the test result, for recursive case, the result is zero
__ str(swap_reg, Address(lock_reg, mark_word_offset));
__ br(Assembler::NE, slow_path_lock);
__ bind(count);
__ inc_held_monitor_count(rscratch1);
} else {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
__ lightweight_lock(lock_reg, obj_reg, swap_reg, tmp, lock_tmp, slow_path_lock);
}
__ lightweight_lock(lock_reg, obj_reg, swap_reg, tmp, lock_tmp, slow_path_lock);
// Slow path will re-enter here
__ bind(lock_done);
@@ -1888,7 +1851,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
__ lea(rscratch2, Address(rthread, JavaThread::thread_state_offset()));
__ stlrw(rscratch1, rscratch2);
if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
if (method->is_object_wait0()) {
// Check preemption for Object.wait()
__ ldr(rscratch1, Address(rthread, JavaThread::preempt_alternate_return_offset()));
__ cbz(rscratch1, native_return);
@@ -1917,48 +1880,18 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// Get locked oop from the handle we passed to jni
__ ldr(obj_reg, Address(oop_handle_reg, 0));
Label done, not_recursive;
if (LockingMode == LM_LEGACY) {
// Simple recursive lock?
__ ldr(rscratch1, Address(sp, lock_slot_offset * VMRegImpl::stack_slot_size));
__ cbnz(rscratch1, not_recursive);
__ dec_held_monitor_count(rscratch1);
__ b(done);
}
__ bind(not_recursive);
// Must save r0 if it is live now because cmpxchg must use it
if (ret_type != T_FLOAT && ret_type != T_DOUBLE && ret_type != T_VOID) {
save_native_result(masm, ret_type, stack_slots);
}
if (LockingMode == LM_MONITOR) {
__ b(slow_path_unlock);
} else if (LockingMode == LM_LEGACY) {
// get address of the stack lock
__ lea(r0, Address(sp, lock_slot_offset * VMRegImpl::stack_slot_size));
// get old displaced header
__ ldr(old_hdr, Address(r0, 0));
// Atomic swap old header if oop still contains the stack lock
Label count;
__ cmpxchg_obj_header(r0, old_hdr, obj_reg, rscratch1, count, &slow_path_unlock);
__ bind(count);
__ dec_held_monitor_count(rscratch1);
} else {
assert(LockingMode == LM_LIGHTWEIGHT, "");
__ lightweight_unlock(obj_reg, old_hdr, swap_reg, lock_tmp, slow_path_unlock);
}
__ lightweight_unlock(obj_reg, old_hdr, swap_reg, lock_tmp, slow_path_unlock);
// slow path re-enters here
__ bind(unlock_done);
if (ret_type != T_FLOAT && ret_type != T_DOUBLE && ret_type != T_VOID) {
restore_native_result(masm, ret_type, stack_slots);
}
__ bind(done);
}
Label dtrace_method_exit, dtrace_method_exit_done;


@@ -1478,22 +1478,17 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
__ lea(rscratch2, Address(rthread, JavaThread::thread_state_offset()));
__ stlrw(rscratch1, rscratch2);
if (LockingMode != LM_LEGACY) {
// Check preemption for Object.wait()
Label not_preempted;
__ ldr(rscratch1, Address(rthread, JavaThread::preempt_alternate_return_offset()));
__ cbz(rscratch1, not_preempted);
__ str(zr, Address(rthread, JavaThread::preempt_alternate_return_offset()));
__ br(rscratch1);
__ bind(native_return);
__ restore_after_resume(true /* is_native */);
// reload result_handler
__ ldr(result_handler, Address(rfp, frame::interpreter_frame_result_handler_offset*wordSize));
__ bind(not_preempted);
} else {
// any pc will do so just use this one for LM_LEGACY to keep code together.
__ bind(native_return);
}
// Check preemption for Object.wait()
Label not_preempted;
__ ldr(rscratch1, Address(rthread, JavaThread::preempt_alternate_return_offset()));
__ cbz(rscratch1, not_preempted);
__ str(zr, Address(rthread, JavaThread::preempt_alternate_return_offset()));
__ br(rscratch1);
__ bind(native_return);
__ restore_after_resume(true /* is_native */);
// reload result_handler
__ ldr(result_handler, Address(rfp, frame::interpreter_frame_result_handler_offset*wordSize));
__ bind(not_preempted);
// reset_last_Java_frame
__ reset_last_Java_frame(true);


@@ -32,6 +32,7 @@
#include "runtime/vm_version.hpp"
#include "utilities/formatBuffer.hpp"
#include "utilities/macros.hpp"
#include "utilities/ostream.hpp"
int VM_Version::_cpu;
int VM_Version::_model;
@@ -50,6 +51,8 @@ uintptr_t VM_Version::_pac_mask;
SpinWait VM_Version::_spin_wait;
const char* VM_Version::_features_names[MAX_CPU_FEATURES] = { nullptr };
static SpinWait get_spin_wait_desc() {
SpinWait spin_wait(OnSpinWaitInst, OnSpinWaitInstCount);
if (spin_wait.inst() == SpinWait::SB && !VM_Version::supports_sb()) {
@@ -60,6 +63,11 @@ static SpinWait get_spin_wait_desc() {
}
void VM_Version::initialize() {
#define SET_CPU_FEATURE_NAME(id, name, bit) \
_features_names[bit] = XSTR(name);
CPU_FEATURE_FLAGS(SET_CPU_FEATURE_NAME)
#undef SET_CPU_FEATURE_NAME
_supports_atomic_getset4 = true;
_supports_atomic_getadd4 = true;
_supports_atomic_getset8 = true;
@@ -194,7 +202,7 @@ void VM_Version::initialize() {
// Cortex A53
if (_cpu == CPU_ARM && model_is(0xd03)) {
_features |= CPU_A53MAC;
set_feature(CPU_A53MAC);
if (FLAG_IS_DEFAULT(UseSIMDForArrayEquals)) {
FLAG_SET_DEFAULT(UseSIMDForArrayEquals, false);
}
@@ -234,7 +242,7 @@ void VM_Version::initialize() {
}
}
if (_features & (CPU_FP | CPU_ASIMD)) {
if (supports_feature(CPU_FP) || supports_feature(CPU_ASIMD)) {
if (FLAG_IS_DEFAULT(UseSignumIntrinsic)) {
FLAG_SET_DEFAULT(UseSignumIntrinsic, true);
}
@@ -397,7 +405,7 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseGHASHIntrinsics, false);
}
if (_features & CPU_ASIMD) {
if (supports_feature(CPU_ASIMD)) {
if (FLAG_IS_DEFAULT(UseChaCha20Intrinsics)) {
UseChaCha20Intrinsics = true;
}
@@ -408,7 +416,7 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseChaCha20Intrinsics, false);
}
if (_features & CPU_ASIMD) {
if (supports_feature(CPU_ASIMD)) {
if (FLAG_IS_DEFAULT(UseKyberIntrinsics)) {
UseKyberIntrinsics = true;
}
@@ -419,7 +427,7 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseKyberIntrinsics, false);
}
if (_features & CPU_ASIMD) {
if (supports_feature(CPU_ASIMD)) {
if (FLAG_IS_DEFAULT(UseDilithiumIntrinsics)) {
UseDilithiumIntrinsics = true;
}
@@ -620,32 +628,38 @@ void VM_Version::initialize() {
// Sync SVE related CPU features with flags
if (UseSVE < 2) {
_features &= ~CPU_SVE2;
_features &= ~CPU_SVEBITPERM;
clear_feature(CPU_SVE2);
clear_feature(CPU_SVEBITPERM);
}
if (UseSVE < 1) {
_features &= ~CPU_SVE;
clear_feature(CPU_SVE);
}
// Construct the "features" string
char buf[512];
int buf_used_len = os::snprintf_checked(buf, sizeof(buf), "0x%02x:0x%x:0x%03x:%d", _cpu, _variant, _model, _revision);
stringStream ss(512);
ss.print("0x%02x:0x%x:0x%03x:%d", _cpu, _variant, _model, _revision);
if (_model2) {
os::snprintf_checked(buf + buf_used_len, sizeof(buf) - buf_used_len, "(0x%03x)", _model2);
ss.print("(0x%03x)", _model2);
}
size_t features_offset = strnlen(buf, sizeof(buf));
#define ADD_FEATURE_IF_SUPPORTED(id, name, bit) \
do { \
if (VM_Version::supports_##name()) strcat(buf, ", " #name); \
} while(0);
CPU_FEATURE_FLAGS(ADD_FEATURE_IF_SUPPORTED)
#undef ADD_FEATURE_IF_SUPPORTED
ss.print(", ");
int features_offset = (int)ss.size();
insert_features_names(_features, ss);
_cpu_info_string = os::strdup(buf);
_cpu_info_string = ss.as_string(true);
_features_string = _cpu_info_string + features_offset;
}
_features_string = extract_features_string(_cpu_info_string,
strnlen(_cpu_info_string, sizeof(buf)),
features_offset);
void VM_Version::insert_features_names(uint64_t features, stringStream& ss) {
int i = 0;
ss.join([&]() {
while (i < MAX_CPU_FEATURES) {
if (supports_feature((VM_Version::Feature_Flag)i)) {
return _features_names[i++];
}
i += 1;
}
return (const char*)nullptr;
}, ", ");
}
#if defined(LINUX)

@@ -30,6 +30,10 @@
#include "runtime/abstract_vm_version.hpp"
#include "utilities/sizes.hpp"
class stringStream;
#define BIT_MASK(flag) (1ULL<<(flag))
class VM_Version : public Abstract_VM_Version {
friend class VMStructs;
friend class JVMCIVMStructs;
@@ -66,6 +70,8 @@ public:
static void initialize();
static void check_virtualizations();
static void insert_features_names(uint64_t features, stringStream& ss);
static void print_platform_virtualization_info(outputStream*);
// Asserts
@@ -139,17 +145,32 @@ enum Ampere_CPU_Model {
decl(A53MAC, a53mac, 31)
enum Feature_Flag {
#define DECLARE_CPU_FEATURE_FLAG(id, name, bit) CPU_##id = (1 << bit),
#define DECLARE_CPU_FEATURE_FLAG(id, name, bit) CPU_##id = bit,
CPU_FEATURE_FLAGS(DECLARE_CPU_FEATURE_FLAG)
#undef DECLARE_CPU_FEATURE_FLAG
MAX_CPU_FEATURES
};
STATIC_ASSERT(sizeof(_features) * BitsPerByte >= MAX_CPU_FEATURES);
static const char* _features_names[MAX_CPU_FEATURES];
// Feature identification
#define CPU_FEATURE_DETECTION(id, name, bit) \
static bool supports_##name() { return (_features & CPU_##id) != 0; };
static bool supports_##name() { return supports_feature(CPU_##id); }
CPU_FEATURE_FLAGS(CPU_FEATURE_DETECTION)
#undef CPU_FEATURE_DETECTION
static void set_feature(Feature_Flag flag) {
_features |= BIT_MASK(flag);
}
static void clear_feature(Feature_Flag flag) {
_features &= (~BIT_MASK(flag));
}
static bool supports_feature(Feature_Flag flag) {
return (_features & BIT_MASK(flag)) != 0;
}
static int cpu_family() { return _cpu; }
static int cpu_model() { return _model; }
static int cpu_model2() { return _model2; }

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2008, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -95,8 +95,6 @@
}
static int adjust_reg_range(int range) {
// Reduce the number of available regs (to free Rheap_base) in case of compressed oops
if (UseCompressedOops || UseCompressedClassPointers) return range - 1;
return range;
}

@@ -2229,16 +2229,9 @@ void LIR_Assembler::emit_arraycopy(LIR_OpArrayCopy* op) {
// We don't know the array types are compatible
if (basic_type != T_OBJECT) {
// Simple test for basic type arrays
if (UseCompressedClassPointers) {
// We don't need decode because we just need to compare
__ ldr_u32(tmp, Address(src, oopDesc::klass_offset_in_bytes()));
__ ldr_u32(tmp2, Address(dst, oopDesc::klass_offset_in_bytes()));
__ cmp_32(tmp, tmp2);
} else {
__ load_klass(tmp, src);
__ load_klass(tmp2, dst);
__ cmp(tmp, tmp2);
}
__ load_klass(tmp, src);
__ load_klass(tmp2, dst);
__ cmp(tmp, tmp2);
__ b(*stub->entry(), ne);
} else {
// For object arrays, if src is a sub class of dst then we can
@@ -2461,12 +2454,7 @@ void LIR_Assembler::emit_load_klass(LIR_OpLoadKlass* op) {
if (info != nullptr) {
add_debug_info_for_null_check_here(info);
}
if (UseCompressedClassPointers) { // On 32 bit arm??
__ ldr_u32(result, Address(obj, oopDesc::klass_offset_in_bytes()));
} else {
__ ldr(result, Address(obj, oopDesc::klass_offset_in_bytes()));
}
__ ldr(result, Address(obj, oopDesc::klass_offset_in_bytes()));
}
void LIR_Assembler::emit_profile_call(LIR_OpProfileCall* op) {

@@ -60,6 +60,10 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
Unimplemented();
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
Unimplemented();
}
inline void FreezeBase::patch_stack_pd(intptr_t* frame_sp, intptr_t* heap_sp) {
Unimplemented();
}

@@ -174,6 +174,7 @@ address TemplateInterpreterGenerator::generate_math_entry(AbstractInterpreter::M
break;
case Interpreter::java_lang_math_fmaD:
case Interpreter::java_lang_math_fmaF:
case Interpreter::java_lang_math_sinh:
case Interpreter::java_lang_math_tanh:
case Interpreter::java_lang_math_cbrt:
// TODO: Implement intrinsic

@@ -228,11 +228,7 @@ int LIR_Assembler::emit_unwind_handler() {
if (method()->is_synchronized()) {
monitor_address(0, FrameMap::R4_opr);
stub = new MonitorExitStub(FrameMap::R4_opr, true, 0);
if (LockingMode == LM_MONITOR) {
__ b(*stub->entry());
} else {
__ unlock_object(R5, R6, R4, *stub->entry());
}
__ unlock_object(R5, R6, R4, *stub->entry());
__ bind(*stub->continuation());
}
@@ -2618,44 +2614,20 @@ void LIR_Assembler::emit_lock(LIR_OpLock* op) {
// Obj may not be an oop.
if (op->code() == lir_lock) {
MonitorEnterStub* stub = (MonitorEnterStub*)op->stub();
if (LockingMode != LM_MONITOR) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
// Add debug info for NullPointerException only if one is possible.
if (op->info() != nullptr) {
if (!os::zero_page_read_protected() || !ImplicitNullChecks) {
explicit_null_check(obj, op->info());
} else {
add_debug_info_for_null_check_here(op->info());
}
}
__ lock_object(hdr, obj, lock, op->scratch_opr()->as_register(), *op->stub()->entry());
} else {
// always do slow locking
// note: The slow locking code could be inlined here, however if we use
// slow locking, speed doesn't matter anyway and this solution is
// simpler and requires less duplicated code - additionally, the
// slow locking code is the same in either case which simplifies
// debugging.
if (op->info() != nullptr) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
// Add debug info for NullPointerException only if one is possible.
if (op->info() != nullptr) {
if (!os::zero_page_read_protected() || !ImplicitNullChecks) {
explicit_null_check(obj, op->info());
} else {
add_debug_info_for_null_check_here(op->info());
__ null_check(obj);
}
__ b(*op->stub()->entry());
}
__ lock_object(hdr, obj, lock, op->scratch_opr()->as_register(), *op->stub()->entry());
} else {
assert (op->code() == lir_unlock, "Invalid code, expected lir_unlock");
if (LockingMode != LM_MONITOR) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
__ unlock_object(hdr, obj, lock, *op->stub()->entry());
} else {
// always do slow unlocking
// note: The slow unlocking code could be inlined here, however if we use
// slow unlocking, speed doesn't matter anyway and this solution is
// simpler and requires less duplicated code - additionally, the
// slow unlocking code is the same in either case which simplifies
// debugging.
__ b(*op->stub()->entry());
}
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
__ unlock_object(hdr, obj, lock, *op->stub()->entry());
}
__ bind(*op->stub()->continuation());
}

@@ -82,59 +82,13 @@ void C1_MacroAssembler::lock_object(Register Rmark, Register Roop, Register Rbox
// Save object being locked into the BasicObjectLock...
std(Roop, in_bytes(BasicObjectLock::obj_offset()), Rbox);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(Rbox, Roop, Rmark, Rscratch, slow_int);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(Rscratch, Roop);
lbz(Rscratch, in_bytes(Klass::misc_flags_offset()), Rscratch);
testbitdi(CR0, R0, Rscratch, exact_log2(KlassFlags::_misc_is_value_based_class));
bne(CR0, slow_int);
}
// ... and mark it unlocked.
ori(Rmark, Rmark, markWord::unlocked_value);
// Save unlocked object header into the displaced header location on the stack.
std(Rmark, BasicLock::displaced_header_offset_in_bytes(), Rbox);
// Compare object markWord with Rmark and if equal exchange Rscratch with object markWord.
assert(oopDesc::mark_offset_in_bytes() == 0, "cas must take a zero displacement");
cmpxchgd(/*flag=*/CR0,
/*current_value=*/Rscratch,
/*compare_value=*/Rmark,
/*exchange_value=*/Rbox,
/*where=*/Roop/*+0==mark_offset_in_bytes*/,
MacroAssembler::MemBarRel | MacroAssembler::MemBarAcq,
MacroAssembler::cmpxchgx_hint_acquire_lock(),
noreg,
&cas_failed,
/*check without membar and ldarx first*/true);
// If compare/exchange succeeded we found an unlocked object and we now have locked it
// hence we are done.
} else {
assert(false, "Unhandled LockingMode:%d", LockingMode);
}
lightweight_lock(Rbox, Roop, Rmark, Rscratch, slow_int);
b(done);
bind(slow_int);
b(slow_case); // far
if (LockingMode == LM_LEGACY) {
bind(cas_failed);
// We did not find an unlocked object so see if this is a recursive case.
sub(Rscratch, Rscratch, R1_SP);
load_const_optimized(R0, (~(os::vm_page_size()-1) | markWord::lock_mask_in_place));
and_(R0/*==0?*/, Rscratch, R0);
std(R0/*==0, perhaps*/, BasicLock::displaced_header_offset_in_bytes(), Rbox);
bne(CR0, slow_int);
}
bind(done);
if (LockingMode == LM_LEGACY) {
inc_held_monitor_count(Rmark /*tmp*/);
}
}
@@ -146,43 +100,17 @@ void C1_MacroAssembler::unlock_object(Register Rmark, Register Roop, Register Rb
Address mark_addr(Roop, oopDesc::mark_offset_in_bytes());
assert(mark_addr.disp() == 0, "cas must take a zero displacement");
if (LockingMode != LM_LIGHTWEIGHT) {
// Test first if it is a fast recursive unlock.
ld(Rmark, BasicLock::displaced_header_offset_in_bytes(), Rbox);
cmpdi(CR0, Rmark, 0);
beq(CR0, done);
}
// Load object.
ld(Roop, in_bytes(BasicObjectLock::obj_offset()), Rbox);
verify_oop(Roop, FILE_AND_LINE);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(Roop, Rmark, slow_int);
} else if (LockingMode == LM_LEGACY) {
// Check if it is still a lightweight lock, this is true if we see
// the stack address of the basicLock in the markWord of the object.
cmpxchgd(/*flag=*/CR0,
/*current_value=*/R0,
/*compare_value=*/Rbox,
/*exchange_value=*/Rmark,
/*where=*/Roop,
MacroAssembler::MemBarRel,
MacroAssembler::cmpxchgx_hint_release_lock(),
noreg,
&slow_int);
} else {
assert(false, "Unhandled LockingMode:%d", LockingMode);
}
lightweight_unlock(Roop, Rmark, slow_int);
b(done);
bind(slow_int);
b(slow_case); // far
// Done
bind(done);
if (LockingMode == LM_LEGACY) {
dec_held_monitor_count(Rmark /*tmp*/);
}
}

@@ -334,6 +334,9 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
#endif
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
}
//////// Thaw
// Fast path

View File

@@ -946,121 +946,20 @@ void InterpreterMacroAssembler::leave_jfr_critical_section() {
// object - Address of the object to be locked.
//
void InterpreterMacroAssembler::lock_object(Register monitor, Register object) {
if (LockingMode == LM_MONITOR) {
call_VM_preemptable(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter), monitor);
} else {
// template code (for LM_LEGACY):
//
// markWord displaced_header = obj->mark().set_unlocked();
// monitor->lock()->set_displaced_header(displaced_header);
// if (Atomic::cmpxchg(/*addr*/obj->mark_addr(), /*cmp*/displaced_header, /*ex=*/monitor) == displaced_header) {
// // We stored the monitor address into the object's mark word.
// } else if (THREAD->is_lock_owned((address)displaced_header))
// // Simple recursive case.
// monitor->lock()->set_displaced_header(nullptr);
// } else {
// // Slow path.
// InterpreterRuntime::monitorenter(THREAD, monitor);
// }
const Register header = R7_ARG5;
const Register tmp = R8_ARG6;
const Register header = R7_ARG5;
const Register object_mark_addr = R8_ARG6;
const Register current_header = R9_ARG7;
const Register tmp = R10_ARG8;
Label done, slow_case;
Label count_locking, done, slow_case, cas_failed;
assert_different_registers(header, tmp);
assert_different_registers(header, object_mark_addr, current_header, tmp);
lightweight_lock(monitor, object, header, tmp, slow_case);
b(done);
// markWord displaced_header = obj->mark().set_unlocked();
bind(slow_case);
call_VM_preemptable(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter), monitor);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(monitor, object, header, tmp, slow_case);
b(done);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, object);
lbz(tmp, in_bytes(Klass::misc_flags_offset()), tmp);
testbitdi(CR0, R0, tmp, exact_log2(KlassFlags::_misc_is_value_based_class));
bne(CR0, slow_case);
}
// Load markWord from object into header.
ld(header, oopDesc::mark_offset_in_bytes(), object);
// Set displaced_header to be (markWord of object | UNLOCK_VALUE).
ori(header, header, markWord::unlocked_value);
// monitor->lock()->set_displaced_header(displaced_header);
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset +
BasicLock::displaced_header_offset_in_bytes();
// Initialize the box (Must happen before we update the object mark!).
std(header, mark_offset, monitor);
// if (Atomic::cmpxchg(/*addr*/obj->mark_addr(), /*cmp*/displaced_header, /*ex=*/monitor) == displaced_header) {
// Store stack address of the BasicObjectLock (this is monitor) into object.
addi(object_mark_addr, object, oopDesc::mark_offset_in_bytes());
// Must fence, otherwise, preceding store(s) may float below cmpxchg.
// CmpxchgX sets CR0 to cmpX(current, displaced).
cmpxchgd(/*flag=*/CR0,
/*current_value=*/current_header,
/*compare_value=*/header, /*exchange_value=*/monitor,
/*where=*/object_mark_addr,
MacroAssembler::MemBarRel | MacroAssembler::MemBarAcq,
MacroAssembler::cmpxchgx_hint_acquire_lock(),
noreg,
&cas_failed,
/*check without membar and ldarx first*/true);
// If the compare-and-exchange succeeded, then we found an unlocked
// object and we have now locked it.
b(count_locking);
bind(cas_failed);
// } else if (THREAD->is_lock_owned((address)displaced_header))
// // Simple recursive case.
// monitor->lock()->set_displaced_header(nullptr);
// We did not see an unlocked object so try the fast recursive case.
// Check if owner is self by comparing the value in the markWord of object
// (current_header) with the stack pointer.
sub(current_header, current_header, R1_SP);
assert(os::vm_page_size() > 0xfff, "page size too small - change the constant");
load_const_optimized(tmp, ~(os::vm_page_size()-1) | markWord::lock_mask_in_place);
and_(R0/*==0?*/, current_header, tmp);
// If condition is true we are done and hence we can store 0 in the displaced
// header indicating it is a recursive lock.
bne(CR0, slow_case);
std(R0/*==0!*/, mark_offset, monitor);
b(count_locking);
}
// } else {
// // Slow path.
// InterpreterRuntime::monitorenter(THREAD, monitor);
// None of the above fast optimizations worked so we have to get into the
// slow case of monitor enter.
bind(slow_case);
call_VM_preemptable(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter), monitor);
// }
if (LockingMode == LM_LEGACY) {
b(done);
align(32, 12);
bind(count_locking);
inc_held_monitor_count(current_header /*tmp*/);
}
bind(done);
}
bind(done);
}
// Unlocks an object. Used in monitorexit bytecode and remove_activation.
@@ -1071,95 +970,34 @@ void InterpreterMacroAssembler::lock_object(Register monitor, Register object) {
//
// Throw IllegalMonitorException if object is not locked by current thread.
void InterpreterMacroAssembler::unlock_object(Register monitor) {
if (LockingMode == LM_MONITOR) {
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), monitor);
} else {
const Register object = R7_ARG5;
const Register header = R8_ARG6;
const Register current_header = R10_ARG8;
// template code (for LM_LEGACY):
//
// if ((displaced_header = monitor->displaced_header()) == nullptr) {
// // Recursive unlock. Mark the monitor unlocked by setting the object field to null.
// monitor->set_obj(nullptr);
// } else if (Atomic::cmpxchg(obj->mark_addr(), monitor, displaced_header) == monitor) {
// // We swapped the unlocked mark in displaced_header into the object's mark word.
// monitor->set_obj(nullptr);
// } else {
// // Slow path.
// InterpreterRuntime::monitorexit(monitor);
// }
Label free_slot;
Label slow_case;
const Register object = R7_ARG5;
const Register header = R8_ARG6;
const Register object_mark_addr = R9_ARG7;
const Register current_header = R10_ARG8;
assert_different_registers(object, header, current_header);
Label free_slot;
Label slow_case;
// The object address from the monitor is in object.
ld(object, in_bytes(BasicObjectLock::obj_offset()), monitor);
assert_different_registers(object, header, object_mark_addr, current_header);
lightweight_unlock(object, header, slow_case);
if (LockingMode != LM_LIGHTWEIGHT) {
// Test first if we are in the fast recursive case.
ld(header, in_bytes(BasicObjectLock::lock_offset()) +
BasicLock::displaced_header_offset_in_bytes(), monitor);
b(free_slot);
// If the displaced header is zero, we have a recursive unlock.
cmpdi(CR0, header, 0);
beq(CR0, free_slot); // recursive unlock
}
bind(slow_case);
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), monitor);
// } else if (Atomic::cmpxchg(obj->mark_addr(), monitor, displaced_header) == monitor) {
// // We swapped the unlocked mark in displaced_header into the object's mark word.
// monitor->set_obj(nullptr);
Label done;
b(done); // Monitor register may be overwritten! Runtime has already freed the slot.
// If we still have a lightweight lock, unlock the object and be done.
// The object address from the monitor is in object.
ld(object, in_bytes(BasicObjectLock::obj_offset()), monitor);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(object, header, slow_case);
} else {
addi(object_mark_addr, object, oopDesc::mark_offset_in_bytes());
// We have the displaced header in displaced_header. If the lock is still
// lightweight, it will contain the monitor address and we'll store the
// displaced header back into the object's mark word.
// CmpxchgX sets CR0 to cmpX(current, monitor).
cmpxchgd(/*flag=*/CR0,
/*current_value=*/current_header,
/*compare_value=*/monitor, /*exchange_value=*/header,
/*where=*/object_mark_addr,
MacroAssembler::MemBarRel,
MacroAssembler::cmpxchgx_hint_release_lock(),
noreg,
&slow_case);
}
b(free_slot);
// } else {
// // Slow path.
// InterpreterRuntime::monitorexit(monitor);
// The lock has been converted into a heavy lock and hence
// we need to get into the slow case.
bind(slow_case);
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), monitor);
// }
Label done;
b(done); // Monitor register may be overwritten! Runtime has already freed the slot.
// Exchange worked, do monitor->set_obj(nullptr);
align(32, 12);
bind(free_slot);
li(R0, 0);
std(R0, in_bytes(BasicObjectLock::obj_offset()), monitor);
if (LockingMode == LM_LEGACY) {
dec_held_monitor_count(current_header /*tmp*/);
}
bind(done);
}
// Do monitor->set_obj(nullptr);
align(32, 12);
bind(free_slot);
li(R0, 0);
std(R0, in_bytes(BasicObjectLock::obj_offset()), monitor);
bind(done);
}
// Load compiled (i2c) or interpreter entry when calling from interpreted and

@@ -2671,238 +2671,6 @@ address MacroAssembler::emit_trampoline_stub(int destination_toc_offset,
}
// "The box" is the space on the stack where we copy the object mark.
void MacroAssembler::compiler_fast_lock_object(ConditionRegister flag, Register oop, Register box,
Register temp, Register displaced_header, Register current_header) {
assert(LockingMode != LM_LIGHTWEIGHT, "uses fast_lock_lightweight");
assert_different_registers(oop, box, temp, displaced_header, current_header);
Label object_has_monitor;
Label cas_failed;
Label success, failure;
// Load markWord from object into displaced_header.
ld(displaced_header, oopDesc::mark_offset_in_bytes(), oop);
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(temp, oop);
lbz(temp, in_bytes(Klass::misc_flags_offset()), temp);
testbitdi(flag, R0, temp, exact_log2(KlassFlags::_misc_is_value_based_class));
bne(flag, failure);
}
// Handle existing monitor.
// The object has an existing monitor iff (mark & monitor_value) != 0.
andi_(temp, displaced_header, markWord::monitor_value);
bne(CR0, object_has_monitor);
if (LockingMode == LM_MONITOR) {
// Set NE to indicate 'failure' -> take slow-path.
crandc(flag, Assembler::equal, flag, Assembler::equal);
b(failure);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Set displaced_header to be (markWord of object | UNLOCK_VALUE).
ori(displaced_header, displaced_header, markWord::unlocked_value);
// Load Compare Value application register.
// Initialize the box. (Must happen before we update the object mark!)
std(displaced_header, BasicLock::displaced_header_offset_in_bytes(), box);
// Must fence, otherwise, preceding store(s) may float below cmpxchg.
// Compare object markWord with mark and if equal exchange scratch1 with object markWord.
cmpxchgd(/*flag=*/flag,
/*current_value=*/current_header,
/*compare_value=*/displaced_header,
/*exchange_value=*/box,
/*where=*/oop,
MacroAssembler::MemBarRel | MacroAssembler::MemBarAcq,
MacroAssembler::cmpxchgx_hint_acquire_lock(),
noreg,
&cas_failed,
/*check without membar and ldarx first*/true);
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
// If the compare-and-exchange succeeded, then we found an unlocked
// object and we have now locked it.
b(success);
bind(cas_failed);
// We did not see an unlocked object so try the fast recursive case.
// Check if the owner is self by comparing the value in the markWord of object
// (current_header) with the stack pointer.
sub(current_header, current_header, R1_SP);
load_const_optimized(temp, ~(os::vm_page_size()-1) | markWord::lock_mask_in_place);
and_(R0/*==0?*/, current_header, temp);
// If condition is true we are done and hence we can store 0 as the
// displaced header in the box, which indicates that it is a recursive lock.
std(R0/*==0, perhaps*/, BasicLock::displaced_header_offset_in_bytes(), box);
if (flag != CR0) {
mcrf(flag, CR0);
}
beq(CR0, success);
b(failure);
}
// Handle existing monitor.
bind(object_has_monitor);
// Try to CAS owner (no owner => current thread's _monitor_owner_id).
addi(temp, displaced_header, in_bytes(ObjectMonitor::owner_offset()) - markWord::monitor_value);
Register thread_id = displaced_header;
ld(thread_id, in_bytes(JavaThread::monitor_owner_id_offset()), R16_thread);
cmpxchgd(/*flag=*/flag,
/*current_value=*/current_header,
/*compare_value=*/(intptr_t)0,
/*exchange_value=*/thread_id,
/*where=*/temp,
MacroAssembler::MemBarRel | MacroAssembler::MemBarAcq,
MacroAssembler::cmpxchgx_hint_acquire_lock());
// Store a non-null value into the box.
std(box, BasicLock::displaced_header_offset_in_bytes(), box);
beq(flag, success);
// Check for recursive locking.
cmpd(flag, current_header, thread_id);
bne(flag, failure);
// Current thread already owns the lock. Just increment recursions.
Register recursions = displaced_header;
ld(recursions, in_bytes(ObjectMonitor::recursions_offset() - ObjectMonitor::owner_offset()), temp);
addi(recursions, recursions, 1);
std(recursions, in_bytes(ObjectMonitor::recursions_offset() - ObjectMonitor::owner_offset()), temp);
// flag == EQ indicates success, increment held monitor count if LM_LEGACY is enabled
// flag == NE indicates failure
bind(success);
if (LockingMode == LM_LEGACY) {
inc_held_monitor_count(temp);
}
#ifdef ASSERT
// Check that unlocked label is reached with flag == EQ.
Label flag_correct;
beq(flag, flag_correct);
stop("compiler_fast_lock_object: Flag != EQ");
#endif
bind(failure);
#ifdef ASSERT
// Check that slow_path label is reached with flag == NE.
bne(flag, flag_correct);
stop("compiler_fast_lock_object: Flag != NE");
bind(flag_correct);
#endif
}
void MacroAssembler::compiler_fast_unlock_object(ConditionRegister flag, Register oop, Register box,
Register temp, Register displaced_header, Register current_header) {
assert(LockingMode != LM_LIGHTWEIGHT, "uses fast_unlock_lightweight");
assert_different_registers(oop, box, temp, displaced_header, current_header);
Label success, failure, object_has_monitor, not_recursive;
if (LockingMode == LM_LEGACY) {
// Find the lock address and load the displaced header from the stack.
ld(displaced_header, BasicLock::displaced_header_offset_in_bytes(), box);
// If the displaced header is 0, we have a recursive unlock.
cmpdi(flag, displaced_header, 0);
beq(flag, success);
}
// Handle existing monitor.
// The object has an existing monitor iff (mark & monitor_value) != 0.
ld(current_header, oopDesc::mark_offset_in_bytes(), oop);
andi_(R0, current_header, markWord::monitor_value);
bne(CR0, object_has_monitor);
if (LockingMode == LM_MONITOR) {
// Set NE to indicate 'failure' -> take slow-path.
crandc(flag, Assembler::equal, flag, Assembler::equal);
b(failure);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Check if it is still a lightweight lock, this is true if we see
// the stack address of the basicLock in the markWord of the object.
// Cmpxchg sets flag to cmpd(current_header, box).
cmpxchgd(/*flag=*/flag,
/*current_value=*/current_header,
/*compare_value=*/box,
/*exchange_value=*/displaced_header,
/*where=*/oop,
MacroAssembler::MemBarRel,
MacroAssembler::cmpxchgx_hint_release_lock(),
noreg,
&failure);
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
b(success);
}
// Handle existing monitor.
bind(object_has_monitor);
STATIC_ASSERT(markWord::monitor_value <= INT_MAX);
addi(current_header, current_header, -(int)markWord::monitor_value); // monitor
ld(displaced_header, in_bytes(ObjectMonitor::recursions_offset()), current_header);
addic_(displaced_header, displaced_header, -1);
blt(CR0, not_recursive); // Not recursive if negative after decrement.
// Recursive unlock
std(displaced_header, in_bytes(ObjectMonitor::recursions_offset()), current_header);
if (flag == CR0) { // Otherwise, flag is already EQ, here.
crorc(CR0, Assembler::equal, CR0, Assembler::equal); // Set CR0 EQ
}
b(success);
bind(not_recursive);
// Set owner to null.
// Release to satisfy the JMM
release();
li(temp, 0);
std(temp, in_bytes(ObjectMonitor::owner_offset()), current_header);
// We need a full fence after clearing owner to avoid stranding.
// StoreLoad achieves this.
membar(StoreLoad);
// Check if the entry_list is empty.
ld(temp, in_bytes(ObjectMonitor::entry_list_offset()), current_header);
cmpdi(flag, temp, 0);
beq(flag, success); // If so we are done.
// Check if there is a successor.
ld(temp, in_bytes(ObjectMonitor::succ_offset()), current_header);
cmpdi(flag, temp, 0);
// Invert equal bit
crnand(flag, Assembler::equal, flag, Assembler::equal);
beq(flag, success); // If there is a successor we are done.
// Save the monitor pointer in the current thread, so we can try
// to reacquire the lock in SharedRuntime::monitor_exit_helper().
std(current_header, in_bytes(JavaThread::unlocked_inflated_monitor_offset()), R16_thread);
b(failure); // flag == NE
// flag == EQ indicates success, decrement held monitor count if LM_LEGACY is enabled
// flag == NE indicates failure
bind(success);
if (LockingMode == LM_LEGACY) {
dec_held_monitor_count(temp);
}
#ifdef ASSERT
// Check that unlocked label is reached with flag == EQ.
Label flag_correct;
beq(flag, flag_correct);
stop("compiler_fast_unlock_object: Flag != EQ");
#endif
bind(failure);
#ifdef ASSERT
// Check that slow_path label is reached with flag == NE.
bne(flag, flag_correct);
stop("compiler_fast_unlock_object: Flag != NE");
bind(flag_correct);
#endif
}
void MacroAssembler::compiler_fast_lock_lightweight_object(ConditionRegister flag, Register obj, Register box,
Register tmp1, Register tmp2, Register tmp3) {
assert_different_registers(obj, box, tmp1, tmp2, tmp3);
@@ -4769,38 +4537,6 @@ void MacroAssembler::pop_cont_fastpath() {
bind(done);
}
// Note: Must preserve CR0 EQ (invariant).
void MacroAssembler::inc_held_monitor_count(Register tmp) {
assert(LockingMode == LM_LEGACY, "");
ld(tmp, in_bytes(JavaThread::held_monitor_count_offset()), R16_thread);
#ifdef ASSERT
Label ok;
cmpdi(CR0, tmp, 0);
bge_predict_taken(CR0, ok);
stop("held monitor count is negative at increment");
bind(ok);
crorc(CR0, Assembler::equal, CR0, Assembler::equal); // Restore CR0 EQ
#endif
addi(tmp, tmp, 1);
std(tmp, in_bytes(JavaThread::held_monitor_count_offset()), R16_thread);
}
// Note: Must preserve CR0 EQ (invariant).
void MacroAssembler::dec_held_monitor_count(Register tmp) {
assert(LockingMode == LM_LEGACY, "");
ld(tmp, in_bytes(JavaThread::held_monitor_count_offset()), R16_thread);
#ifdef ASSERT
Label ok;
cmpdi(CR0, tmp, 0);
bgt_predict_taken(CR0, ok);
stop("held monitor count is <= 0 at decrement");
bind(ok);
crorc(CR0, Assembler::equal, CR0, Assembler::equal); // Restore CR0 EQ
#endif
addi(tmp, tmp, -1);
std(tmp, in_bytes(JavaThread::held_monitor_count_offset()), R16_thread);
}
// Function to flip between unlocked and locked state (fast locking).
// Branches to failed if the state is not as expected with CR0 NE.
// Falls through upon success with CR0 EQ.
@@ -4842,7 +4578,6 @@ void MacroAssembler::atomically_flip_locked_state(bool is_unlock, Register obj,
// - obj: the object to be locked
// - t1, t2: temporary register
void MacroAssembler::lightweight_lock(Register box, Register obj, Register t1, Register t2, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(box, obj, t1, t2, R0);
Label push;
@@ -4899,7 +4634,6 @@ void MacroAssembler::lightweight_lock(Register box, Register obj, Register t1, R
// - obj: the object to be unlocked
// - t1: temporary register
void MacroAssembler::lightweight_unlock(Register obj, Register t1, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(obj, t1);
#ifdef ASSERT

@@ -697,8 +697,6 @@ class MacroAssembler: public Assembler {
void push_cont_fastpath();
void pop_cont_fastpath();
void inc_held_monitor_count(Register tmp);
void dec_held_monitor_count(Register tmp);
void atomically_flip_locked_state(bool is_unlock, Register obj, Register tmp, Label& failed, int semantics);
void lightweight_lock(Register box, Register obj, Register t1, Register t2, Label& slow);
void lightweight_unlock(Register obj, Register t1, Label& slow);
@@ -715,12 +713,6 @@ class MacroAssembler: public Assembler {
enum { trampoline_stub_size = 6 * 4 };
address emit_trampoline_stub(int destination_toc_offset, int insts_call_instruction_offset, Register Rtoc = noreg);
void compiler_fast_lock_object(ConditionRegister flag, Register oop, Register box,
Register tmp1, Register tmp2, Register tmp3);
void compiler_fast_unlock_object(ConditionRegister flag, Register oop, Register box,
Register tmp1, Register tmp2, Register tmp3);
void compiler_fast_lock_lightweight_object(ConditionRegister flag, Register oop, Register box,
Register tmp1, Register tmp2, Register tmp3);

@@ -11573,40 +11573,8 @@ instruct partialSubtypeCheckConstSuper(rarg3RegP sub, rarg2RegP super_reg, immP
// inlined locking and unlocking
instruct cmpFastLock(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRegPdst tmp1, iRegPdst tmp2) %{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set crx (FastLock oop box));
effect(TEMP tmp1, TEMP tmp2);
format %{ "FASTLOCK $oop, $box, $tmp1, $tmp2" %}
ins_encode %{
__ compiler_fast_lock_object($crx$$CondRegister, $oop$$Register, $box$$Register,
$tmp1$$Register, $tmp2$$Register, /*tmp3*/ R0);
// If locking was successful, crx should indicate 'EQ'.
// The compiler generates a branch to the runtime call to
// _complete_monitor_locking_Java for the case where crx is 'NE'.
%}
ins_pipe(pipe_class_compare);
%}
instruct cmpFastUnlock(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRegPdst tmp1, iRegPdst tmp2, iRegPdst tmp3) %{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set crx (FastUnlock oop box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3);
format %{ "FASTUNLOCK $oop, $box, $tmp1, $tmp2" %}
ins_encode %{
__ compiler_fast_unlock_object($crx$$CondRegister, $oop$$Register, $box$$Register,
$tmp1$$Register, $tmp2$$Register, $tmp3$$Register);
// If unlocking was successful, crx should indicate 'EQ'.
// The compiler generates a branch to the runtime call to
// _complete_monitor_unlocking_Java for the case where crx is 'NE'.
%}
ins_pipe(pipe_class_compare);
%}
instruct cmpFastLockLightweight(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRegPdst tmp1, iRegPdst tmp2) %{
predicate(LockingMode == LM_LIGHTWEIGHT && !UseObjectMonitorTable);
predicate(!UseObjectMonitorTable);
match(Set crx (FastLock oop box));
effect(TEMP tmp1, TEMP tmp2);
@@ -11622,7 +11590,7 @@ instruct cmpFastLockLightweight(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRe
%}
instruct cmpFastLockMonitorTable(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRegPdst tmp1, iRegPdst tmp2, iRegPdst tmp3, flagsRegCR1 cr1) %{
predicate(LockingMode == LM_LIGHTWEIGHT && UseObjectMonitorTable);
predicate(UseObjectMonitorTable);
match(Set crx (FastLock oop box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, KILL cr1);
@@ -11638,7 +11606,6 @@ instruct cmpFastLockMonitorTable(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iR
%}
instruct cmpFastUnlockLightweight(flagsRegCR0 crx, iRegPdst oop, iRegPdst box, iRegPdst tmp1, iRegPdst tmp2, iRegPdst tmp3) %{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set crx (FastUnlock oop box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3);


@@ -2446,14 +2446,9 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
__ addi(r_box, R1_SP, lock_offset);
// Try fastpath for locking.
if (LockingMode == LM_LIGHTWEIGHT) {
// fast_lock kills r_temp_1, r_temp_2, r_temp_3.
Register r_temp_3_or_noreg = UseObjectMonitorTable ? r_temp_3 : noreg;
__ compiler_fast_lock_lightweight_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3_or_noreg);
} else {
// fast_lock kills r_temp_1, r_temp_2, r_temp_3.
__ compiler_fast_lock_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3);
}
// fast_lock kills r_temp_1, r_temp_2, r_temp_3.
Register r_temp_3_or_noreg = UseObjectMonitorTable ? r_temp_3 : noreg;
__ compiler_fast_lock_lightweight_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3_or_noreg);
__ beq(CR0, locked);
// None of the above fast optimizations worked so we have to get into the
@@ -2620,7 +2615,7 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
__ stw(R0, thread_(thread_state));
// Check preemption for Object.wait()
if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
if (method->is_object_wait0()) {
Label not_preempted;
__ ld(R0, in_bytes(JavaThread::preempt_alternate_return_offset()), R16_thread);
__ cmpdi(CR0, R0, 0);
@@ -2672,11 +2667,7 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
__ addi(r_box, R1_SP, lock_offset);
// Try fastpath for unlocking.
if (LockingMode == LM_LIGHTWEIGHT) {
__ compiler_fast_unlock_lightweight_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3);
} else {
__ compiler_fast_unlock_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3);
}
__ compiler_fast_unlock_lightweight_object(CR0, r_oop, r_box, r_temp_1, r_temp_2, r_temp_3);
__ beq(CR0, done);
// Save and restore any potential method result value around the unlocking operation.
@@ -2717,7 +2708,7 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
// --------------------------------------------------------------------------
// Last java frame won't be set if we're resuming after preemption
bool maybe_preempted = LockingMode != LM_LEGACY && method->is_object_wait0();
bool maybe_preempted = method->is_object_wait0();
__ reset_last_Java_frame(!maybe_preempted /* check_last_java_sp */);
// Unbox oop result, e.g. JNIHandles::resolve value.


@@ -1089,6 +1089,7 @@ address TemplateInterpreterGenerator::generate_math_entry(AbstractInterpreter::M
case Interpreter::java_lang_math_sin : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dsin); break;
case Interpreter::java_lang_math_cos : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dcos); break;
case Interpreter::java_lang_math_tan : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dtan); break;
case Interpreter::java_lang_math_sinh : /* run interpreted */ break;
case Interpreter::java_lang_math_tanh : /* run interpreted */ break;
case Interpreter::java_lang_math_cbrt : /* run interpreted */ break;
case Interpreter::java_lang_math_abs : /* run interpreted */ break;
@@ -1361,7 +1362,7 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
// convenient and the slow signature handler can use this same frame
// anchor.
bool support_vthread_preemption = Continuations::enabled() && LockingMode != LM_LEGACY;
bool support_vthread_preemption = Continuations::enabled();
// We have a TOP_IJAVA_FRAME here, which belongs to us.
Label last_java_pc;


@@ -339,11 +339,7 @@ int LIR_Assembler::emit_unwind_handler() {
if (method()->is_synchronized()) {
monitor_address(0, FrameMap::r10_opr);
stub = new MonitorExitStub(FrameMap::r10_opr, true, 0);
if (LockingMode == LM_MONITOR) {
__ j(*stub->entry());
} else {
__ unlock_object(x15, x14, x10, x16, *stub->entry());
}
__ unlock_object(x15, x14, x10, x16, *stub->entry());
__ bind(*stub->continuation());
}
@@ -1497,13 +1493,7 @@ void LIR_Assembler::emit_lock(LIR_OpLock* op) {
Register hdr = op->hdr_opr()->as_register();
Register lock = op->lock_opr()->as_register();
Register temp = op->scratch_opr()->as_register();
if (LockingMode == LM_MONITOR) {
if (op->info() != nullptr) {
add_debug_info_for_null_check_here(op->info());
__ null_check(obj, -1);
}
__ j(*op->stub()->entry());
} else if (op->code() == lir_lock) {
if (op->code() == lir_lock) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
// add debug info for NullPointerException only if one is possible
int null_check_offset = __ lock_object(hdr, obj, lock, temp, *op->stub()->entry());
@@ -1831,7 +1821,7 @@ void LIR_Assembler::leal(LIR_Opr addr, LIR_Opr dest, LIR_PatchCode patch_code, C
}
LIR_Address* adr = addr->as_address_ptr();
Register dst = dest->as_register_lo();
Register dst = dest->as_pointer_register();
assert_different_registers(dst, t0);
if (adr->base()->is_valid() && dst == adr->base()->as_pointer_register() && (!adr->index()->is_cpu_register())) {


@@ -837,7 +837,7 @@ void LIRGenerator::do_update_CRC32(Intrinsic* x) {
CallingConvention* cc = frame_map()->c_calling_convention(&signature);
const LIR_Opr result_reg = result_register_for(x->type());
LIR_Opr addr = new_pointer_register();
LIR_Opr addr = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(a), addr);
crc.load_item_force(cc->at(0));


@@ -49,8 +49,6 @@ void C1_MacroAssembler::float_cmp(bool is_float, int unordered_result,
}
int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr, Register temp, Label& slow_case) {
const int aligned_mask = BytesPerWord - 1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert_different_registers(hdr, obj, disp_hdr, temp, t0, t1);
int null_check_offset = -1;
@@ -61,97 +59,19 @@ int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr
null_check_offset = offset();
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(disp_hdr, obj, hdr, temp, t1, slow_case);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(hdr, obj);
lbu(hdr, Address(hdr, Klass::misc_flags_offset()));
test_bit(temp, hdr, exact_log2(KlassFlags::_misc_is_value_based_class));
bnez(temp, slow_case, /* is_far */ true);
}
Label done;
// Load object header
ld(hdr, Address(obj, hdr_offset));
// and mark it as unlocked
ori(hdr, hdr, markWord::unlocked_value);
// save unlocked object header into the displaced header location on the stack
sd(hdr, Address(disp_hdr, 0));
// test if object header is still the same (i.e. unlocked), and if so, store the
// displaced header address in the object header - if it is not the same, get the
// object header instead
la(temp, Address(obj, hdr_offset));
// if the object header was the same, we're done
cmpxchgptr(hdr, disp_hdr, temp, t1, done, /*fallthough*/nullptr);
// if the object header was not the same, it is now in the hdr register
// => test if it is a stack pointer into the same stack (recursive locking), i.e.:
//
// 1) (hdr & aligned_mask) == 0
// 2) sp <= hdr
// 3) hdr <= sp + page_size
//
// these 3 tests can be done by evaluating the following expression:
//
// (hdr - sp) & (aligned_mask - page_size)
//
// assuming both the stack pointer and page_size have their least
// significant 2 bits cleared and page_size is a power of 2
sub(hdr, hdr, sp);
mv(temp, aligned_mask - (int)os::vm_page_size());
andr(hdr, hdr, temp);
// for recursive locking, the result is zero => save it in the displaced header
// location (null in the displaced hdr location indicates recursive locking)
sd(hdr, Address(disp_hdr, 0));
// otherwise we don't care about the result and handle locking via runtime call
bnez(hdr, slow_case, /* is_far */ true);
// done
bind(done);
inc_held_monitor_count(t0);
}
lightweight_lock(disp_hdr, obj, hdr, temp, t1, slow_case);
return null_check_offset;
}
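The legacy path removed above detects recursive stack locking with a single masked subtraction, as the comment explains. A minimal standalone sketch of that arithmetic (assuming a 4 KiB page size and word-aligned stack pointers; the function name is invented for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the legacy recursive-lock test: a mark word that is really a
// BasicLock address on the current stack satisfies all of
//   1) (mark & aligned_mask) == 0  (word aligned; sp is aligned too)
//   2) sp <= mark                  (no borrow into the high bits)
//   3) mark - sp < page_size       (within one page above sp)
// which collapse into one AND against (aligned_mask - page_size).
bool is_recursive_stack_lock(uintptr_t mark, uintptr_t sp) {
  const uintptr_t aligned_mask = sizeof(void*) - 1;  // BytesPerWord - 1
  const uintptr_t page_size    = 4096;               // assumed page size
  return ((mark - sp) & (aligned_mask - page_size)) == 0;
}
```

On a 64-bit target the mask is 0xFFFF'FFFF'FFFF'F007, so any misalignment, any borrow from `mark < sp`, or any offset of a page or more leaves a nonzero bit behind.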
void C1_MacroAssembler::unlock_object(Register hdr, Register obj, Register disp_hdr, Register temp, Label& slow_case) {
const int aligned_mask = BytesPerWord - 1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert_different_registers(hdr, obj, disp_hdr, temp, t0, t1);
Label done;
if (LockingMode != LM_LIGHTWEIGHT) {
// load displaced header
ld(hdr, Address(disp_hdr, 0));
// if the loaded hdr is null we had recursive locking
// if we had recursive locking, we are done
beqz(hdr, done);
}
// load object
ld(obj, Address(disp_hdr, BasicObjectLock::obj_offset()));
verify_oop(obj);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj, hdr, temp, t1, slow_case);
} else if (LockingMode == LM_LEGACY) {
// test if object header is pointing to the displaced header, and if so, restore
// the displaced header in the object - if the object header is not pointing to
// the displaced header, get the object header instead
// if the object header was not pointing to the displaced header,
// we do unlocking via runtime call
if (hdr_offset) {
la(temp, Address(obj, hdr_offset));
cmpxchgptr(disp_hdr, hdr, temp, t1, done, &slow_case);
} else {
cmpxchgptr(disp_hdr, hdr, obj, t1, done, &slow_case);
}
// done
bind(done);
dec_held_monitor_count(t0);
}
lightweight_unlock(obj, hdr, temp, t1, slow_case);
}
// Defines obj, preserves var_size_in_bytes


@@ -43,240 +43,11 @@
#define BIND(label) bind(label); BLOCK_COMMENT(#label ":")
void C2_MacroAssembler::fast_lock(Register objectReg, Register boxReg,
Register tmp1Reg, Register tmp2Reg, Register tmp3Reg, Register tmp4Reg) {
// Use cr register to indicate the fast_lock result: zero for success; non-zero for failure.
Register flag = t1;
Register oop = objectReg;
Register box = boxReg;
Register disp_hdr = tmp1Reg;
Register tmp = tmp2Reg;
Label object_has_monitor;
// Finish fast lock successfully. Must be reached with flag == 0.
Label locked;
// Finish fast lock unsuccessfully. slow_path must be reached with flag != 0.
Label slow_path;
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_lock_lightweight");
assert_different_registers(oop, box, tmp, disp_hdr, flag, tmp3Reg, t0);
mv(flag, 1);
// Load markWord from object into displaced_header.
ld(disp_hdr, Address(oop, oopDesc::mark_offset_in_bytes()));
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, oop);
lbu(tmp, Address(tmp, Klass::misc_flags_offset()));
test_bit(tmp, tmp, exact_log2(KlassFlags::_misc_is_value_based_class));
bnez(tmp, slow_path);
}
// Check for existing monitor
test_bit(tmp, disp_hdr, exact_log2(markWord::monitor_value));
bnez(tmp, object_has_monitor);
if (LockingMode == LM_MONITOR) {
j(slow_path);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Set tmp to be (markWord of object | UNLOCK_VALUE).
ori(tmp, disp_hdr, markWord::unlocked_value);
// Initialize the box. (Must happen before we update the object mark!)
sd(tmp, Address(box, BasicLock::displaced_header_offset_in_bytes()));
// Compare object markWord with an unlocked value (tmp) and if
// equal exchange the stack address of our box with object markWord.
// On failure disp_hdr contains the possibly locked markWord.
cmpxchg(/*memory address*/oop, /*expected value*/tmp, /*new value*/box, Assembler::int64,
Assembler::aq, Assembler::rl, /*result*/disp_hdr);
beq(disp_hdr, tmp, locked);
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
// If the compare-and-exchange succeeded, then we found an unlocked
// object, have now locked it, and will continue at label locked.
// We did not see an unlocked object so try the fast recursive case.
// Check if the owner is self by comparing the value in the
// markWord of object (disp_hdr) with the stack pointer.
sub(disp_hdr, disp_hdr, sp);
mv(tmp, (intptr_t) (~(os::vm_page_size()-1) | (uintptr_t)markWord::lock_mask_in_place));
// If (mark & lock_mask) == 0 and mark - sp < page_size, we are stack-locking and goto label
// locked, hence we can store 0 as the displaced header in the box, which indicates that it
// is a recursive lock.
andr(tmp/*==0?*/, disp_hdr, tmp);
sd(tmp/*==0, perhaps*/, Address(box, BasicLock::displaced_header_offset_in_bytes()));
beqz(tmp, locked);
j(slow_path);
}
// Handle existing monitor.
bind(object_has_monitor);
// Try to CAS owner (no owner => current thread's _monitor_owner_id).
add(tmp, disp_hdr, (in_bytes(ObjectMonitor::owner_offset()) - markWord::monitor_value));
Register tid = tmp4Reg;
ld(tid, Address(xthread, JavaThread::monitor_owner_id_offset()));
cmpxchg(/*memory address*/tmp, /*expected value*/zr, /*new value*/tid, Assembler::int64,
Assembler::aq, Assembler::rl, /*result*/tmp3Reg); // cas succeeds if tmp3Reg == zr(expected)
// Store a non-null value into the box to avoid looking like a re-entrant
// lock. The fast-path monitor unlock code checks for
// markWord::monitor_value so use markWord::unused_mark which has the
// relevant bit set, and also matches ObjectSynchronizer::slow_enter.
mv(tmp, (address)markWord::unused_mark().value());
sd(tmp, Address(box, BasicLock::displaced_header_offset_in_bytes()));
beqz(tmp3Reg, locked); // CAS success means locking succeeded
bne(tmp3Reg, tid, slow_path); // Check for recursive locking
// Recursive lock case
increment(Address(disp_hdr, in_bytes(ObjectMonitor::recursions_offset()) - markWord::monitor_value), 1, tmp2Reg, tmp3Reg);
bind(locked);
mv(flag, zr);
if (LockingMode == LM_LEGACY) {
inc_held_monitor_count(t0);
}
#ifdef ASSERT
// Check that locked label is reached with flag == 0.
Label flag_correct;
beqz(flag, flag_correct);
stop("Fast Lock Flag != 0");
#endif
bind(slow_path);
#ifdef ASSERT
// Check that slow_path label is reached with flag != 0.
bnez(flag, flag_correct);
stop("Fast Lock Flag == 0");
bind(flag_correct);
#endif
// C2 uses the value of flag (0 vs !0) to determine the continuation.
}
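The removed legacy fast lock hinges on one compare-and-exchange: expect the mark word with the unlocked bit set, and swap in the stack address of the BasicLock box. A hedged sketch of that step with `std::atomic` (the constant and function name are illustrative, not HotSpot's):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

constexpr uintptr_t unlocked_value = 0x1;  // plays the role of markWord::unlocked_value

// Try to install a stack lock: succeed only if the object still looks
// unlocked. Acquire on success mirrors the aq ordering of the cmpxchg
// in the removed code; on failure the mark holds the competing value.
bool try_stack_lock(std::atomic<uintptr_t>& mark, uintptr_t box_addr) {
  uintptr_t expected = mark.load(std::memory_order_relaxed) | unlocked_value;
  return mark.compare_exchange_strong(expected, box_addr,
                                      std::memory_order_acquire,
                                      std::memory_order_relaxed);
}
```

A failed exchange leaves the observed mark available for the recursive-lock test shown earlier, which is exactly how the removed assembly used `disp_hdr` after `cmpxchg`.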
void C2_MacroAssembler::fast_unlock(Register objectReg, Register boxReg,
Register tmp1Reg, Register tmp2Reg) {
// Use cr register to indicate the fast_unlock result: zero for success; non-zero for failure.
Register flag = t1;
Register oop = objectReg;
Register box = boxReg;
Register disp_hdr = tmp1Reg;
Register owner_addr = tmp1Reg;
Register tmp = tmp2Reg;
Label object_has_monitor;
// Finish fast unlock successfully. Must be reached with flag == 0.
Label unlocked;
// Finish fast unlock unsuccessfully. slow_path must be reached with flag != 0.
Label slow_path;
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_unlock_lightweight");
assert_different_registers(oop, box, tmp, disp_hdr, flag, t0);
mv(flag, 1);
if (LockingMode == LM_LEGACY) {
// Find the lock address and load the displaced header from the stack.
ld(disp_hdr, Address(box, BasicLock::displaced_header_offset_in_bytes()));
// If the displaced header is 0, we have a recursive unlock.
beqz(disp_hdr, unlocked);
}
// Handle existing monitor.
ld(tmp, Address(oop, oopDesc::mark_offset_in_bytes()));
test_bit(t0, tmp, exact_log2(markWord::monitor_value));
bnez(t0, object_has_monitor);
if (LockingMode == LM_MONITOR) {
j(slow_path);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Check if it is still a lightweight lock; this is true if we
// see the stack address of the basicLock in the markWord of the
// object.
cmpxchg(/*memory address*/oop, /*expected value*/box, /*new value*/disp_hdr, Assembler::int64,
Assembler::relaxed, Assembler::rl, /*result*/tmp);
beq(box, tmp, unlocked); // box == tmp if cas succeeds
j(slow_path);
}
assert(oopDesc::mark_offset_in_bytes() == 0, "offset of _mark is not 0");
// Handle existing monitor.
bind(object_has_monitor);
subi(tmp, tmp, (int)markWord::monitor_value); // monitor
ld(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset()));
Label notRecursive;
beqz(disp_hdr, notRecursive); // Will be 0 if not recursive.
// Recursive lock
subi(disp_hdr, disp_hdr, 1);
sd(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset()));
j(unlocked);
bind(notRecursive);
// Compute owner address.
la(owner_addr, Address(tmp, ObjectMonitor::owner_offset()));
// Set owner to null.
// Release to satisfy the JMM
membar(MacroAssembler::LoadStore | MacroAssembler::StoreStore);
sd(zr, Address(owner_addr));
// We need a full fence after clearing owner to avoid stranding.
// StoreLoad achieves this.
membar(StoreLoad);
// Check if the entry_list is empty.
ld(t0, Address(tmp, ObjectMonitor::entry_list_offset()));
beqz(t0, unlocked); // If so we are done.
// Check if there is a successor.
ld(t0, Address(tmp, ObjectMonitor::succ_offset()));
bnez(t0, unlocked); // If so we are done.
// Save the monitor pointer in the current thread, so we can try to
// reacquire the lock in SharedRuntime::monitor_exit_helper().
sd(tmp, Address(xthread, JavaThread::unlocked_inflated_monitor_offset()));
mv(flag, 1);
j(slow_path);
bind(unlocked);
mv(flag, zr);
if (LockingMode == LM_LEGACY) {
dec_held_monitor_count(t0);
}
#ifdef ASSERT
// Check that unlocked label is reached with flag == 0.
Label flag_correct;
beqz(flag, flag_correct);
stop("Fast Unlock Flag != 0");
#endif
bind(slow_path);
#ifdef ASSERT
// Check that slow_path label is reached with flag != 0.
bnez(flag, flag_correct);
stop("Fast Unlock Flag == 0");
bind(flag_correct);
#endif
// C2 uses the value of flag (0 vs !0) to determine the continuation.
}
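The monitor-exit sequence in `fast_unlock` above has a subtle race check: after storing null to the owner, a full StoreLoad fence orders that store before the `entry_list`/`succ` loads, so a thread that enqueued itself concurrently cannot be stranded. A simplified sketch with hypothetical field layout:

```cpp
#include <atomic>
#include <cassert>

// Simplified monitor; field names echo ObjectMonitor but the layout is invented.
struct Monitor {
  std::atomic<void*> owner{nullptr};
  std::atomic<void*> entry_list{nullptr};
  std::atomic<void*> succ{nullptr};
};

// Returns true if the unlock completed on the fast path; false means the
// caller must fall into the slow path to wake (or re-acquire for) a waiter.
bool fast_exit(Monitor& m) {
  m.owner.store(nullptr, std::memory_order_release);    // Set owner to null
  std::atomic_thread_fence(std::memory_order_seq_cst);  // the StoreLoad fence
  if (m.entry_list.load() == nullptr) return true;      // no waiters queued
  if (m.succ.load() != nullptr) return true;            // a successor will run
  return false;                                         // avoid stranding: slow path
}
```

This mirrors the branch structure in the assembly: `beqz` on the entry list, `bnez` on the successor, and otherwise save the monitor pointer and jump to `slow_path`.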
void C2_MacroAssembler::fast_lock_lightweight(Register obj, Register box,
Register tmp1, Register tmp2, Register tmp3, Register tmp4) {
// Flag register, zero for success; non-zero for failure.
Register flag = t1;
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert_different_registers(obj, box, tmp1, tmp2, tmp3, tmp4, flag, t0);
mv(flag, 1);
@@ -439,7 +210,6 @@ void C2_MacroAssembler::fast_unlock_lightweight(Register obj, Register box,
// Flag register, zero for success; non-zero for failure.
Register flag = t1;
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert_different_registers(obj, box, tmp1, tmp2, tmp3, flag, t0);
mv(flag, 1);
@@ -2823,10 +2593,14 @@ void C2_MacroAssembler::char_array_compress_v(Register src, Register dst, Regist
// Intrinsic for
//
// - sun/nio/cs/ISO_8859_1$Encoder.implEncodeISOArray
// return the number of characters copied.
// - java/lang/StringUTF16.compress
// return index of non-latin1 character if copy fails, otherwise 'len'.
// - sun.nio.cs.ISO_8859_1.Encoder#encodeISOArray0(byte[] sa, int sp, byte[] da, int dp, int len)
// Encodes char[] to byte[] in ISO-8859-1
//
// - java.lang.StringCoding#encodeISOArray0(byte[] sa, int sp, byte[] da, int dp, int len)
// Encodes byte[] (containing UTF-16) to byte[] in ISO-8859-1
//
// - java.lang.StringCoding#encodeAsciiArray0(char[] sa, int sp, byte[] da, int dp, int len)
// Encodes char[] to byte[] in ASCII
//
// This version always returns the number of characters copied. A successful
// copy will complete with the post-condition: 'res' == 'len', while an


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -49,11 +49,6 @@
const int STUB_THRESHOLD, Label *STUB, Label *DONE);
public:
// Code used by cmpFastLock and cmpFastUnlock mach instructions in .ad file.
void fast_lock(Register object, Register box,
Register tmp1, Register tmp2, Register tmp3, Register tmp4);
void fast_unlock(Register object, Register box, Register tmp1, Register tmp2);
// Code used by cmpFastLockLightweight and cmpFastUnlockLightweight mach instructions in .ad file.
void fast_lock_lightweight(Register object, Register box,
Register tmp1, Register tmp2, Register tmp3, Register tmp4);


@@ -194,6 +194,9 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
}
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
}
//////// Thaw
// Fast path


@@ -733,84 +733,26 @@ void InterpreterMacroAssembler::leave_jfr_critical_section() {
void InterpreterMacroAssembler::lock_object(Register lock_reg)
{
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be c_rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
} else {
Label count, done;
const Register swap_reg = x10;
const Register tmp = c_rarg2;
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp2 = c_rarg4;
const Register tmp3 = c_rarg5;
const Register tmp = c_rarg2;
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp2 = c_rarg4;
const Register tmp3 = c_rarg5;
const int obj_offset = in_bytes(BasicObjectLock::obj_offset());
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset +
BasicLock::displaced_header_offset_in_bytes();
// Load object pointer into obj_reg (c_rarg3)
ld(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
Label slow_case;
Label done, slow_case;
lightweight_lock(lock_reg, obj_reg, tmp, tmp2, tmp3, slow_case);
j(done);
// Load object pointer into obj_reg c_rarg3
ld(obj_reg, Address(lock_reg, obj_offset));
bind(slow_case);
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(lock_reg, obj_reg, tmp, tmp2, tmp3, slow_case);
j(done);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, obj_reg);
lbu(tmp, Address(tmp, Klass::misc_flags_offset()));
test_bit(tmp, tmp, exact_log2(KlassFlags::_misc_is_value_based_class));
bnez(tmp, slow_case);
}
// Load (object->mark() | 1) into swap_reg
ld(t0, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
ori(swap_reg, t0, 1);
// Save (object->mark() | 1) into BasicLock's displaced header
sd(swap_reg, Address(lock_reg, mark_offset));
assert(lock_offset == 0,
"displaced header must be first word in BasicObjectLock");
cmpxchg_obj_header(swap_reg, lock_reg, obj_reg, tmp, count, /*fallthrough*/nullptr);
// Test if the oopMark is an obvious stack pointer, i.e.,
// 1) (mark & 7) == 0, and
// 2) sp <= mark < mark + os::pagesize()
//
// These 3 tests can be done by evaluating the following
// expression: ((mark - sp) & (7 - os::vm_page_size())),
// assuming both stack pointer and pagesize have their
// least significant 3 bits clear.
// NOTE: the oopMark is in swap_reg x10 as the result of cmpxchg
sub(swap_reg, swap_reg, sp);
mv(t0, (int64_t)(7 - (int)os::vm_page_size()));
andr(swap_reg, swap_reg, t0);
// Save the test result, for recursive case, the result is zero
sd(swap_reg, Address(lock_reg, mark_offset));
bnez(swap_reg, slow_case);
bind(count);
inc_held_monitor_count(t0);
j(done);
}
bind(slow_case);
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
bind(done);
}
bind(done);
}
@@ -829,58 +771,30 @@ void InterpreterMacroAssembler::unlock_object(Register lock_reg)
{
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be c_rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
} else {
Label count, done;
const Register swap_reg = x10;
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp_reg = c_rarg4; // Temporary used by lightweight_unlock
const Register swap_reg = x10;
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
const Register tmp_reg = c_rarg4; // Temporary used by lightweight_unlock
save_bcp(); // Save in case of exception
save_bcp(); // Save in case of exception
// Load oop into obj_reg (c_rarg3)
ld(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
if (LockingMode != LM_LIGHTWEIGHT) {
// Convert from BasicObjectLock structure to object and BasicLock
// structure Store the BasicLock address into x10
la(swap_reg, Address(lock_reg, BasicObjectLock::lock_offset()));
}
// Free entry
sd(zr, Address(lock_reg, BasicObjectLock::obj_offset()));
// Load oop into obj_reg(c_rarg3)
ld(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
Label done, slow_case;
lightweight_unlock(obj_reg, header_reg, swap_reg, tmp_reg, slow_case);
j(done);
// Free entry
sd(zr, Address(lock_reg, BasicObjectLock::obj_offset()));
bind(slow_case);
// Call the runtime routine for slow case.
sd(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset())); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
Label slow_case;
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj_reg, header_reg, swap_reg, tmp_reg, slow_case);
j(done);
} else if (LockingMode == LM_LEGACY) {
// Load the old header from BasicLock structure
ld(header_reg, Address(swap_reg,
BasicLock::displaced_header_offset_in_bytes()));
// Test for recursion
beqz(header_reg, count);
// Atomic swap back the old header
cmpxchg_obj_header(swap_reg, header_reg, obj_reg, tmp_reg, count, &slow_case);
bind(count);
dec_held_monitor_count(t0);
j(done);
}
bind(slow_case);
// Call the runtime routine for slow case.
sd(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset())); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
bind(done);
restore_bcp();
}
bind(done);
restore_bcp();
}
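All of the interpreter, C1, and C2 paths in this change now funnel into `lightweight_lock`/`lightweight_unlock`, which are built on a small per-thread lock stack. A deliberately simplified sketch of that idea (capacity and layout are invented; the real code additionally flips mark-word bits with CAS):

```cpp
#include <cassert>
#include <cstddef>

// Toy lock stack: locking pushes the object, unlocking pops it, and a
// recursive lock is simply the same object pushed again. Returning false
// corresponds to branching to the slow path in the assembly above.
struct LockStack {
  static const size_t CAPACITY = 8;  // invented; real capacity differs
  const void* elems[CAPACITY];
  size_t top = 0;

  bool push(const void* obj) {       // lightweight_lock
    if (top == CAPACITY) return false;
    elems[top++] = obj;
    return true;
  }
  bool pop(const void* obj) {        // lightweight_unlock
    if (top == 0 || elems[top - 1] != obj) return false;
    --top;
    return true;
  }
};
```

Seen this way, the removed `inc_held_monitor_count`/`dec_held_monitor_count` bookkeeping becomes unnecessary: the lock stack itself records how many locks the thread holds.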


@@ -6421,7 +6421,6 @@ void MacroAssembler::test_bit(Register Rd, Register Rs, uint32_t bit_pos) {
// - tmp1, tmp2, tmp3: temporary registers, will be destroyed
// - slow: branched to if locking fails
void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Register tmp1, Register tmp2, Register tmp3, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(basic_lock, obj, tmp1, tmp2, tmp3, t0);
Label push;
@@ -6481,7 +6480,6 @@ void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Registe
// - tmp1, tmp2, tmp3: temporary registers
// - slow: branched to if unlocking fails
void MacroAssembler::lightweight_unlock(Register obj, Register tmp1, Register tmp2, Register tmp3, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(obj, tmp1, tmp2, tmp3, t0);
#ifdef ASSERT


@@ -1,5 +1,5 @@
//
// Copyright (c) 2003, 2024, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
// Copyright (c) 2020, 2024, Huawei Technologies Co., Ltd. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -11021,45 +11021,9 @@ instruct tlsLoadP(javaThread_RegP dst)
// inlined locking and unlocking
// using t1 as the 'flag' register to bridge the BoolNode producers and consumers
instruct cmpFastLock(rFlagsReg cr, iRegP object, iRegP box,
iRegPNoSp tmp1, iRegPNoSp tmp2, iRegPNoSp tmp3, iRegPNoSp tmp4)
%{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set cr (FastLock object box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4);
ins_cost(10 * DEFAULT_COST);
format %{ "fastlock $object,$box\t! kills $tmp1,$tmp2,$tmp3,$tmp4 #@cmpFastLock" %}
ins_encode %{
__ fast_lock($object$$Register, $box$$Register,
$tmp1$$Register, $tmp2$$Register, $tmp3$$Register, $tmp4$$Register);
%}
ins_pipe(pipe_serial);
%}
// using t1 as the 'flag' register to bridge the BoolNode producers and consumers
instruct cmpFastUnlock(rFlagsReg cr, iRegP object, iRegP box, iRegPNoSp tmp1, iRegPNoSp tmp2)
%{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set cr (FastUnlock object box));
effect(TEMP tmp1, TEMP tmp2);
ins_cost(10 * DEFAULT_COST);
format %{ "fastunlock $object,$box\t! kills $tmp1, $tmp2, #@cmpFastUnlock" %}
ins_encode %{
__ fast_unlock($object$$Register, $box$$Register, $tmp1$$Register, $tmp2$$Register);
%}
ins_pipe(pipe_serial);
%}
instruct cmpFastLockLightweight(rFlagsReg cr, iRegP object, iRegP box,
iRegPNoSp tmp1, iRegPNoSp tmp2, iRegPNoSp tmp3, iRegPNoSp tmp4)
%{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set cr (FastLock object box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4);
@@ -11074,10 +11038,10 @@ instruct cmpFastLockLightweight(rFlagsReg cr, iRegP object, iRegP box,
ins_pipe(pipe_serial);
%}
// using t1 as the 'flag' register to bridge the BoolNode producers and consumers
instruct cmpFastUnlockLightweight(rFlagsReg cr, iRegP object, iRegP box,
iRegPNoSp tmp1, iRegPNoSp tmp2, iRegPNoSp tmp3)
%{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set cr (FastUnlock object box));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3);


@@ -110,6 +110,7 @@ source %{
if (vlen < 4) {
return false;
}
break;
case Op_VectorCastHF2F:
case Op_VectorCastF2HF:
case Op_AddVHF:


@@ -1637,7 +1637,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// We use the same pc/oopMap repeatedly when we call out.
Label native_return;
if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
if (method->is_object_wait0()) {
// For convenience we use the pc we want to resume to in case of preemption on Object.wait.
__ set_last_Java_frame(sp, noreg, native_return, t0);
} else {
@@ -1679,8 +1679,6 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
Label lock_done;
if (method->is_synchronized()) {
Label count;
const int mark_word_offset = BasicLock::displaced_header_offset_in_bytes();
// Get the handle (the 2nd argument)
@@ -1693,42 +1691,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// Load the oop from the handle
__ ld(obj_reg, Address(oop_handle_reg, 0));
if (LockingMode == LM_MONITOR) {
__ j(slow_path_lock);
} else if (LockingMode == LM_LEGACY) {
// Load (object->mark() | 1) into swap_reg % x10
__ ld(t0, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
__ ori(swap_reg, t0, 1);
// Save (object->mark() | 1) into BasicLock's displaced header
__ sd(swap_reg, Address(lock_reg, mark_word_offset));
// src -> dest if dest == x10 else x10 <- dest
__ cmpxchg_obj_header(x10, lock_reg, obj_reg, lock_tmp, count, /*fallthrough*/nullptr);
// Test if the oopMark is an obvious stack pointer, i.e.,
// 1) (mark & 3) == 0, and
// 2) sp <= mark < mark + os::pagesize()
// These 3 tests can be done by evaluating the following
// expression: ((mark - sp) & (3 - os::vm_page_size())),
// assuming both stack pointer and pagesize have their
// least significant 2 bits clear.
// NOTE: the oopMark is in swap_reg % x10 as the result of cmpxchg
__ sub(swap_reg, swap_reg, sp);
__ mv(t0, 3 - (int)os::vm_page_size());
__ andr(swap_reg, swap_reg, t0);
// Save the test result, for recursive case, the result is zero
__ sd(swap_reg, Address(lock_reg, mark_word_offset));
__ bnez(swap_reg, slow_path_lock);
__ bind(count);
__ inc_held_monitor_count(t0);
} else {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
__ lightweight_lock(lock_reg, obj_reg, swap_reg, tmp, lock_tmp, slow_path_lock);
}
__ lightweight_lock(lock_reg, obj_reg, swap_reg, tmp, lock_tmp, slow_path_lock);
// Slow path will re-enter here
__ bind(lock_done);
@@ -1789,7 +1752,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
__ membar(MacroAssembler::LoadStore | MacroAssembler::StoreStore);
__ sw(t0, Address(t1));
if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
if (method->is_object_wait0()) {
// Check preemption for Object.wait()
__ ld(t1, Address(xthread, JavaThread::preempt_alternate_return_offset()));
__ beqz(t1, native_return);
@@ -1818,48 +1781,18 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// Get locked oop from the handle we passed to jni
__ ld(obj_reg, Address(oop_handle_reg, 0));
Label done, not_recursive;
if (LockingMode == LM_LEGACY) {
// Simple recursive lock?
__ ld(t0, Address(sp, lock_slot_offset * VMRegImpl::stack_slot_size));
__ bnez(t0, not_recursive);
__ dec_held_monitor_count(t0);
__ j(done);
}
__ bind(not_recursive);
// Must save x10 if it is live now because cmpxchg must use it
if (ret_type != T_FLOAT && ret_type != T_DOUBLE && ret_type != T_VOID) {
save_native_result(masm, ret_type, stack_slots);
}
if (LockingMode == LM_MONITOR) {
__ j(slow_path_unlock);
} else if (LockingMode == LM_LEGACY) {
// get address of the stack lock
__ la(x10, Address(sp, lock_slot_offset * VMRegImpl::stack_slot_size));
// get old displaced header
__ ld(old_hdr, Address(x10, 0));
// Atomic swap old header if oop still contains the stack lock
Label count;
__ cmpxchg_obj_header(x10, old_hdr, obj_reg, lock_tmp, count, &slow_path_unlock);
__ bind(count);
__ dec_held_monitor_count(t0);
} else {
assert(LockingMode == LM_LIGHTWEIGHT, "");
__ lightweight_unlock(obj_reg, old_hdr, swap_reg, lock_tmp, slow_path_unlock);
}
__ lightweight_unlock(obj_reg, old_hdr, swap_reg, lock_tmp, slow_path_unlock);
// slow path re-enters here
__ bind(unlock_done);
if (ret_type != T_FLOAT && ret_type != T_DOUBLE && ret_type != T_VOID) {
restore_native_result(masm, ret_type, stack_slots);
}
__ bind(done);
}
Label dtrace_method_exit, dtrace_method_exit_done;


@@ -1253,22 +1253,17 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
__ mv(t0, _thread_in_Java);
__ sw(t0, Address(xthread, JavaThread::thread_state_offset()));
if (LockingMode != LM_LEGACY) {
// Check preemption for Object.wait()
Label not_preempted;
__ ld(t1, Address(xthread, JavaThread::preempt_alternate_return_offset()));
__ beqz(t1, not_preempted);
__ sd(zr, Address(xthread, JavaThread::preempt_alternate_return_offset()));
__ jr(t1);
__ bind(native_return);
__ restore_after_resume(true /* is_native */);
// reload result_handler
__ ld(result_handler, Address(fp, frame::interpreter_frame_result_handler_offset * wordSize));
__ bind(not_preempted);
} else {
// any pc will do so just use this one for LM_LEGACY to keep code together.
__ bind(native_return);
}
// Check preemption for Object.wait()
Label not_preempted;
__ ld(t1, Address(xthread, JavaThread::preempt_alternate_return_offset()));
__ beqz(t1, not_preempted);
__ sd(zr, Address(xthread, JavaThread::preempt_alternate_return_offset()));
__ jr(t1);
__ bind(native_return);
__ restore_after_resume(true /* is_native */);
// reload result_handler
__ ld(result_handler, Address(fp, frame::interpreter_frame_result_handler_offset * wordSize));
__ bind(not_preempted);
// reset_last_Java_frame
__ reset_last_Java_frame(true);


@@ -229,11 +229,7 @@ int LIR_Assembler::emit_unwind_handler() {
LIR_Opr lock = FrameMap::as_opr(Z_R1_scratch);
monitor_address(0, lock);
stub = new MonitorExitStub(lock, true, 0);
if (LockingMode == LM_MONITOR) {
__ branch_optimized(Assembler::bcondAlways, *stub->entry());
} else {
__ unlock_object(Rtmp1, Rtmp2, lock->as_register(), *stub->entry());
}
__ unlock_object(Rtmp1, Rtmp2, lock->as_register(), *stub->entry());
__ bind(*stub->continuation());
}
@@ -2714,13 +2710,7 @@ void LIR_Assembler::emit_lock(LIR_OpLock* op) {
Register obj = op->obj_opr()->as_register(); // May not be an oop.
Register hdr = op->hdr_opr()->as_register();
Register lock = op->lock_opr()->as_register();
if (LockingMode == LM_MONITOR) {
if (op->info() != nullptr) {
add_debug_info_for_null_check_here(op->info());
__ null_check(obj);
}
__ branch_optimized(Assembler::bcondAlways, *op->stub()->entry());
} else if (op->code() == lir_lock) {
if (op->code() == lir_lock) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
// Add debug info for NullPointerException only if one is possible.
if (op->info() != nullptr) {


@@ -58,8 +58,6 @@ void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
}
void C1_MacroAssembler::lock_object(Register Rmark, Register Roop, Register Rbox, Label& slow_case) {
const int hdr_offset = oopDesc::mark_offset_in_bytes();
const Register tmp = Z_R1_scratch;
assert_different_registers(Rmark, Roop, Rbox, tmp);
@@ -69,95 +67,17 @@ void C1_MacroAssembler::lock_object(Register Rmark, Register Roop, Register Rbox
// Save object being locked into the BasicObjectLock...
z_stg(Roop, Address(Rbox, BasicObjectLock::obj_offset()));
assert(LockingMode != LM_MONITOR, "LM_MONITOR is already handled, by emit_lock()");
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(Rbox, Roop, Rmark, tmp, slow_case);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, Roop);
z_tm(Address(tmp, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
branch_optimized(Assembler::bcondAllOne, slow_case);
}
NearLabel done;
// Load object header.
z_lg(Rmark, Address(Roop, hdr_offset));
// and mark it as unlocked.
z_oill(Rmark, markWord::unlocked_value);
// Save unlocked object header into the displaced header location on the stack.
z_stg(Rmark, Address(Rbox, BasicLock::displaced_header_offset_in_bytes()));
// Test if object header is still the same (i.e. unlocked), and if so, store the
// displaced header address in the object header. If it is not the same, get the
// object header instead.
z_csg(Rmark, Rbox, hdr_offset, Roop);
// If the object header was the same, we're done.
branch_optimized(Assembler::bcondEqual, done);
// If the object header was not the same, it is now in the Rmark register.
// => Test if it is a stack pointer into the same stack (recursive locking), i.e.:
//
// 1) (Rmark & markWord::lock_mask_in_place) == 0
// 2) rsp <= Rmark
// 3) Rmark <= rsp + page_size
//
// These 3 tests can be done by evaluating the following expression:
//
// (Rmark - Z_SP) & (~(page_size-1) | markWord::lock_mask_in_place)
//
// assuming both the stack pointer and page_size have their least
// significant 2 bits cleared and page_size is a power of 2
z_sgr(Rmark, Z_SP);
load_const_optimized(Z_R0_scratch, (~(os::vm_page_size() - 1) | markWord::lock_mask_in_place));
z_ngr(Rmark, Z_R0_scratch); // AND sets CC (result eq/ne 0).
// For recursive locking, the result is zero. => Save it in the displaced header
// location (null in the displaced Rmark location indicates recursive locking).
z_stg(Rmark, Address(Rbox, BasicLock::displaced_header_offset_in_bytes()));
// Otherwise we don't care about the result and handle locking via runtime call.
branch_optimized(Assembler::bcondNotZero, slow_case);
// done
bind(done);
} else {
assert(false, "Unhandled LockingMode:%d", LockingMode);
}
lightweight_lock(Rbox, Roop, Rmark, tmp, slow_case);
}
void C1_MacroAssembler::unlock_object(Register Rmark, Register Roop, Register Rbox, Label& slow_case) {
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert_different_registers(Rmark, Roop, Rbox);
NearLabel done;
if (LockingMode != LM_LIGHTWEIGHT) {
// Load displaced header.
z_ltg(Rmark, Address(Rbox, BasicLock::displaced_header_offset_in_bytes()));
// If the loaded Rmark is null we had recursive locking, and we are done.
z_bre(done);
}
// Load object.
z_lg(Roop, Address(Rbox, BasicObjectLock::obj_offset()));
verify_oop(Roop, FILE_AND_LINE);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(Roop, Rmark, Z_R1_scratch, slow_case);
} else if (LockingMode == LM_LEGACY) {
// Test if object header is pointing to the displaced header, and if so, restore
// the displaced header in the object. If the object header is not pointing to
// the displaced header, get the object header instead.
z_csg(Rbox, Rmark, hdr_offset, Roop);
// If the object header was not pointing to the displaced header,
// we do unlocking via runtime call.
branch_optimized(Assembler::bcondNotEqual, slow_case);
} else {
assert(false, "Unhandled LockingMode:%d", LockingMode);
}
// done
bind(done);
lightweight_unlock(Roop, Rmark, Z_R1_scratch, slow_case);
}
void C1_MacroAssembler::try_allocate(


@@ -60,6 +60,10 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
Unimplemented();
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
Unimplemented();
}
inline void FreezeBase::patch_stack_pd(intptr_t* frame_sp, intptr_t* heap_sp) {
Unimplemented();
}


@@ -1008,109 +1008,18 @@ void InterpreterMacroAssembler::remove_activation(TosState state,
// object (Z_R11, Z_R2) - Address of the object to be locked.
// templateTable (monitorenter) is using Z_R2 for object
void InterpreterMacroAssembler::lock_object(Register monitor, Register object) {
if (LockingMode == LM_MONITOR) {
call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter), monitor);
return;
}
// template code: (for LM_LEGACY)
//
// markWord displaced_header = obj->mark().set_unlocked();
// monitor->lock()->set_displaced_header(displaced_header);
// if (Atomic::cmpxchg(/*addr*/obj->mark_addr(), /*cmp*/displaced_header, /*ex=*/monitor) == displaced_header) {
// // We stored the monitor address into the object's mark word.
// } else if (THREAD->is_lock_owned((address)displaced_header))
// // Simple recursive case.
// monitor->lock()->set_displaced_header(nullptr);
// } else {
// // Slow path.
// InterpreterRuntime::monitorenter(THREAD, monitor);
// }
const int hdr_offset = oopDesc::mark_offset_in_bytes();
const Register header = Z_ARG5;
const Register object_mark_addr = Z_ARG4;
const Register current_header = Z_ARG5;
const Register tmp = Z_R1_scratch;
NearLabel done, slow_case;
// markWord header = obj->mark().set_unlocked();
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(monitor, object, header, tmp, slow_case);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp, object);
z_tm(Address(tmp, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
z_btrue(slow_case);
}
// Load markWord from object into header.
z_lg(header, hdr_offset, object);
// Set header to be (markWord of object | UNLOCK_VALUE).
// This will not change anything if it was unlocked before.
z_oill(header, markWord::unlocked_value);
// monitor->lock()->set_displaced_header(displaced_header);
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset + BasicLock::displaced_header_offset_in_bytes();
// Initialize the box (Must happen before we update the object mark!).
z_stg(header, mark_offset, monitor);
// if (Atomic::cmpxchg(/*addr*/obj->mark_addr(), /*cmp*/displaced_header, /*ex=*/monitor) == displaced_header) {
// not necessary, use offset in instruction directly.
// add2reg(object_mark_addr, hdr_offset, object);
// Store stack address of the BasicObjectLock (this is monitor) into object.
z_csg(header, monitor, hdr_offset, object);
assert(current_header == header,
"must be same register"); // Identified two registers from z/Architecture.
z_bre(done);
// } else if (THREAD->is_lock_owned((address)displaced_header))
// // Simple recursive case.
// monitor->lock()->set_displaced_header(nullptr);
// We did not see an unlocked object so try the fast recursive case.
// Check if owner is self by comparing the value in the markWord of object
// (current_header) with the stack pointer.
z_sgr(current_header, Z_SP);
assert(os::vm_page_size() > 0xfff, "page size too small - change the constant");
// The prior sequence "LGR, NGR, LTGR" can be done better
// (Z_R1 is temp and not used after here).
load_const_optimized(Z_R0, (~(os::vm_page_size() - 1) | markWord::lock_mask_in_place));
z_ngr(Z_R0, current_header); // AND sets CC (result eq/ne 0)
// If condition is true we are done and hence we can store 0 in the displaced
// header indicating it is a recursive lock and be done.
z_brne(slow_case);
z_release(); // Membar unnecessary on zarch because the above csg does a sync before and after.
z_stg(Z_R0/*==0!*/, mark_offset, monitor);
}
lightweight_lock(monitor, object, header, tmp, slow_case);
z_bru(done);
// } else {
// // Slow path.
// InterpreterRuntime::monitorenter(THREAD, monitor);
// None of the above fast optimizations worked so we have to get into the
// slow case of monitor enter.
bind(slow_case);
call_VM(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
monitor);
// }
bind(done);
}
@@ -1122,28 +1031,6 @@ void InterpreterMacroAssembler::lock_object(Register monitor, Register object) {
//
// Throw IllegalMonitorException if object is not locked by current thread.
void InterpreterMacroAssembler::unlock_object(Register monitor, Register object) {
if (LockingMode == LM_MONITOR) {
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), monitor);
return;
}
// else {
// template code: (for LM_LEGACY):
//
// if ((displaced_header = monitor->displaced_header()) == nullptr) {
// // Recursive unlock. Mark the monitor unlocked by setting the object field to null.
// monitor->set_obj(nullptr);
// } else if (Atomic::cmpxchg(obj->mark_addr(), monitor, displaced_header) == monitor) {
// // We swapped the unlocked mark in displaced_header into the object's mark word.
// monitor->set_obj(nullptr);
// } else {
// // Slow path.
// InterpreterRuntime::monitorexit(monitor);
// }
const int hdr_offset = oopDesc::mark_offset_in_bytes();
const Register header = Z_ARG4;
const Register current_header = Z_R1_scratch;
Address obj_entry(monitor, BasicObjectLock::obj_offset());
@@ -1159,56 +1046,16 @@ void InterpreterMacroAssembler::unlock_object(Register monitor, Register object)
assert_different_registers(monitor, object, header, current_header);
// if ((displaced_header = monitor->displaced_header()) == nullptr) {
// // Recursive unlock. Mark the monitor unlocked by setting the object field to null.
// monitor->set_obj(nullptr);
// monitor->lock()->set_displaced_header(displaced_header);
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset + BasicLock::displaced_header_offset_in_bytes();
clear_mem(obj_entry, sizeof(oop));
if (LockingMode != LM_LIGHTWEIGHT) {
// Test first if we are in the fast recursive case.
MacroAssembler::load_and_test_long(header, Address(monitor, mark_offset));
z_bre(done); // header == 0 -> goto done
}
// } else if (Atomic::cmpxchg(obj->mark_addr(), monitor, displaced_header) == monitor) {
// // We swapped the unlocked mark in displaced_header into the object's mark word.
// monitor->set_obj(nullptr);
// If we still have a lightweight lock, unlock the object and be done.
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(object, header, current_header, slow_case);
z_bru(done);
} else {
// The markword is expected to be at offset 0.
// This is not required on s390, at least not here.
assert(hdr_offset == 0, "unlock_object: review code below");
// We have the displaced header in header. If the lock is still
// lightweight, it will contain the monitor address and we'll store the
// displaced header back into the object's mark word.
z_lgr(current_header, monitor);
z_csg(current_header, header, hdr_offset, object);
z_bre(done);
}
// } else {
// // Slow path.
// InterpreterRuntime::monitorexit(monitor);
lightweight_unlock(object, header, current_header, slow_case);
z_bru(done);
// The lock has been converted into a heavy lock and hence
// we need to get into the slow case.
bind(slow_case);
z_stg(object, obj_entry); // Restore object entry, has been cleared above.
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), monitor);
// }
bind(done);
}


@@ -3766,211 +3766,6 @@ void MacroAssembler::increment_counter_eq(address counter_address, Register tmp1
bind(l);
}
// "The box" is the space on the stack where we copy the object mark.
void MacroAssembler::compiler_fast_lock_object(Register oop, Register box, Register temp1, Register temp2) {
assert(LockingMode != LM_LIGHTWEIGHT, "uses fast_lock_lightweight");
assert_different_registers(oop, box, temp1, temp2, Z_R0_scratch);
Register displacedHeader = temp1;
Register currentHeader = temp1;
Register temp = temp2;
NearLabel done, object_has_monitor;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
BLOCK_COMMENT("compiler_fast_lock_object {");
// Load markWord from oop into mark.
z_lg(displacedHeader, hdr_offset, oop);
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(temp, oop);
z_tm(Address(temp, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
z_brne(done);
}
// Handle existing monitor.
// The object has an existing monitor iff (mark & monitor_value) != 0.
guarantee(Immediate::is_uimm16(markWord::monitor_value), "must be half-word");
z_tmll(displacedHeader, markWord::monitor_value);
z_brnaz(object_has_monitor);
if (LockingMode == LM_MONITOR) {
// Set NE to indicate 'failure' -> take slow-path
// From loading the markWord, we know that oop != nullptr
z_ltgr(oop, oop);
z_bru(done);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Set mark to markWord | markWord::unlocked_value.
z_oill(displacedHeader, markWord::unlocked_value);
// Load Compare Value application register.
// Initialize the box (must happen before we update the object mark).
z_stg(displacedHeader, BasicLock::displaced_header_offset_in_bytes(), box);
// Compare object markWord with mark and if equal, exchange box with object markWord.
// If the compare-and-swap succeeds, then we found an unlocked object and have now locked it.
z_csg(displacedHeader, box, hdr_offset, oop);
assert(currentHeader == displacedHeader, "must be same register"); // Identified two registers from z/Architecture.
z_bre(done);
// We did not see an unlocked object
// currentHeader contains what is currently stored in the oop's markWord.
// We might have a recursive case. Verify by checking if the owner is self.
// To do so, compare the value in the markWord (currentHeader) with the stack pointer.
z_sgr(currentHeader, Z_SP);
load_const_optimized(temp, (~(os::vm_page_size() - 1) | markWord::lock_mask_in_place));
z_ngr(currentHeader, temp);
// result zero: owner is self -> recursive lock. Indicate that by storing 0 in the box.
// result not-zero: attempt failed. We don't hold the lock -> go for slow case.
z_stg(currentHeader/*==0 or not 0*/, BasicLock::displaced_header_offset_in_bytes(), box);
z_bru(done);
}
bind(object_has_monitor);
Register zero = temp;
Register monitor_tagged = displacedHeader; // Tagged with markWord::monitor_value.
// Try to CAS owner (no owner => current thread's _monitor_owner_id).
// If csg succeeds then CR=EQ, otherwise, register zero is filled
// with the current owner.
z_lghi(zero, 0);
z_lg(Z_R0_scratch, Address(Z_thread, JavaThread::monitor_owner_id_offset()));
z_csg(zero, Z_R0_scratch, OM_OFFSET_NO_MONITOR_VALUE_TAG(owner), monitor_tagged);
// Store a non-null value into the box.
z_stg(box, BasicLock::displaced_header_offset_in_bytes(), box);
z_bre(done); // acquired the lock for the first time.
BLOCK_COMMENT("fast_path_recursive_lock {");
// Check if we are already the owner (recursive lock)
z_cgr(Z_R0_scratch, zero); // owner is stored in zero by "z_csg" above
z_brne(done); // not a recursive lock
// Current thread already owns the lock. Just increment recursion count.
z_agsi(Address(monitor_tagged, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)), 1ll);
z_cgr(zero, zero); // set the CC to EQUAL
BLOCK_COMMENT("} fast_path_recursive_lock");
bind(done);
BLOCK_COMMENT("} compiler_fast_lock_object");
// If locking was successful, CR should indicate 'EQ'.
// The compiler or the native wrapper generates a branch to the runtime call
// _complete_monitor_locking_Java.
}
void MacroAssembler::compiler_fast_unlock_object(Register oop, Register box, Register temp1, Register temp2) {
assert(LockingMode != LM_LIGHTWEIGHT, "uses fast_unlock_lightweight");
assert_different_registers(oop, box, temp1, temp2, Z_R0_scratch);
Register displacedHeader = temp1;
Register currentHeader = temp2;
Register temp = temp1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
Label done, object_has_monitor, not_recursive;
BLOCK_COMMENT("compiler_fast_unlock_object {");
if (LockingMode == LM_LEGACY) {
// Find the lock address and load the displaced header from the stack.
// if the displaced header is zero, we have a recursive unlock.
load_and_test_long(displacedHeader, Address(box, BasicLock::displaced_header_offset_in_bytes()));
z_bre(done);
}
// Handle existing monitor.
// The object has an existing monitor iff (mark & monitor_value) != 0.
z_lg(currentHeader, hdr_offset, oop);
guarantee(Immediate::is_uimm16(markWord::monitor_value), "must be half-word");
z_tmll(currentHeader, markWord::monitor_value);
z_brnaz(object_has_monitor);
if (LockingMode == LM_MONITOR) {
// Set NE to indicate 'failure' -> take slow-path
z_ltgr(oop, oop);
z_bru(done);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Check if it is still a lightweight lock, this is true if we see
// the stack address of the basicLock in the markWord of the object
// copy box to currentHeader such that csg does not kill it.
z_lgr(currentHeader, box);
z_csg(currentHeader, displacedHeader, hdr_offset, oop);
z_bru(done); // csg sets CR as desired.
}
// In case of LM_LIGHTWEIGHT, we may reach here with (temp & ObjectMonitor::ANONYMOUS_OWNER) != 0.
// This is handled like owner thread mismatches: We take the slow path.
// Handle existing monitor.
bind(object_has_monitor);
z_lg(Z_R0_scratch, Address(Z_thread, JavaThread::monitor_owner_id_offset()));
z_cg(Z_R0_scratch, Address(currentHeader, OM_OFFSET_NO_MONITOR_VALUE_TAG(owner)));
z_brne(done);
BLOCK_COMMENT("fast_path_recursive_unlock {");
load_and_test_long(temp, Address(currentHeader, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
z_bre(not_recursive); // if 0 then jump, it's not recursive locking
// Recursive inflated unlock
z_agsi(Address(currentHeader, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)), -1ll);
z_cgr(currentHeader, currentHeader); // set the CC to EQUAL
BLOCK_COMMENT("} fast_path_recursive_unlock");
z_bru(done);
bind(not_recursive);
NearLabel set_eq_unlocked;
// Set owner to null.
// Release to satisfy the JMM
z_release();
z_lghi(temp, 0);
z_stg(temp, OM_OFFSET_NO_MONITOR_VALUE_TAG(owner), currentHeader);
// We need a full fence after clearing owner to avoid stranding.
z_fence();
// Check if the entry_list is empty.
load_and_test_long(temp, Address(currentHeader, OM_OFFSET_NO_MONITOR_VALUE_TAG(entry_list)));
z_bre(done); // If so we are done.
// Check if there is a successor.
load_and_test_long(temp, Address(currentHeader, OM_OFFSET_NO_MONITOR_VALUE_TAG(succ)));
z_brne(set_eq_unlocked); // If so we are done.
// Save the monitor pointer in the current thread, so we can try to
// reacquire the lock in SharedRuntime::monitor_exit_helper().
z_xilf(currentHeader, markWord::monitor_value);
z_stg(currentHeader, Address(Z_thread, JavaThread::unlocked_inflated_monitor_offset()));
z_ltgr(oop, oop); // Set flag = NE
z_bru(done);
bind(set_eq_unlocked);
z_cr(temp, temp); // Set flag = EQ
bind(done);
BLOCK_COMMENT("} compiler_fast_unlock_object");
// flag == EQ indicates success
// flag == NE indicates failure
}
void MacroAssembler::resolve_jobject(Register value, Register tmp1, Register tmp2) {
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
bs->resolve_jobject(this, value, tmp1, tmp2);
@@ -6349,7 +6144,6 @@ void MacroAssembler::zap_from_to(Register low, Register high, Register val, Regi
// Note: make sure Z_R1 is not manipulated here when C2 compiler is in play
void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Register temp1, Register temp2, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(basic_lock, obj, temp1, temp2);
Label push;
@@ -6415,7 +6209,6 @@ void MacroAssembler::lightweight_lock(Register basic_lock, Register obj, Registe
// - Z_R1_scratch: will be killed in case of Interpreter & C1 Compiler
void MacroAssembler::lightweight_unlock(Register obj, Register temp1, Register temp2, Label& slow) {
assert(LockingMode == LM_LIGHTWEIGHT, "only used with new lightweight locking");
assert_different_registers(obj, temp1, temp2);
Label unlocked, push_and_slow;


@@ -790,8 +790,6 @@ class MacroAssembler: public Assembler {
// Kills registers tmp1_reg and tmp2_reg and preserves the condition code.
void increment_counter_eq(address counter_address, Register tmp1_reg, Register tmp2_reg);
void compiler_fast_lock_object(Register oop, Register box, Register temp1, Register temp2);
void compiler_fast_unlock_object(Register oop, Register box, Register temp1, Register temp2);
void lightweight_lock(Register basic_lock, Register obj, Register tmp1, Register tmp2, Label& slow);
void lightweight_unlock(Register obj, Register tmp1, Register tmp2, Label& slow);
void compiler_fast_lock_lightweight_object(Register obj, Register box, Register tmp1, Register tmp2);


@@ -1,5 +1,5 @@
//
// Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2017, 2024 SAP SE. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
@@ -10161,30 +10161,7 @@ instruct partialSubtypeCheckConstSuper(rarg2RegP sub, rarg1RegP super, immP supe
// ============================================================================
// inlined locking and unlocking
instruct cmpFastLock(flagsReg pcc, iRegP_N2P oop, iRegP_N2P box, iRegP tmp1, iRegP tmp2) %{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set pcc (FastLock oop box));
effect(TEMP tmp1, TEMP tmp2);
ins_cost(100);
// TODO: s390 port size(VARIABLE_SIZE); // Uses load_const_optimized.
format %{ "FASTLOCK $oop, $box; KILL Z_ARG4, Z_ARG5" %}
ins_encode %{ __ compiler_fast_lock_object($oop$$Register, $box$$Register, $tmp1$$Register, $tmp2$$Register); %}
ins_pipe(pipe_class_dummy);
%}
instruct cmpFastUnlock(flagsReg pcc, iRegP_N2P oop, iRegP_N2P box, iRegP tmp1, iRegP tmp2) %{
predicate(LockingMode != LM_LIGHTWEIGHT);
match(Set pcc (FastUnlock oop box));
effect(TEMP tmp1, TEMP tmp2);
ins_cost(100);
// TODO: s390 port size(FIXED_SIZE);
format %{ "FASTUNLOCK $oop, $box; KILL Z_ARG4, Z_ARG5" %}
ins_encode %{ __ compiler_fast_unlock_object($oop$$Register, $box$$Register, $tmp1$$Register, $tmp2$$Register); %}
ins_pipe(pipe_class_dummy);
%}
instruct cmpFastLockLightweight(flagsReg pcc, iRegP_N2P oop, iRegP_N2P box, iRegP tmp1, iRegP tmp2) %{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set pcc (FastLock oop box));
effect(TEMP tmp1, TEMP tmp2);
ins_cost(100);
@@ -10200,7 +10177,6 @@ instruct cmpFastLockLightweight(flagsReg pcc, iRegP_N2P oop, iRegP_N2P box, iReg
%}
instruct cmpFastUnlockLightweight(flagsReg pcc, iRegP_N2P oop, iRegP_N2P box, iRegP tmp1, iRegP tmp2) %{
predicate(LockingMode == LM_LIGHTWEIGHT);
match(Set pcc (FastUnlock oop box));
effect(TEMP tmp1, TEMP tmp2);
ins_cost(100);


@@ -1764,13 +1764,8 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
__ add2reg(r_box, lock_offset, Z_SP);
// Try fastpath for locking.
if (LockingMode == LM_LIGHTWEIGHT) {
// Fast_lock kills r_temp_1, r_temp_2.
__ compiler_fast_lock_lightweight_object(r_oop, r_box, r_tmp1, r_tmp2);
} else {
// Fast_lock kills r_temp_1, r_temp_2.
__ compiler_fast_lock_object(r_oop, r_box, r_tmp1, r_tmp2);
}
// Fast_lock kills r_temp_1, r_temp_2.
__ compiler_fast_lock_lightweight_object(r_oop, r_box, r_tmp1, r_tmp2);
__ z_bre(done);
//-------------------------------------------------------------------------
@@ -1965,13 +1960,8 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
__ add2reg(r_box, lock_offset, Z_SP);
// Try fastpath for unlocking.
if (LockingMode == LM_LIGHTWEIGHT) {
// Fast_unlock kills r_tmp1, r_tmp2.
__ compiler_fast_unlock_lightweight_object(r_oop, r_box, r_tmp1, r_tmp2);
} else {
// Fast_unlock kills r_tmp1, r_tmp2.
__ compiler_fast_unlock_object(r_oop, r_box, r_tmp1, r_tmp2);
}
// Fast_unlock kills r_tmp1, r_tmp2.
__ compiler_fast_unlock_lightweight_object(r_oop, r_box, r_tmp1, r_tmp2);
__ z_bre(done);
// Slow path for unlocking.


@@ -1239,6 +1239,7 @@ address TemplateInterpreterGenerator::generate_math_entry(AbstractInterpreter::M
case Interpreter::java_lang_math_sin : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dsin); break;
case Interpreter::java_lang_math_cos : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dcos); break;
case Interpreter::java_lang_math_tan : runtime_entry = CAST_FROM_FN_PTR(address, SharedRuntime::dtan); break;
case Interpreter::java_lang_math_sinh : /* run interpreted */ break;
case Interpreter::java_lang_math_tanh : /* run interpreted */ break;
case Interpreter::java_lang_math_cbrt : /* run interpreted */ break;
case Interpreter::java_lang_math_abs : /* run interpreted */ break;


@@ -68,12 +68,13 @@ unsigned int VM_Version::_Icache_lineSize = DEFAULT
// z14: 2017-09
// z15: 2019-09
// z16: 2022-05
// z17: 2025-04
static const char* z_gen[] = {" ", "G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8", "G9", "G10" };
static const char* z_machine[] = {" ", "2064", "2084", "2094", "2097", "2817", "2827", "2964", "3906", "8561", "3931" };
static const char* z_name[] = {" ", "z900", "z990", "z9 EC", "z10 EC", "z196 EC", "ec12", "z13", "z14", "z15", "z16" };
static const char* z_WDFM[] = {" ", "2006-06-30", "2008-06-30", "2010-06-30", "2012-06-30", "2014-06-30", "2016-12-31", "2019-06-30", "2021-06-30", "2024-12-31", "tbd" };
static const char* z_EOS[] = {" ", "2014-12-31", "2014-12-31", "2017-10-31", "2019-12-31", "2021-12-31", "2023-12-31", "2024-12-31", "tbd", "tbd", "tbd" };
static const char* z_gen[] = {" ", "G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8", "G9", "G10", "G11" };
static const char* z_machine[] = {" ", "2064", "2084", "2094", "2097", "2817", "2827", "2964", "3906", "8561", "3931", "9175" };
static const char* z_name[] = {" ", "z900", "z990", "z9 EC", "z10 EC", "z196 EC", "ec12", "z13", "z14", "z15", "z16", "z17" };
static const char* z_WDFM[] = {" ", "2006-06-30", "2008-06-30", "2010-06-30", "2012-06-30", "2014-06-30", "2016-12-31", "2019-06-30", "2021-06-30", "2024-12-31", "tbd", "tbd" };
static const char* z_EOS[] = {" ", "2014-12-31", "2014-12-31", "2017-10-31", "2019-12-31", "2021-12-31", "2023-12-31", "2024-12-31", "tbd", "tbd", "tbd", "tbd" };
static const char* z_features[] = {" ",
"system-z, g1-z900, ldisp",
"system-z, g2-z990, ldisp_fast",
@@ -85,7 +86,9 @@ static const char* z_features[] = {" ",
"system-z, g8-z14, ldisp_fast, extimm, pcrel_load/store, cmpb, cond_load/store, interlocked_update, txm, vectorinstr, instrext2, venh1",
"system-z, g9-z15, ldisp_fast, extimm, pcrel_load/store, cmpb, cond_load/store, interlocked_update, txm, vectorinstr, instrext2, venh1, instrext3, venh2",
"system-z, g10-z16, ldisp_fast, extimm, pcrel_load/store, cmpb, cond_load/store, interlocked_update, txm, vectorinstr, instrext2, venh1, instrext3, venh2,"
"bear_enh, sort_enh, nnpa_assist, storage_key_removal, vpack_decimal_enh"
"bear_enh, sort_enh, nnpa_assist, storage_key_removal, vpack_decimal_enh",
"system-z, g11-z17, ldisp_fast, extimm, pcrel_load/store, cmpb, cond_load/store, interlocked_update, txm, vectorinstr, instrext2, venh1, instrext3, venh2,"
"bear_enh, sort_enh, nnpa_assist, storage_key_removal, vpack_decimal_enh, concurrent_function"
};
void VM_Version::initialize() {
@@ -339,6 +342,11 @@ int VM_Version::get_model_index() {
// is the index of the oldest detected model.
int ambiguity = 0;
int model_ix = 0;
if (is_z17()) {
model_ix = 11;
ambiguity++;
}
if (is_z16()) {
model_ix = 10;
ambiguity++;


@@ -100,7 +100,7 @@ class VM_Version: public Abstract_VM_Version {
#define CryptoExtension4Mask 0x0004000000000000UL // z196 (aka message-security assist extension 4, for KMF, KMCTR, KMO)
#define DFPPackedConversionMask 0x0000800000000000UL // z13
// ----------------------------------------------
// --- FeatureBitString Bits 128..192 (DW[2]) ---
// --- FeatureBitString Bits 128..191 (DW[2]) ---
// ----------------------------------------------
// 11111111111111111
// 23344455666778889
@@ -118,9 +118,10 @@ class VM_Version: public Abstract_VM_Version {
#define NNPAssistFacilityMask 0x0000000004000000UL // z16, Neural-network-processing-assist facility, Bit: 165
// ----------------------------------------------
// --- FeatureBitString Bits 193..200 (DW[3]) ---
// --- FeatureBitString Bits 192..255 (DW[3]) ---
// ----------------------------------------------
#define BEAREnhFacilityMask 0x4000000000000000UL // z16, BEAR-enhancement facility, Bit: 193
#define ConcurrentFunFacilityMask 0x0040000000000000UL // z17, Concurrent-functions facility, Bit: 201
enum {
_max_cache_levels = 8, // As limited by ECAG instruction.
@@ -189,7 +190,8 @@ class VM_Version: public Abstract_VM_Version {
static bool is_z13() { return has_CryptoExt5() && !has_MiscInstrExt2(); }
static bool is_z14() { return has_MiscInstrExt2() && !has_MiscInstrExt3(); }
static bool is_z15() { return has_MiscInstrExt3() && !has_BEAR_Enh_Facility(); }
static bool is_z16() { return has_BEAR_Enh_Facility(); }
static bool is_z16() { return has_BEAR_Enh_Facility() && !has_Concurrent_Fun_Facility(); }
static bool is_z17() { return has_Concurrent_Fun_Facility();}
// Need to use nested class with unscoped enum.
// C++11 declaration "enum class Cipher { ... }" is not supported.
@@ -497,6 +499,7 @@ class VM_Version: public Abstract_VM_Version {
static bool has_BEAR_Enh_Facility() { return (_features[3] & BEAREnhFacilityMask) == BEAREnhFacilityMask; }
static bool has_NNP_Assist_Facility() { return (_features[2] & NNPAssistFacilityMask) == NNPAssistFacilityMask; }
static bool has_Concurrent_Fun_Facility() { return (_features[3] & ConcurrentFunFacilityMask) == ConcurrentFunFacilityMask; }
// Crypto features query functions.
static bool has_Crypto_AES_GCM128() { return has_Crypto() && test_feature_bit(&_cipher_features_KMA[0], Cipher::_AES128, Cipher::_featureBits); }
@@ -573,6 +576,7 @@ class VM_Version: public Abstract_VM_Version {
static void set_has_VectorPackedDecimalEnh() { _features[2] |= VectorPackedDecimalEnhMask; }
static void set_has_BEAR_Enh_Facility() { _features[3] |= BEAREnhFacilityMask;}
static void set_has_NNP_Assist_Facility() { _features[2] |= NNPAssistFacilityMask;}
static void set_has_Concurrent_Fun_Facility() { _features[3] |= ConcurrentFunFacilityMask;}
static void reset_has_VectorFacility() { _features[2] &= ~VectorFacilityMask; }


@@ -413,11 +413,7 @@ int LIR_Assembler::emit_unwind_handler() {
if (method()->is_synchronized()) {
monitor_address(0, FrameMap::rax_opr);
stub = new MonitorExitStub(FrameMap::rax_opr, true, 0);
if (LockingMode == LM_MONITOR) {
__ jmp(*stub->entry());
} else {
__ unlock_object(rdi, rsi, rax, *stub->entry());
}
__ unlock_object(rdi, rsi, rax, *stub->entry());
__ bind(*stub->continuation());
}
@@ -2733,15 +2729,9 @@ void LIR_Assembler::emit_lock(LIR_OpLock* op) {
Register obj = op->obj_opr()->as_register(); // may not be an oop
Register hdr = op->hdr_opr()->as_register();
Register lock = op->lock_opr()->as_register();
if (LockingMode == LM_MONITOR) {
if (op->info() != nullptr) {
add_debug_info_for_null_check_here(op->info());
__ null_check(obj);
}
__ jmp(*op->stub()->entry());
} else if (op->code() == lir_lock) {
if (op->code() == lir_lock) {
assert(BasicLock::displaced_header_offset_in_bytes() == 0, "lock_reg must point to the displaced header");
Register tmp = LockingMode == LM_LIGHTWEIGHT ? op->scratch_opr()->as_register() : noreg;
Register tmp = op->scratch_opr()->as_register();
// add debug info for NullPointerException only if one is possible
int null_check_offset = __ lock_object(hdr, obj, lock, tmp, *op->stub()->entry());
if (op->info() != nullptr) {


@@ -289,7 +289,7 @@ void LIRGenerator::do_MonitorEnter(MonitorEnter* x) {
// this CodeEmitInfo must not have the xhandlers because here the
// object is already locked (xhandlers expect object to be unlocked)
CodeEmitInfo* info = state_for(x, x->state(), true);
LIR_Opr tmp = LockingMode == LM_LIGHTWEIGHT ? new_register(T_ADDRESS) : LIR_OprFact::illegalOpr;
LIR_Opr tmp = new_register(T_ADDRESS);
monitor_enter(obj.result(), lock, syncTempOpr(), tmp,
x->monitor_no(), info_for_exception, info);
}
@@ -720,8 +720,8 @@ void LIRGenerator::do_MathIntrinsic(Intrinsic* x) {
if (x->id() == vmIntrinsics::_dexp || x->id() == vmIntrinsics::_dlog ||
x->id() == vmIntrinsics::_dpow || x->id() == vmIntrinsics::_dcos ||
x->id() == vmIntrinsics::_dsin || x->id() == vmIntrinsics::_dtan ||
x->id() == vmIntrinsics::_dlog10 || x->id() == vmIntrinsics::_dtanh ||
x->id() == vmIntrinsics::_dcbrt
x->id() == vmIntrinsics::_dlog10 || x->id() == vmIntrinsics::_dsinh ||
x->id() == vmIntrinsics::_dtanh || x->id() == vmIntrinsics::_dcbrt
) {
do_LibmIntrinsic(x);
return;
@@ -835,6 +835,12 @@ void LIRGenerator::do_LibmIntrinsic(Intrinsic* x) {
__ call_runtime_leaf(CAST_FROM_FN_PTR(address, SharedRuntime::dtan), getThreadTemp(), result_reg, cc->args());
}
break;
case vmIntrinsics::_dsinh:
assert(StubRoutines::dsinh() != nullptr, "sinh intrinsic not found");
if (StubRoutines::dsinh() != nullptr) {
__ call_runtime_leaf(StubRoutines::dsinh(), getThreadTemp(), result_reg, cc->args());
}
break;
case vmIntrinsics::_dtanh:
assert(StubRoutines::dtanh() != nullptr, "tanh intrinsic not found");
if (StubRoutines::dtanh() != nullptr) {
@@ -955,7 +961,7 @@ void LIRGenerator::do_update_CRC32(Intrinsic* x) {
CallingConvention* cc = frame_map()->c_calling_convention(&signature);
const LIR_Opr result_reg = result_register_for(x->type());
LIR_Opr addr = new_pointer_register();
LIR_Opr addr = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(a), addr);
crc.load_item_force(cc->at(0));
@@ -1094,10 +1100,10 @@ void LIRGenerator::do_vectorizedMismatch(Intrinsic* x) {
CallingConvention* cc = frame_map()->c_calling_convention(&signature);
const LIR_Opr result_reg = result_register_for(x->type());
LIR_Opr ptr_addr_a = new_pointer_register();
LIR_Opr ptr_addr_a = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(addr_a), ptr_addr_a);
LIR_Opr ptr_addr_b = new_pointer_register();
LIR_Opr ptr_addr_b = new_register(T_ADDRESS);
__ leal(LIR_OprFact::address(addr_b), ptr_addr_b);
__ move(ptr_addr_a, cc->at(0));


@@ -42,8 +42,6 @@
#include "utilities/globalDefinitions.hpp"
int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr, Register tmp, Label& slow_case) {
const int aligned_mask = BytesPerWord -1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert(hdr == rax, "hdr must be rax, for the cmpxchg instruction");
assert_different_registers(hdr, obj, disp_hdr, tmp);
int null_check_offset = -1;
@@ -55,93 +53,20 @@ int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr
null_check_offset = offset();
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(disp_hdr, obj, hdr, tmp, slow_case);
} else if (LockingMode == LM_LEGACY) {
Label done;
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(hdr, obj, rscratch1);
testb(Address(hdr, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
jcc(Assembler::notZero, slow_case);
}
// Load object header
movptr(hdr, Address(obj, hdr_offset));
// and mark it as unlocked
orptr(hdr, markWord::unlocked_value);
// save unlocked object header into the displaced header location on the stack
movptr(Address(disp_hdr, 0), hdr);
// test if object header is still the same (i.e. unlocked), and if so, store the
// displaced header address in the object header - if it is not the same, get the
// object header instead
MacroAssembler::lock(); // must be immediately before cmpxchg!
cmpxchgptr(disp_hdr, Address(obj, hdr_offset));
// if the object header was the same, we're done
jcc(Assembler::equal, done);
// if the object header was not the same, it is now in the hdr register
// => test if it is a stack pointer into the same stack (recursive locking), i.e.:
//
// 1) (hdr & aligned_mask) == 0
// 2) rsp <= hdr
// 3) hdr <= rsp + page_size
//
// these 3 tests can be done by evaluating the following expression:
//
// (hdr - rsp) & (aligned_mask - page_size)
//
// assuming both the stack pointer and page_size have their least
// significant 2 bits cleared and page_size is a power of 2
subptr(hdr, rsp);
andptr(hdr, aligned_mask - (int)os::vm_page_size());
// for recursive locking, the result is zero => save it in the displaced header
// location (null in the displaced hdr location indicates recursive locking)
movptr(Address(disp_hdr, 0), hdr);
// otherwise we don't care about the result and handle locking via runtime call
jcc(Assembler::notZero, slow_case);
// done
bind(done);
inc_held_monitor_count();
}
lightweight_lock(disp_hdr, obj, hdr, tmp, slow_case);
return null_check_offset;
}
void C1_MacroAssembler::unlock_object(Register hdr, Register obj, Register disp_hdr, Label& slow_case) {
const int aligned_mask = BytesPerWord -1;
const int hdr_offset = oopDesc::mark_offset_in_bytes();
assert(disp_hdr == rax, "disp_hdr must be rax, for the cmpxchg instruction");
assert(hdr != obj && hdr != disp_hdr && obj != disp_hdr, "registers must be different");
Label done;
if (LockingMode != LM_LIGHTWEIGHT) {
// load displaced header
movptr(hdr, Address(disp_hdr, 0));
// if the loaded hdr is null we had recursive locking
testptr(hdr, hdr);
// if we had recursive locking, we are done
jcc(Assembler::zero, done);
}
// load object
movptr(obj, Address(disp_hdr, BasicObjectLock::obj_offset()));
verify_oop(obj);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj, disp_hdr, hdr, slow_case);
} else if (LockingMode == LM_LEGACY) {
// test if object header is pointing to the displaced header, and if so, restore
// the displaced header in the object - if the object header is not pointing to
// the displaced header, get the object header instead
MacroAssembler::lock(); // must be immediately before cmpxchg!
cmpxchgptr(hdr, Address(obj, hdr_offset));
// if the object header was not pointing to the displaced header,
// we do unlocking via runtime call
jcc(Assembler::notEqual, slow_case);
// done
bind(done);
dec_held_monitor_count();
}
lightweight_unlock(obj, disp_hdr, hdr, slow_case);
}


@@ -219,244 +219,11 @@ inline Assembler::AvxVectorLen C2_MacroAssembler::vector_length_encoding(int vle
// obj: object to lock
// box: on-stack box address (displaced header location) - KILLED
// rax,: tmp -- KILLED
// scr: tmp -- KILLED
void C2_MacroAssembler::fast_lock(Register objReg, Register boxReg, Register tmpReg,
Register scrReg, Register cx1Reg, Register cx2Reg, Register thread,
Metadata* method_data) {
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_lock_lightweight");
// Ensure the register assignments are disjoint
assert(tmpReg == rax, "");
assert(cx1Reg == noreg, "");
assert(cx2Reg == noreg, "");
assert_different_registers(objReg, boxReg, tmpReg, scrReg);
// Possible cases that we'll encounter in fast_lock
// ------------------------------------------------
// * Inflated
// -- unlocked
// -- Locked
// = by self
// = by other
// * neutral
// * stack-locked
// -- by self
// = sp-proximity test hits
// = sp-proximity test generates false-negative
// -- by other
//
Label IsInflated, DONE_LABEL, NO_COUNT, COUNT;
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmpReg, objReg, scrReg);
testb(Address(tmpReg, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
jcc(Assembler::notZero, DONE_LABEL);
}
movptr(tmpReg, Address(objReg, oopDesc::mark_offset_in_bytes())); // [FETCH]
testptr(tmpReg, markWord::monitor_value); // inflated vs stack-locked|neutral
jcc(Assembler::notZero, IsInflated);
if (LockingMode == LM_MONITOR) {
// Clear ZF so that we take the slow path at the DONE label. objReg is known to be not 0.
testptr(objReg, objReg);
} else {
assert(LockingMode == LM_LEGACY, "must be");
// Attempt stack-locking ...
orptr (tmpReg, markWord::unlocked_value);
movptr(Address(boxReg, 0), tmpReg); // Anticipate successful CAS
lock();
cmpxchgptr(boxReg, Address(objReg, oopDesc::mark_offset_in_bytes())); // Updates tmpReg
jcc(Assembler::equal, COUNT); // Success
// Recursive locking.
// The object is stack-locked: markword contains stack pointer to BasicLock.
// Locked by current thread if difference with current SP is less than one page.
subptr(tmpReg, rsp);
// Next instruction sets ZFlag == 1 (Success) if difference is less than one page.
andptr(tmpReg, (int32_t) (7 - (int)os::vm_page_size()) );
movptr(Address(boxReg, 0), tmpReg);
}
jmp(DONE_LABEL);
bind(IsInflated);
// The object is inflated. tmpReg contains pointer to ObjectMonitor* + markWord::monitor_value
// Unconditionally set box->_displaced_header = markWord::unused_mark().
// Without cast to int32_t this style of movptr will destroy r10 which is typically obj.
movptr(Address(boxReg, 0), checked_cast<int32_t>(markWord::unused_mark().value()));
// It's inflated and we use scrReg for ObjectMonitor* in this section.
movptr(boxReg, Address(r15_thread, JavaThread::monitor_owner_id_offset()));
movq(scrReg, tmpReg);
xorq(tmpReg, tmpReg);
lock();
cmpxchgptr(boxReg, Address(scrReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(owner)));
// Propagate ICC.ZF from CAS above into DONE_LABEL.
jccb(Assembler::equal, COUNT); // CAS above succeeded; propagate ZF = 1 (success)
cmpptr(boxReg, rax); // Check if we are already the owner (recursive lock)
jccb(Assembler::notEqual, NO_COUNT); // If not recursive, ZF = 0 at this point (fail)
incq(Address(scrReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
xorq(rax, rax); // Set ZF = 1 (success) for recursive lock, denoting locking success
bind(DONE_LABEL);
// ZFlag == 1 count in fast path
// ZFlag == 0 count in slow path
jccb(Assembler::notZero, NO_COUNT); // jump if ZFlag == 0
bind(COUNT);
if (LockingMode == LM_LEGACY) {
// Count monitors in fast path
increment(Address(thread, JavaThread::held_monitor_count_offset()));
}
xorl(tmpReg, tmpReg); // Set ZF == 1
bind(NO_COUNT);
// At NO_COUNT the icc ZFlag is set as follows ...
// fast_unlock uses the same protocol.
// ZFlag == 1 -> Success
// ZFlag == 0 -> Failure - force control through the slow path
}
// obj: object to unlock
// box: box address (displaced header location), killed. Must be EAX.
// tmp: killed, cannot be obj nor box.
//
// Some commentary on balanced locking:
//
// fast_lock and fast_unlock are emitted only for provably balanced lock sites.
// Methods that don't have provably balanced locking are forced to run in the
// interpreter - such methods won't be compiled to use fast_lock and fast_unlock.
// The interpreter provides two properties:
// I1: At return-time the interpreter automatically and quietly unlocks any
// objects acquired in the current activation (frame). Recall that the
// interpreter maintains an on-stack list of locks currently held by
// a frame.
// I2: If a method attempts to unlock an object that is not held by the
// frame the interpreter throws IMSX.
//
// Let's say A(), which has provably balanced locking, acquires O and then calls B().
// B() doesn't have provably balanced locking so it runs in the interpreter.
// Control returns to A() and A() unlocks O. By I1 and I2, above, we know that O
// is still locked by A().
//
// The only other source of unbalanced locking would be JNI. The "Java Native Interface:
// Programmer's Guide and Specification" claims that an object locked by jni_monitorenter
// should not be unlocked by "normal" java-level locking and vice-versa. The specification
// doesn't specify what will occur if a program engages in such mixed-mode locking, however.
// Arguably given that the spec legislates the JNI case as undefined our implementation
// could reasonably *avoid* checking owner in fast_unlock().
// In the interest of performance we elide m->Owner==Self check in unlock.
// A perfectly viable alternative is to elide the owner check except when
// Xcheck:jni is enabled.
void C2_MacroAssembler::fast_unlock(Register objReg, Register boxReg, Register tmpReg) {
assert(LockingMode != LM_LIGHTWEIGHT, "lightweight locking should use fast_unlock_lightweight");
assert(boxReg == rax, "");
assert_different_registers(objReg, boxReg, tmpReg);
Label DONE_LABEL, Stacked, COUNT, NO_COUNT;
if (LockingMode == LM_LEGACY) {
cmpptr(Address(boxReg, 0), NULL_WORD); // Examine the displaced header
jcc (Assembler::zero, COUNT); // 0 indicates recursive stack-lock
}
movptr(tmpReg, Address(objReg, oopDesc::mark_offset_in_bytes())); // Examine the object's markword
if (LockingMode != LM_MONITOR) {
testptr(tmpReg, markWord::monitor_value); // Inflated?
jcc(Assembler::zero, Stacked);
}
// It's inflated.
// Despite our balanced locking property we still check that m->_owner == Self
// as java routines or native JNI code called by this thread might
// have released the lock.
//
// If there's no contention try a 1-0 exit. That is, exit without
// a costly MEMBAR or CAS. See synchronizer.cpp for details on how
// we detect and recover from the race that the 1-0 exit admits.
//
// Conceptually fast_unlock() must execute a STST|LDST "release" barrier
// before it STs null into _owner, releasing the lock. Updates
// to data protected by the critical section must be visible before
// we drop the lock (and thus before any other thread could acquire
// the lock and observe the fields protected by the lock).
// IA32's memory-model is SPO, so STs are ordered with respect to
// each other and there's no need for an explicit barrier (fence).
// See also http://gee.cs.oswego.edu/dl/jmm/cookbook.html.
Label LSuccess, LNotRecursive;
cmpptr(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)), 0);
jccb(Assembler::equal, LNotRecursive);
// Recursive inflated unlock
decrement(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
jmpb(LSuccess);
bind(LNotRecursive);
// Set owner to null.
// Release to satisfy the JMM
movptr(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(owner)), NULL_WORD);
// We need a full fence after clearing owner to avoid stranding.
// StoreLoad achieves this.
membar(StoreLoad);
// Check if the entry_list is empty.
cmpptr(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(entry_list)), NULL_WORD);
jccb(Assembler::zero, LSuccess); // If so we are done.
// Check if there is a successor.
cmpptr(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(succ)), NULL_WORD);
jccb(Assembler::notZero, LSuccess); // If so we are done.
// Save the monitor pointer in the current thread, so we can try to
// reacquire the lock in SharedRuntime::monitor_exit_helper().
andptr(tmpReg, ~(int32_t)markWord::monitor_value);
movptr(Address(r15_thread, JavaThread::unlocked_inflated_monitor_offset()), tmpReg);
orl (boxReg, 1); // set ICC.ZF=0 to indicate failure
jmpb (DONE_LABEL);
bind (LSuccess);
testl (boxReg, 0); // set ICC.ZF=1 to indicate success
jmpb (DONE_LABEL);
if (LockingMode == LM_LEGACY) {
bind (Stacked);
movptr(tmpReg, Address (boxReg, 0)); // re-fetch
lock();
cmpxchgptr(tmpReg, Address(objReg, oopDesc::mark_offset_in_bytes())); // Uses RAX which is box
// Intentional fall-thru into DONE_LABEL
}
bind(DONE_LABEL);
// ZFlag == 1 count in fast path
// ZFlag == 0 count in slow path
jccb(Assembler::notZero, NO_COUNT);
bind(COUNT);
if (LockingMode == LM_LEGACY) {
// Count monitors in fast path
decrementq(Address(r15_thread, JavaThread::held_monitor_count_offset()));
}
xorl(tmpReg, tmpReg); // Set ZF == 1
bind(NO_COUNT);
}
// box: on-stack box address -- KILLED
// rax: tmp -- KILLED
// t : tmp -- KILLED
void C2_MacroAssembler::fast_lock_lightweight(Register obj, Register box, Register rax_reg,
Register t, Register thread) {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert(rax_reg == rax, "Used for CAS");
assert_different_registers(obj, box, rax_reg, t, thread);
@@ -616,8 +383,39 @@ void C2_MacroAssembler::fast_lock_lightweight(Register obj, Register box, Regist
// C2 uses the value of ZF to determine the continuation.
}
// obj: object to lock
// rax: tmp -- KILLED
// t : tmp - cannot be obj nor rax -- KILLED
//
// Some commentary on balanced locking:
//
// fast_lock and fast_unlock are emitted only for provably balanced lock sites.
// Methods that don't have provably balanced locking are forced to run in the
// interpreter - such methods won't be compiled to use fast_lock and fast_unlock.
// The interpreter provides two properties:
// I1: At return-time the interpreter automatically and quietly unlocks any
// objects acquired in the current activation (frame). Recall that the
// interpreter maintains an on-stack list of locks currently held by
// a frame.
// I2: If a method attempts to unlock an object that is not held by the
// frame the interpreter throws IMSX.
//
// Let's say A(), which has provably balanced locking, acquires O and then calls B().
// B() doesn't have provably balanced locking so it runs in the interpreter.
// Control returns to A() and A() unlocks O. By I1 and I2, above, we know that O
// is still locked by A().
//
// The only other source of unbalanced locking would be JNI. The "Java Native Interface
// Specification" states that an object locked by JNI's MonitorEnter should not be
// unlocked by "normal" java-level locking and vice-versa. The specification doesn't
// specify what will occur if a program engages in such mixed-mode locking, however.
// Arguably given that the spec legislates the JNI case as undefined our implementation
// could reasonably *avoid* checking owner in fast_unlock().
// In the interest of performance we elide m->Owner==Self check in unlock.
// A perfectly viable alternative is to elide the owner check except when
// Xcheck:jni is enabled.
void C2_MacroAssembler::fast_unlock_lightweight(Register obj, Register reg_rax, Register t, Register thread) {
assert(LockingMode == LM_LIGHTWEIGHT, "must be");
assert(reg_rax == rax, "Used for CAS");
assert_different_registers(obj, reg_rax, t);


@@ -34,12 +34,7 @@ public:
Assembler::AvxVectorLen vector_length_encoding(int vlen_in_bytes);
// Code used by cmpFastLock and cmpFastUnlock mach instructions in .ad file.
// See full description in macroAssembler_x86.cpp.
void fast_lock(Register obj, Register box, Register tmp,
Register scr, Register cx1, Register cx2, Register thread,
Metadata* method_data);
void fast_unlock(Register obj, Register box, Register tmp);
// See full description in c2_MacroAssembler_x86.cpp.
void fast_lock_lightweight(Register obj, Register box, Register rax_reg,
Register t, Register thread);
void fast_unlock_lightweight(Register obj, Register reg_rax, Register t, Register thread);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -98,9 +98,12 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
*hf.addr_at(frame::interpreter_frame_locals_offset) = locals_offset;
return hf;
} else {
// We need to re-read fp out of the frame because it may be an oop and we might have
// had a safepoint in finalize_freeze, after constructing f.
fp = *(intptr_t**)(f.sp() - frame::sender_sp_offset);
// For a compiled frame we need to re-read fp out of the frame because it may be an
// oop and we might have had a safepoint in finalize_freeze, after constructing f.
// For stub/native frames the value is not used while frozen, and will be constructed again
// when thawing the frame (see ThawBase::new_stack_frame). We use a special bad address to
// help with debugging, particularly when inspecting frames and identifying invalid accesses.
fp = FKind::compiled ? *(intptr_t**)(f.sp() - frame::sender_sp_offset) : (intptr_t*)badAddressVal;
int fsize = FKind::size(f);
sp = caller.unextended_sp() - fsize;
@@ -183,6 +186,11 @@ inline void FreezeBase::patch_pd(frame& hf, const frame& caller) {
}
}
inline void FreezeBase::patch_pd_unused(intptr_t* sp) {
intptr_t* fp_addr = sp - frame::sender_sp_offset;
*fp_addr = badAddressVal;
}
//////// Thaw
// Fast path


@@ -1024,100 +1024,25 @@ void InterpreterMacroAssembler::get_method_counters(Register method,
void InterpreterMacroAssembler::lock_object(Register lock_reg) {
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be c_rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
} else {
Label count_locking, done, slow_case;
Label done, slow_case;
const Register swap_reg = rax; // Must use rax for cmpxchg instruction
const Register tmp_reg = rbx;
const Register obj_reg = c_rarg3; // Will contain the oop
const Register rklass_decode_tmp = rscratch1;
const Register swap_reg = rax; // Must use rax for cmpxchg instruction
const Register tmp_reg = rbx;
const Register obj_reg = c_rarg3; // Will contain the oop
const int obj_offset = in_bytes(BasicObjectLock::obj_offset());
const int lock_offset = in_bytes(BasicObjectLock::lock_offset());
const int mark_offset = lock_offset +
BasicLock::displaced_header_offset_in_bytes();
// Load object pointer into obj_reg
movptr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
// Load object pointer into obj_reg
movptr(obj_reg, Address(lock_reg, obj_offset));
lightweight_lock(lock_reg, obj_reg, swap_reg, tmp_reg, slow_case);
jmp(done);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_lock(lock_reg, obj_reg, swap_reg, tmp_reg, slow_case);
} else if (LockingMode == LM_LEGACY) {
if (DiagnoseSyncOnValueBasedClasses != 0) {
load_klass(tmp_reg, obj_reg, rklass_decode_tmp);
testb(Address(tmp_reg, Klass::misc_flags_offset()), KlassFlags::_misc_is_value_based_class);
jcc(Assembler::notZero, slow_case);
}
bind(slow_case);
// Load immediate 1 into swap_reg %rax
movl(swap_reg, 1);
// Load (object->mark() | 1) into swap_reg %rax
orptr(swap_reg, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
// Save (object->mark() | 1) into BasicLock's displaced header
movptr(Address(lock_reg, mark_offset), swap_reg);
assert(lock_offset == 0,
"displaced header must be first word in BasicObjectLock");
lock();
cmpxchgptr(lock_reg, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
jcc(Assembler::zero, count_locking);
const int zero_bits = 7;
// Fast check for recursive lock.
//
// Can apply the optimization only if this is a stack lock
// allocated in this thread. For efficiency, we can focus on
// recently allocated stack locks (instead of reading the stack
// base and checking whether 'mark' points inside the current
// thread stack):
// 1) (mark & zero_bits) == 0, and
// 2) rsp <= mark < mark + os::pagesize()
//
// Warning: rsp + os::pagesize can overflow the stack base. We must
// neither apply the optimization for an inflated lock allocated
// just above the thread stack (this is why condition 1 matters)
// nor apply the optimization if the stack lock is inside the stack
// of another thread. The latter is avoided even in case of overflow
// because we have guard pages at the end of all stacks. Hence, if
// we go over the stack base and hit the stack of another thread,
// this should not be in a writeable area that could contain a
// stack lock allocated by that thread. As a consequence, a stack
// lock less than page size away from rsp is guaranteed to be
// owned by the current thread.
//
// These 3 tests can be done by evaluating the following
// expression: ((mark - rsp) & (zero_bits - os::vm_page_size())),
// assuming both stack pointer and pagesize have their
// least significant bits clear.
// NOTE: the mark is in swap_reg %rax as the result of cmpxchg
subptr(swap_reg, rsp);
andptr(swap_reg, zero_bits - (int)os::vm_page_size());
// Save the test result, for recursive case, the result is zero
movptr(Address(lock_reg, mark_offset), swap_reg);
jcc(Assembler::notZero, slow_case);
bind(count_locking);
inc_held_monitor_count();
}
jmp(done);
bind(slow_case);
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
bind(done);
}
// Call the runtime routine for slow case
call_VM_preemptable(noreg,
CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorenter),
lock_reg);
bind(done);
}
@@ -1136,63 +1061,31 @@ void InterpreterMacroAssembler::lock_object(Register lock_reg) {
void InterpreterMacroAssembler::unlock_object(Register lock_reg) {
assert(lock_reg == c_rarg1, "The argument is only for looks. It must be c_rarg1");
if (LockingMode == LM_MONITOR) {
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
} else {
Label count_locking, done, slow_case;
Label done, slow_case;
const Register swap_reg = rax; // Must use rax for cmpxchg instruction
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
const Register swap_reg = rax; // Must use rax for cmpxchg instruction
const Register header_reg = c_rarg2; // Will contain the old oopMark
const Register obj_reg = c_rarg3; // Will contain the oop
save_bcp(); // Save in case of exception
save_bcp(); // Save in case of exception
if (LockingMode != LM_LIGHTWEIGHT) {
// Convert from BasicObjectLock structure to object and BasicLock
// structure. Store the BasicLock address into %rax
lea(swap_reg, Address(lock_reg, BasicObjectLock::lock_offset()));
}
// Load oop into obj_reg(%c_rarg3)
movptr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
// Load oop into obj_reg(%c_rarg3)
movptr(obj_reg, Address(lock_reg, BasicObjectLock::obj_offset()));
// Free entry
movptr(Address(lock_reg, BasicObjectLock::obj_offset()), NULL_WORD);
// Free entry
movptr(Address(lock_reg, BasicObjectLock::obj_offset()), NULL_WORD);
lightweight_unlock(obj_reg, swap_reg, header_reg, slow_case);
jmp(done);
if (LockingMode == LM_LIGHTWEIGHT) {
lightweight_unlock(obj_reg, swap_reg, header_reg, slow_case);
} else if (LockingMode == LM_LEGACY) {
// Load the old header from BasicLock structure
movptr(header_reg, Address(swap_reg,
BasicLock::displaced_header_offset_in_bytes()));
bind(slow_case);
// Call the runtime routine for slow case.
movptr(Address(lock_reg, BasicObjectLock::obj_offset()), obj_reg); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
// Test for recursion
testptr(header_reg, header_reg);
bind(done);
// zero for recursive case
jcc(Assembler::zero, count_locking);
// Atomic swap back the old header
lock();
cmpxchgptr(header_reg, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
// zero for simple unlock of a stack-lock case
jcc(Assembler::notZero, slow_case);
bind(count_locking);
dec_held_monitor_count();
}
jmp(done);
bind(slow_case);
// Call the runtime routine for slow case.
movptr(Address(lock_reg, BasicObjectLock::obj_offset()), obj_reg); // restore obj
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::monitorexit), lock_reg);
bind(done);
restore_bcp();
}
restore_bcp();
}
void InterpreterMacroAssembler::test_method_data_pointer(Register mdp,
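The rewritten `unlock_object` keeps a single fast path: try `lightweight_unlock`, jump to `done` on success, otherwise fall through to the runtime `monitorexit`. A minimal Java sketch of that control flow, where a CAS stands in for the generated `cmpxchg` (all class and method names here are illustrative, not HotSpot API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class UnlockFlowSketch {
    static final long UNLOCKED = 1L; // plays the role of markWord::unlocked_value

    // Fast path: CAS the mark word back to its unlocked form; mirrors the
    // success case of lightweight_unlock.
    static boolean lightweightUnlock(AtomicLong markWord, long lockedMark) {
        return markWord.compareAndSet(lockedMark, UNLOCKED);
    }

    static String unlockObject(AtomicLong markWord, long lockedMark) {
        if (lightweightUnlock(markWord, lockedMark)) {
            return "fast";  // jmp(done)
        }
        return "slow";      // bind(slow_case): InterpreterRuntime::monitorexit
    }

    public static void main(String[] args) {
        AtomicLong mark = new AtomicLong(42L);
        System.out.println(unlockObject(mark, 42L)); // fast: CAS succeeds
        System.out.println(unlockObject(mark, 42L)); // slow: mark already changed
    }
}
```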


@@ -6027,32 +6027,46 @@ void MacroAssembler::evpbroadcast(BasicType type, XMMRegister dst, Register src,
 }
 }

-// encode char[] to byte[] in ISO_8859_1 or ASCII
-//@IntrinsicCandidate
-//private static int implEncodeISOArray(byte[] sa, int sp,
-//byte[] da, int dp, int len) {
-//  int i = 0;
-//  for (; i < len; i++) {
-//    char c = StringUTF16.getChar(sa, sp++);
-//    if (c > '\u00FF')
-//      break;
-//    da[dp++] = (byte)c;
-//  }
-//  return i;
-//}
-//
-//@IntrinsicCandidate
-//private static int implEncodeAsciiArray(char[] sa, int sp,
-//    byte[] da, int dp, int len) {
-//  int i = 0;
-//  for (; i < len; i++) {
-//    char c = sa[sp++];
-//    if (c >= '\u0080')
-//      break;
-//    da[dp++] = (byte)c;
-//  }
-//  return i;
-//}
+// Encode given char[]/byte[] to byte[] in ISO_8859_1 or ASCII
+//
+// @IntrinsicCandidate
+// int sun.nio.cs.ISO_8859_1.Encoder#encodeISOArray0(
+//     char[] sa, int sp, byte[] da, int dp, int len) {
+//   int i = 0;
+//   for (; i < len; i++) {
+//     char c = sa[sp++];
+//     if (c > '\u00FF')
+//       break;
+//     da[dp++] = (byte) c;
+//   }
+//   return i;
+// }
+//
+// @IntrinsicCandidate
+// int java.lang.StringCoding.encodeISOArray0(
+//     byte[] sa, int sp, byte[] da, int dp, int len) {
+//   int i = 0;
+//   for (; i < len; i++) {
+//     char c = StringUTF16.getChar(sa, sp++);
+//     if (c > '\u00FF')
+//       break;
+//     da[dp++] = (byte) c;
+//   }
+//   return i;
+// }
+//
+// @IntrinsicCandidate
+// int java.lang.StringCoding.encodeAsciiArray0(
+//     char[] sa, int sp, byte[] da, int dp, int len) {
+//   int i = 0;
+//   for (; i < len; i++) {
+//     char c = sa[sp++];
+//     if (c >= '\u0080')
+//       break;
+//     da[dp++] = (byte) c;
+//   }
+//   return i;
+// }
 void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
                                       XMMRegister tmp1Reg, XMMRegister tmp2Reg,
                                       XMMRegister tmp3Reg, XMMRegister tmp4Reg,
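The comment block spells out the Java methods the `encode_iso_array` intrinsic replaces. As a quick sanity check on the semantics, the `char[]` variant behaves like this plain-Java sketch (class name and the standalone method are illustrative; the real methods live in `sun.nio.cs` and `java.lang.StringCoding`):

```java
public class IsoEncodeSketch {
    // Copies chars into the byte[] until one exceeds '\u00FF';
    // returns the number of chars actually encoded.
    static int encodeISOArray0(char[] sa, int sp, byte[] da, int dp, int len) {
        int i = 0;
        for (; i < len; i++) {
            char c = sa[sp++];
            if (c > '\u00FF')
                break;
            da[dp++] = (byte) c;
        }
        return i;
    }

    public static void main(String[] args) {
        char[] src = {'A', '\u00E9', '\u0100', 'B'}; // '\u0100' stops encoding
        byte[] dst = new byte[4];
        int n = encodeISOArray0(src, 0, dst, 0, src.length);
        System.out.println(n); // prints 2: 'A' and 'é' fit in ISO-8859-1
    }
}
```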


@@ -137,7 +137,7 @@ void MethodHandles::verify_method(MacroAssembler* _masm, Register method, Regist
     case vmIntrinsicID::_invokeBasic:
       // Require compiled LambdaForm class to be fully initialized.
       __ cmpb(Address(method_holder, InstanceKlass::init_state_offset()), InstanceKlass::fully_initialized);
-      __ jccb(Assembler::equal, L_ok);
+      __ jcc(Assembler::equal, L_ok);
       break;

     case vmIntrinsicID::_linkToStatic:
@@ -154,7 +154,7 @@ void MethodHandles::verify_method(MacroAssembler* _masm, Register method, Regist
       // init_state check failed, but it may be an abstract interface method
       __ load_unsigned_short(temp, Address(method, Method::access_flags_offset()));
       __ testl(temp, JVM_ACC_ABSTRACT);
-      __ jccb(Assembler::notZero, L_ok);
+      __ jcc(Assembler::notZero, L_ok);
       break;

     default:


@@ -59,17 +59,10 @@ void SharedRuntime::inline_check_hashcode_from_object_header(MacroAssembler* mas
   __ movptr(result, Address(obj_reg, oopDesc::mark_offset_in_bytes()));

-  if (LockingMode == LM_LIGHTWEIGHT) {
-    if (!UseObjectMonitorTable) {
-      // check if monitor
-      __ testptr(result, markWord::monitor_value);
-      __ jcc(Assembler::notZero, slowCase);
-    }
-  } else {
-    // check if locked
-    __ testptr(result, markWord::unlocked_value);
-    __ jcc(Assembler::zero, slowCase);
+  if (!UseObjectMonitorTable) {
+    // check if monitor
+    __ testptr(result, markWord::monitor_value);
+    __ jcc(Assembler::notZero, slowCase);
   }

   // get hash
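With lightweight locking as the only remaining mode, the slow-path test above reduces to a single tag check on the mark word. A hedged Java model of that check (the bit value mirrors HotSpot's `markWord::monitor_value` as an assumption; the class is illustrative):

```java
public class MarkWordCheck {
    static final long MONITOR_VALUE = 0b10; // assumed markWord::monitor_value tag

    // Mirrors the simplified check: only an inflated monitor, and only when
    // no object-monitor table is in use, forces the hashCode slow case.
    static boolean needsSlowCase(long markWord, boolean useObjectMonitorTable) {
        return !useObjectMonitorTable && (markWord & MONITOR_VALUE) != 0;
    }

    public static void main(String[] args) {
        System.out.println(needsSlowCase(0b01, false)); // unlocked mark: false
        System.out.println(needsSlowCase(0b10, false)); // monitor tag: true
        System.out.println(needsSlowCase(0b10, true));  // table handles it: false
    }
}
```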


@@ -2133,7 +2133,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
   // We use the same pc/oopMap repeatedly when we call out

   Label native_return;
-  if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
+  if (method->is_object_wait0()) {
     // For convenience we use the pc we want to resume to in case of preemption on Object.wait.
     __ set_last_Java_frame(rsp, noreg, native_return, rscratch1);
   } else {
@@ -2174,16 +2174,11 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
   const Register swap_reg = rax; // Must use rax for cmpxchg instruction
   const Register obj_reg  = rbx; // Will contain the oop
   const Register lock_reg = r13; // Address of compiler lock object (BasicLock)
-  const Register old_hdr  = r13; // value of old header at unlock time

   Label slow_path_lock;
   Label lock_done;

   if (method->is_synchronized()) {
-    Label count_mon;
-
-    const int mark_word_offset = BasicLock::displaced_header_offset_in_bytes();

     // Get the handle (the 2nd argument)
     __ mov(oop_handle_reg, c_rarg1);
@@ -2194,47 +2189,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
     // Load the oop from the handle
     __ movptr(obj_reg, Address(oop_handle_reg, 0));

-    if (LockingMode == LM_MONITOR) {
-      __ jmp(slow_path_lock);
-    } else if (LockingMode == LM_LEGACY) {
-      // Load immediate 1 into swap_reg %rax
-      __ movl(swap_reg, 1);
-
-      // Load (object->mark() | 1) into swap_reg %rax
-      __ orptr(swap_reg, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
-
-      // Save (object->mark() | 1) into BasicLock's displaced header
-      __ movptr(Address(lock_reg, mark_word_offset), swap_reg);
-
-      // src -> dest iff dest == rax else rax <- dest
-      __ lock();
-      __ cmpxchgptr(lock_reg, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
-      __ jcc(Assembler::equal, count_mon);
-
-      // Hmm should this move to the slow path code area???
-
-      // Test if the oopMark is an obvious stack pointer, i.e.,
-      //  1) (mark & 3) == 0, and
-      //  2) rsp <= mark < mark + os::pagesize()
-      // These 3 tests can be done by evaluating the following
-      // expression: ((mark - rsp) & (3 - os::vm_page_size())),
-      // assuming both stack pointer and pagesize have their
-      // least significant 2 bits clear.
-      // NOTE: the oopMark is in swap_reg %rax as the result of cmpxchg
-
-      __ subptr(swap_reg, rsp);
-      __ andptr(swap_reg, 3 - (int)os::vm_page_size());
-
-      // Save the test result, for recursive case, the result is zero
-      __ movptr(Address(lock_reg, mark_word_offset), swap_reg);
-      __ jcc(Assembler::notEqual, slow_path_lock);
-
-      __ bind(count_mon);
-      __ inc_held_monitor_count();
-    } else {
-      assert(LockingMode == LM_LIGHTWEIGHT, "must be");
-      __ lightweight_lock(lock_reg, obj_reg, swap_reg, rscratch1, slow_path_lock);
-    }
+    __ lightweight_lock(lock_reg, obj_reg, swap_reg, rscratch1, slow_path_lock);

     // Slow path will re-enter here
     __ bind(lock_done);
@@ -2322,7 +2277,7 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
   // change thread state
   __ movl(Address(r15_thread, JavaThread::thread_state_offset()), _thread_in_Java);

-  if (LockingMode != LM_LEGACY && method->is_object_wait0()) {
+  if (method->is_object_wait0()) {
     // Check preemption for Object.wait()
     __ movptr(rscratch1, Address(r15_thread, JavaThread::preempt_alternate_return_offset()));
     __ cmpptr(rscratch1, NULL_WORD);
@@ -2354,38 +2309,12 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
     // Get locked oop from the handle we passed to jni
     __ movptr(obj_reg, Address(oop_handle_reg, 0));

-    if (LockingMode == LM_LEGACY) {
-      Label not_recur;
-      // Simple recursive lock?
-      __ cmpptr(Address(rsp, lock_slot_offset * VMRegImpl::stack_slot_size), NULL_WORD);
-      __ jcc(Assembler::notEqual, not_recur);
-      __ dec_held_monitor_count();
-      __ jmpb(fast_done);
-      __ bind(not_recur);
-    }
-
     // Must save rax if it is live now because cmpxchg must use it
     if (ret_type != T_FLOAT && ret_type != T_DOUBLE && ret_type != T_VOID) {
       save_native_result(masm, ret_type, stack_slots);
     }

-    if (LockingMode == LM_MONITOR) {
-      __ jmp(slow_path_unlock);
-    } else if (LockingMode == LM_LEGACY) {
-      // get address of the stack lock
-      __ lea(rax, Address(rsp, lock_slot_offset * VMRegImpl::stack_slot_size));
-      // get old displaced header
-      __ movptr(old_hdr, Address(rax, 0));
-
-      // Atomic swap old header if oop still contains the stack lock
-      __ lock();
-      __ cmpxchgptr(old_hdr, Address(obj_reg, oopDesc::mark_offset_in_bytes()));
-      __ jcc(Assembler::notEqual, slow_path_unlock);
-      __ dec_held_monitor_count();
-    } else {
-      assert(LockingMode == LM_LIGHTWEIGHT, "must be");
-      __ lightweight_unlock(obj_reg, swap_reg, lock_reg, slow_path_unlock);
-    }
+    __ lightweight_unlock(obj_reg, swap_reg, lock_reg, slow_path_unlock);

     // slow path re-enters here
     __ bind(unlock_done);


@@ -3689,6 +3689,9 @@ void StubGenerator::generate_libm_stubs() {
   if (vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dtan)) {
     StubRoutines::_dtan = generate_libmTan(); // from stubGenerator_x86_64_tan.cpp
   }
+  if (vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dsinh)) {
+    StubRoutines::_dsinh = generate_libmSinh(); // from stubGenerator_x86_64_sinh.cpp
+  }
   if (vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dtanh)) {
     StubRoutines::_dtanh = generate_libmTanh(); // from stubGenerator_x86_64_tanh.cpp
   }
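The new stub provides a hand-optimized body for the `Math.sinh` intrinsic; whatever code it emits, the result must agree with the library method it replaces. A small Java check of that contract (`Math.sinh` documents a 2.5-ulp accuracy bound, so comparing against `StrictMath` with a loose tolerance is a reasonable sanity test, not the official conformance test):

```java
public class SinhCheck {
    public static void main(String[] args) {
        // Math.sinh may be intrinsified; StrictMath.sinh is the reference.
        double x = 1.0;
        double diff = Math.abs(Math.sinh(x) - StrictMath.sinh(x));
        System.out.println(diff < 1e-13); // prints true
    }
}
```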


@@ -555,6 +555,7 @@ class StubGenerator: public StubCodeGenerator {
   address generate_libmSin();
   address generate_libmCos();
   address generate_libmTan();
+  address generate_libmSinh();
   address generate_libmTanh();
   address generate_libmCbrt();
   address generate_libmExp();

@@ -1,6 +1,6 @@
 /*
  * Copyright (c) 2016, 2025, Intel Corporation. All rights reserved.
- * Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
+ * Copyright (C) 2021, Tencent. All rights reserved.
  * Intel Math Library (LIBM) Source Code
  *
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.


@@ -1,6 +1,6 @@
 /*
  * Copyright (c) 2016, 2025, Intel Corporation. All rights reserved.
- * Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
+ * Copyright (C) 2021, Tencent. All rights reserved.
  * Intel Math Library (LIBM) Source Code
  *
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.


@@ -1,6 +1,6 @@
 /*
  * Copyright (c) 2016, 2025, Intel Corporation. All rights reserved.
- * Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
+ * Copyright (C) 2021, Tencent. All rights reserved.
  * Intel Math Library (LIBM) Source Code
  *
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.

Some files were not shown because too many files have changed in this diff.