Compare commits

307 Commits

Author SHA1 Message Date
Andrey Turbanov
68815d54c1 8314734: Remove unused field TypeVariableImpl.EMPTY_ANNOTATION_ARRAY
Reviewed-by: bpb, darcy
2023-08-23 20:41:28 +00:00
Alexander Matveev
57a322da9b 8308042: [macos] Developer ID Application Certificate not picked up by jpackage if it contains UNICODE characters
Reviewed-by: asemenyuk
2023-08-23 20:22:12 +00:00
Chris Plummer
38a9edfb7e 8314679: SA fails to properly attach to JVM after having just detached from a different JVM
Reviewed-by: dholmes, kevinw
2023-08-23 20:11:10 +00:00
Chris Plummer
2c60cadfde 8280743: HSDB "Monitor Cache Dump" command might throw NPE
Reviewed-by: kevinw, sspitsyn
2023-08-23 19:12:35 +00:00
Ben Perez
9435cd1916 8175874: Update Security.insertProviderAt to specify behavior when requested position is out of range.
Reviewed-by: mullan, valeriep
2023-08-23 18:10:11 +00:00
lawrence.andrews
dbb788f34d 8294535: Add screen capture functionality to PassFailJFrame
Co-authored-by: Alexey Ivanov <aivanov@openjdk.org>
Reviewed-by: aivanov, honkar
2023-08-23 17:48:07 +00:00
Andrey Turbanov
fae3b02aeb 8314746: Remove unused private put* methods from DirectByteBufferR
Reviewed-by: alanb, bpb
2023-08-23 17:36:46 +00:00
Brian Burkhalter
096b7ff097 8314810: (fs) java/nio/file/Files/CopyInterference.java should use TestUtil::supportsLinks
Reviewed-by: aturbanov, alanb
2023-08-23 15:31:33 +00:00
Alexey Ivanov
62610203f1 8312555: Ideographic characters aren't stretched by AffineTransform.scale(2, 1)
Ignore bitmaps embedded into fonts for non-uniform scales

Reviewed-by: prr, serb
2023-08-23 11:48:22 +00:00
Matthias Baesken
703817d21f 8314517: some tests fail in case ipv6 is disabled on the machine
Reviewed-by: mdoerr, lucy, jpai, dfuchs
2023-08-23 10:44:40 +00:00
Thomas Schatzl
742e319a21 8314157: G1: "yielded" is not initialized on some paths after JDK-8140326
Reviewed-by: ayang, iwalulya
2023-08-23 09:45:25 +00:00
Roland Westrelin
1cee3b9fd9 8313262: C2: Sinking node may cause required cast to be dropped
Reviewed-by: chagedorn, thartmann
2023-08-23 08:59:36 +00:00
Tobias Holenstein
f8203cb272 8313626: C2 crash due to unexpected exception control flow
Reviewed-by: thartmann, chagedorn
2023-08-23 08:47:33 +00:00
Aleksey Shipilev
2be469f89e 8314743: Use of uninitialized local in SR_initialize after JDK-8314114
Reviewed-by: dholmes, coleenp
2023-08-23 07:17:29 +00:00
Jan Kratochvil
571c435e1a 8313374: --enable-ccache's CCACHE_BASEDIR breaks builds
Reviewed-by: erikj
2023-08-23 06:26:18 +00:00
Kimura Yukihiro
d1de3d082e 8313901: [TESTBUG] test/hotspot/jtreg/compiler/codecache/CodeCacheFullCountTest.java fails with java.lang.VirtualMachineError
Reviewed-by: shade, thartmann
2023-08-23 06:04:28 +00:00
Thomas Stuefe
a0d0f21f08 8314752: Use google test string comparison macros
Reviewed-by: coleenp, kbarrett
2023-08-23 05:26:05 +00:00
Andrew John Hughes
7e843c22e7 8284772: GHA: Use GCC Major Version Dependencies Only
Reviewed-by: jwaters, shade, stuefe, erikj, serb
Backport-of: 62defc3dfc4b9ba5adfe3189f34fe8b3f59b94a0
2023-08-23 03:28:23 +00:00
Valerie Peng
ba6cdbe2c2 8309214: sun/security/pkcs11/KeyStore/CertChainRemoval.java fails after 8301154
Reviewed-by: mbaesken, jnimeh
2023-08-22 23:49:03 +00:00
Calvin Cheung
9f4a9fe488 8312434: SPECjvm2008/xml.transform with CDS fails with "can't seal package nu.xom"
Reviewed-by: iklam, matsaave
2023-08-22 22:37:16 +00:00
Chris Plummer
7c169a426f 8312232: Remove sun.jvm.hotspot.runtime.VM.buildLongFromIntsPD()
Reviewed-by: lmesnik, kevinw
2023-08-22 20:57:11 +00:00
Brian Burkhalter
2eae13c669 8214248: (fs) Files:mismatch spec clarifications
Reviewed-by: alanb
2023-08-22 19:04:46 +00:00
Albert Mingkun Yang
ce1ded1a4f 8314749: Remove unimplemented _Copy_conjoint_oops_atomic
Reviewed-by: dcubed
2023-08-22 17:23:37 +00:00
Albert Mingkun Yang
32bf468c3b 8314274: G1: Fix -Wconversion warnings around G1CardSetArray::_data
Reviewed-by: kbarrett, tschatzl
2023-08-22 17:21:44 +00:00
Alexey Ivanov
eb065726f2 8313408: Use SVG for BoxLayout example
Reviewed-by: serb, tr, prr
2023-08-22 17:14:29 +00:00
Thomas Stuefe
20e94784c9 8314426: runtime/os/TestTrimNative.java is failing on slow machines
Reviewed-by: mbaesken, mdoerr, shade
2023-08-22 14:00:47 +00:00
Aleksey Shipilev
69d900d2ce 8314730: GHA: Drop libfreetype6-dev transitional package in favor of libfreetype-dev
Reviewed-by: andrew, erikj
2023-08-22 13:37:21 +00:00
Pavel Rappo
f39fc0aa2d 8314738: Remove all occurrences of and support for @revised
Reviewed-by: mr
2023-08-22 13:02:53 +00:00
Daohan Qu
6b9df037e4 8311240: Eliminate usage of testcases.jar from TestMetaSpaceLog.java
Reviewed-by: ayang, tschatzl
2023-08-22 12:51:59 +00:00
bobpengxie
3e1b1bf94e 8314688: VM build without C1 fails after JDK-8313372
Reviewed-by: yzheng, dnsimon, haosun
2023-08-22 09:21:25 +00:00
Cesar Soares Lucas
02ef859f79 8313689: C2: compiler/c2/irTests/scalarReplacement/AllocationMergesTests.java fails intermittently with -XX:-TieredCompilation
Reviewed-by: kvn, thartmann
2023-08-22 07:58:51 +00:00
Julian Waters
ab86d23adf 8250269: Replace ATTRIBUTE_ALIGNED with alignas
Reviewed-by: rkennke, kbarrett
2023-08-22 06:12:28 +00:00
Gui Cao
a66b5df14a 8314618: RISC-V: -XX:MaxVectorSize does not work as expected
Reviewed-by: fyang, dzhang
2023-08-22 02:47:52 +00:00
Sergey Bylokhov
87298d2ade 8312535: MidiSystem.getSoundbank() throws unexpected SecurityException
Reviewed-by: prr
2023-08-22 01:44:16 +00:00
Daniel D. Daugherty
78f74bc8ff 8314672: ProblemList runtime/cds/appcds/customLoader/HelloCustom_JFR.java on linux-all and windows-x64
Reviewed-by: azvegint
2023-08-21 17:13:48 +00:00
Leo Korinth
17a19dc060 8311639: Replace currentTimeMillis() with nanoTime() in jtreg/gc
Reviewed-by: stefank, ayang
2023-08-21 12:19:36 +00:00
Albert Mingkun Yang
0b3f452d25 8314161: G1: Fix -Wconversion warnings in G1CardSetConfiguration::_bitmap_hash_mask
Reviewed-by: tschatzl, iwalulya
2023-08-21 12:17:38 +00:00
Albert Mingkun Yang
abac60851c 8313962: G1: Refactor G1ConcurrentMark::_num_concurrent_workers
Reviewed-by: tschatzl, iwalulya
2023-08-21 12:15:26 +00:00
Aleksey Shipilev
812f475bc4 8314501: Shenandoah: sun/tools/jhsdb/heapconfig/JMapHeapConfigTest.java fails
Reviewed-by: cjplummer, sspitsyn
2023-08-21 09:02:01 +00:00
Thomas Schatzl
8939d15d92 8314100: G1: Improve collection set candidate selection code
Reviewed-by: ayang, iwalulya
2023-08-21 08:28:31 +00:00
Sidraya
ec1f7a8480 8311630: [s390] Implementation of Foreign Function & Memory API (Preview)
Reviewed-by: amitkumar, jvernee, mdoerr
2023-08-21 07:15:25 +00:00
Christian Stein
c50315de8f 8314495: Update to use jtreg 7.3.1
Reviewed-by: dholmes, erikj, iris, jpai
2023-08-21 06:30:56 +00:00
Alan Bateman
ed0f75f266 8313290: Misleading exception message from STS.Subtask::get when task forked after shutdown
Reviewed-by: psandoz
2023-08-19 18:42:43 +00:00
Xin Liu
febc34dd28 8314610: hotspot can't compile with the latest of gtest because of <iomanip>
Reviewed-by: jiefu, stuefe
2023-08-19 17:42:30 +00:00
Leonid Mesnik
58f5826ff4 8311222: strace004 can fail due to unexpected stack length after JDK-8309408
Reviewed-by: dholmes, alanb
2023-08-19 01:46:40 +00:00
Tyler Steele
395fc78880 8309475: Test java/foreign/TestByteBuffer.java fails: a problem with msync (aix)
Reviewed-by: mbaesken, alanb, mdoerr
2023-08-18 20:11:24 +00:00
Leonid Mesnik
f481477144 8314320: Mark runtime/CommandLine/ tests as flagless
Reviewed-by: dholmes
2023-08-18 17:53:07 +00:00
Chris Plummer
fbe28ee90d 8314481: JDWPTRANSPORT_ERROR_INTERNAL code in socketTransport.c can never be executed
Reviewed-by: dcubed, sspitsyn
2023-08-18 17:46:36 +00:00
Mandy Chung
50a2ce01f4 8310815: Clarify the name of the main class, services and provider classes in module descriptor
8314449: Clarify the name of the declaring class of StackTraceElement

Reviewed-by: alanb
2023-08-18 17:10:39 +00:00
Pavel Rappo
aecbb1b5c3 8314448: Coordinate DocLint and JavaDoc to report on unknown tags
Reviewed-by: jjg
2023-08-18 16:40:51 +00:00
Fredrik Bredberg
bcba5e9785 8313419: Template interpreter produces no safepoint check for return bytecodes
Reviewed-by: pchilanomate
2023-08-18 14:33:03 +00:00
Fredrik Bredberg
c36e009772 8308984: Relativize last_sp (and top_frame_sp) in interpreter frames
Reviewed-by: pchilanomate, aph, haosun
2023-08-18 14:29:28 +00:00
Tyler Steele
fdac6a6ac8 8312180: (bf) MappedMemoryUtils passes incorrect arguments to msync (aix)
Reviewed-by: clanger, stuefe
2023-08-18 13:58:58 +00:00
Coleen Phillimore
752121114f 8314265: Fix -Wconversion warnings in miscellaneous runtime code
Reviewed-by: stuefe, dholmes, chagedorn
2023-08-18 12:06:02 +00:00
Alexander Zvegintsev
2f04bc5f93 8313697: [XWayland][Screencast] consequent getPixelColor calls are slow
8310334: [XWayland][Screencast] screen capture error message in debug

Reviewed-by: serb, prr
2023-08-18 10:44:20 +00:00
Andrei Rybak
33d5dfdab3 8314543: gitattributes: make diffs easier to read
Git supports special hunk headers for several languages in diff output,
which make it easier to read diffs of files in that language, generated
by Git (git-diff, git-show, `git log -p`, etc).  For details, see
`git help gitattributes` or the online documentation.[1]

Add entries to the root .gitattributes file to support showing the hunk
headers for Java, C, C++, Markdown, Shell script, HTML, and CSS.  This
makes it easier to read diffs generated by Git.

[1] https://git-scm.com/docs/gitattributes

Reviewed-by: erikj, ksakata
2023-08-18 07:48:50 +00:00
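(Illustrative aside, not part of the commit message: with the `diff=java` attribute in effect, Git's built-in `java` hunk-header pattern shows the enclosing declaration after the `@@ ... @@` markers. A minimal sketch, assuming a checkout of this repository; the path below is only an example:)

```sh
# Ask Git which diff driver the .gitattributes file assigns to a path:
git check-attr diff -- src/java.base/share/classes/java/lang/String.java
# prints: src/java.base/share/classes/java/lang/String.java: diff: java

# With the driver active, hunk headers carry the enclosing declaration,
# e.g. "@@ -120,7 +120,8 @@ public int length() {" instead of a bare "@@ ... @@".
git log -p -1 -- src/java.base/share/classes/java/lang/String.java
```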
Matthias Baesken
5058854b86 8314389: AttachListener::pd_set_flag obsolete
Reviewed-by: cjplummer, mdoerr, sspitsyn
2023-08-18 06:45:18 +00:00
Thomas Stuefe
891c3f4cca 8307356: Metaspace: simplify BinList handling
Reviewed-by: rkennke, coleenp
2023-08-18 05:51:05 +00:00
Ioi Lam
0299364d85 8314249: Refactor handling of invokedynamic in JVMCI ConstantPool
Reviewed-by: dnsimon, coleenp
2023-08-17 22:52:05 +00:00
Justin Lu
96778dd549 8314169: Combine related RoundingMode logic in j.text.DigitList
Reviewed-by: naoto
2023-08-17 22:41:21 +00:00
Harshitha Onkar
808bb1f7bc 8314246: javax/swing/JToolBar/4529206/bug4529206.java fails intermittently on Linux
Reviewed-by: dnguyen, serb
2023-08-17 20:37:06 +00:00
Joe Darcy
6445314fec 8314477: Improve definition of "prototypical type"
Reviewed-by: prappo
2023-08-17 20:25:46 +00:00
Andrey Turbanov
d27daf01d6 8314129: Make fields final in java.util.Scanner
Reviewed-by: stsypanov, liach, alanb
2023-08-17 18:32:06 +00:00
Andrey Turbanov
a8ab3be371 8314261: Make fields final in sun.net.www
Reviewed-by: redestad, jpai, dfuchs
2023-08-17 17:54:02 +00:00
Joe Darcy
3bb8afba69 8314489: Add javadoc index entries for java.lang.Math terms
Reviewed-by: alanb
2023-08-17 17:32:49 +00:00
Daniel D. Daugherty
2505cebc5d 8314533: ProblemList runtime/cds/appcds/customLoader/HelloCustom_JFR.java on linux-all with ZGC
Reviewed-by: azvegint
2023-08-17 17:05:54 +00:00
Erik Joelsson
b33ff30d70 8313661: [REDO] Relax prerequisites for java.base-jmod target
Reviewed-by: alanb
2023-08-17 16:54:36 +00:00
Chris Plummer
62ca00158c 8313357: Revisit requiring SA tests on OSX to either run as root or use sudo
Reviewed-by: dholmes, amenkov
2023-08-17 15:26:45 +00:00
Chris Plummer
388dcff725 8282712: VMConnection.open() does not detect if VM failed to be created, resulting in NPE
Reviewed-by: sspitsyn, amenkov
2023-08-17 15:09:09 +00:00
Robbin Ehn
e8f6b3e497 8314268: Missing include in assembler_riscv.hpp
Reviewed-by: shade, fyang
2023-08-17 14:45:59 +00:00
Claes Redestad
c634bdf9d9 8314444: Update jib-profiles.js to use JMH 1.37 devkit
Reviewed-by: shade, mikael, erikj
2023-08-17 11:54:24 +00:00
Per Minborg
2b81885f78 8314071: Test java/foreign/TestByteBuffer.java timed out
Reviewed-by: mcimadamore
2023-08-17 11:31:09 +00:00
Cristian Vat
32efd23c5d 8311939: Excessive allocation of Matcher.groups array
Reviewed-by: rriggs, igraves
2023-08-17 11:27:39 +00:00
Alan Bateman
ed585d16b9 8314280: StructuredTaskScope.shutdown should document that the state of completing subtasks is not defined
Reviewed-by: psandoz
2023-08-17 08:02:53 +00:00
Pavel Rappo
6f1071f5ed 8314213: DocLint should warn about unknown standard tags
Reviewed-by: jjg
2023-08-17 07:43:07 +00:00
Aggelos Biboudis
4331193010 8314423: Multiple patterns without unnamed variables
8314216: Case enumConstant, pattern compilation fails

Reviewed-by: jlahoda
2023-08-17 07:33:16 +00:00
Andrey Turbanov
249dc37426 8314321: Remove unused field jdk.internal.util.xml.impl.Attrs.mAttrIdx
Reviewed-by: alanb, vtewari, bpb
2023-08-17 07:13:38 +00:00
Sergey Bylokhov
b78f5a1068 8314076: ICC_ColorSpace#minVal/maxVal have the opposite description
Reviewed-by: azvegint
2023-08-17 05:33:44 +00:00
Kim Barrett
2a1176b544 8314276: Improve PtrQueue API around size/capacity
Reviewed-by: iwalulya, tschatzl
2023-08-17 05:06:11 +00:00
Joe Darcy
0c3bc71d24 8281169: Expand discussion of elements and types
Reviewed-by: mcimadamore, prappo
2023-08-16 20:31:51 +00:00
Ben Perez
f143380d01 8314240: test/jdk/sun/security/pkcs/pkcs7/SignerOrder.java fails to compile
Reviewed-by: mullan
2023-08-16 19:56:13 +00:00
Brian Burkhalter
6b396da278 8062795: (fs) Files.setPermissions requires read access when NOFOLLOW_LINKS specified
Reviewed-by: alanb
2023-08-16 17:53:56 +00:00
Leonid Mesnik
7b28d3608a 8314330: java/foreign tests should respect vm flags when start new processes
Reviewed-by: jvernee
2023-08-16 17:49:38 +00:00
Glavo
b32d6411c4 8311943: Cleanup usages of toLowerCase() and toUpperCase() in java.base
Reviewed-by: naoto
2023-08-16 17:37:21 +00:00
Lance Andersen
13f6450e2e 8313765: Invalid CEN header (invalid zip64 extra data field size)
Reviewed-by: simonis, alanb, coffeys
2023-08-16 15:42:36 +00:00
Ralf Schmelter
24e896d7c9 8310275: Bug in assignment operator of ReservedMemoryRegion
Reviewed-by: jsjolen, dholmes, stuefe
2023-08-16 15:00:50 +00:00
Thomas Schatzl
1925508425 8314144: gc/g1/ihop/TestIHOPStatic.java fails due to extra concurrent mark with -Xcomp
Reviewed-by: ayang, iwalulya
2023-08-16 12:08:56 +00:00
Raffaello Giulietti
b80001de0c 8314209: Wrong @since tag for RandomGenerator::equiDoubles
Reviewed-by: alanb
2023-08-16 08:21:34 +00:00
Matthias Baesken
ef6db5c299 8314211: Add NativeLibraryUnload event
Reviewed-by: stuefe, mdoerr
2023-08-16 07:39:42 +00:00
Christian Hagedorn
49ddb19972 8313760: [REDO] Enhance AES performance
Co-authored-by: Andrew Haley <aph@openjdk.org>
Reviewed-by: adinn, aph, sviswanathan, rhalade, kvn, dlong
2023-08-16 07:21:04 +00:00
Emanuel Peter
d46f0fb318 8313720: C2 SuperWord: wrong result with -XX:+UseVectorCmov -XX:+UseCMoveUnconditionally
Reviewed-by: chagedorn, thartmann
2023-08-16 07:15:43 +00:00
Aleksey Shipilev
38687f1a3e 8314262: GHA: Cut down cross-compilation sysroots deeper
Reviewed-by: erikj
2023-08-16 07:04:25 +00:00
Aleksey Shipilev
a602624ef4 8314020: Print instruction blocks in byte units
Reviewed-by: stuefe, fyang
2023-08-16 07:02:48 +00:00
Christian Hagedorn
0b12480de8 8314233: C2: assert(assertion_predicate_has_loop_opaque_node(iff)) failed: unexpected
Reviewed-by: thartmann, kvn
2023-08-16 06:58:23 +00:00
Tom Rodriguez
e1fdef5613 8314324: "8311557: [JVMCI] deadlock with JVMTI thread suspension" causes various failures
Reviewed-by: cjplummer, thartmann
2023-08-16 06:06:59 +00:00
Prasanta Sadhukhan
2bd2faeb76 4346610: Adding JSeparator to JToolBar "pushes" buttons added after separator to edge
Reviewed-by: tr, aivanov, dnguyen
2023-08-16 05:35:40 +00:00
Thomas Stuefe
6a15860b12 8314163: os::print_hex_dump prints incorrectly for big endian platforms and unit sizes larger than 1
Reviewed-by: mbaesken, shade
2023-08-16 05:14:40 +00:00
Leonid Mesnik
6bf4a33593 8314242: Update applications/scimark/Scimark.java to accept VM flags
Reviewed-by: dholmes
2023-08-16 00:15:55 +00:00
Christoph Schwentker
bc8e9f44a3 8311591: Add SystemModulesPlugin test case that splits module descriptors with new local variables defined by DedupSetBuilder
Reviewed-by: mchung
2023-08-15 22:34:37 +00:00
Chris Plummer
0f5e030bad 8309335: Get rid of use of reflection to call Thread.isVirtual() in nsk/jdi/EventRequestManager/stepRequests/stepreq001t.java
Reviewed-by: lmesnik, sspitsyn, alanb
2023-08-15 20:55:27 +00:00
Mikael Vidstedt
f66c73d34b 8314166: Update googletest to v1.14.0
Reviewed-by: kbarrett, stuefe, shade, erikj
2023-08-15 19:52:56 +00:00
Gerard Ziemski
f239954657 8310134: NMT: thread count in Thread section of VM.native_memory output confusing with virtual threads
Reviewed-by: jsjolen, dholmes, alanb
2023-08-15 17:06:28 +00:00
Aleksey Shipilev
2e8a0ab272 8314120: Add tests for FileDescriptor.sync
Reviewed-by: alanb, bpb
2023-08-15 16:11:09 +00:00
Ioi Lam
80809ef4cc 8314248: Remove HotSpotConstantPool::isResolvedDynamicInvoke
Reviewed-by: thartmann, dnsimon
2023-08-15 15:54:44 +00:00
Tom Rodriguez
004651ddc2 8311557: [JVMCI] deadlock with JVMTI thread suspension
Reviewed-by: thartmann, dnsimon
2023-08-15 15:44:33 +00:00
Coleen Phillimore
9ded86821b 8314114: Fix -Wconversion warnings in os code, primarily linux
Reviewed-by: dholmes, dlong
2023-08-15 11:05:31 +00:00
Emanuel Peter
a02d65efcc 8310308: IR Framework: check for type and size of vector nodes
Reviewed-by: chagedorn, thartmann
2023-08-15 10:08:51 +00:00
Thomas Stuefe
dff99f7f3d 8313782: Add user-facing warning if THPs are enabled but cannot be used
Reviewed-by: dholmes, sjohanss
2023-08-15 09:09:02 +00:00
Dmitry Cherepanov
f4e72c58d7 8313949: Missing word in GPLv2 license text in StackMapTableAttribute.java
Reviewed-by: iris
2023-08-15 08:43:38 +00:00
Matthias Baesken
6338927221 8314197: AttachListener::pd_find_operation always returning nullptr
Reviewed-by: dholmes, cjplummer, sspitsyn
2023-08-15 07:48:38 +00:00
David Holmes
b7dee213df 8314244: Incorrect file headers in new tests from JDK-8312597
Reviewed-by: lmesnik, kvn
2023-08-15 04:29:25 +00:00
Fei Gao
37c6b23f5b 8308340: C2: Idealize Fma nodes
Reviewed-by: kvn, epeter
2023-08-15 01:04:22 +00:00
Yasumasa Suenaga
583cb754f3 8313406: nep_invoker_blob can be simplified more
Reviewed-by: jvernee, vlivanov
2023-08-14 23:12:42 +00:00
Ben Taylor
0074b48ad7 8312597: Convert TraceTypeProfile to UL
Reviewed-by: shade, phh
2023-08-14 22:50:37 +00:00
Sean Mullan
1f1c5c6f8d 8314241: Add test/jdk/sun/security/pkcs/pkcs7/SignerOrder.java to ProblemList
Reviewed-by: dcubed, dholmes
2023-08-14 22:23:11 +00:00
David Holmes
f142470dea 8311981: Test gc/stringdedup/TestStringDeduplicationAgeThreshold.java#ZGenerational timed out
Reviewed-by: stefank, pchilanomate, dcubed, rehn
2023-08-14 21:18:57 +00:00
Ben Perez
595fdd36c5 8314059: Remove PKCS7.verify()
Reviewed-by: mullan
2023-08-14 18:39:18 +00:00
Kimura Yukihiro
49b29845f7 8313854: Some tests in serviceability area fail on localized Windows platform
Reviewed-by: amenkov, cjplummer
2023-08-14 18:26:55 +00:00
Brian Burkhalter
c132176b93 8114830: (fs) Files.copy fails due to interference from something else changing the file system
Reviewed-by: alanb, vtewari
2023-08-14 17:48:50 +00:00
Weibing Xiao
e56d3bc2da 8313657: com.sun.jndi.ldap.Connection.cleanup does not close connections on SocketTimeoutErrors
Reviewed-by: vtewari, msheppar, aefimov
2023-08-14 17:38:53 +00:00
Oli Gillespie
4b2703ad39 8313678: SymbolTable can leak Symbols during cleanup
Reviewed-by: coleenp, dholmes, shade
2023-08-14 15:58:03 +00:00
Liam Miller-Cushon
f41c267f85 8314045: ArithmeticException in GaloisCounterMode
Co-authored-by: Ioana Nedelcu <ioannanedelcu@google.com>
Reviewed-by: ascarpino
2023-08-14 15:51:18 +00:00
Ioi Lam
911d1dbbf7 8314078: HotSpotConstantPool.lookupField() asserts due to field changes in ConstantPool.cpp
Reviewed-by: dnsimon, coleenp
2023-08-14 15:37:44 +00:00
Christian Stein
6574dd796d 8314025: Remove JUnit-based test in java/lang/invoke from problem list
Reviewed-by: dholmes, jpai
2023-08-14 13:38:22 +00:00
Christian Hagedorn
207bd00c51 8313756: [BACKOUT] 8308682: Enhance AES performance
Reviewed-by: thartmann
2023-08-14 12:08:16 +00:00
Afshin Zafari
823f5b930c 8308850: Change JVM options with small ranges that get -Wconversion warnings to 32 bits
Reviewed-by: dholmes, coleenp, dlong
2023-08-14 11:57:17 +00:00
Albert Mingkun Yang
5bfb82e6fa 8314119: G1: Fix -Wconversion warnings in G1CardSetInlinePtr::card_pos_for
Reviewed-by: tschatzl, kbarrett
2023-08-14 11:08:31 +00:00
Aleksey Shipilev
06aa3c5628 8314118: Update JMH devkit to 1.37
Reviewed-by: erikj, redestad
2023-08-14 10:04:55 +00:00
Yudi Zheng
4164693f3b 8313372: [JVMCI] Export vmIntrinsics::is_intrinsic_available results to JVMCI compilers.
Reviewed-by: dnsimon, kvn
2023-08-14 08:56:15 +00:00
Stefan Karlsson
049b55f24e 8314019: Add gc logging to jdk/jfr/event/gc/detailed/TestZAllocationStallEvent.java
Reviewed-by: aboldtch, eosterlund
2023-08-14 08:45:16 +00:00
Christian Hagedorn
a39ed1087b 8314116: C2: assert(false) failed: malformed control flow after JDK-8305636
Reviewed-by: thartmann, kvn
2023-08-14 08:15:02 +00:00
Christian Hagedorn
1de5bf1ce9 8314106: C2: assert(is_valid()) failed: must be valid after JDK-8305636
Reviewed-by: thartmann, kvn
2023-08-14 08:14:42 +00:00
Feilong Jiang
5c91622885 8314117: RISC-V: Incorrect VMReg encoding in RISCV64Frame.java
Reviewed-by: fyang
2023-08-14 07:50:43 +00:00
Andrey Turbanov
6bbcef5315 8313948: Remove unnecessary static fields defaultUpper/defaultLower in sun.net.PortConfig
Reviewed-by: dfuchs
2023-08-14 07:04:29 +00:00
Andrey Turbanov
b88c273503 8313743: Make fields final in sun.nio.ch
Reviewed-by: bpb
2023-08-14 07:04:05 +00:00
Alexander Matveev
ec0cc6300a 8313904: [macos] All signing tests which verifies unsigned app images are failing
Reviewed-by: asemenyuk
2023-08-11 21:00:52 +00:00
Man Cao
7332502883 8314139: TEST_BUG: runtime/os/THPsInThreadStackPreventionTest.java could fail on machine with large number of cores
Reviewed-by: shade, stuefe
2023-08-11 20:43:31 +00:00
Chris Plummer
8f1c134848 8313798: [aarch64] sun/tools/jhsdb/HeapDumpTestWithActiveProcess.java sometimes times out on aarch64
Reviewed-by: kevinw, sspitsyn
2023-08-11 18:09:44 +00:00
Andreas Steiner
12326770dc 8313244: NM flags handling in configure process
Reviewed-by: clanger, jwaters, mbaesken, erikj
2023-08-11 13:21:46 +00:00
Albert Mingkun Yang
6ffc0324dc 8314113: G1: Remove unused G1CardSetInlinePtr::card_at
Reviewed-by: tschatzl
2023-08-11 12:19:39 +00:00
Johan Sjölen
62adeb08c3 8311648: Refactor the Arena/Chunk/ChunkPool interface
Reviewed-by: stuefe, coleenp
2023-08-11 09:32:45 +00:00
Ioi Lam
43462a36ab 8313224: Avoid calling JavaThread::current() in MemAllocator::Allocation constructor
Reviewed-by: tschatzl, coleenp
2023-08-11 03:39:39 +00:00
Mark Powers
9abb2a559e 8312461: JNI warnings in SunMSCApi provider
Reviewed-by: valeriep, djelinski
2023-08-10 23:43:38 +00:00
Jesse Glick
42758cb889 8312882: Update the CONTRIBUTING.md with pointers to lifecycle of a PR
Reviewed-by: erikj, jwilhelm
2023-08-10 22:26:32 +00:00
Calvin Cheung
88b4e3b853 8304292: Memory leak related to ClassLoader::update_class_path_entry_list
Reviewed-by: dholmes, iklam
2023-08-10 20:02:27 +00:00
Doug Simon
6f5c903d10 8313899: JVMCI exception Translation can fail in TranslatedException.<clinit>
Reviewed-by: never, thartmann
2023-08-10 18:53:02 +00:00
Damon Nguyen
d97de8260c 8313633: [macOS] java/awt/dnd/NextDropActionTest/NextDropActionTest.java fails with java.lang.RuntimeException: wrong next drop action!
Reviewed-by: honkar, serb
2023-08-10 17:52:28 +00:00
Xue-Lei Andrew Fan
79be8d9383 8312259: StatusResponseManager unused code clean up
Reviewed-by: mpowers, jnimeh
2023-08-10 17:15:56 +00:00
Tom Rodriguez
1875b2872b 8314061: [JVMCI] DeoptimizeALot stress logic breaks deferred barriers
Reviewed-by: thartmann, dnsimon
2023-08-10 16:40:28 +00:00
Coleen Phillimore
bd1b942741 8313905: Checked_cast assert in CDS compare_by_loader
Reviewed-by: dlong, iklam
2023-08-10 15:25:00 +00:00
Leonid Mesnik
9b53251131 8313654: Test WaitNotifySuspendedVThreadTest.java timed out
Reviewed-by: sspitsyn
2023-08-10 15:18:57 +00:00
Leonid Mesnik
e7c83ea948 8312194: test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_ec.java cannot handle empty modules
Reviewed-by: kvn, thartmann
2023-08-10 15:18:34 +00:00
Matthias Baesken
23fe2ece58 8313616: support loading library members on AIX in os::dll_load
Reviewed-by: mdoerr
2023-08-10 12:06:43 +00:00
Coleen Phillimore
f47767ffef 8313882: Fix -Wconversion warnings in runtime code
Reviewed-by: pchilanomate, dlong, dholmes
2023-08-10 11:57:25 +00:00
Jaikiran Pai
0cb9ab04f4 8313239: InetAddress.getCanonicalHostName may return ip address if reverse lookup fails
Reviewed-by: dfuchs, aefimov, alanb
2023-08-10 10:01:46 +00:00
Oli Gillespie
028b3ae1b1 8313874: JNI NewWeakGlobalRef throws exception for null arg
Reviewed-by: dholmes, kbarrett, shade
2023-08-10 08:51:50 +00:00
Doug Simon
83adaf5477 8313421: [JVMCI] avoid locking class loader in CompilerToVM.lookupType
Reviewed-by: never, thartmann
2023-08-10 08:17:03 +00:00
Per Minborg
35b60f925a 8298095: Refine implSpec for SegmentAllocator
Reviewed-by: mcimadamore
2023-08-10 07:57:19 +00:00
Matthias Baesken
6dba2026d7 8313670: Simplify shared lib name handling code in some tests
Reviewed-by: cjplummer, sspitsyn
2023-08-10 07:23:24 +00:00
Thomas Stuefe
8f28809aa8 8299790: os::print_hex_dump is racy
Reviewed-by: shade, dholmes
2023-08-10 07:21:47 +00:00
Axel Boldt-Christmas
e080a0b4c0 8311508: ZGC: RAII use of IntelJccErratumAlignment
Reviewed-by: stefank, shade, tschatzl
2023-08-10 07:18:31 +00:00
Axel Boldt-Christmas
242a2e63df 8308843: Generational ZGC: Remove gc/z/TestHighUsage.java
Reviewed-by: ayang, tschatzl
2023-08-10 07:16:36 +00:00
Sergey Tsypanov
c822183e98 8313768: Reduce interaction with volatile field in j.u.l.StreamHandler
Reviewed-by: dfuchs, jpai
2023-08-10 05:50:19 +00:00
Alexandre Iline
cd16158edb 8314075: Update JCov version for JDK 22
Reviewed-by: serb
2023-08-10 00:43:28 +00:00
Joe Darcy
c307391ab1 8307184: Incorrect/inconsistent specification and implementation for Elements.getDocComment
Reviewed-by: vromero, jjg
2023-08-09 21:17:10 +00:00
Pavel Rappo
593ba2fe47 8313693: Introduce an internal utility for the Damerau–Levenshtein distance calculation
Reviewed-by: jlahoda, jjg
2023-08-09 16:08:23 +00:00
Christian Stein
360f65d7b1 8314022: Problem-list tests failing with jtreg 7.3
Reviewed-by: dholmes
2023-08-09 14:00:21 +00:00
Markus Grönlund
0eb0997ae4 8288936: Wrong lock ordering writing G1HeapRegionTypeChange JFR event
Reviewed-by: egahlin
2023-08-09 13:34:04 +00:00
Pavel Rappo
19ae62ae2c 8311170: Simplify and modernize equals and hashCode in security area
Reviewed-by: djelinski, rriggs, valeriep
2023-08-09 12:34:40 +00:00
Daniel Jeliński
e9f751ab16 8311247: Some cpp files are compiled with -std:c11 flag
Reviewed-by: aivanov, jwaters, prr, erikj
2023-08-09 12:26:32 +00:00
Erik Gahlin
213d3c449a 8313891: JFR: Incorrect exception message for RecordedObject::getInt
Reviewed-by: mgronlun
2023-08-09 11:46:25 +00:00
Richard Startin
0e2c72d7a5 8313796: AsyncGetCallTrace crash on unreadable interpreter method pointer
Reviewed-by: coleenp, aph, stuefe
2023-08-09 11:23:32 +00:00
Hannes Wallnöfer
52ec4bcb1b 8303056: Improve support for Unicode characters and digits in JavaDoc search
Reviewed-by: jjg
2023-08-09 09:50:21 +00:00
Albert Mingkun Yang
9cf12bb977 8313922: Remove unused WorkerPolicy::_debug_perturbation
Reviewed-by: tschatzl
2023-08-09 09:13:34 +00:00
Matthias Baesken
6e3cc131da 8312467: relax the builddir check in make/autoconf/basic.m4
Reviewed-by: clanger, erikj
2023-08-09 07:08:52 +00:00
Hannes Wallnöfer
77e5739f60 8310118: Resource files should be moved to appropriate directories
Reviewed-by: jjg
2023-08-09 07:01:15 +00:00
Matthias Baesken
96304f37f8 8313691: use close after failing os::fdopen in vmError and ciEnv
Reviewed-by: dholmes, thartmann
2023-08-09 06:54:15 +00:00
Leonid Mesnik
3fb4805b1a 8307462: [REDO] VmObjectAlloc is not generated by intrinsics methods which allocate objects
Reviewed-by: sspitsyn, thartmann
2023-08-09 06:29:42 +00:00
Stefan Karlsson
0a42c44bf8 8313954: Add gc logging to vmTestbase/vm/gc/containers/Combination05
Reviewed-by: tschatzl, lmesnik
2023-08-09 06:16:39 +00:00
Stefan Karlsson
735b16a696 8313752: InstanceKlassFlags::print_on doesn't print the flag names
Reviewed-by: stuefe, shade, coleenp
2023-08-09 06:16:18 +00:00
Tobias Hartmann
d3b578f1c9 8313345: SuperWord fails due to CMove without matching Bool pack
Co-authored-by: Emanuel Peter <epeter@openjdk.org>
Co-authored-by: Hannes Greule <hannesgreule@outlook.de>
Reviewed-by: chagedorn, epeter, hgreule
2023-08-09 05:16:02 +00:00
Yi Yang
31a307f2fb 8306441: Two phase segmented heap dump
Co-authored-by: Kevin Walls <kevinw@openjdk.org>
Reviewed-by: amenkov, kevinw
2023-08-09 01:58:57 +00:00
Rajan Halade
515add88ed 8313206: PKCS11 tests silently skip execution
Reviewed-by: ssahoo, mullan
2023-08-08 20:21:16 +00:00
Jim Laskey
6864441163 8313809: String template fails with java.lang.StringIndexOutOfBoundsException if last fragment is UTF16
Reviewed-by: redestad
2023-08-08 19:33:44 +00:00
Jorn Vernee
509f80bb04 8313889: Fix -Wconversion warnings in foreign benchmarks
Reviewed-by: pminborg, mcimadamore
2023-08-08 13:59:35 +00:00
Coleen Phillimore
5c3041ce83 8313435: Clean up unused default methods code
Reviewed-by: kbarrett, iklam
2023-08-08 12:12:57 +00:00
Coleen Phillimore
8752d4984a 8313785: Fix -Wconversion warnings in prims code
Reviewed-by: sspitsyn, dlong
2023-08-08 11:51:42 +00:00
Andrey Turbanov
41bdcded65 8313875: Use literals instead of static fields in java.util.Math: twoToTheDoubleScaleUp, twoToTheDoubleScaleDown
Reviewed-by: redestad, darcy, bpb, rgiulietti
2023-08-08 11:38:15 +00:00
Markus Grönlund
091e65e95b 8313552: Fix -Wconversion warnings in JFR code
Reviewed-by: coleenp
2023-08-08 11:01:59 +00:00
Thomas Schatzl
7e209528d3 8140326: G1: Consider putting regions where evacuation failed into next collection set
Co-authored-by: Albert Mingkun Yang <ayang@openjdk.org>
Reviewed-by: iwalulya, ayang
2023-08-08 10:29:14 +00:00
Stefan Karlsson
28fd7a1739 8311179: Generational ZGC: gc/z/TestSmallHeap.java failed with OutOfMemoryError
Reviewed-by: ayang, aboldtch, tschatzl
2023-08-08 09:57:52 +00:00
Jan Lahoda
a1115a7a39 8312204: unexpected else with statement causes compiler crash
Reviewed-by: vromero
2023-08-08 09:28:21 +00:00
Jan Lahoda
87a6acbeee 8313792: Verify 4th party information in src/jdk.internal.le/share/legal/jline.md
Reviewed-by: vromero
2023-08-08 08:49:39 +00:00
Chris Plummer
87b08b6e01 8307408: Some jdk/sun/tools/jhsdb tests don't pass test JVM args to the debuggee JVM
Reviewed-by: sspitsyn, lmesnik
2023-08-07 18:51:29 +00:00
Alex Menkov
83edffa608 8309663: test fails "assert(check_alignment(result)) failed: address not aligned: 0x00000008baadbabe"
Reviewed-by: sspitsyn, eosterlund
2023-08-07 18:27:33 +00:00
Justin Lu
1da82a34b1 8313702: Update IANA Language Subtag Registry to Version 2023-08-02
Reviewed-by: naoto, iris
2023-08-07 17:10:27 +00:00
Christian Stein
9c6eb67e85 8313167: Update to use jtreg 7.3
Reviewed-by: jjg, iris
2023-08-07 16:09:23 +00:00
Qing Xiao
380418fad0 8295058: test/langtools/tools/javac 116 test classes uses com.sun.tools.classfile library
Reviewed-by: asotona
2023-08-07 15:49:11 +00:00
Antonios Printezis
4726960fcd 8313779: RISC-V: use andn / orn in the MD5 instrinsic
Reviewed-by: luhenry, fyang
2023-08-07 14:17:44 +00:00
Per Minborg
bbbfa217a0 8313880: Incorrect copyright header in jdk/java/foreign/TestFree.java after JDK-8310643
Reviewed-by: thartmann
2023-08-07 12:34:52 +00:00
Coleen Phillimore
0bb6af3bc0 8313791: Fix just zPage.inline.hpp and xPage.inline.hpp
Reviewed-by: stefank, tschatzl
2023-08-07 12:06:41 +00:00
Aleksey Shipilev
4b192a8dc3 8313676: Amend TestLoadIndexedMismatch test to target intrinsic directly
Reviewed-by: thartmann, chagedorn
2023-08-07 11:26:08 +00:00
Per Minborg
0b4387e3a3 8310643: Misformatted copyright messages in FFM
Reviewed-by: jvernee
2023-08-07 10:58:11 +00:00
Aleksey Shipilev
538f9557b8 8313701: GHA: RISC-V should use the official repository for bootstrap
Reviewed-by: clanger, fyang
2023-08-07 10:48:11 +00:00
Aleksey Shipilev
226cdc696d 8312585: Rename DisableTHPStackMitigation flag to THPStackMitigation
Reviewed-by: dholmes, stuefe
2023-08-07 10:45:14 +00:00
Christian Hagedorn
dc01604756 8305636: Expand and clean up predicate classes and move them into separate files
Reviewed-by: thartmann, roland
2023-08-07 09:14:16 +00:00
Prasanta Sadhukhan
a38fdaf18d 8166900: If you wrap a JTable in a JLayer, the cursor is moved to the last row of table by you press the page down key.
Reviewed-by: abhiscxk, dnguyen, prr, serb
2023-08-07 09:12:33 +00:00
Abhishek Kumar
c1f4595e64 8311160: [macOS, Accessibility] VoiceOver: No announcements on JRadioButtonMenuItem and JCheckBoxMenuItem
Reviewed-by: asemenov, kizune
2023-08-07 05:02:16 +00:00
Julian Waters
90d795abf1 8313141: Missing check for os_thread type in os_windows.cpp
Reviewed-by: dholmes, mgronlun
2023-08-05 05:24:08 +00:00
Christoph Langer
6d18529616 8313795: Fix for JDK-8313564 breaks ppc and s390x builds
Reviewed-by: stuefe
2023-08-04 22:33:36 +00:00
Matias Saavedra Silva
ad6e9e75bf 8313554: Fix -Wconversion warnings for ResolvedFieldEntry
Reviewed-by: coleenp, dlong
2023-08-04 20:24:50 +00:00
danthe1st
b463c6d3b0 8311517: Add performance information to ArrayList javadoc
Reviewed-by: smarks, bpb
2023-08-04 20:21:25 +00:00
Stuart Marks
b2add96c35 8159527: Collections mutator methods should all be marked as optional operations
Reviewed-by: naoto, bpb
2023-08-04 19:27:56 +00:00
Ashutosh Mehra
873d117932 8312623: SA add NestHost and NestMembers attributes when dumping class
Reviewed-by: cjplummer, sspitsyn, stuefe
2023-08-04 18:42:37 +00:00
Thomas Stuefe
017e0c7850 8310388: Shenandoah: Auxiliary bitmap is not madvised for THP
Reviewed-by: shade, kdnilsen
2023-08-04 18:40:16 +00:00
Coleen Phillimore
f66cd5008d 8313564: Fix -Wconversion warnings in classfile code
Reviewed-by: matsaave, dholmes
2023-08-04 14:06:16 +00:00
Aleksey Shipilev
e8a37b90db 8313248: C2: setScopedValueCache intrinsic exposes nullptr pre-values to store barriers
Reviewed-by: thartmann, rkennke
2023-08-04 09:53:20 +00:00
Aleksey Shipilev
29f1d8ef50 8313707: GHA: Bootstrap sysroots with --variant=minbase
Reviewed-by: clanger, fyang
2023-08-04 09:11:32 +00:00
Raffaello Giulietti
61c58fdd00 8312976: MatchResult produces StringIndexOutOfBoundsException for groups outside match
Reviewed-by: alanb, smarks
2023-08-04 07:11:18 +00:00
Matthias Baesken
5d232959c2 8313251: Add NativeLibraryLoad event
Reviewed-by: jbechberger, egahlin, dholmes
2023-08-04 07:03:25 +00:00
Andreas Steiner
c4b8574b94 8311938: Add default cups include location for configure on AIX
Reviewed-by: clanger, mbaesken, jwaters
2023-08-04 06:56:12 +00:00
Qing Xiao
10a2605884 8294979: test/jdk/tools/jlink 3 test classes use ASM library
Reviewed-by: mchung, ksakata
2023-08-04 05:13:57 +00:00
KIRIYAMA Takuya
e8c325dea3 8313394: Array Elements in OldObjectSample event has the incorrect description
Reviewed-by: egahlin
2023-08-04 03:19:53 +00:00
Joe Wang
d60352e26f 8311006: missing @since info in jdk.xml.dom
Reviewed-by: iris, naoto, lancea
2023-08-03 21:49:05 +00:00
Tobias Hartmann
4577147993 8313712: [BACKOUT] 8313632: ciEnv::dump_replay_data use fclose
Reviewed-by: mikael
2023-08-03 18:08:29 +00:00
Tejesh R
bb3aac6063 8301606: JFileChooser file chooser details view "size" label cut off in Metal Look&Feel
Reviewed-by: aivanov, abhiscxk
2023-08-03 16:09:47 +00:00
Matthias Baesken
0f2fce7168 8313632: ciEnv::dump_replay_data use fclose
Reviewed-by: thartmann, lucy
2023-08-03 12:02:52 +00:00
Tobias Hartmann
ab1c212ac1 8312909: C1 should not inline through interface calls with non-subtype receiver
Reviewed-by: kvn, chagedorn
2023-08-03 11:02:42 +00:00
Jan Lahoda
c386091734 8312984: javac may crash on a record pattern with too few components
Reviewed-by: vromero
2023-08-03 08:37:15 +00:00
Thomas Stuefe
3212b64f8e 8313582: Problemlist failing test on linux x86
Reviewed-by: tschatzl
2023-08-03 08:32:13 +00:00
Matthias Baesken
bdac348c80 8313602: increase timeout for jdk/classfile/CorpusTest.java
Reviewed-by: clanger
2023-08-03 08:12:20 +00:00
Prasanta Sadhukhan
58906bf8fb 4893524: Swing drop targets should call close() on transferred readers and streams
Reviewed-by: serb, tr, aivanov
2023-08-03 07:23:19 +00:00
Jaikiran Pai
3c920f9cc6 8313274: [BACKOUT] Relax prerequisites for java.base-jmod target
Reviewed-by: dholmes
2023-08-03 07:15:21 +00:00
Amit Kumar
53ca75b18e 8313312: Add missing classpath exception copyright header
Reviewed-by: rriggs, asotona
2023-08-03 05:47:22 +00:00
Tejesh R
87d7e976cb 8311031: JTable header border vertical lines are not aligned with data grid lines
Reviewed-by: abhiscxk, psadhukhan, aivanov
2023-08-03 04:44:41 +00:00
Sergey Bylokhov
8248e351d0 8313576: GCC 7 reports compiler warning in bundled freetype 2.13.0
Reviewed-by: shade, prr
2023-08-02 23:37:35 +00:00
Jonathan Gibbons
6d180d5fbf 8313349: Introduce abstract void HtmlDocletWriter.buildPage()
Reviewed-by: prappo
2023-08-02 21:59:22 +00:00
Jim Laskey
bc1d2eac9a 8312821: Javac accepts char literal as template
Reviewed-by: jlahoda
2023-08-02 21:01:44 +00:00
Matias Saavedra Silva
cff25dd574 8306582: Remove MetaspaceShared::exit_after_static_dump()
Reviewed-by: iklam, alanb, ccheung
2023-08-02 17:11:22 +00:00
Brian Burkhalter
4ba81f631f 8313368: (fc) FileChannel.size returns 0 on block special files
Reviewed-by: vtewari, alanb
2023-08-02 15:25:59 +00:00
Deepa Kumari
c1a3f143bf 8312078: [PPC] JcmdScale.java Failing on AIX
Reviewed-by: stuefe, tsteele
2023-08-02 14:39:33 +00:00
Cesar Soares Lucas
6446792327 8312617: SIGSEGV in ConnectionGraph::verify_ram_nodes
Reviewed-by: kvn, thartmann
2023-08-02 14:27:07 +00:00
Antonios Printezis
b093880acd 8313322: RISC-V: implement MD5 intrinsic
Reviewed-by: luhenry, rehn
2023-08-02 13:17:00 +00:00
Stefan Karlsson
19e2c8c321 8313593: Generational ZGC: NMT assert when the heap fails to expand
Reviewed-by: stuefe, tschatzl, eosterlund
2023-08-02 12:13:47 +00:00
Aleksey Shipilev
46fbedb2be 8313402: C1: Incorrect LoadIndexed value numbering
Reviewed-by: phh, thartmann
2023-08-02 11:21:34 +00:00
Alan Bateman
6faf05c6dd 8311989: Test java/lang/Thread/virtual/Reflection.java timed out
Reviewed-by: jpai, mchung
2023-08-02 10:40:25 +00:00
Albert Mingkun Yang
5d1b911c92 8310311: Serial: move Generation::contribute_scratch to DefNewGeneration
Reviewed-by: tschatzl, kbarrett
2023-08-02 09:17:41 +00:00
Aleksey Shipilev
9454b2bbe1 8312591: GCC 6 build failure after JDK-8280982
Reviewed-by: jiefu, prr
2023-08-02 07:00:37 +00:00
Jenny Shivayogi
6a853bba09 8311821: Simplify ParallelGCThreadsConstraintFunc after CMS removal
Reviewed-by: kbarrett, shade, tschatzl
2023-08-02 07:00:13 +00:00
Daniel Jeliński
e8471f6bbe 8313507: Remove pkcs11/Cipher/TestKATForGCM.java from ProblemList
Reviewed-by: mullan
2023-08-02 05:45:24 +00:00
Joe Wang
528596fa93 8310991: missing @since tags in java.xml
Reviewed-by: iris, naoto, lancea
2023-08-02 01:37:40 +00:00
Jim Laskey
f14245b388 8312814: Compiler crash when template processor type is a captured wildcard
Reviewed-by: jlahoda, mcimadamore, vromero
2023-08-02 00:47:20 +00:00
Justin Lu
9b55e9a706 8312572: JDK 21 RDP2 L10n resource files update
Reviewed-by: naoto
2023-08-01 23:16:39 +00:00
John Jiang
28be34c1b9 8313226: Redundant condition test in X509CRLImpl
Reviewed-by: jnimeh
2023-08-01 22:35:27 +00:00
Calvin Cheung
dc14247077 8309240: Array classes should be stored in dynamic CDS archive
Reviewed-by: iklam
2023-08-01 22:08:55 +00:00
Calvin Cheung
bf7077752a 8312181: CDS dynamic dump crashes when verifying unlinked class from static archive
Reviewed-by: iklam, matsaave
2023-08-01 20:31:25 +00:00
Ashutosh Mehra
7ba8c69a2c 8312596: Null pointer access in Compile::TracePhase::~TracePhase after JDK-8311976
Reviewed-by: chagedorn, dlong, shade
2023-08-01 19:26:45 +00:00
Aleksey Shipilev
ec2f38fd38 8313428: GHA: Bump GCC versions for July 2023 updates
Reviewed-by: clanger, mbaesken, stuefe
2023-08-01 16:03:24 +00:00
Thomas Obermeier
98a915a54c 8313256: Exclude failing multicast tests on AIX
Reviewed-by: clanger
2023-08-01 15:31:54 +00:00
Christoph Langer
94b50b714a 8313404: Fix section label in test/jdk/ProblemList.txt
Reviewed-by: mbaesken, alanb
2023-08-01 13:45:10 +00:00
Coleen Phillimore
ee3e0917b3 8313249: Fix -Wconversion warnings in verifier code
Reviewed-by: matsaave, iklam, dlong
2023-08-01 11:59:11 +00:00
Joshua Cao
e36960ec6d 8312420: Integrate Graal's blender micro benchmark
Reviewed-by: dnsimon, thartmann, ksakata
2023-08-01 10:48:38 +00:00
Tejesh R
0a3c6d6bd0 8280482: Window transparency bug on Linux
Reviewed-by: dnguyen, azvegint
2023-08-01 04:28:42 +00:00
Matias Saavedra Silva
c91a3002fb 8307312: Replace "int which" with "int cp_index" in constantPool
Reviewed-by: coleenp, dholmes, iklam
2023-07-31 20:23:59 +00:00
Jim Laskey
6af0af5934 8310913: Move ReferencedKeyMap to jdk.internal so it may be shared
Reviewed-by: naoto, rriggs, mchung, liach
2023-07-31 19:11:14 +00:00
Matias Saavedra Silva
86783b9851 8301996: Move field resolution information out of the cpCache
Co-authored-by: Gui Cao <gcao@openjdk.org>
Co-authored-by: Dingli Zhang <dzhang@openjdk.org>
Co-authored-by: Martin Doerr <mdoerr@openjdk.org>
Reviewed-by: coleenp, fparain
2023-07-31 18:41:38 +00:00
Thomas Stuefe
5362ec9c6e 8312492: Remove THP sanity checks at VM startup
Reviewed-by: dholmes, coleenp
2023-07-31 16:51:29 +00:00
Hai-May Chao
e47a84f23d 8312489: Increase jdk.jar.maxSignatureFileSize default which is too low for JARs such as WhiteSource/Mend unified agent jar
Reviewed-by: mullan, mbaesken
2023-07-31 15:18:04 +00:00
Gerard Ziemski
78f67993f8 8293972: runtime/NMT/NMTInitializationTest.java#default_long-off failed with "Suspiciously long bucket chains in lookup table."
Reviewed-by: stuefe, dholmes
2023-07-31 15:12:22 +00:00
Qing Xiao
97b688340e 8295059: test/langtools/tools/javap 12 test classes use com.sun.tools.classfile library
Reviewed-by: asotona
2023-07-31 15:03:05 +00:00
Matthias Baesken
3671d83c87 8313252: Java_sun_awt_windows_ThemeReader_paintBackground release resources in early returns
Reviewed-by: clanger
2023-07-31 14:57:28 +00:00
Matias Saavedra Silva
b60e0adad6 8313207: Remove MetaspaceShared::_has_error_classes
Reviewed-by: ccheung, iklam
2023-07-31 13:44:38 +00:00
Aleksey Shipilev
408987e1ca 8313307: java/util/Formatter/Padding.java fails on some Locales
Reviewed-by: jlu, naoto
2023-07-31 08:35:31 +00:00
Jorn Vernee
6fca289887 8313023: Return value corrupted when using CCS + isTrivial (mainline)
Reviewed-by: mcimadamore, vlivanov
2023-07-31 08:01:17 +00:00
John Jiang
f8c2b7fee1 8313231: Redundant if statement in ZoneInfoFile
Reviewed-by: jiefu, scolebourne
2023-07-31 07:49:10 +00:00
Christoph Langer
807ca2d3a1 8313316: Disable runtime/ErrorHandling/MachCodeFramesInErrorFile.java on ppc64le
Reviewed-by: mbaesken
2023-07-31 07:42:37 +00:00
Thomas Stuefe
ad34be1f32 8312525: New test runtime/os/TestTrimNative.java#trimNative is failing: did not see the expected RSS reduction
Reviewed-by: dholmes, shade
2023-07-29 05:36:58 +00:00
Vladimir Kempik
d6245b6832 8310268: RISC-V: misaligned memory access in String.Compare intrinsic
Co-authored-by: Feilong Jiang <fjiang@openjdk.org>
Reviewed-by: fyang
2023-07-28 21:55:33 +00:00
Jonathan Gibbons
402cb6a550 8312201: Clean up common behavior in "page writers" and "member writers"
8284447: Remove the unused NestedClassWriter interface

Reviewed-by: prappo
2023-07-28 17:48:31 +00:00
Justin Lu
23755f90c9 8312411: MessageFormat.formatToCharacterIterator() can be improved
Reviewed-by: naoto
2023-07-28 17:33:20 +00:00
Jonathan Gibbons
e2cb0bc6f1 8313204: Inconsistent order of sections in generated class documentation
Reviewed-by: hannesw, prappo
2023-07-28 17:05:37 +00:00
Jonathan Gibbons
4ae75cab53 8313253: Rename methods in javadoc Comparators class
Reviewed-by: hannesw, prappo
2023-07-28 16:39:33 +00:00
Coleen Phillimore
e897041770 8312262: Klass::array_klass() should return ArrayKlass pointer
Reviewed-by: dlong, ccheung
2023-07-28 16:32:06 +00:00
Justin Lu
a9a3463afb 8312416: Tests in Locale should have more descriptive names
Reviewed-by: lancea, naoto
2023-07-28 16:27:06 +00:00
Matthias Baesken
d9559f9b24 8312612: handle WideCharToMultiByte return values
Reviewed-by: clanger
2023-07-28 13:45:19 +00:00
Matthias Baesken
34173ff0d1 8312574: jdk/jdk/jfr/jvm/TestChunkIntegrity.java fails with timeout
Reviewed-by: egahlin
2023-07-28 13:31:13 +00:00
Coleen Phillimore
47c4b992b4 8312121: Fix -Wconversion warnings in tribool.hpp
Reviewed-by: dlong, dholmes
2023-07-28 12:08:24 +00:00
Alexander Scherbatiy
a3d67231a7 8311033: [macos] PrinterJob does not take into account Sides attribute
Reviewed-by: prr, serb
2023-07-28 10:25:22 +00:00
Kevin Walls
4ae5a3e39b 8306446: java/lang/management/ThreadMXBean/Locks.java transient failures
Reviewed-by: cjplummer, sspitsyn
2023-07-28 09:44:04 +00:00
Damon Fenacci
cad6114e1c 8304954: SegmentedCodeCache fails when using large pages
Reviewed-by: stuefe, thartmann
2023-07-28 09:09:48 +00:00
Leonid Mesnik
ba645da97b 8313082: Enable CreateCoredumpOnCrash for testing in makefiles
Reviewed-by: dholmes
2023-07-28 02:01:48 +00:00
Valerie Peng
c27c87786a 8302017: Allocate BadPaddingException only if it will be thrown
Reviewed-by: xuelei
2023-07-27 21:24:03 +00:00
Jiangli Zhou
c55d29ff11 8312626: Resolve multiple definition of 'start_timer' when statically linking JDK native libraries with user code
Reviewed-by: serb
2023-07-27 19:12:46 +00:00
Alexey Semenyuk
0ca2bfd779 8311104: dangling-gsl warning in libwixhelper.cpp
Reviewed-by: almatvee
2023-07-27 16:07:54 +00:00
Thomas Obermeier
c05ba48b60 8313250: Exclude java/foreign/TestByteBuffer.java on AIX
Reviewed-by: rriggs, clanger
2023-07-27 15:45:20 +00:00
Kevin Walls
169b6e3cff 8313174: Create fewer predictable port clashes in management tests
Reviewed-by: cjplummer, amenkov
2023-07-27 15:40:13 +00:00
Roger Riggs
8650026ff1 8310033: Clarify return value of Java Time compareTo methods
Reviewed-by: bpb, scolebourne, prappo, naoto
2023-07-27 14:01:25 +00:00
Thomas Stuefe
25058cd23a 8312620: WSL Linux build crashes after JDK-8310233
Reviewed-by: dholmes, djelinski
2023-07-27 13:45:36 +00:00
Richard Reingruber
8661b8e115 8312495: assert(0 <= i && i < _len) failed: illegal index after JDK-8287061 on big endian platforms
Reviewed-by: clanger, kvn, dlong
2023-07-27 13:40:23 +00:00
Jaikiran Pai
486c7844f9 8312433: HttpClient request fails due to connection being considered idle and closed
Reviewed-by: djelinski
2023-07-27 12:14:14 +00:00
Gergö Barany
271417a0e1 8312579: [JVMCI] JVMCI support for virtual Vector API objects
Reviewed-by: dnsimon, never
2023-07-27 10:48:18 +00:00
Andreas Steiner
44576a7cca 8312466: /bin/nm usage in AIX makes needs -X64 flag
Reviewed-by: mbaesken, stuefe, jwaters
2023-07-27 10:37:40 +00:00
Doug Simon
86821a7ce8 8312235: [JVMCI] ConstantPool should not force eager resolution
Reviewed-by: never, matsaave
2023-07-27 08:39:32 +00:00
Eric Nothum
7cbab1f396 8312218: Print additional debug information when hitting assert(in_hash)
Reviewed-by: chagedorn, thartmann
2023-07-27 07:29:23 +00:00
Roland Westrelin
01e135c910 8312440: assert(cast != nullptr) failed: must have added a cast to pin the node
Reviewed-by: chagedorn, kvn, thartmann
2023-07-27 07:24:46 +00:00
Matthias Baesken
b7545a69a2 8313164: src/java.desktop/windows/native/libawt/windows/awt_Robot.cpp GetRGBPixels adjust releasing of resources
Reviewed-by: stuefe
2023-07-27 07:06:32 +00:00
Sean Coffey
36d578cddb 8311653: Modify -XshowSettings launcher behavior
Reviewed-by: mchung, rriggs
2023-07-27 06:33:27 +00:00
1489 changed files with 29854 additions and 17209 deletions

.gitattributes

@@ -1 +1,10 @@
 * -text
+*.java diff=java
+*.c diff=cpp
+*.h diff=cpp
+*.cpp diff=cpp
+*.hpp diff=cpp
+*.md diff=markdown
+*.sh diff=bash
+*.html diff=html
+*.css diff=css

.github/workflows/build-cross-compile.yml

@@ -31,12 +31,6 @@ on:
       gcc-major-version:
         required: true
         type: string
-      apt-gcc-version:
-        required: true
-        type: string
-      apt-gcc-cross-version:
-        required: true
-        type: string
       extra-conf-options:
         required: false
         type: string
@@ -86,8 +80,7 @@ jobs:
           - target-cpu: riscv64
             gnu-arch: riscv64
             debian-arch: riscv64
-            debian-repository: https://deb.debian.org/debian-ports
-            debian-keyring: /usr/share/keyrings/debian-ports-archive-keyring.gpg
+            debian-repository: https://httpredir.debian.org/debian/
             debian-version: sid
     steps:
@@ -114,10 +107,10 @@ jobs:
          sudo apt-get update
          sudo apt-get install --only-upgrade apt
          sudo apt-get install \
-            gcc-${{ inputs.gcc-major-version }}=${{ inputs.apt-gcc-version }} \
-            g++-${{ inputs.gcc-major-version }}=${{ inputs.apt-gcc-version }} \
-            gcc-${{ inputs.gcc-major-version }}-${{ matrix.gnu-arch }}-linux-gnu${{ matrix.gnu-abi}}=${{ inputs.apt-gcc-cross-version }} \
-            g++-${{ inputs.gcc-major-version }}-${{ matrix.gnu-arch }}-linux-gnu${{ matrix.gnu-abi}}=${{ inputs.apt-gcc-cross-version }} \
+            gcc-${{ inputs.gcc-major-version }} \
+            g++-${{ inputs.gcc-major-version }} \
+            gcc-${{ inputs.gcc-major-version }}-${{ matrix.gnu-arch }}-linux-gnu${{ matrix.gnu-abi}} \
+            g++-${{ inputs.gcc-major-version }}-${{ matrix.gnu-arch }}-linux-gnu${{ matrix.gnu-abi}} \
            libxrandr-dev libxtst-dev libcups2-dev libasound2-dev \
            debian-ports-archive-keyring
          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${{ inputs.gcc-major-version }} 100 --slave /usr/bin/g++ g++ /usr/bin/g++-${{ inputs.gcc-major-version }}
@@ -138,8 +131,9 @@ jobs:
          sudo debootstrap
            --arch=${{ matrix.debian-arch }}
            --verbose
-            --include=fakeroot,symlinks,build-essential,libx11-dev,libxext-dev,libxrender-dev,libxrandr-dev,libxtst-dev,libxt-dev,libcups2-dev,libfontconfig1-dev,libasound2-dev,libfreetype6-dev,libpng-dev
+            --include=fakeroot,symlinks,build-essential,libx11-dev,libxext-dev,libxrender-dev,libxrandr-dev,libxtst-dev,libxt-dev,libcups2-dev,libfontconfig1-dev,libasound2-dev,libfreetype-dev,libpng-dev
            --resolve-deps
+            --variant=minbase
            $(test -n "${{ matrix.debian-keyring }}" && echo "--keyring=${{ matrix.debian-keyring }}")
            ${{ matrix.debian-version }}
            sysroot
@@ -153,7 +147,8 @@ jobs:
          sudo chown ${USER} -R sysroot
          rm -rf sysroot/{dev,proc,run,sys,var}
          rm -rf sysroot/usr/{sbin,bin,share}
-          rm -rf sysroot/usr/lib/{apt,udev,systemd}
+          rm -rf sysroot/usr/lib/{apt,gcc,udev,systemd}
+          rm -rf sysroot/usr/libexec/gcc
        if: steps.get-cached-sysroot.outputs.cache-hit != 'true'
      - name: 'Configure'

.github/workflows/build-linux.yml

@@ -49,9 +49,6 @@ on:
        required: false
        type: string
        default: ''
-      apt-gcc-version:
-        required: true
-        type: string
      apt-architecture:
        required: false
        type: string
@@ -114,7 +111,7 @@ jobs:
          fi
          sudo apt-get update
          sudo apt-get install --only-upgrade apt
-          sudo apt-get install gcc-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }}=${{ inputs.apt-gcc-version }} g++-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }}=${{ inputs.apt-gcc-version }} libxrandr-dev${{ steps.arch.outputs.suffix }} libxtst-dev${{ steps.arch.outputs.suffix }} libcups2-dev${{ steps.arch.outputs.suffix }} libasound2-dev${{ steps.arch.outputs.suffix }} ${{ inputs.apt-extra-packages }}
+          sudo apt-get install gcc-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} g++-${{ inputs.gcc-major-version }}${{ inputs.gcc-package-suffix }} libxrandr-dev${{ steps.arch.outputs.suffix }} libxtst-dev${{ steps.arch.outputs.suffix }} libcups2-dev${{ steps.arch.outputs.suffix }} libasound2-dev${{ steps.arch.outputs.suffix }} ${{ inputs.apt-extra-packages }}
          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${{ inputs.gcc-major-version }} 100 --slave /usr/bin/g++ g++ /usr/bin/g++-${{ inputs.gcc-major-version }}
      - name: 'Configure'

.github/workflows/main.yml

@@ -130,7 +130,6 @@ jobs:
    with:
      platform: linux-x64
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
    # The linux-x64 jdk bundle is used as buildjdk for the cross-compile job
@@ -144,11 +143,10 @@ jobs:
      platform: linux-x86
      gcc-major-version: '10'
      gcc-package-suffix: '-multilib'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      apt-architecture: 'i386'
      # Some multilib libraries do not have proper inter-dependencies, so we have to
      # install their dependencies manually.
-      apt-extra-packages: 'libfreetype6-dev:i386 libtiff-dev:i386 libcupsimage2-dev:i386 libc6-i386 libgcc-s1:i386 libstdc++6:i386'
+      apt-extra-packages: 'libfreetype-dev:i386 libtiff-dev:i386 libcupsimage2-dev:i386 libc6-i386 libgcc-s1:i386 libstdc++6:i386'
      extra-conf-options: '--with-target-bits=32'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -163,7 +161,6 @@ jobs:
      make-target: 'hotspot'
      debug-levels: '[ "debug" ]'
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      extra-conf-options: '--disable-precompiled-headers'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -178,7 +175,6 @@ jobs:
      make-target: 'hotspot'
      debug-levels: '[ "debug" ]'
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      extra-conf-options: '--with-jvm-variants=zero --disable-precompiled-headers'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -193,7 +189,6 @@ jobs:
      make-target: 'hotspot'
      debug-levels: '[ "debug" ]'
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      extra-conf-options: '--with-jvm-variants=minimal --disable-precompiled-headers'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -209,7 +204,6 @@ jobs:
      # Technically this is not the "debug" level, but we can't inject a new matrix state for just this job
      debug-levels: '[ "debug" ]'
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      extra-conf-options: '--with-debug-level=optimized --disable-precompiled-headers'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
@@ -223,8 +217,6 @@ jobs:
    uses: ./.github/workflows/build-cross-compile.yml
    with:
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
-      apt-gcc-cross-version: '10.4.0-4ubuntu1~22.04cross1'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
    if: needs.select.outputs.linux-cross-compile == 'true'
@@ -290,7 +282,6 @@ jobs:
      # build JDK, and we do not need the additional testing of the graphs.
      extra-conf-options: '--disable-full-docs'
      gcc-major-version: '10'
-      apt-gcc-version: '10.4.0-4ubuntu1~22.04'
      configure-arguments: ${{ github.event.inputs.configure-arguments }}
      make-arguments: ${{ github.event.inputs.make-arguments }}
    if: needs.select.outputs.docs == 'true'

CONTRIBUTING.md

@@ -1,3 +1,3 @@
 # Contributing to the JDK

-Please see <https://openjdk.org/contribute> for how to contribute.
+Please see the [OpenJDK Developers Guide](https://openjdk.org/guide/).

doc/building.html

@@ -324,7 +324,8 @@ GB of free disk space is required.</p>
 <p>Even for 32-bit builds, it is recommended to use a 64-bit build
 machine, and instead create a 32-bit target using
 <code>--with-target-bits=32</code>.</p>
-<p>Note: The Windows 32-bit x86 port is deprecated and may be removed in a future release.</p>
+<p>Note: The Windows 32-bit x86 port is deprecated and may be removed in
+a future release.</p>
 <h3 id="building-on-aarch64">Building on aarch64</h3>
 <p>At a minimum, a machine with 8 cores is advisable, as well as 8 GB of
 RAM. (The more cores to use, the more memory you need.) At least 6 GB of
@@ -401,7 +402,8 @@ to the build system, e.g. in arguments to <code>configure</code>. So,
 use <code>--with-msvcr-dll=/cygdrive/c/msvcr100.dll</code> rather than
 <code>--with-msvcr-dll=c:\msvcr100.dll</code>. For details on this
 conversion, see the section on <a href="#fixpath">Fixpath</a>.</p>
-<p>Note: The Windows 32-bit x86 port is deprecated and may be removed in a future release.</p>
+<p>Note: The Windows 32-bit x86 port is deprecated and may be removed in
+a future release.</p>
 <h4 id="cygwin">Cygwin</h4>
 <p>A functioning <a href="http://www.cygwin.com/">Cygwin</a> environment
 is required for building the JDK on Windows. If you have a 64-bit OS, we
@@ -1113,13 +1115,13 @@ just unpacked.</p>
 Test framework. The top directory, which contains both
 <code>googletest</code> and <code>googlemock</code> directories, should
 be specified via <code>--with-gtest</code>. The minimum supported
-version of Google Test is 1.13.0, whose source code can be obtained:</p>
+version of Google Test is 1.14.0, whose source code can be obtained:</p>
 <ul>
 <li>by downloading and unpacking the source bundle from <a
-href="https://github.com/google/googletest/releases/tag/v1.13.0">here</a></li>
-<li>or by checking out <code>v1.13.0</code> tag of
+href="https://github.com/google/googletest/releases/tag/v1.14.0">here</a></li>
+<li>or by checking out <code>v1.14.0</code> tag of
 <code>googletest</code> project:
-<code>git clone -b v1.13.0 https://github.com/google/googletest</code></li>
+<code>git clone -b v1.14.0 https://github.com/google/googletest</code></li>
 </ul>
 <p>To execute the most basic tests (tier 1), use:</p>
 <pre><code>make run-test-tier1</code></pre>
@@ -2255,18 +2257,7 @@ However, please bear in mind that the JDK is a massive project, and we
 must ask you to follow our rules and guidelines to be able to accept
 your contribution.</p>
 <p>The official place to start is the <a
-href="http://openjdk.org/contribute/">'How to contribute' page</a>.
-There is also an official (but somewhat outdated and skimpy on details)
-<a href="http://openjdk.org/guide/">Developer's Guide</a>.</p>
-<p>If this seems overwhelming to you, the Adoption Group is there to
-help you! A good place to start is their <a
-href="https://wiki.openjdk.org/display/Adoption/New+Contributor">'New
-Contributor' page</a>, or start reading the comprehensive <a
-href="https://adoptopenjdk.gitbooks.io/adoptopenjdk-getting-started-kit/en/">Getting
-Started Kit</a>. The Adoption Group will also happily answer any
-questions you have about contributing. Contact them by <a
-href="http://mail.openjdk.org/mailman/listinfo/adoption-discuss">mail</a>
-or <a href="http://openjdk.org/irc/">IRC</a>.</p>
+href="https://openjdk.org/guide/">OpenJDK Developers Guide</a>.</p>
 <h2 id="editing-this-document">Editing this document</h2>
 <p>If you want to contribute changes to this document, edit
 <code>doc/building.md</code> and then run

View File

@@ -884,11 +884,11 @@ Download the latest `.tar.gz` file, unpack it, and point `--with-jtreg` to the
Building of Hotspot Gtest suite requires the source code of Google
Test framework. The top directory, which contains both `googletest`
and `googlemock` directories, should be specified via `--with-gtest`.
The minimum supported version of Google Test is 1.13.0, whose source
The minimum supported version of Google Test is 1.14.0, whose source
code can be obtained:
* by downloading and unpacking the source bundle from [here](https://github.com/google/googletest/releases/tag/v1.13.0)
* or by checking out `v1.13.0` tag of `googletest` project: `git clone -b v1.13.0 https://github.com/google/googletest`
* by downloading and unpacking the source bundle from [here](https://github.com/google/googletest/releases/tag/v1.14.0)
* or by checking out `v1.14.0` tag of `googletest` project: `git clone -b v1.14.0 https://github.com/google/googletest`
To execute the most basic tests (tier 1), use:
```
make run-test-tier1
```
@@ -2032,20 +2032,7 @@ First of all: Thank you! We gladly welcome your contribution.
However, please bear in mind that the JDK is a massive project, and we must ask
you to follow our rules and guidelines to be able to accept your contribution.
The official place to start is the ['How to contribute' page](
http://openjdk.org/contribute/). There is also an official (but somewhat
outdated and skimpy on details) [Developer's Guide](
http://openjdk.org/guide/).
If this seems overwhelming to you, the Adoption Group is there to help you! A
good place to start is their ['New Contributor' page](
https://wiki.openjdk.org/display/Adoption/New+Contributor), or start
reading the comprehensive [Getting Started Kit](
https://adoptopenjdk.gitbooks.io/adoptopenjdk-getting-started-kit/en/). The
Adoption Group will also happily answer any questions you have about
contributing. Contact them by [mail](
http://mail.openjdk.org/mailman/listinfo/adoption-discuss) or [IRC](
http://openjdk.org/irc/).
The official place to start is the [OpenJDK Developers Guide](https://openjdk.org/guide/).
## Editing this document

View File

@@ -67,7 +67,6 @@ MODULES_SOURCE_PATH := $(call PathList, $(call GetModuleSrcPath) )
# ordering of tags as the tags are otherwise ordered in order of definition.
JAVADOC_TAGS := \
-tag beaninfo:X \
-tag revised:X \
-tag since.unbundled:X \
-tag Note:X \
-tag ToDo:X \

View File

@@ -967,12 +967,15 @@ else
jdk.compiler-gendata: $(GENSRC_MODULEINFO_TARGETS)
# Declare dependencies between jmod targets.
# java.base jmod needs jrt-fs.jar and access to the other jmods to be built.
# java.base jmod needs jrt-fs.jar and access to the jmods for all
# non-upgradeable modules and their transitive dependencies.
# When creating the BUILDJDK, we don't need to add hashes to java.base, thus
# we don't need to depend on all other jmods
ifneq ($(CREATING_BUILDJDK), true)
java.base-jmod: jrtfs-jar $(filter-out java.base-jmod \
$(addsuffix -jmod, $(call FindAllUpgradeableModules)), $(JMOD_TARGETS))
java.base-jmod: jrtfs-jar $(addsuffix -jmod, $(filter-out java.base, $(sort \
$(foreach m, $(filter-out $(call FindAllUpgradeableModules), $(JMOD_MODULES)), \
$m $(call FindTransitiveDepsForModules, $m) \
))))
endif
# If not already set, set the JVM target so that the JVM will be built.
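
The new expression above makes java.base-jmod depend only on the jmods of non-upgradeable modules plus everything FindTransitiveDepsForModules reaches from them. A minimal C++ sketch of that closure, with hypothetical names for the module graph:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical module graph: module -> modules it requires.
using Graph = std::map<std::string, std::vector<std::string>>;

// Collect m's transitive dependencies into out, the analogue of
// $(call FindTransitiveDepsForModules, $m) in the makefile above.
void collect_deps(const Graph& g, const std::string& m, std::set<std::string>& out) {
  auto it = g.find(m);
  if (it == g.end()) return;
  for (const auto& dep : it->second) {
    if (out.insert(dep).second) {  // recurse only on first visit
      collect_deps(g, dep, out);
    }
  }
}
```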

View File

@@ -800,8 +800,10 @@ define SetupRunJtregTestBody
$1_JTREG_BASIC_OPTIONS += -e:JIB_DATA_DIR
# If running on Windows, propagate the _NT_SYMBOL_PATH to enable
# symbol lookup in hserr files
# The minidumps are disabled by default on client Windows, so enable them
ifeq ($$(call isTargetOs, windows), true)
$1_JTREG_BASIC_OPTIONS += -e:_NT_SYMBOL_PATH
$1_JTREG_BASIC_OPTIONS += -vmoption:-XX:+CreateCoredumpOnCrash
else ifeq ($$(call isTargetOs, linux), true)
$1_JTREG_BASIC_OPTIONS += -e:_JVM_DWARF_PATH=$$(SYMBOLS_IMAGE_DIR)
endif

View File

@@ -406,9 +406,9 @@ AC_DEFUN_ONCE([BASIC_SETUP_OUTPUT_DIR],
# WARNING: This might be a bad thing to do. You need to be sure you want to
# have a configuration in this directory. Do some sanity checks!
if test ! -e "$OUTPUTDIR/spec.gmk"; then
# If we have a spec.gmk, we have run here before and we are OK. Otherwise, check for
# other files
if test ! -e "$OUTPUTDIR/spec.gmk" && test ! -e "$OUTPUTDIR/configure-support/generated-configure.sh"; then
# If we have a spec.gmk or configure-support/generated-configure.sh,
# we have run here before and we are OK. Otherwise, check for other files
files_present=`$LS $OUTPUTDIR`
# Configure has already touched config.log and confdefs.h in the current dir when this check
# is performed.
@@ -423,8 +423,9 @@ AC_DEFUN_ONCE([BASIC_SETUP_OUTPUT_DIR],
AC_MSG_NOTICE([Current directory is $CONFIGURE_START_DIR.])
AC_MSG_NOTICE([Since this is not the source root, configure will output the configuration here])
AC_MSG_NOTICE([(as opposed to creating a configuration in <src_root>/build/<conf-name>).])
AC_MSG_NOTICE([However, this directory is not empty. This is not allowed, since it could])
AC_MSG_NOTICE([seriously mess up just about everything.])
AC_MSG_NOTICE([However, this directory is not empty. In addition to some])
AC_MSG_NOTICE([allowed files, it contains $filtered_files.])
AC_MSG_NOTICE([This is not allowed, since it could seriously mess up just about everything.])
AC_MSG_NOTICE([Try 'cd $TOPDIR' and restart configure])
AC_MSG_NOTICE([(or create a new empty directory and cd to it).])
AC_MSG_ERROR([Will not continue creating configuration in $CONFIGURE_START_DIR])

View File

@@ -88,6 +88,16 @@ AC_DEFUN([FLAGS_SETUP_RCFLAGS],
AC_SUBST(RCFLAGS)
])
AC_DEFUN([FLAGS_SETUP_NMFLAGS],
[
# On AIX, we need to set the NM flag -X64 to process 64-bit object files
if test "x$OPENJDK_TARGET_OS" = xaix; then
NMFLAGS="-X64"
fi
AC_SUBST(NMFLAGS)
])
################################################################################
# platform independent
AC_DEFUN([FLAGS_SETUP_ASFLAGS],

View File

@@ -429,6 +429,7 @@ AC_DEFUN([FLAGS_SETUP_FLAGS],
FLAGS_SETUP_ARFLAGS
FLAGS_SETUP_STRIPFLAGS
FLAGS_SETUP_RCFLAGS
FLAGS_SETUP_NMFLAGS
FLAGS_SETUP_ASFLAGS
FLAGS_SETUP_ASFLAGS_CPU_DEP([TARGET])

View File

@@ -1,5 +1,5 @@
#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2011, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -68,12 +68,20 @@ AC_DEFUN_ONCE([LIB_SETUP_CUPS],
fi
fi
if test "x$CUPS_FOUND" = xno; then
# Are the cups headers installed in the default /usr/include location?
AC_CHECK_HEADERS([cups/cups.h cups/ppd.h], [
CUPS_FOUND=yes
CUPS_CFLAGS=
DEFAULT_CUPS=yes
])
# Are the cups headers installed in the default AIX or /usr/include location?
if test "x$OPENJDK_TARGET_OS" = "xaix"; then
AC_CHECK_HEADERS([/opt/freeware/include/cups/cups.h /opt/freeware/include/cups/ppd.h], [
CUPS_FOUND=yes
CUPS_CFLAGS="-I/opt/freeware/include"
DEFAULT_CUPS=yes
])
else
AC_CHECK_HEADERS([cups/cups.h cups/ppd.h], [
CUPS_FOUND=yes
CUPS_CFLAGS=
DEFAULT_CUPS=yes
])
fi
fi
if test "x$CUPS_FOUND" = xno; then
HELP_MSG_MISSING_DEPENDENCY([cups])

View File

@@ -28,8 +28,8 @@
################################################################################
# Minimum supported versions
JTREG_MINIMUM_VERSION=7.2
GTEST_MINIMUM_VERSION=1.13.0
JTREG_MINIMUM_VERSION=7.3.1
GTEST_MINIMUM_VERSION=1.14.0
###############################################################################
#

View File

@@ -606,6 +606,7 @@ AR := @AR@
ARFLAGS:=@ARFLAGS@
NM:=@NM@
NMFLAGS:=@NMFLAGS@
STRIP:=@STRIP@
OBJDUMP:=@OBJDUMP@
CXXFILT:=@CXXFILT@

View File

@@ -48,12 +48,12 @@ define GetSymbols
$(SED) -e 's/#.*//;s/global://;s/local://;s/\;//;s/^[ ]*/_/;/^_$$$$/d' | \
$(EGREP) -v "JNI_OnLoad|JNI_OnUnload|Agent_OnLoad|Agent_OnUnload|Agent_OnAttach" > \
$$(@D)/$$(basename $$(@F)).symbols || true; \
$(NM) $$($1_TARGET) | $(GREP) " T " | \
$(NM) $(NMFLAGS) $$($1_TARGET) | $(GREP) " T " | \
$(EGREP) "JNI_OnLoad|JNI_OnUnload|Agent_OnLoad|Agent_OnUnload|Agent_OnAttach" | \
$(CUT) -d ' ' -f 3 >> $$(@D)/$$(basename $$(@F)).symbols || true;\
else \
$(ECHO) "Getting symbols from nm"; \
$(NM) -m $$($1_TARGET) | $(GREP) "__TEXT" | \
$(NM) $(NMFLAGS) -m $$($1_TARGET) | $(GREP) "__TEXT" | \
$(EGREP) -v "non-external|private extern|__TEXT,__eh_frame" | \
$(SED) -e 's/.* //' > $$(@D)/$$(basename $$(@F)).symbols; \
fi
@@ -215,7 +215,21 @@ DEPENDENCY_TARGET_SED_PATTERN := \
# The fix-deps-file macro is used to adjust the contents of the generated make
# dependency files to contain paths compatible with make.
#
REWRITE_PATHS_RELATIVE = false
ifeq ($(ALLOW_ABSOLUTE_PATHS_IN_OUTPUT)-$(FILE_MACRO_CFLAGS), false-)
REWRITE_PATHS_RELATIVE = true
endif
# CCACHE_BASEDIR needs fix-deps-file because the makefiles use absolute
# filenames for object files, while CCACHE_BASEDIR makes ccache relativize
# all paths for its compiler. The compiler then produces relative dependency
# files. make does not know that a relative and an absolute filename refer
# to the same file, so it will ignore such dependencies.
ifneq ($(CCACHE), )
REWRITE_PATHS_RELATIVE = true
endif
ifeq ($(REWRITE_PATHS_RELATIVE), true)
# Need to handle -I flags as both '-Ifoo' and '-I foo'.
MakeCommandRelative = \
$(CD) $(WORKSPACE_ROOT) && \

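A minimal illustration of the mismatch those comments describe, with hypothetical paths; make matches dependency names textually, which is why fix-deps-file has to rewrite them onto a common relative form:

```cpp
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main() {
  fs::path absolute_name = "/workspace/build/foo.o"; // how the makefiles name the target
  fs::path relative_name = "build/foo.o";            // how ccache's compiler wrote the .d file
  // Textual comparison, as make performs it: never equal.
  std::cout << (absolute_name == relative_name) << '\n';                         // 0
  // Only after anchoring both at the same base do the names match.
  std::cout << (absolute_name == fs::path("/workspace") / relative_name) << '\n'; // 1
}
```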
View File

@@ -25,8 +25,8 @@
# Versions and download locations for dependencies used by GitHub Actions (GHA)
GTEST_VERSION=1.13.0
JTREG_VERSION=7.2+1
GTEST_VERSION=1.14.0
JTREG_VERSION=7.3.1+1
LINUX_X64_BOOT_JDK_EXT=tar.gz
LINUX_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk20/bdc68b4b9cbc4ebcb30745c85038d91d/36/GPL/openjdk-20_linux-x64_bin.tar.gz

View File

@@ -1188,9 +1188,9 @@ var getJibProfilesDependencies = function (input, common) {
jtreg: {
server: "jpg",
product: "jtreg",
version: "7.2",
version: "7.3.1",
build_number: "1",
file: "bundles/jtreg-7.2+1.zip",
file: "bundles/jtreg-7.3.1+1.zip",
environment_name: "JT_HOME",
environment_path: input.get("jtreg", "home_path") + "/bin",
configure_args: "--with-jtreg=" + input.get("jtreg", "home_path"),
@@ -1199,12 +1199,12 @@ var getJibProfilesDependencies = function (input, common) {
jmh: {
organization: common.organization,
ext: "tar.gz",
revision: "1.35+1.0"
revision: "1.37+1.0"
},
jcov: {
organization: common.organization,
revision: "3.0-14-jdk-asm+1.0",
revision: "3.0-15-jdk-asm+1.0",
ext: "zip",
environment_name: "JCOV_HOME",
},
@@ -1270,7 +1270,7 @@ var getJibProfilesDependencies = function (input, common) {
gtest: {
organization: common.organization,
ext: "tar.gz",
revision: "1.13.0+1.0"
revision: "1.14.0+1.0"
},
libffi: {

View File

@@ -26,8 +26,8 @@
# Create a bundle in the build directory, containing what's needed to
# build and run JMH microbenchmarks from the OpenJDK build.
JMH_VERSION=1.36
COMMONS_MATH3_VERSION=3.2
JMH_VERSION=1.37
COMMONS_MATH3_VERSION=3.6.1
JOPT_SIMPLE_VERSION=5.0.4
BUNDLE_NAME=jmh-$JMH_VERSION.tar.gz

View File

@@ -284,10 +284,10 @@ ifneq ($(GENERATE_COMPILE_COMMANDS_ONLY), true)
define SetupOperatorNewDeleteCheck
$1.op_check: $1
$$(call ExecuteWithLog, $1.op_check, \
$$(NM) $$< 2>&1 | $$(GREP) $$(addprefix -e , $$(MANGLED_SYMS)) | $$(GREP) $$(UNDEF_PATTERN) > $1.op_check || true)
$$(NM) $$(NMFLAGS) $$< 2>&1 | $$(GREP) $$(addprefix -e , $$(MANGLED_SYMS)) | $$(GREP) $$(UNDEF_PATTERN) > $1.op_check || true)
if [ -s $1.op_check ]; then \
$$(ECHO) "$$(notdir $$<): Error: Use of global operators new and delete is not allowed in Hotspot:"; \
$$(NM) $$< | $$(CXXFILT) | $$(EGREP) '$$(DEMANGLED_REGEXP)' | $$(GREP) $$(UNDEF_PATTERN); \
$$(NM) $$(NMFLAGS) $$< | $$(CXXFILT) | $$(EGREP) '$$(DEMANGLED_REGEXP)' | $$(GREP) $$(UNDEF_PATTERN); \
$$(ECHO) "See: $$(TOPDIR)/make/hotspot/lib/CompileJvm.gmk"; \
exit 1; \
fi

View File

@@ -53,7 +53,7 @@ endif
# platform dependent.
ifeq ($(call isTargetOs, linux), true)
DUMP_SYMBOLS_CMD := $(NM) --defined-only *$(OBJ_SUFFIX)
DUMP_SYMBOLS_CMD := $(NM) $(NMFLAGS) --defined-only *$(OBJ_SUFFIX)
ifneq ($(FILTER_SYMBOLS_PATTERN), )
FILTER_SYMBOLS_PATTERN := $(FILTER_SYMBOLS_PATTERN)|
endif
@@ -67,7 +67,7 @@ ifeq ($(call isTargetOs, linux), true)
else ifeq ($(call isTargetOs, macosx), true)
# nm on macosx prints out "warning: nm: no name list" to stderr for
# files without symbols. Hide this, even at the expense of hiding real errors.
DUMP_SYMBOLS_CMD := $(NM) -Uj *$(OBJ_SUFFIX) 2> /dev/null
DUMP_SYMBOLS_CMD := $(NM) $(NMFLAGS) -Uj *$(OBJ_SUFFIX) 2> /dev/null
ifneq ($(FILTER_SYMBOLS_PATTERN), )
FILTER_SYMBOLS_PATTERN := $(FILTER_SYMBOLS_PATTERN)|
endif
@@ -89,7 +89,7 @@ else ifeq ($(call isTargetOs, aix), true)
# which may be installed under /opt/freeware/bin. So better use an absolute path here!
# NM=/usr/bin/nm
DUMP_SYMBOLS_CMD := $(NM) -X64 -B -C *$(OBJ_SUFFIX)
DUMP_SYMBOLS_CMD := $(NM) $(NMFLAGS) -B -C *$(OBJ_SUFFIX)
FILTER_SYMBOLS_AWK_SCRIPT := \
'{ \
if (($$2="d" || $$2="D") && ($$3 ~ /^__vft/ || $$3 ~ /^gHotSpotVM/)) print $$3; \

View File

@@ -132,6 +132,7 @@ $(eval $(call SetupJdkLibrary, BUILD_LIBAWT, \
EXCLUDE_FILES := $(LIBAWT_EXFILES), \
OPTIMIZATION := HIGHEST, \
CFLAGS := $(CFLAGS_JDKLIB) $(LIBAWT_CFLAGS), \
CXXFLAGS := $(CXXFLAGS_JDKLIB) $(LIBAWT_CFLAGS), \
EXTRA_HEADER_DIRS := $(LIBAWT_EXTRA_HEADER_DIRS), \
DISABLED_WARNINGS_gcc_awt_LoadLibrary.c := unused-result, \
DISABLED_WARNINGS_gcc_debug_mem.c := format-nonliteral, \
@@ -245,6 +246,8 @@ ifeq ($(call isTargetOs, windows macosx), false)
DISABLED_WARNINGS_gcc_gtk3_interface.c := parentheses type-limits unused-function, \
DISABLED_WARNINGS_gcc_OGLBufImgOps.c := format-nonliteral, \
DISABLED_WARNINGS_gcc_OGLPaints.c := format-nonliteral, \
DISABLED_WARNINGS_gcc_screencast_pipewire.c := undef, \
DISABLED_WARNINGS_gcc_screencast_portal.c := undef, \
DISABLED_WARNINGS_gcc_sun_awt_X11_GtkFileDialogPeer.c := parentheses, \
DISABLED_WARNINGS_gcc_X11SurfaceData.c := implicit-fallthrough pointer-to-int-cast, \
DISABLED_WARNINGS_gcc_XlibWrapper.c := type-limits pointer-to-int-cast, \
@@ -446,6 +449,7 @@ else
LIBFREETYPE_LIBS := -lfreetype
endif
# gcc_ftobjs.c := maybe-uninitialized required for GCC 7 builds.
$(eval $(call SetupJdkLibrary, BUILD_LIBFREETYPE, \
NAME := freetype, \
OPTIMIZATION := HIGHEST, \
@@ -454,6 +458,7 @@ else
EXTRA_HEADER_DIRS := $(BUILD_LIBFREETYPE_HEADER_DIRS), \
DISABLED_WARNINGS_microsoft := 4267 4244 4996, \
DISABLED_WARNINGS_gcc := dangling-pointer stringop-overflow, \
DISABLED_WARNINGS_gcc_ftobjs.c := maybe-uninitialized, \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
))
@@ -609,7 +614,9 @@ ifeq ($(call isTargetOs, windows), true)
$(eval $(call SetupJdkLibrary, BUILD_LIBJAWT, \
NAME := jawt, \
OPTIMIZATION := LOW, \
CFLAGS := $(CXXFLAGS_JDKLIB) \
CFLAGS := $(CFLAGS_JDKLIB) \
$(LIBJAWT_CFLAGS), \
CXXFLAGS := $(CXXFLAGS_JDKLIB) \
$(LIBJAWT_CFLAGS), \
EXTRA_HEADER_DIRS := $(LIBJAWT_EXTRA_HEADER_DIRS), \
LDFLAGS := $(LDFLAGS_JDKLIB) $(LDFLAGS_CXX_JDK), \
@@ -792,6 +799,8 @@ ifeq ($(ENABLE_HEADLESS_ONLY), false)
OPTIMIZATION := LOW, \
CFLAGS := $(CFLAGS_JDKLIB) $(LIBSPLASHSCREEN_CFLAGS) \
$(GIFLIB_CFLAGS) $(LIBJPEG_CFLAGS) $(PNG_CFLAGS) $(LIBZ_CFLAGS), \
CXXFLAGS := $(CXXFLAGS_JDKLIB) $(LIBSPLASHSCREEN_CFLAGS) \
$(GIFLIB_CFLAGS) $(LIBJPEG_CFLAGS) $(PNG_CFLAGS) $(LIBZ_CFLAGS), \
EXTRA_HEADER_DIRS := $(LIBSPLASHSCREEN_HEADER_DIRS), \
DISABLED_WARNINGS_gcc_dgif_lib.c := sign-compare, \
DISABLED_WARNINGS_gcc_jcmaster.c := implicit-fallthrough, \

View File

@@ -45,8 +45,9 @@ ifeq ($(call isTargetOs, windows), true)
$(eval $(call SetupJdkLibrary, BUILD_LIBSSPI_BRIDGE, \
NAME := sspi_bridge, \
OPTIMIZATION := LOW, \
CFLAGS := $(CFLAGS_JDKLIB) \
-I$(TOPDIR)/src/java.security.jgss/share/native/libj2gss, \
EXTRA_HEADER_DIRS := libj2gss, \
CFLAGS := $(CFLAGS_JDKLIB), \
CXXFLAGS := $(CXXFLAGS_JDKLIB), \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
LIBS := Secur32.lib, \

View File

@@ -1,5 +1,5 @@
#
# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2014, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -43,6 +43,9 @@ ifeq ($(call isTargetOs, windows), true)
CFLAGS := $(filter-out -Zc:wchar_t-, $(CFLAGS_JDKEXE)) -Zc:wchar_t \
-analyze- -Od -Gd -D_WINDOWS \
-D_UNICODE -DUNICODE -RTC1 -EHsc, \
CXXFLAGS := $(filter-out -Zc:wchar_t-, $(CXXFLAGS_JDKEXE)) -Zc:wchar_t \
-analyze- -Od -Gd -D_WINDOWS \
-D_UNICODE -DUNICODE -RTC1 -EHsc, \
DISABLED_WARNINGS_microsoft_jabswitch.cpp := 4267 4996, \
LDFLAGS := $(LDFLAGS_JDKEXE), \
LIBS := advapi32.lib version.lib user32.lib, \
@@ -65,6 +68,7 @@ ifeq ($(call isTargetOs, windows), true)
SRC := $(ACCESSIBILITY_SRCDIR)/jaccessinspector $(ACCESSIBILITY_SRCDIR)/common \
$(ACCESSIBILITY_SRCDIR)/toolscommon $(ACCESSIBILITY_SRCDIR)/bridge, \
CFLAGS := $$(CFLAGS_JDKEXE) $(TOOLS_CFLAGS) -DACCESSBRIDGE_ARCH_$2 -EHsc, \
CXXFLAGS := $$(CXXFLAGS_JDKEXE) $(TOOLS_CFLAGS) -DACCESSBRIDGE_ARCH_$2 -EHsc, \
LDFLAGS := $$(LDFLAGS_JDKEXE) -stack:655360, \
LIBS := advapi32.lib user32.lib, \
VERSIONINFO_RESOURCE := $(ACCESSIBILITY_SRCDIR)/jaccessinspector/jaccessinspectorWindow.rc, \
@@ -86,6 +90,7 @@ ifeq ($(call isTargetOs, windows), true)
SRC := $(ACCESSIBILITY_SRCDIR)/jaccesswalker $(ACCESSIBILITY_SRCDIR)/common \
$(ACCESSIBILITY_SRCDIR)/toolscommon $(ACCESSIBILITY_SRCDIR)/bridge, \
CFLAGS := $$(CFLAGS_JDKEXE) $(TOOLS_CFLAGS) -DACCESSBRIDGE_ARCH_$2 -EHsc, \
CXXFLAGS := $$(CXXFLAGS_JDKEXE) $(TOOLS_CFLAGS) -DACCESSBRIDGE_ARCH_$2 -EHsc, \
LDFLAGS := $$(LDFLAGS_JDKEXE) -stack:655360, \
LIBS := advapi32.lib comctl32.lib gdi32.lib user32.lib, \
VERSIONINFO_RESOURCE := $(ACCESSIBILITY_SRCDIR)/jaccesswalker/jaccesswalkerWindow.rc, \

View File

@@ -1,5 +1,5 @@
#
# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2014, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -43,6 +43,8 @@ ifeq ($(call isTargetOs, windows), true)
DISABLED_WARNINGS_microsoft := 4311 4302 4312, \
CFLAGS := $(filter-out -MD, $(CFLAGS_JDKLIB)) -MT \
-DACCESSBRIDGE_ARCH_$2, \
CXXFLAGS := $(filter-out -MD, $(CXXFLAGS_JDKLIB)) -MT \
-DACCESSBRIDGE_ARCH_$2, \
EXTRA_HEADER_DIRS := \
include/bridge \
java.desktop:include, \
@@ -70,6 +72,8 @@ ifeq ($(call isTargetOs, windows), true)
DISABLED_WARNINGS_microsoft_WinAccessBridge.cpp := 4302 4311, \
CFLAGS := $(CFLAGS_JDKLIB) \
-DACCESSBRIDGE_ARCH_$2, \
CXXFLAGS := $(CXXFLAGS_JDKLIB) \
-DACCESSBRIDGE_ARCH_$2, \
EXTRA_HEADER_DIRS := \
include/bridge, \
LDFLAGS := $(LDFLAGS_JDKLIB) \

View File

@@ -1,5 +1,5 @@
#
# Copyright (c) 2011, 2019, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2011, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -33,6 +33,7 @@ ifeq ($(call isTargetOs, windows), true)
NAME := sunmscapi, \
OPTIMIZATION := LOW, \
CFLAGS := $(CFLAGS_JDKLIB), \
CXXFLAGS := $(CXXFLAGS_JDKLIB), \
LDFLAGS := $(LDFLAGS_JDKLIB) $(LDFLAGS_CXX_JDK) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
LIBS := crypt32.lib advapi32.lib ncrypt.lib, \

View File

@@ -2289,7 +2289,6 @@ bool Matcher::match_rule_supported(int opcode) {
if (!has_match_rule(opcode))
return false;
bool ret_value = true;
switch (opcode) {
case Op_OnSpinWait:
return VM_Version::supports_on_spin_wait();
@@ -2297,18 +2296,26 @@ bool Matcher::match_rule_supported(int opcode) {
case Op_CacheWBPreSync:
case Op_CacheWBPostSync:
if (!VM_Version::supports_data_cache_line_flush()) {
ret_value = false;
return false;
}
break;
case Op_ExpandBits:
case Op_CompressBits:
if (!VM_Version::supports_svebitperm()) {
ret_value = false;
return false;
}
break;
case Op_FmaF:
case Op_FmaD:
case Op_FmaVF:
case Op_FmaVD:
if (!UseFMA) {
return false;
}
break;
}
return ret_value; // Per default match rules are supported.
return true; // By default, match rules are supported.
}
const RegMask* Matcher::predicate_reg_mask(void) {
@@ -14305,12 +14312,12 @@ instruct mulD_reg_reg(vRegD dst, vRegD src1, vRegD src2) %{
// src1 * src2 + src3
instruct maddF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF src3 (Binary src1 src2)));
format %{ "fmadds $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmadds(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14322,12 +14329,12 @@ instruct maddF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
// src1 * src2 + src3
instruct maddD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD src3 (Binary src1 src2)));
format %{ "fmaddd $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmaddd(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14337,15 +14344,15 @@ instruct maddD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 + src3
// src1 * (-src2) + src3
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct msubF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF src3 (Binary (NegF src1) src2)));
match(Set dst (FmaF src3 (Binary src1 (NegF src2))));
format %{ "fmsubs $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsubs(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14355,15 +14362,15 @@ instruct msubF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 + src3
// src1 * (-src2) + src3
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct msubD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD src3 (Binary (NegD src1) src2)));
match(Set dst (FmaD src3 (Binary src1 (NegD src2))));
format %{ "fmsubd $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsubd(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14373,15 +14380,15 @@ instruct msubD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3
// src1 * (-src2) - src3
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct mnaddF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF (NegF src3) (Binary (NegF src1) src2)));
match(Set dst (FmaF (NegF src3) (Binary src1 (NegF src2))));
format %{ "fnmadds $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmadds(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14391,15 +14398,15 @@ instruct mnaddF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3
// src1 * (-src2) - src3
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct mnaddD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD (NegD src3) (Binary (NegD src1) src2)));
match(Set dst (FmaD (NegD src3) (Binary src1 (NegD src2))));
format %{ "fnmaddd $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmaddd(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14411,12 +14418,12 @@ instruct mnaddD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3) %{
// src1 * src2 - src3
instruct mnsubF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3, immF0 zero) %{
predicate(UseFMA);
match(Set dst (FmaF (NegF src3) (Binary src1 src2)));
format %{ "fnmsubs $dst, $src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmsubs(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -14428,13 +14435,13 @@ instruct mnsubF_reg_reg(vRegF dst, vRegF src1, vRegF src2, vRegF src3, immF0 zer
// src1 * src2 - src3
instruct mnsubD_reg_reg(vRegD dst, vRegD src1, vRegD src2, vRegD src3, immD0 zero) %{
predicate(UseFMA);
match(Set dst (FmaD (NegD src3) (Binary src1 src2)));
format %{ "fnmsubd $dst, $src1, $src2, $src3" %}
ins_encode %{
// n.b. insn name should be fnmsubd
assert(UseFMA, "Needs FMA instructions support.");
// n.b. insn name should be fnmsubd
__ fnmsub(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),

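The "has been idealized to" comments above lean on the fact that IEEE-754 negation is exact, so moving the NegF/NegD onto the other multiplicand cannot change the fused result, and one msub/fnmadd pattern per shape is enough. A quick standalone check of that identity (not JDK code):

```cpp
#include <cassert>
#include <cmath>

int main() {
  // (-a) * b + c and a * (-b) + c round identically under fused
  // multiply-add, since negation only flips the sign bit.
  double a = 1.25, b = -3.5, c = 0.75;
  assert(std::fma(-a, b, c) == std::fma(a, -b, c));

  float af = 1.25f, bf = -3.5f, cf = 0.75f;
  assert(std::fma(-af, bf, cf) == std::fma(af, -bf, cf));
  return 0;
}
```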
View File

@@ -2131,14 +2131,14 @@ instruct vmla_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
%}
// vector fmla
// dst_src1 = dst_src1 + src2 * src3
// dst_src1 = src2 * src3 + dst_src1
instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 src3)));
format %{ "vfmla $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmla($dst_src1$$FloatRegister, get_arrangement(this),
@@ -2157,11 +2157,12 @@ instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
// dst_src1 = dst_src1 * src2 + src3
instruct vfmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary src3 pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary src3 pg)));
format %{ "vfmad_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmad($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -2221,34 +2222,14 @@ instruct vmls_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fmls
// dst_src1 = dst_src1 + -src2 * src3
instruct vfmls1(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary (NegVD src2) src3)));
format %{ "vfmls1 $dst_src1, $src2, $src3" %}
ins_encode %{
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmls($dst_src1$$FloatRegister, get_arrangement(this),
$src2$$FloatRegister, $src3$$FloatRegister);
} else {
assert(UseSVE > 0, "must be sve");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmls($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
}
%}
ins_pipe(pipe_slow);
%}
// dst_src1 = dst_src1 + src2 * -src3
instruct vfmls2(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
// dst_src1 = src2 * (-src3) + dst_src1
// "(-src2) * src3 + dst_src1" has been idealized to "src3 * (-src2) + dst_src1"
instruct vfmls(vReg dst_src1, vReg src2, vReg src3) %{
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 (NegVF src3))));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 (NegVD src3))));
format %{ "vfmls2 $dst_src1, $src2, $src3" %}
format %{ "vfmls $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmls($dst_src1$$FloatRegister, get_arrangement(this),
@@ -2265,13 +2246,14 @@ instruct vfmls2(vReg dst_src1, vReg src2, vReg src3) %{
// vector fmsb - predicated
// dst_src1 = dst_src1 * -src2 + src3
// dst_src1 = dst_src1 * (-src2) + src3
instruct vfmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary src3 pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary src3 pg)));
format %{ "vfmsb_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmsb($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -2281,27 +2263,15 @@ instruct vfmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fnmla (sve)
// dst_src1 = -dst_src1 + -src2 * src3
instruct vfnmla1(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary (NegVD src2) src3)));
format %{ "vfnmla1 $dst_src1, $src2, $src3" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmla($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_slow);
%}
// dst_src1 = -dst_src1 + src2 * -src3
instruct vfnmla2(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
// dst_src1 = src2 * (-src3) - dst_src1
// "(-src2) * src3 - dst_src1" has been idealized to "src3 * (-src2) - dst_src1"
instruct vfnmla(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 (NegVF src3))));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 (NegVD src3))));
format %{ "vfnmla2 $dst_src1, $src2, $src3" %}
format %{ "vfnmla $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmla($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -2311,13 +2281,14 @@ instruct vfnmla2(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmad - predicated
// dst_src1 = -src3 + dst_src1 * -src2
// dst_src1 = dst_src1 * (-src2) - src3
instruct vfnmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary (NegVF src3) pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary (NegVD src3) pg)));
format %{ "vfnmad_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmad($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -2327,13 +2298,14 @@ instruct vfnmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fnmls (sve)
// dst_src1 = -dst_src1 + src2 * src3
// dst_src1 = src2 * src3 - dst_src1
instruct vfnmls(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 src3)));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 src3)));
format %{ "vfnmls $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmls($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -2343,13 +2315,14 @@ instruct vfnmls(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmsb - predicated
// dst_src1 = -src3 + dst_src1 * src2
// dst_src1 = dst_src1 * src2 - src3
instruct vfnmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary (NegVF src3) pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary (NegVD src3) pg)));
format %{ "vfnmsb_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmsb($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);

View File

@@ -1173,14 +1173,14 @@ instruct vmla_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
%}
// vector fmla
// dst_src1 = dst_src1 + src2 * src3
// dst_src1 = src2 * src3 + dst_src1
instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 src3)));
format %{ "vfmla $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmla($dst_src1$$FloatRegister, get_arrangement(this),
@@ -1199,11 +1199,12 @@ instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
// dst_src1 = dst_src1 * src2 + src3
instruct vfmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary src3 pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary src3 pg)));
format %{ "vfmad_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmad($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -1263,34 +1264,14 @@ instruct vmls_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fmls
// dst_src1 = dst_src1 + -src2 * src3
instruct vfmls1(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary (NegVD src2) src3)));
format %{ "vfmls1 $dst_src1, $src2, $src3" %}
ins_encode %{
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmls($dst_src1$$FloatRegister, get_arrangement(this),
$src2$$FloatRegister, $src3$$FloatRegister);
} else {
assert(UseSVE > 0, "must be sve");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmls($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
}
%}
ins_pipe(pipe_slow);
%}
// dst_src1 = dst_src1 + src2 * -src3
instruct vfmls2(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
// dst_src1 = src2 * (-src3) + dst_src1
// "(-src2) * src3 + dst_src1" has been idealized to "src3 * (-src2) + dst_src1"
instruct vfmls(vReg dst_src1, vReg src2, vReg src3) %{
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 (NegVF src3))));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 (NegVD src3))));
format %{ "vfmls2 $dst_src1, $src2, $src3" %}
format %{ "vfmls $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
uint length_in_bytes = Matcher::vector_length_in_bytes(this);
if (VM_Version::use_neon_for_vector(length_in_bytes)) {
__ fmls($dst_src1$$FloatRegister, get_arrangement(this),
@@ -1307,13 +1288,14 @@ instruct vfmls2(vReg dst_src1, vReg src2, vReg src3) %{
// vector fmsb - predicated
// dst_src1 = dst_src1 * -src2 + src3
// dst_src1 = dst_src1 * (-src2) + src3
instruct vfmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary src3 pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary src3 pg)));
format %{ "vfmsb_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fmsb($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -1323,27 +1305,15 @@ instruct vfmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fnmla (sve)
// dst_src1 = -dst_src1 + -src2 * src3
instruct vfnmla1(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary (NegVD src2) src3)));
format %{ "vfnmla1 $dst_src1, $src2, $src3" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmla($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_slow);
%}
// dst_src1 = -dst_src1 + src2 * -src3
instruct vfnmla2(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
// dst_src1 = src2 * (-src3) - dst_src1
// "(-src2) * src3 - dst_src1" has been idealized to "src3 * (-src2) - dst_src1"
instruct vfnmla(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 (NegVF src3))));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 (NegVD src3))));
format %{ "vfnmla2 $dst_src1, $src2, $src3" %}
format %{ "vfnmla $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmla($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -1353,13 +1323,14 @@ instruct vfnmla2(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmad - predicated
// dst_src1 = -src3 + dst_src1 * -src2
// dst_src1 = dst_src1 * (-src2) - src3
instruct vfnmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary (NegVF src3) pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary (NegVD src3) pg)));
format %{ "vfnmad_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmad($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -1369,13 +1340,14 @@ instruct vfnmad_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
// vector fnmls (sve)
// dst_src1 = -dst_src1 + src2 * src3
// dst_src1 = src2 * src3 - dst_src1
instruct vfnmls(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 src3)));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 src3)));
format %{ "vfnmls $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmls($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
ptrue, $src2$$FloatRegister, $src3$$FloatRegister);
@@ -1385,13 +1357,14 @@ instruct vfnmls(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmsb - predicated
// dst_src1 = -src3 + dst_src1 * src2
// dst_src1 = dst_src1 * src2 - src3
instruct vfnmsb_masked(vReg dst_src1, vReg src2, vReg src3, pRegGov pg) %{
predicate(UseFMA && UseSVE > 0);
predicate(UseSVE > 0);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary (NegVF src3) pg)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary (NegVD src3) pg)));
format %{ "vfnmsb_masked $dst_src1, $pg, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ sve_fnmsb($dst_src1$$FloatRegister, __ elemType_to_regVariant(bt),
$pg$$PRegister, $src2$$FloatRegister, $src3$$FloatRegister);

View File

@@ -83,7 +83,7 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
intptr_t *sp, *fp; // sp is really our unextended_sp
if (FKind::interpreted) {
assert((intptr_t*)f.at(frame::interpreter_frame_last_sp_offset) == nullptr
|| f.unextended_sp() == (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset), "");
|| f.unextended_sp() == (intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset), "");
intptr_t locals_offset = *f.addr_at(frame::interpreter_frame_locals_offset);
// If the caller.is_empty(), i.e. we're freezing into an empty chunk, then we set
// the chunk's argsize in finalize_freeze and make room for it above the unextended_sp
@@ -123,7 +123,7 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
void FreezeBase::adjust_interpreted_frame_unextended_sp(frame& f) {
assert((f.at(frame::interpreter_frame_last_sp_offset) != 0) || (f.unextended_sp() == f.sp()), "");
intptr_t* real_unextended_sp = (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset);
intptr_t* real_unextended_sp = (intptr_t*)f.at_relative_or_null(frame::interpreter_frame_last_sp_offset);
if (real_unextended_sp != nullptr) {
f.set_unextended_sp(real_unextended_sp); // can be null at a safepoint
}
@@ -149,8 +149,8 @@ inline void FreezeBase::relativize_interpreted_frame_metadata(const frame& f, co
// because we freeze the padding word (see recurse_freeze_interpreted_frame) in order to keep the same relativized
// locals value, we don't need to change the locals value here.
// at(frame::interpreter_frame_last_sp_offset) can be null at safepoint preempts
*hf.addr_at(frame::interpreter_frame_last_sp_offset) = hf.unextended_sp() - hf.fp();
// Make sure that last_sp is already relativized.
assert((intptr_t*)hf.at_relative(frame::interpreter_frame_last_sp_offset) == hf.unextended_sp(), "");
relativize_one(vfp, hfp, frame::interpreter_frame_initial_sp_offset); // == block_top == block_bottom
relativize_one(vfp, hfp, frame::interpreter_frame_extended_sp_offset);
@@ -290,7 +290,9 @@ static inline void derelativize_one(intptr_t* const fp, int offset) {
inline void ThawBase::derelativize_interpreted_frame_metadata(const frame& hf, const frame& f) {
intptr_t* vfp = f.fp();
derelativize_one(vfp, frame::interpreter_frame_last_sp_offset);
// Make sure that last_sp is kept relativized.
assert((intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset) == f.unextended_sp(), "");
derelativize_one(vfp, frame::interpreter_frame_initial_sp_offset);
derelativize_one(vfp, frame::interpreter_frame_extended_sp_offset);
}

View File

@@ -125,7 +125,8 @@ inline intptr_t* ContinuationHelper::InterpretedFrame::frame_top(const frame& f,
assert(res == (intptr_t*)f.interpreter_frame_monitor_end() - expression_stack_sz, "");
assert(res >= f.unextended_sp(),
"res: " INTPTR_FORMAT " initial_sp: " INTPTR_FORMAT " last_sp: " INTPTR_FORMAT " unextended_sp: " INTPTR_FORMAT " expression_stack_size: %d",
p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at(frame::interpreter_frame_last_sp_offset), p2i(f.unextended_sp()), expression_stack_sz);
p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at_relative_or_null(frame::interpreter_frame_last_sp_offset),
p2i(f.unextended_sp()), expression_stack_sz);
return res;
}

View File

@@ -168,7 +168,7 @@ void DowncallStubGenerator::generate() {
assert(_abi._shadow_space_bytes == 0, "not expecting shadow space on AArch64");
allocated_frame_size += arg_shuffle.out_arg_bytes();
bool should_save_return_value = !_needs_return_buffer && _needs_transition;
bool should_save_return_value = !_needs_return_buffer;
RegSpiller out_reg_spiller(_output_registers);
int spill_offset = -1;

View File

@@ -355,7 +355,9 @@ void frame::interpreter_frame_set_monitor_end(BasicObjectLock* value) {
// Used by template based interpreter deoptimization
void frame::interpreter_frame_set_last_sp(intptr_t* sp) {
*((intptr_t**)addr_at(interpreter_frame_last_sp_offset)) = sp;
assert(is_interpreted_frame(), "interpreted frame expected");
// set relativized last_sp
ptr_at_put(interpreter_frame_last_sp_offset, sp != nullptr ? (sp - fp()) : 0);
}
// Used by template based interpreter deoptimization
@@ -508,7 +510,7 @@ bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
// first the method
Method* m = *interpreter_frame_method_addr();
Method* m = safe_interpreter_frame_method();
// validate the method we'd find in this potential sender
if (!Method::is_valid_method(m)) return false;

View File

@@ -262,7 +262,9 @@ inline intptr_t* frame::interpreter_frame_locals() const {
}
inline intptr_t* frame::interpreter_frame_last_sp() const {
return (intptr_t*)at(interpreter_frame_last_sp_offset);
intptr_t n = *addr_at(interpreter_frame_last_sp_offset);
assert(n <= 0, "n: " INTPTR_FORMAT, n);
return n != 0 ? &fp()[n] : nullptr;
}
inline intptr_t* frame::interpreter_frame_bcp_addr() const {

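Together with interpreter_frame_set_last_sp above, the getter now round-trips last_sp through an fp-relative word offset, which stays valid when the frame is copied to a different address (as continuation freeze/thaw does). A minimal sketch of the encode/decode pair, with hypothetical helpers:

```cpp
#include <cassert>
#include <cstdint>

// Store last_sp as a word offset from fp; 0 marks "unset" (null).
void set_last_sp(intptr_t* fp, int slot, intptr_t* sp) {
  fp[slot] = (sp != nullptr) ? (sp - fp) : 0;
}

// Decode it again. The stack grows down, so a set offset is negative.
intptr_t* last_sp(intptr_t* fp, int slot) {
  intptr_t n = fp[slot];
  assert(n <= 0 && "last_sp lies at or below fp");
  return (n != 0) ? &fp[n] : nullptr;
}
```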
View File

@@ -36,6 +36,7 @@
#include "oops/markWord.hpp"
#include "oops/method.hpp"
#include "oops/methodData.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/jvmtiThreadState.hpp"
@@ -429,7 +430,9 @@ void InterpreterMacroAssembler::prepare_to_jump_from_interpreted() {
// set sender sp
mov(r19_sender_sp, sp);
// record last_sp
str(esp, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
sub(rscratch1, esp, rfp);
asr(rscratch1, rscratch1, Interpreter::logStackElementSize);
str(rscratch1, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
}
// Jump to from_interpreted entry of a call unless single stepping is possible
@@ -1881,8 +1884,24 @@ void InterpreterMacroAssembler::load_resolved_indy_entry(Register cache, Registe
get_cache_index_at_bcp(index, 1, sizeof(u4));
// Get address of invokedynamic array
ldr(cache, Address(rcpool, in_bytes(ConstantPoolCache::invokedynamic_entries_offset())));
// Scale the index to be the entry index * sizeof(ResolvedInvokeDynamicInfo)
// Scale the index to be the entry index * sizeof(ResolvedIndyEntry)
lsl(index, index, log2i_exact(sizeof(ResolvedIndyEntry)));
add(cache, cache, Array<ResolvedIndyEntry>::base_offset_in_bytes());
lea(cache, Address(cache, index));
}
void InterpreterMacroAssembler::load_field_entry(Register cache, Register index, int bcp_offset) {
// Get index out of bytecode pointer
get_cache_index_at_bcp(index, bcp_offset, sizeof(u2));
// Take shortcut if the size is a power of 2
if (is_power_of_2(sizeof(ResolvedFieldEntry))) {
lsl(index, index, log2i_exact(sizeof(ResolvedFieldEntry))); // Scale index by power of 2
} else {
mov(cache, sizeof(ResolvedFieldEntry));
mul(index, index, cache); // Scale the index to be the entry index * sizeof(ResolvedFieldEntry)
}
// Get address of field entries array
ldr(cache, Address(rcpool, ConstantPoolCache::field_entries_offset()));
add(cache, cache, Array<ResolvedFieldEntry>::base_offset_in_bytes());
lea(cache, Address(cache, index));
}
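
load_field_entry above scales the bytecode index by sizeof(ResolvedFieldEntry), using a single shift when the size is a power of two and a mov/mul pair otherwise. A small sketch of the same computation in C++ (the entry type is hypothetical; the builtin is GCC/Clang only):

```cpp
#include <cstddef>
#include <cstdint>

// base + index * sizeof(Entry), strength-reduced to a shift when
// sizeof(Entry) is a power of two, mirroring the assembler above.
template <typename Entry>
Entry* entry_at(Entry* base, size_t index) {
  constexpr size_t sz = sizeof(Entry);
  uintptr_t scaled;
  if constexpr ((sz & (sz - 1)) == 0) {
    scaled = index << __builtin_ctzll(sz);  // exact log2 of a power of two
  } else {
    scaled = index * sz;
  }
  return reinterpret_cast<Entry*>(reinterpret_cast<uintptr_t>(base) + scaled);
}
```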

View File

@@ -321,6 +321,7 @@ class InterpreterMacroAssembler: public MacroAssembler {
}
void load_resolved_indy_entry(Register cache, Register index);
void load_field_entry(Register cache, Register index, int bcp_offset = 1);
};
#endif // CPU_AARCH64_INTERP_MASM_AARCH64_HPP

View File

@@ -465,7 +465,8 @@ address TemplateInterpreterGenerator::generate_return_entry_for(TosState state,
address entry = __ pc();
// Restore stack bottom in case i2c adjusted stack
__ ldr(esp, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ldr(rscratch1, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ lea(esp, Address(rfp, rscratch1, Address::lsl(Interpreter::logStackElementSize)));
// and null it as marker that esp is now tos until next java call
__ str(zr, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ restore_bcp();
@@ -521,7 +522,8 @@ address TemplateInterpreterGenerator::generate_deopt_entry_for(TosState state,
__ restore_sp_after_call(); // Restore SP to extended SP
// Restore expression stack pointer
__ ldr(esp, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ldr(rscratch1, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ lea(esp, Address(rfp, rscratch1, Address::lsl(Interpreter::logStackElementSize)));
// null last_sp until next java call
__ str(zr, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
@@ -1867,7 +1869,8 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
/* notify_jvmdi */ false);
// Restore the last_sp and null it out
__ ldr(esp, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ldr(rscratch1, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ lea(esp, Address(rfp, rscratch1, Address::lsl(Interpreter::logStackElementSize)));
__ str(zr, Address(rfp, frame::interpreter_frame_last_sp_offset * wordSize));
__ restore_bcp();

View File

@@ -38,6 +38,7 @@
#include "oops/method.hpp"
#include "oops/objArrayKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/methodHandles.hpp"
@@ -187,7 +188,14 @@ void TemplateTable::patch_bytecode(Bytecodes::Code bc, Register bc_reg,
// additional, required work.
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
assert(load_bc_into_bc_reg, "we use bc_reg as temp");
__ get_cache_and_index_and_bytecode_at_bcp(temp_reg, bc_reg, temp_reg, byte_no, 1);
__ load_field_entry(temp_reg, bc_reg);
if (byte_no == f1_byte) {
__ lea(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::get_code_offset())));
} else {
__ lea(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::put_code_offset())));
}
// Load-acquire the bytecode to match store-release in ResolvedFieldEntry::fill_in()
__ ldarb(temp_reg, temp_reg);
__ movw(bc_reg, bc);
__ cbzw(temp_reg, L_patch_done); // don't patch
}
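
The ldarb above pairs with the store-release in ResolvedFieldEntry::fill_in(): a reader that acquire-loads a non-zero resolved bytecode is guaranteed to see the entry fields written before it was published. A small std::atomic sketch of that pattern (the names are illustrative, not the JDK's):

```cpp
#include <atomic>
#include <cstdint>

struct FieldEntrySketch {
  uint32_t field_offset = 0;          // plain data, written before publication
  std::atomic<uint8_t> get_code{0};   // 0 means "not resolved yet"

  void fill_in(uint32_t offset, uint8_t code) {
    field_offset = offset;
    get_code.store(code, std::memory_order_release);  // publish
  }

  // Returns true and the offset only if the entry is resolved; the
  // acquire load orders the field_offset read after the publication.
  bool resolved(uint32_t& offset_out) {
    if (get_code.load(std::memory_order_acquire) == 0) return false;
    offset_out = field_offset;
    return true;
  }
};
```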
@@ -2197,6 +2205,18 @@ void TemplateTable::_return(TosState state)
if (_desc->bytecode() == Bytecodes::_return)
__ membar(MacroAssembler::StoreStore);
if (_desc->bytecode() != Bytecodes::_return_register_finalizer) {
Label no_safepoint;
__ ldr(rscratch1, Address(rthread, JavaThread::polling_word_offset()));
__ tbz(rscratch1, log2i_exact(SafepointMechanism::poll_bit()), no_safepoint);
__ push(state);
__ push_cont_fastpath(rthread);
__ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::at_safepoint));
__ pop_cont_fastpath(rthread);
__ pop(state);
__ bind(no_safepoint);
}
// Narrow result if state is itos but result type is smaller.
// Need to narrow in the return bytecode rather than in generate_return_entry
// since compiled code callers expect the result to already be narrowed.
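
The block added to _return above polls for a safepoint: tbz tests a single bit of the thread's polling word and falls through to the InterpreterRuntime::at_safepoint call only when it is armed. A one-function sketch of the test, with the bit position assumed:

```cpp
#include <cstdint>

// Assumed bit position; the real value comes from
// SafepointMechanism::poll_bit().
constexpr uintptr_t kPollBit = uintptr_t(1) << 0;

inline bool safepoint_armed(uintptr_t polling_word) {
  return (polling_word & kPollBit) != 0;
}
```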
@@ -2247,11 +2267,6 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
Label resolved, clinit_barrier_slow;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ get_cache_and_index_and_bytecode_at_bcp(Rcache, index, temp, byte_no, 1, index_size);
@@ -2279,6 +2294,69 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
}
}
void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
Register Rcache,
Register index) {
const Register temp = r19;
assert_different_registers(Rcache, index, temp);
Label resolved;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ load_field_entry(Rcache, index);
if (byte_no == f1_byte) {
__ lea(temp, Address(Rcache, in_bytes(ResolvedFieldEntry::get_code_offset())));
} else {
__ lea(temp, Address(Rcache, in_bytes(ResolvedFieldEntry::put_code_offset())));
}
// Load-acquire the bytecode to match store-release in ResolvedFieldEntry::fill_in()
__ ldarb(temp, temp);
__ subs(zr, temp, (int) code); // have we resolved this bytecode?
__ br(Assembler::EQ, resolved);
// resolve first time through
address entry = CAST_FROM_FN_PTR(address, InterpreterRuntime::resolve_from_cache);
__ mov(temp, (int) code);
__ call_VM(noreg, entry, temp);
// Update registers with resolved info
__ load_field_entry(Rcache, index);
__ bind(resolved);
}
void TemplateTable::load_resolved_field_entry(Register obj,
Register cache,
Register tos_state,
Register offset,
Register flags,
bool is_static = false) {
assert_different_registers(cache, tos_state, flags, offset);
// Field offset
__ load_sized_value(offset, Address(cache, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// Flags
__ load_unsigned_byte(flags, Address(cache, in_bytes(ResolvedFieldEntry::flags_offset())));
// TOS state
__ load_unsigned_byte(tos_state, Address(cache, in_bytes(ResolvedFieldEntry::type_offset())));
// Klass overwrite register
if (is_static) {
__ ldr(obj, Address(cache, ResolvedFieldEntry::field_holder_offset()));
const int mirror_offset = in_bytes(Klass::java_mirror_offset());
__ ldr(obj, Address(obj, mirror_offset));
__ resolve_oop_handle(obj, r5, rscratch2);
}
}
// The Rcache and index registers must be set before the call.
// n.b. unlike x86, cache already includes the index offset
void TemplateTable::load_field_cp_cache_entry(Register obj,
@@ -2430,8 +2508,7 @@ void TemplateTable::jvmti_post_field_access(Register cache, Register index,
__ ldrw(r0, Address(rscratch1));
__ cbzw(r0, L1);
__ get_cache_and_index_at_bcp(c_rarg2, c_rarg3, 1);
__ lea(c_rarg2, Address(c_rarg2, in_bytes(ConstantPoolCache::base_offset())));
__ load_field_entry(c_rarg2, index);
if (is_static) {
__ mov(c_rarg1, zr); // null object reference
@@ -2441,11 +2518,10 @@ void TemplateTable::jvmti_post_field_access(Register cache, Register index,
}
// c_rarg1: object pointer or null
// c_rarg2: cache entry pointer
// c_rarg3: jvalue object on the stack
__ call_VM(noreg, CAST_FROM_FN_PTR(address,
InterpreterRuntime::post_field_access),
c_rarg1, c_rarg2, c_rarg3);
__ get_cache_and_index_at_bcp(cache, index, 1);
c_rarg1, c_rarg2);
__ load_field_entry(cache, index);
__ bind(L1);
}
}
@@ -2459,17 +2535,17 @@ void TemplateTable::pop_and_check_object(Register r)
void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteControl rc)
{
const Register cache = r2;
const Register index = r3;
const Register obj = r4;
const Register off = r19;
const Register flags = r0;
const Register raw_flags = r6;
const Register bc = r4; // uses same reg as obj, so don't mix them
const Register cache = r4;
const Register obj = r4;
const Register index = r3;
const Register tos_state = r3;
const Register off = r19;
const Register flags = r6;
const Register bc = r4; // uses same reg as obj, so don't mix them
resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_access(cache, index, is_static, false);
load_field_cp_cache_entry(obj, cache, index, off, raw_flags, is_static);
load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
if (!is_static) {
// obj is on the stack
@@ -2484,7 +2560,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
// the stores in one method and we interpret the loads in another.
if (!CompilerConfig::is_c1_or_interpreter_only_no_jvmci()){
Label notVolatile;
__ tbz(raw_flags, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(flags, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::AnyAny);
__ bind(notVolatile);
}
@@ -2494,13 +2570,8 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
Label Done, notByte, notBool, notInt, notShort, notChar,
notLong, notFloat, notObj, notDouble;
// x86 uses a shift and mask or wings it with a shift plus assert
// the mask is not needed. aarch64 just uses bitfield extract
__ ubfxw(flags, raw_flags, ConstantPoolCacheEntry::tos_state_shift,
ConstantPoolCacheEntry::tos_state_bits);
assert(btos == 0, "change code, btos != 0");
__ cbnz(flags, notByte);
__ cbnz(tos_state, notByte);
// Don't rewrite getstatic, only getfield
if (is_static) rc = may_not_rewrite;
@@ -2515,7 +2586,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notByte);
__ cmp(flags, (u1)ztos);
__ cmp(tos_state, (u1)ztos);
__ br(Assembler::NE, notBool);
// ztos (same code as btos)
@@ -2529,7 +2600,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notBool);
__ cmp(flags, (u1)atos);
__ cmp(tos_state, (u1)atos);
__ br(Assembler::NE, notObj);
// atos
do_oop_load(_masm, field, r0, IN_HEAP);
@@ -2540,7 +2611,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notObj);
__ cmp(flags, (u1)itos);
__ cmp(tos_state, (u1)itos);
__ br(Assembler::NE, notInt);
// itos
__ access_load_at(T_INT, IN_HEAP, r0, field, noreg, noreg);
@@ -2552,7 +2623,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notInt);
__ cmp(flags, (u1)ctos);
__ cmp(tos_state, (u1)ctos);
__ br(Assembler::NE, notChar);
// ctos
__ access_load_at(T_CHAR, IN_HEAP, r0, field, noreg, noreg);
@@ -2564,7 +2635,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notChar);
__ cmp(flags, (u1)stos);
__ cmp(tos_state, (u1)stos);
__ br(Assembler::NE, notShort);
// stos
__ access_load_at(T_SHORT, IN_HEAP, r0, field, noreg, noreg);
@@ -2576,7 +2647,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notShort);
__ cmp(flags, (u1)ltos);
__ cmp(tos_state, (u1)ltos);
__ br(Assembler::NE, notLong);
// ltos
__ access_load_at(T_LONG, IN_HEAP, r0, field, noreg, noreg);
@@ -2588,7 +2659,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ b(Done);
__ bind(notLong);
__ cmp(flags, (u1)ftos);
__ cmp(tos_state, (u1)ftos);
__ br(Assembler::NE, notFloat);
// ftos
__ access_load_at(T_FLOAT, IN_HEAP, noreg /* ftos */, field, noreg, noreg);
@@ -2601,7 +2672,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(notFloat);
#ifdef ASSERT
__ cmp(flags, (u1)dtos);
__ cmp(tos_state, (u1)dtos);
__ br(Assembler::NE, notDouble);
#endif
// dtos
@@ -2621,7 +2692,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(Done);
Label notVolatile;
__ tbz(raw_flags, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(flags, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
}
@@ -2646,8 +2717,6 @@ void TemplateTable::getstatic(int byte_no)
void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is_static) {
transition(vtos, vtos);
ByteSize cp_base_offset = ConstantPoolCache::base_offset();
if (JvmtiExport::can_post_field_modification()) {
// Check to see if a field modification watch has been set before
// we take the time to call into the VM.
@@ -2657,7 +2726,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
__ ldrw(r0, Address(rscratch1));
__ cbz(r0, L1);
__ get_cache_and_index_at_bcp(c_rarg2, rscratch1, 1);
__ mov(c_rarg2, cache);
if (is_static) {
// Life is simple. Null out the object pointer.
@@ -2667,12 +2736,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
// the object. We don't know the size of the value, though; it
// could be one or two words depending on its type. As a result,
// we must find the type to determine where the object is.
__ ldrw(c_rarg3, Address(c_rarg2,
in_bytes(cp_base_offset +
ConstantPoolCacheEntry::flags_offset())));
__ lsr(c_rarg3, c_rarg3,
ConstantPoolCacheEntry::tos_state_shift);
ConstantPoolCacheEntry::verify_tos_state_shift();
__ load_unsigned_byte(c_rarg3, Address(c_rarg2, in_bytes(ResolvedFieldEntry::type_offset())));
Label nope2, done, ok;
__ ldr(c_rarg1, at_tos_p1()); // initially assume a one word jvalue
__ cmpw(c_rarg3, ltos);
@@ -2683,8 +2747,6 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
__ ldr(c_rarg1, at_tos_p2()); // ltos (two word jvalue)
__ bind(nope2);
}
// cache entry pointer
__ add(c_rarg2, c_rarg2, in_bytes(cp_base_offset));
// object (tos)
__ mov(c_rarg3, esp);
// c_rarg1: object pointer set up above (null if static)
@@ -2694,7 +2756,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
CAST_FROM_FN_PTR(address,
InterpreterRuntime::post_field_modification),
c_rarg1, c_rarg2, c_rarg3);
__ get_cache_and_index_at_bcp(cache, index, 1);
__ load_field_entry(cache, index);
__ bind(L1);
}
}
@@ -2702,23 +2764,24 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteControl rc) {
transition(vtos, vtos);
const Register cache = r2;
const Register index = r3;
const Register obj = r2;
const Register off = r19;
const Register flags = r0;
const Register bc = r4;
const Register cache = r2;
const Register index = r3;
const Register tos_state = r3;
const Register obj = r2;
const Register off = r19;
const Register flags = r0;
const Register bc = r4;
resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_mod(cache, index, is_static);
load_field_cp_cache_entry(obj, cache, index, off, flags, is_static);
load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
Label Done;
__ mov(r5, flags);
{
Label notVolatile;
__ tbz(r5, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r5, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::StoreStore | MacroAssembler::LoadStore);
__ bind(notVolatile);
}
@@ -2729,12 +2792,8 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
Label notByte, notBool, notInt, notShort, notChar,
notLong, notFloat, notObj, notDouble;
// x86 uses a shift and mask or wings it with a shift plus assert
// the mask is not needed. aarch64 just uses bitfield extract
__ ubfxw(flags, flags, ConstantPoolCacheEntry::tos_state_shift, ConstantPoolCacheEntry::tos_state_bits);
assert(btos == 0, "change code, btos != 0");
__ cbnz(flags, notByte);
__ cbnz(tos_state, notByte);
// Don't rewrite putstatic, only putfield
if (is_static) rc = may_not_rewrite;
@@ -2751,7 +2810,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notByte);
__ cmp(flags, (u1)ztos);
__ cmp(tos_state, (u1)ztos);
__ br(Assembler::NE, notBool);
// ztos
@@ -2766,7 +2825,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notBool);
__ cmp(flags, (u1)atos);
__ cmp(tos_state, (u1)atos);
__ br(Assembler::NE, notObj);
// atos
@@ -2782,7 +2841,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notObj);
__ cmp(flags, (u1)itos);
__ cmp(tos_state, (u1)itos);
__ br(Assembler::NE, notInt);
// itos
@@ -2797,7 +2856,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notInt);
__ cmp(flags, (u1)ctos);
__ cmp(tos_state, (u1)ctos);
__ br(Assembler::NE, notChar);
// ctos
@@ -2812,7 +2871,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notChar);
__ cmp(flags, (u1)stos);
__ cmp(tos_state, (u1)stos);
__ br(Assembler::NE, notShort);
// stos
@@ -2827,7 +2886,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notShort);
__ cmp(flags, (u1)ltos);
__ cmp(tos_state, (u1)ltos);
__ br(Assembler::NE, notLong);
// ltos
@@ -2842,7 +2901,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notLong);
__ cmp(flags, (u1)ftos);
__ cmp(tos_state, (u1)ftos);
__ br(Assembler::NE, notFloat);
// ftos
@@ -2858,7 +2917,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(notFloat);
#ifdef ASSERT
__ cmp(flags, (u1)dtos);
__ cmp(tos_state, (u1)dtos);
__ br(Assembler::NE, notDouble);
#endif
@@ -2883,7 +2942,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
{
Label notVolatile;
__ tbz(r5, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r5, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::StoreLoad | MacroAssembler::StoreStore);
__ bind(notVolatile);
}
@@ -2902,8 +2961,7 @@ void TemplateTable::putstatic(int byte_no) {
putfield_or_static(byte_no, true);
}
void TemplateTable::jvmti_post_fast_field_mod()
{
void TemplateTable::jvmti_post_fast_field_mod() {
if (JvmtiExport::can_post_field_modification()) {
// Check to see if a field modification watch has been set before
// we take the time to call into the VM.
@@ -2933,7 +2991,7 @@ void TemplateTable::jvmti_post_fast_field_mod()
}
__ mov(c_rarg3, esp); // points to jvalue on the stack
// access constant pool cache entry
__ get_cache_entry_pointer_at_bcp(c_rarg2, r0, 1);
__ load_field_entry(c_rarg2, r0);
__ verify_oop(r19);
// r19: object pointer copied above
// c_rarg2: cache entry pointer
@@ -2968,21 +3026,18 @@ void TemplateTable::fast_storefield(TosState state)
jvmti_post_fast_field_mod();
// access constant pool cache
__ get_cache_and_index_at_bcp(r2, r1, 1);
__ load_field_entry(r2, r1);
__ push(r0);
// R1: field offset, R0: TOS state, R3: flags
load_resolved_field_entry(r2, r2, r0, r1, r3);
__ pop(r0);
// Must prevent reordering of the following cp cache loads with bytecode load
__ membar(MacroAssembler::LoadLoad);
// test for volatile with r3
__ ldrw(r3, Address(r2, in_bytes(base +
ConstantPoolCacheEntry::flags_offset())));
// replace index with field offset from cache entry
__ ldr(r1, Address(r2, in_bytes(base + ConstantPoolCacheEntry::f2_offset())));
{
Label notVolatile;
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::StoreStore | MacroAssembler::LoadStore);
__ bind(notVolatile);
}
@@ -3030,7 +3085,7 @@ void TemplateTable::fast_storefield(TosState state)
{
Label notVolatile;
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::StoreLoad | MacroAssembler::StoreStore);
__ bind(notVolatile);
}
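For context, the two barrier blocks around a volatile putfield above follow the JSR-133 cookbook recipe: StoreStore|LoadStore before the store, StoreLoad|StoreStore after it. In C++11 terms that combination is what a sequentially consistent store provides; a sketch, not the JVM's actual code:

#include <atomic>

std::atomic<int> g_field;  // stand-in for a volatile Java field

void volatile_putfield(int v) {
  // leading membar  ~ StoreStore | LoadStore
  // trailing membar ~ StoreLoad  | StoreStore
  g_field.store(v, std::memory_order_seq_cst);
}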
@@ -3049,7 +3104,7 @@ void TemplateTable::fast_accessfield(TosState state)
__ ldrw(r2, Address(rscratch1));
__ cbzw(r2, L1);
// access constant pool cache entry
__ get_cache_entry_pointer_at_bcp(c_rarg2, rscratch2, 1);
__ load_field_entry(c_rarg2, rscratch2);
__ verify_oop(r0);
__ push_ptr(r0); // save object pointer before call_VM() clobbers it
__ mov(c_rarg1, r0);
@@ -3064,15 +3119,13 @@ void TemplateTable::fast_accessfield(TosState state)
}
// access constant pool cache
__ get_cache_and_index_at_bcp(r2, r1, 1);
__ load_field_entry(r2, r1);
// Must prevent reordering of the following cp cache loads with bytecode load
__ membar(MacroAssembler::LoadLoad);
__ ldr(r1, Address(r2, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::f2_offset())));
__ ldrw(r3, Address(r2, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::flags_offset())));
__ load_sized_value(r1, Address(r2, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
__ load_unsigned_byte(r3, Address(r2, in_bytes(ResolvedFieldEntry::flags_offset())));
// r0: object
__ verify_oop(r0);
@@ -3087,7 +3140,7 @@ void TemplateTable::fast_accessfield(TosState state)
// the stores in one method and we interpret the loads in another.
if (!CompilerConfig::is_c1_or_interpreter_only_no_jvmci()) {
Label notVolatile;
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::AnyAny);
__ bind(notVolatile);
}
@@ -3124,7 +3177,7 @@ void TemplateTable::fast_accessfield(TosState state)
}
{
Label notVolatile;
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
}
@@ -3137,9 +3190,8 @@ void TemplateTable::fast_xaccess(TosState state)
// get receiver
__ ldr(r0, aaddress(0));
// access constant pool cache
__ get_cache_and_index_at_bcp(r2, r3, 2);
__ ldr(r1, Address(r2, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::f2_offset())));
__ load_field_entry(r2, r3, 2);
__ load_sized_value(r1, Address(r2, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// 8179954: We need to make sure that the code generated for
// volatile accesses forms a sequentially-consistent set of
@@ -3149,9 +3201,8 @@ void TemplateTable::fast_xaccess(TosState state)
// the stores in one method and we interpret the loads in another.
if (!CompilerConfig::is_c1_or_interpreter_only_no_jvmci()) {
Label notVolatile;
__ ldrw(r3, Address(r2, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::flags_offset())));
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ load_unsigned_byte(r3, Address(r2, in_bytes(ResolvedFieldEntry::flags_offset())));
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::AnyAny);
__ bind(notVolatile);
}
@@ -3177,9 +3228,8 @@ void TemplateTable::fast_xaccess(TosState state)
{
Label notVolatile;
__ ldrw(r3, Address(r2, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::flags_offset())));
__ tbz(r3, ConstantPoolCacheEntry::is_volatile_shift, notVolatile);
__ load_unsigned_byte(r3, Address(r2, in_bytes(ResolvedFieldEntry::flags_offset())));
__ tbz(r3, ResolvedFieldEntry::is_volatile_shift, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
}

View File

@@ -421,7 +421,7 @@ bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
// first the method
Method* m = *interpreter_frame_method_addr();
Method* m = safe_interpreter_frame_method();
// validate the method we'd find in this potential sender
if (!Method::is_valid_method(m)) return false;

View File

@@ -2492,6 +2492,16 @@ void TemplateTable::_return(TosState state) {
__ bind(skip_register_finalizer);
}
if (_desc->bytecode() != Bytecodes::_return_register_finalizer) {
Label no_safepoint;
__ ldr(Rtemp, Address(Rthread, JavaThread::polling_word_offset()));
__ tbz(Rtemp, exact_log2(SafepointMechanism::poll_bit()), no_safepoint);
__ push(state);
__ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::at_safepoint));
__ pop(state);
__ bind(no_safepoint);
}
// Narrow result if state is itos but result type is smaller.
// Need to narrow in the return bytecode rather than in generate_return_entry
// since compiled code callers expect the result to already be narrowed.

View File

@@ -89,7 +89,7 @@ inline void FreezeBase::relativize_interpreted_frame_metadata(const frame& f, co
relativize_one(vfp, hfp, ijava_idx(monitors));
relativize_one(vfp, hfp, ijava_idx(esp));
relativize_one(vfp, hfp, ijava_idx(top_frame_sp));
// top_frame_sp is already relativized
// hfp == hf.sp() + (f.fp() - f.sp()) is not true on ppc because the stack frame has room for
// the maximal expression stack and the expression stack in the heap frame is trimmed.
@@ -544,7 +544,7 @@ inline void ThawBase::derelativize_interpreted_frame_metadata(const frame& hf, c
derelativize_one(vfp, ijava_idx(monitors));
derelativize_one(vfp, ijava_idx(esp));
derelativize_one(vfp, ijava_idx(top_frame_sp));
// Keep top_frame_sp relativized.
}
inline void ThawBase::patch_pd(frame& f, const frame& caller) {

View File

@@ -86,7 +86,7 @@ inline void ContinuationHelper::InterpretedFrame::patch_sender_sp(frame& f, cons
intptr_t* sp = caller.unextended_sp();
if (!f.is_heap_frame() && caller.is_interpreted_frame()) {
// See diagram "Interpreter Calling Procedure on PPC" at the end of continuationFreezeThaw_ppc.inline.hpp
sp = (intptr_t*)caller.at(ijava_idx(top_frame_sp));
sp = (intptr_t*)caller.at_relative(ijava_idx(top_frame_sp));
}
assert(f.is_interpreted_frame(), "");
assert(f.is_heap_frame() || is_aligned(sp, frame::alignment_in_bytes), "");

View File

@@ -165,7 +165,7 @@ void DowncallStubGenerator::generate() {
int parameter_save_area_slots = MAX2(_input_registers.length(), 8);
int allocated_frame_size = frame::native_abi_minframe_size + parameter_save_area_slots * BytesPerWord;
bool should_save_return_value = !_needs_return_buffer && _needs_transition;
bool should_save_return_value = !_needs_return_buffer;
RegSpiller out_reg_spiller(_output_registers);
int spill_offset = -1;

View File

@@ -324,7 +324,7 @@ bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
// first the method
Method* m = *interpreter_frame_method_addr();
Method* m = safe_interpreter_frame_method();
// validate the method we'd find in this potential sender
if (!Method::is_valid_method(m)) return false;

View File

@@ -231,7 +231,13 @@ inline intptr_t* frame::interpreter_frame_esp() const {
inline void frame::interpreter_frame_set_monitor_end(BasicObjectLock* end) { get_ijava_state()->monitors = (intptr_t) end;}
inline void frame::interpreter_frame_set_cpcache(ConstantPoolCache* cp) { *interpreter_frame_cache_addr() = cp; }
inline void frame::interpreter_frame_set_esp(intptr_t* esp) { get_ijava_state()->esp = (intptr_t) esp; }
inline void frame::interpreter_frame_set_top_frame_sp(intptr_t* top_frame_sp) { get_ijava_state()->top_frame_sp = (intptr_t) top_frame_sp; }
inline void frame::interpreter_frame_set_top_frame_sp(intptr_t* top_frame_sp) {
assert(is_interpreted_frame(), "interpreted frame expected");
// set relativized top_frame_sp
get_ijava_state()->top_frame_sp = (intptr_t) (top_frame_sp - fp());
}
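This hunk, like the riscv last_sp changes further down, switches a frame slot from an absolute pointer to an fp-relative offset in stack-slot units, so that a frame copied into a heap StackChunk stays self-consistent. A simplified model with hypothetical names:

#include <cstdint>

struct Frame {
  intptr_t* fp;  // frame pointer into this frame's storage

  void set_relative(int idx, intptr_t* target) {
    fp[idx] = (target != nullptr) ? (target - fp) : 0;  // store offset, not address
  }
  intptr_t* get_relative(int idx) {
    intptr_t n = fp[idx];             // <= 0 for slots below fp
    return n != 0 ? &fp[n] : nullptr;
  }
};
// After memcpy-ing the frame elsewhere, get_relative() on the copy (using the
// copy's fp) resolves to the corresponding slot inside the copy.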
inline void frame::interpreter_frame_set_sender_sp(intptr_t* sender_sp) { get_ijava_state()->sender_sp = (intptr_t) sender_sp; }
inline intptr_t* frame::interpreter_frame_expression_stack() const {

View File

@@ -128,6 +128,7 @@ class InterpreterMacroAssembler: public MacroAssembler {
void get_cache_and_index_at_bcp(Register cache, int bcp_offset, size_t index_size = sizeof(u2));
void load_resolved_indy_entry(Register cache, Register index);
void load_field_entry(Register cache, Register index, int bcp_offset = 1);
void get_u4(Register Rdst, Register Rsrc, int offset, signedOrNot is_signed);

View File

@@ -31,6 +31,7 @@
#include "interp_masm_ppc.hpp"
#include "interpreter/interpreterRuntime.hpp"
#include "oops/methodData.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/jvmtiThreadState.hpp"
@@ -487,11 +488,28 @@ void InterpreterMacroAssembler::load_resolved_indy_entry(Register cache, Registe
// Get address of invokedynamic array
ld_ptr(cache, in_bytes(ConstantPoolCache::invokedynamic_entries_offset()), R27_constPoolCache);
// Scale the index to be the entry index * sizeof(ResolvedInvokeDynamicInfo)
// Scale the index to be the entry index * sizeof(ResolvedIndyEntry)
sldi(index, index, log2i_exact(sizeof(ResolvedIndyEntry)));
add(cache, cache, index);
}
void InterpreterMacroAssembler::load_field_entry(Register cache, Register index, int bcp_offset) {
// Get index out of bytecode pointer
get_cache_index_at_bcp(index, bcp_offset, sizeof(u2));
// Take shortcut if the size is a power of 2
if (is_power_of_2(sizeof(ResolvedFieldEntry))) {
// Scale index by power of 2
sldi(index, index, log2i_exact(sizeof(ResolvedFieldEntry)));
} else {
// Scale the index to be the entry index * sizeof(ResolvedFieldEntry)
mulli(index, index, sizeof(ResolvedFieldEntry));
}
// Get address of field entries array
ld_ptr(cache, in_bytes(ConstantPoolCache::field_entries_offset()), R27_constPoolCache);
addi(cache, cache, Array<ResolvedFieldEntry>::base_offset_in_bytes());
add(cache, cache, index);
}
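Both branches above compute entry_index * sizeof(ResolvedFieldEntry); the sldi branch is just the cheap power-of-two special case of the mulli branch. A scalar equivalent of the whole addressing sequence (entry_address and header_bytes are illustrative):

#include <cstddef>
#include <cstdint>

template <typename Entry>
uintptr_t entry_address(uintptr_t array_base, size_t header_bytes, size_t index) {
  constexpr size_t sz = sizeof(Entry);
  size_t scaled;
  if ((sz & (sz - 1)) == 0) {     // power of two: shift (the sldi path)
    size_t log2 = 0;
    while ((size_t{1} << log2) != sz) log2++;
    scaled = index << log2;
  } else {                        // general case: multiply (the mulli path)
    scaled = index * sz;
  }
  return array_base + header_bytes + scaled;  // ld_ptr + addi + add above
}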
// Load object from cpool->resolved_references(index).
// Kills:
// - index
@@ -1223,6 +1241,9 @@ void InterpreterMacroAssembler::call_from_interpreter(Register Rtarget_method, R
save_interpreter_state(Rscratch2);
#ifdef ASSERT
ld(Rscratch1, _ijava_state_neg(top_frame_sp), Rscratch2); // Rscratch2 contains fp
sldi(Rscratch1, Rscratch1, Interpreter::logStackElementSize);
add(Rscratch1, Rscratch1, Rscratch2); // Rscratch2 contains fp
// Compare sender_sp with the derelativized top_frame_sp
cmpd(CCR0, R21_sender_SP, Rscratch1);
asm_assert_eq("top_frame_sp incorrect");
#endif
@@ -1997,7 +2018,10 @@ void InterpreterMacroAssembler::add_monitor_to_stack(bool stack_is_empty, Regist
"size of a monitor must respect alignment of SP");
resize_frame(-monitor_size, /*temp*/esp); // Allocate space for new monitor
std(R1_SP, _ijava_state_neg(top_frame_sp), esp); // esp contains fp
subf(Rtemp2, esp, R1_SP); // esp contains fp
sradi(Rtemp2, Rtemp2, Interpreter::logStackElementSize);
// Store relativized top_frame_sp
std(Rtemp2, _ijava_state_neg(top_frame_sp), esp); // esp contains fp
// Shuffle expression stack down. Recall that stack_base points
// just above the new expression stack bottom. Old_tos and new_tos
@@ -2233,6 +2257,9 @@ void InterpreterMacroAssembler::restore_interpreter_state(Register scratch, bool
Register tfsp = R18_locals;
Register scratch2 = R26_monitor;
ld(tfsp, _ijava_state_neg(top_frame_sp), scratch);
// Derelativize top_frame_sp
sldi(tfsp, tfsp, Interpreter::logStackElementSize);
add(tfsp, tfsp, scratch);
resize_frame_absolute(tfsp, scratch2, R0);
}
ld(R14_bcp, _ijava_state_neg(bcp), scratch); // Changed by VM code (exception).

View File

@@ -2148,6 +2148,9 @@ bool Matcher::match_rule_supported(int opcode) {
return SuperwordUseVSX;
case Op_PopCountVI:
return (SuperwordUseVSX && UsePopCountInstruction);
case Op_FmaF:
case Op_FmaD:
return UseFMA;
case Op_FmaVF:
case Op_FmaVD:
return (SuperwordUseVSX && UseFMA);
@@ -9652,6 +9655,7 @@ instruct maddF_reg_reg(regF dst, regF src1, regF src2, regF src3) %{
format %{ "FMADDS $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmadds($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
@@ -9664,58 +9668,63 @@ instruct maddD_reg_reg(regD dst, regD src1, regD src2, regD src3) %{
format %{ "FMADD $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmadd($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
%}
// -src1 * src2 + src3 = -(src1*src2-src3)
// src1 * (-src2) + src3 = -(src1*src2-src3)
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct mnsubF_reg_reg(regF dst, regF src1, regF src2, regF src3) %{
match(Set dst (FmaF src3 (Binary (NegF src1) src2)));
match(Set dst (FmaF src3 (Binary src1 (NegF src2))));
format %{ "FNMSUBS $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmsubs($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
%}
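The comments above rest on the identity -a*b + c == -(a*b - c), which is what lets a negated-multiplicand FMA become a single fnmsubs. A quick scalar check with std::fma; both forms involve exactly one rounding, so they agree bit-for-bit except possibly for the sign of an exact zero result:

#include <cmath>
#include <cstdio>

int main() {
  float a = 1.5f, b = -2.25f, c = 0.125f;
  float direct = std::fma(-a, b, c);      // -src1 * src2 + src3
  float fused  = -std::fma(a, b, -c);     // -(src1 * src2 - src3), the fnmsub shape
  std::printf("%a %a\n", direct, fused);  // prints the same value twice
  return 0;
}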
// -src1 * src2 + src3 = -(src1*src2-src3)
// src1 * (-src2) + src3 = -(src1*src2-src3)
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct mnsubD_reg_reg(regD dst, regD src1, regD src2, regD src3) %{
match(Set dst (FmaD src3 (Binary (NegD src1) src2)));
match(Set dst (FmaD src3 (Binary src1 (NegD src2))));
format %{ "FNMSUB $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmsub($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3 = -(src1*src2+src3)
// src1 * (-src2) - src3 = -(src1*src2+src3)
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct mnaddF_reg_reg(regF dst, regF src1, regF src2, regF src3) %{
match(Set dst (FmaF (NegF src3) (Binary (NegF src1) src2)));
match(Set dst (FmaF (NegF src3) (Binary src1 (NegF src2))));
format %{ "FNMADDS $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmadds($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3 = -(src1*src2+src3)
// src1 * (-src2) - src3 = -(src1*src2+src3)
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct mnaddD_reg_reg(regD dst, regD src1, regD src2, regD src3) %{
match(Set dst (FmaD (NegD src3) (Binary (NegD src1) src2)));
match(Set dst (FmaD (NegD src3) (Binary src1 (NegD src2))));
format %{ "FNMADD $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmadd($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
@@ -9728,6 +9737,7 @@ instruct msubF_reg_reg(regF dst, regF src1, regF src2, regF src3) %{
format %{ "FMSUBS $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsubs($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
@@ -9740,6 +9750,7 @@ instruct msubD_reg_reg(regD dst, regD src1, regD src2, regD src3) %{
format %{ "FMSUB $dst, $src1, $src2, $src3" %}
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsub($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister, $src3$$FloatRegister);
%}
ins_pipe(pipe_class_default);
@@ -14057,7 +14068,7 @@ instruct vpopcnt_reg(vecX dst, vecX src) %{
%}
// --------------------------------- FMA --------------------------------------
// dst + src1 * src2
// src1 * src2 + dst
instruct vfma4F(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVF dst (Binary src1 src2)));
predicate(n->as_Vector()->length() == 4);
@@ -14066,14 +14077,15 @@ instruct vfma4F(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvmaddasp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);
%}
// dst - src1 * src2
// src1 * (-src2) + dst
// "(-src1) * src2 + dst" has been idealized to "src2 * (-src1) + dst"
instruct vfma4F_neg1(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVF dst (Binary (NegVF src1) src2)));
match(Set dst (FmaVF dst (Binary src1 (NegVF src2))));
predicate(n->as_Vector()->length() == 4);
@@ -14081,12 +14093,13 @@ instruct vfma4F_neg1(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvnmsubasp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);
%}
// - dst + src1 * src2
// src1 * src2 - dst
instruct vfma4F_neg2(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVF (NegVF dst) (Binary src1 src2)));
predicate(n->as_Vector()->length() == 4);
@@ -14095,12 +14108,13 @@ instruct vfma4F_neg2(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvmsubasp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);
%}
// dst + src1 * src2
// src1 * src2 + dst
instruct vfma2D(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVD dst (Binary src1 src2)));
predicate(n->as_Vector()->length() == 2);
@@ -14109,14 +14123,15 @@ instruct vfma2D(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvmaddadp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);
%}
// dst - src1 * src2
// src1 * (-src2) + dst
// "(-src1) * src2 + dst" has been idealized to "src2 * (-src1) + dst"
instruct vfma2D_neg1(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVD dst (Binary (NegVD src1) src2)));
match(Set dst (FmaVD dst (Binary src1 (NegVD src2))));
predicate(n->as_Vector()->length() == 2);
@@ -14124,12 +14139,13 @@ instruct vfma2D_neg1(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvnmsubadp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);
%}
// - dst + src1 * src2
// src1 * src2 - dst
instruct vfma2D_neg2(vecX dst, vecX src1, vecX src2) %{
match(Set dst (FmaVD (NegVD dst) (Binary src1 src2)));
predicate(n->as_Vector()->length() == 2);
@@ -14138,6 +14154,7 @@ instruct vfma2D_neg2(vecX dst, vecX src1, vecX src2) %{
size(4);
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ xvmsubadp($dst$$VectorSRegister, $src1$$VectorSRegister, $src2$$VectorSRegister);
%}
ins_pipe(pipe_class_default);

View File

@@ -1061,8 +1061,10 @@ void TemplateInterpreterGenerator::generate_fixed_frame(bool native_call, Regist
__ std(R0, _ijava_state_neg(oop_tmp), R1_SP); // only used for native_call
// Store sender's SP and this frame's top SP.
__ subf(R12_scratch2, Rtop_frame_size, R1_SP);
__ std(R21_sender_SP, _ijava_state_neg(sender_sp), R1_SP);
__ neg(R12_scratch2, Rtop_frame_size);
__ sradi(R12_scratch2, R12_scratch2, Interpreter::logStackElementSize);
// Store relativized top_frame_sp
__ std(R12_scratch2, _ijava_state_neg(top_frame_sp), R1_SP);
// Push top frame.

View File

@@ -38,6 +38,7 @@
#include "oops/methodData.hpp"
#include "oops/objArrayKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/methodHandles.hpp"
@@ -116,13 +117,10 @@ void TemplateTable::patch_bytecode(Bytecodes::Code new_bc, Register Rnew_bc, Reg
// additional, required work.
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
assert(load_bc_into_bc_reg, "we use bc_reg as temp");
__ get_cache_and_index_at_bcp(Rtemp /* dst = cache */, 1);
// ((*(cache+indices))>>((1+byte_no)*8))&0xFF:
#if defined(VM_LITTLE_ENDIAN)
__ lbz(Rnew_bc, in_bytes(ConstantPoolCache::base_offset() + ConstantPoolCacheEntry::indices_offset()) + 1 + byte_no, Rtemp);
#else
__ lbz(Rnew_bc, in_bytes(ConstantPoolCache::base_offset() + ConstantPoolCacheEntry::indices_offset()) + 7 - (1 + byte_no), Rtemp);
#endif
__ load_field_entry(Rtemp, Rnew_bc);
int code_offset = (byte_no == f1_byte) ? in_bytes(ResolvedFieldEntry::get_code_offset())
: in_bytes(ResolvedFieldEntry::put_code_offset());
__ lbz(Rnew_bc, code_offset, Rtemp);
__ cmpwi(CCR0, Rnew_bc, 0);
__ li(Rnew_bc, (unsigned int)(unsigned char)new_bc);
__ beq(CCR0, L_patch_done);
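The deleted #ifdef pair existed because byte k of a 64-bit word sits at memory offset k on little-endian machines but at 7 - k on big-endian PPC. A scalar model of that indexing (byte_of_u64 is an illustrative helper; the __BYTE_ORDER__ macro is a GCC/Clang convention):

#include <cstdint>
#include <cstring>

uint8_t byte_of_u64(uint64_t word, int k) {  // k == 0 selects the LSB
  uint8_t raw[8];
  std::memcpy(raw, &word, 8);
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
  return raw[7 - k];
#else
  return raw[k];
#endif
}
// Endian-neutral equivalent, as in the old comment: (word >> (8 * k)) & 0xFF.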
@@ -2247,6 +2245,68 @@ void TemplateTable::resolve_cache_and_index(int byte_no, Register Rcache, Regist
__ bind(Ldone);
}
void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
Register Rcache,
Register index) {
assert_different_registers(Rcache, index);
Label resolved;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ load_field_entry(Rcache, index);
int code_offset = (byte_no == f1_byte) ? in_bytes(ResolvedFieldEntry::get_code_offset())
: in_bytes(ResolvedFieldEntry::put_code_offset());
__ lbz(R0, code_offset, Rcache);
__ cmpwi(CCR0, R0, (int)code); // have we resolved this bytecode?
__ beq(CCR0, resolved);
// resolve first time through
address entry = CAST_FROM_FN_PTR(address, InterpreterRuntime::resolve_from_cache);
__ li(R4_ARG2, (int)code);
__ call_VM(noreg, entry, R4_ARG2);
// Update registers with resolved info
__ load_field_entry(Rcache, index);
__ bind(resolved);
// Use acquire semantics for the bytecode (see ResolvedFieldEntry::fill_in()).
__ isync(); // Order load wrt. succeeding loads.
}
void TemplateTable::load_resolved_field_entry(Register obj,
Register cache,
Register tos_state,
Register offset,
Register flags,
bool is_static = false) {
assert_different_registers(cache, tos_state, flags, offset);
// Field offset
__ load_sized_value(offset, in_bytes(ResolvedFieldEntry::field_offset_offset()), cache, sizeof(int), true /*is_signed*/);
// Flags
__ lbz(flags, in_bytes(ResolvedFieldEntry::flags_offset()), cache);
if (tos_state != noreg) {
__ lbz(tos_state, in_bytes(ResolvedFieldEntry::type_offset()), cache);
}
// Klass overwrite register
if (is_static) {
__ ld(obj, in_bytes(ResolvedFieldEntry::field_holder_offset()), cache);
const int mirror_offset = in_bytes(Klass::java_mirror_offset());
__ ld(obj, mirror_offset, obj);
__ resolve_oop_handle(obj, R11_scratch1, R12_scratch2, MacroAssembler::PRESERVATION_NONE);
}
}
// Load the constant pool cache entry at field accesses into registers.
// The Rcache and Rindex registers must be set before call.
// Input:
@@ -2432,7 +2492,6 @@ void TemplateTable::jvmti_post_field_access(Register Rcache, Register Rscratch,
assert_different_registers(Rcache, Rscratch);
if (JvmtiExport::can_post_field_access()) {
ByteSize cp_base_offset = ConstantPoolCache::base_offset();
Label Lno_field_access_post;
// Check if post field access is enabled.
@@ -2443,7 +2502,6 @@ void TemplateTable::jvmti_post_field_access(Register Rcache, Register Rscratch,
__ beq(CCR0, Lno_field_access_post);
// Post access enabled - do it!
__ addi(Rcache, Rcache, in_bytes(cp_base_offset));
if (is_static) {
__ li(R17_tos, 0);
} else {
@@ -2467,7 +2525,7 @@ void TemplateTable::jvmti_post_field_access(Register Rcache, Register Rscratch,
__ verify_oop(R17_tos);
} else {
// Cache is still needed to get class or obj.
__ get_cache_and_index_at_bcp(Rcache, 1);
__ load_field_entry(Rcache, Rscratch);
}
__ align(32, 12);
@@ -2493,9 +2551,10 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
Label Lacquire, Lisync;
const Register Rcache = R3_ARG1,
Rclass_or_obj = R22_tmp2,
Roffset = R23_tmp3,
Rflags = R31,
Rclass_or_obj = R22_tmp2, // Needs to survive C call.
Roffset = R23_tmp3, // Needs to survive C call.
Rtos_state = R30, // Needs to survive C call.
Rflags = R31, // Needs to survive C call.
Rbtable = R5_ARG3,
Rbc = R30,
Rscratch = R11_scratch1; // used by load_field_cp_cache_entry
@@ -2507,37 +2566,34 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
address* branch_table = (is_static || rc == may_not_rewrite) ? static_branch_table : field_branch_table;
// Get field offset.
resolve_cache_and_index(byte_no, Rcache, Rscratch, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, Rcache, Rscratch);
// JVMTI support
jvmti_post_field_access(Rcache, Rscratch, is_static, false);
// Load after possible GC.
load_field_cp_cache_entry(Rclass_or_obj, Rcache, noreg, Roffset, Rflags, is_static); // Uses R11, R12
load_resolved_field_entry(Rclass_or_obj, Rcache, Rtos_state, Roffset, Rflags, is_static); // Uses R11, R12
// Load pointer to branch table.
__ load_const_optimized(Rbtable, (address)branch_table, Rscratch);
// Get volatile flag.
__ rldicl(Rscratch, Rflags, 64-ConstantPoolCacheEntry::is_volatile_shift, 63); // Extract volatile bit.
__ rldicl(Rscratch, Rflags, 64-ResolvedFieldEntry::is_volatile_shift, 63); // Extract volatile bit.
// Note: sync is needed before volatile load on PPC64.
// Check field type.
__ rldicl(Rflags, Rflags, 64-ConstantPoolCacheEntry::tos_state_shift, 64-ConstantPoolCacheEntry::tos_state_bits);
#ifdef ASSERT
Label LFlagInvalid;
__ cmpldi(CCR0, Rflags, number_of_states);
__ cmpldi(CCR0, Rtos_state, number_of_states);
__ bge(CCR0, LFlagInvalid);
#endif
// Load from branch table and dispatch (volatile case: one instruction ahead).
__ sldi(Rflags, Rflags, LogBytesPerWord);
__ sldi(Rtos_state, Rtos_state, LogBytesPerWord);
__ cmpwi(CCR2, Rscratch, 1); // Volatile?
if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
__ sldi(Rscratch, Rscratch, exact_log2(BytesPerInstWord)); // Volatile ? size of 1 instruction : 0.
}
__ ldx(Rbtable, Rbtable, Rflags);
__ ldx(Rbtable, Rbtable, Rtos_state);
// Get the obj from stack.
if (!is_static) {
@@ -2753,10 +2809,8 @@ void TemplateTable::jvmti_post_field_mod(Register Rcache, Register Rscratch, boo
__ beq(CCR0, Lno_field_mod_post);
// Do the post
ByteSize cp_base_offset = ConstantPoolCache::base_offset();
const Register Robj = Rscratch;
__ addi(Rcache, Rcache, in_bytes(cp_base_offset));
if (is_static) {
// Life is simple. Null out the object pointer.
__ li(Robj, 0);
@@ -2777,17 +2831,16 @@ void TemplateTable::jvmti_post_field_mod(Register Rcache, Register Rscratch, boo
default: {
offs = 0;
base = Robj;
const Register Rflags = Robj;
const Register Rtos_state = Robj;
Label is_one_slot;
// Life is harder. The stack holds the value on top, followed by the
// object. We don't know the size of the value, though; it could be
// one or two words depending on its type. As a result, we must find
// the type to determine where the object is.
__ ld(Rflags, in_bytes(ConstantPoolCacheEntry::flags_offset()), Rcache); // Big Endian
__ rldicl(Rflags, Rflags, 64-ConstantPoolCacheEntry::tos_state_shift, 64-ConstantPoolCacheEntry::tos_state_bits);
__ lbz(Rtos_state, in_bytes(ResolvedFieldEntry::type_offset()), Rcache);
__ cmpwi(CCR0, Rflags, ltos);
__ cmpwi(CCR1, Rflags, dtos);
__ cmpwi(CCR0, Rtos_state, ltos);
__ cmpwi(CCR1, Rtos_state, dtos);
__ addi(base, R15_esp, Interpreter::expr_offset_in_bytes(1));
__ crnor(CCR0, Assembler::equal, CCR1, Assembler::equal);
__ beq(CCR0, is_one_slot);
@@ -2802,7 +2855,7 @@ void TemplateTable::jvmti_post_field_mod(Register Rcache, Register Rscratch, boo
__ addi(R6_ARG4, R15_esp, Interpreter::expr_offset_in_bytes(0));
__ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::post_field_modification), Robj, Rcache, R6_ARG4);
__ get_cache_and_index_at_bcp(Rcache, 1);
__ load_field_entry(Rcache, Rscratch);
// In case of the fast versions, value lives in registers => put it back on tos.
switch(bytecode()) {
@@ -2830,7 +2883,8 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
const Register Rcache = R5_ARG3, // Do not use ARG1/2 (causes trouble in jvmti_post_field_mod).
Rclass_or_obj = R31, // Needs to survive C call.
Roffset = R22_tmp2, // Needs to survive C call.
Rflags = R30,
Rtos_state = R23_tmp3, // Needs to survive C call.
Rflags = R30, // Needs to survive C call.
Rbtable = R4_ARG2,
Rscratch = R11_scratch1, // used by load_field_cp_cache_entry
Rscratch2 = R12_scratch2, // used by load_field_cp_cache_entry
@@ -2850,32 +2904,29 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
// obj
// Load the field offset.
resolve_cache_and_index(byte_no, Rcache, Rscratch, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, Rcache, Rscratch);
jvmti_post_field_mod(Rcache, Rscratch, is_static);
load_field_cp_cache_entry(Rclass_or_obj, Rcache, noreg, Roffset, Rflags, is_static); // Uses R11, R12
load_resolved_field_entry(Rclass_or_obj, Rcache, Rtos_state, Roffset, Rflags, is_static); // Uses R11, R12
// Load pointer to branch table.
__ load_const_optimized(Rbtable, (address)branch_table, Rscratch);
// Get volatile flag.
__ rldicl(Rscratch, Rflags, 64-ConstantPoolCacheEntry::is_volatile_shift, 63); // Extract volatile bit.
// Check the field type.
__ rldicl(Rflags, Rflags, 64-ConstantPoolCacheEntry::tos_state_shift, 64-ConstantPoolCacheEntry::tos_state_bits);
__ rldicl(Rscratch, Rflags, 64-ResolvedFieldEntry::is_volatile_shift, 63); // Extract volatile bit.
#ifdef ASSERT
Label LFlagInvalid;
__ cmpldi(CCR0, Rflags, number_of_states);
__ cmpldi(CCR0, Rtos_state, number_of_states);
__ bge(CCR0, LFlagInvalid);
#endif
// Load from branch table and dispatch (volatile case: one instruction ahead).
__ sldi(Rflags, Rflags, LogBytesPerWord);
__ sldi(Rtos_state, Rtos_state, LogBytesPerWord);
if (!support_IRIW_for_not_multiple_copy_atomic_cpu) {
__ cmpwi(CR_is_vol, Rscratch, 1); // Volatile?
}
__ sldi(Rscratch, Rscratch, exact_log2(BytesPerInstWord)); // Volatile? size of instruction 1 : 0.
__ ldx(Rbtable, Rbtable, Rflags);
__ ldx(Rbtable, Rbtable, Rtos_state);
__ subf(Rbtable, Rscratch, Rbtable); // Point to volatile/non-volatile entry point.
__ mtctr(Rbtable);
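The dispatch above indexes a table of code addresses by TOS state and then, when the field is volatile, backs the target up by one instruction, because each handler's volatile variant is emitted one instruction ahead of its non-volatile entry point. The address arithmetic in scalar form (select_entry is illustrative):

#include <cstdint>

const void* select_entry(const void* const* branch_table, unsigned tos_state,
                         bool is_volatile, unsigned inst_bytes /* 4 on PPC */) {
  uintptr_t target = (uintptr_t)branch_table[tos_state];          // the ldx above
  return (const void*)(target - (is_volatile ? inst_bytes : 0));  // the subf above
}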
@@ -3085,15 +3136,15 @@ void TemplateTable::fast_storefield(TosState state) {
const ConditionRegister CR_is_vol = CCR2; // Non-volatile condition register (survives runtime call in do_oop_store).
// Constant pool already resolved => Load flags and offset of field.
__ get_cache_and_index_at_bcp(Rcache, 1);
__ load_field_entry(Rcache, Rscratch);
jvmti_post_field_mod(Rcache, Rscratch, false /* not static */);
load_field_cp_cache_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
load_resolved_field_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
// Get the obj and the final store addr.
pop_and_check_object(Rclass_or_obj); // Kills R11_scratch1.
// Get volatile flag.
__ rldicl_(Rscratch, Rflags, 64-ConstantPoolCacheEntry::is_volatile_shift, 63); // Extract volatile bit.
__ rldicl_(Rscratch, Rflags, 64-ResolvedFieldEntry::is_volatile_shift, 63); // Extract volatile bit.
if (!support_IRIW_for_not_multiple_copy_atomic_cpu) { __ cmpdi(CR_is_vol, Rscratch, 1); }
{
Label LnotVolatile;
@@ -3166,8 +3217,8 @@ void TemplateTable::fast_accessfield(TosState state) {
// R12_scratch2 used by load_field_cp_cache_entry
// Constant pool already resolved. Get the field offset.
__ get_cache_and_index_at_bcp(Rcache, 1);
load_field_cp_cache_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
__ load_field_entry(Rcache, Rscratch);
load_resolved_field_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
// JVMTI support
jvmti_post_field_access(Rcache, Rscratch, false, true);
@@ -3176,7 +3227,7 @@ void TemplateTable::fast_accessfield(TosState state) {
__ null_check_throw(Rclass_or_obj, -1, Rscratch);
// Get volatile flag.
__ rldicl_(Rscratch, Rflags, 64-ConstantPoolCacheEntry::is_volatile_shift, 63); // Extract volatile bit.
__ rldicl_(Rscratch, Rflags, 64-ResolvedFieldEntry::is_volatile_shift, 63); // Extract volatile bit.
__ bne(CCR0, LisVolatile);
switch(bytecode()) {
@@ -3305,8 +3356,8 @@ void TemplateTable::fast_xaccess(TosState state) {
__ ld(Rclass_or_obj, 0, R18_locals);
// Constant pool already resolved. Get the field offset.
__ get_cache_and_index_at_bcp(Rcache, 2);
load_field_cp_cache_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
__ load_field_entry(Rcache, Rscratch, 2);
load_resolved_field_entry(noreg, Rcache, noreg, Roffset, Rflags, false); // Uses R11, R12
// JVMTI support not needed, since we switch back to single bytecode as soon as debugger attaches.
@@ -3317,7 +3368,7 @@ void TemplateTable::fast_xaccess(TosState state) {
__ null_check_throw(Rclass_or_obj, -1, Rscratch);
// Get volatile flag.
__ rldicl_(Rscratch, Rflags, 64-ConstantPoolCacheEntry::is_volatile_shift, 63); // Extract volatile bit.
__ rldicl_(Rscratch, Rflags, 64-ResolvedFieldEntry::is_volatile_shift, 63); // Extract volatile bit.
__ bne(CCR0, LisVolatile);
switch(state) {

View File

@@ -497,7 +497,7 @@ void VM_Version::print_features() {
if (Verbose) {
if (ContendedPaddingWidth > 0) {
tty->cr();
tty->print_cr("ContendedPaddingWidth " INTX_FORMAT, ContendedPaddingWidth);
tty->print_cr("ContendedPaddingWidth %d", ContendedPaddingWidth);
}
}
}

View File

@@ -27,6 +27,7 @@
#ifndef CPU_RISCV_ASSEMBLER_RISCV_HPP
#define CPU_RISCV_ASSEMBLER_RISCV_HPP
#include "asm/assembler.hpp"
#include "asm/register.hpp"
#include "code/codeCache.hpp"
#include "metaprogramming/enableIf.hpp"

View File

@@ -868,9 +868,10 @@ void C2_MacroAssembler::string_compare(Register str1, Register str2,
// load first parts of strings and finish initialization while loading
{
if (str1_isL == str2_isL) { // LL or UU
// check if str1 and str2 are the same pointer
beq(str1, str2, DONE);
// load 8 bytes once to compare
ld(tmp1, Address(str1));
beq(str1, str2, DONE);
ld(tmp2, Address(str2));
mv(t0, STUB_THRESHOLD);
bge(cnt2, t0, STUB);
@@ -913,9 +914,8 @@ void C2_MacroAssembler::string_compare(Register str1, Register str2,
addi(cnt1, cnt1, 8);
}
addi(cnt2, cnt2, isUL ? 4 : 8);
bne(tmp1, tmp2, DIFFERENCE);
bgez(cnt2, TAIL);
xorr(tmp3, tmp1, tmp2);
bnez(tmp3, DIFFERENCE);
// main loop
bind(NEXT_WORD);
@@ -944,38 +944,30 @@ void C2_MacroAssembler::string_compare(Register str1, Register str2,
addi(cnt1, cnt1, 8);
addi(cnt2, cnt2, 4);
}
bgez(cnt2, TAIL);
xorr(tmp3, tmp1, tmp2);
beqz(tmp3, NEXT_WORD);
j(DIFFERENCE);
bne(tmp1, tmp2, DIFFERENCE);
bltz(cnt2, NEXT_WORD);
bind(TAIL);
xorr(tmp3, tmp1, tmp2);
bnez(tmp3, DIFFERENCE);
// Last longword. In the case where length == 4 we compare the
// same longword twice, but that's still faster than another
// conditional branch.
if (str1_isL == str2_isL) { // LL or UU
ld(tmp1, Address(str1));
ld(tmp2, Address(str2));
load_long_misaligned(tmp1, Address(str1), tmp3, isLL ? 1 : 2);
load_long_misaligned(tmp2, Address(str2), tmp3, isLL ? 1 : 2);
} else if (isLU) { // LU case
lwu(tmp1, Address(str1));
ld(tmp2, Address(str2));
load_int_misaligned(tmp1, Address(str1), tmp3, false);
load_long_misaligned(tmp2, Address(str2), tmp3, 2);
inflate_lo32(tmp3, tmp1);
mv(tmp1, tmp3);
} else { // UL case
lwu(tmp2, Address(str2));
ld(tmp1, Address(str1));
load_int_misaligned(tmp2, Address(str2), tmp3, false);
load_long_misaligned(tmp1, Address(str1), tmp3, 2);
inflate_lo32(tmp3, tmp2);
mv(tmp2, tmp3);
}
bind(TAIL_CHECK);
xorr(tmp3, tmp1, tmp2);
beqz(tmp3, DONE);
beq(tmp1, tmp2, DONE);
// Find the first different characters in the longwords and
// compute their difference.
bind(DIFFERENCE);
xorr(tmp3, tmp1, tmp2);
ctzc_bit(result, tmp3, isLL); // count zero from lsb to msb
srl(tmp1, tmp1, result);
srl(tmp2, tmp2, result);
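The "compare the same longword twice" comment is the classic overlapped-tail trick: instead of finishing with a byte loop, re-read the final word at end - 8 even though it overlaps bytes already compared. A sketch for the equality case (string_compare additionally computes the difference); assumes len >= 8, with memcpy standing in for the misaligned loads:

#include <cstdint>
#include <cstring>

bool equal_bytes(const uint8_t* a, const uint8_t* b, size_t len) {
  size_t i = 0;
  for (; i + 8 <= len; i += 8) {      // main loop over whole words
    uint64_t x, y;
    std::memcpy(&x, a + i, 8);
    std::memcpy(&y, b + i, 8);
    if (x != y) return false;
  }
  if (i < len) {                      // tail: overlapped final word
    uint64_t x, y;
    std::memcpy(&x, a + len - 8, 8);  // re-reads up to 7 already-checked bytes
    std::memcpy(&y, b + len - 8, 8);
    if (x != y) return false;
  }
  return true;
}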

View File

@@ -82,7 +82,7 @@ template<typename FKind> frame FreezeBase::new_heap_frame(frame& f, frame& calle
intptr_t *sp, *fp; // sp is really our unextended_sp
if (FKind::interpreted) {
assert((intptr_t*)f.at(frame::interpreter_frame_last_sp_offset) == nullptr
|| f.unextended_sp() == (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset), "");
|| f.unextended_sp() == (intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset), "");
intptr_t locals_offset = *f.addr_at(frame::interpreter_frame_locals_offset);
// If the caller.is_empty(), i.e. we're freezing into an empty chunk, then we set
// the chunk's argsize in finalize_freeze and make room for it above the unextended_sp
@@ -121,7 +121,7 @@ template<typename FKind> frame FreezeBase::new_heap_frame(frame& f, frame& calle
void FreezeBase::adjust_interpreted_frame_unextended_sp(frame& f) {
assert((f.at(frame::interpreter_frame_last_sp_offset) != 0) || (f.unextended_sp() == f.sp()), "");
intptr_t* real_unextended_sp = (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset);
intptr_t* real_unextended_sp = (intptr_t*)f.at_relative_or_null(frame::interpreter_frame_last_sp_offset);
if (real_unextended_sp != nullptr) {
f.set_unextended_sp(real_unextended_sp); // can be null at a safepoint
}
@@ -147,8 +147,8 @@ inline void FreezeBase::relativize_interpreted_frame_metadata(const frame& f, co
// because we freeze the padding word (see recurse_freeze_interpreted_frame) in order to keep the same relativized
// locals value, we don't need to change the locals value here.
// at(frame::interpreter_frame_last_sp_offset) can be null at safepoint preempts
*hf.addr_at(frame::interpreter_frame_last_sp_offset) = hf.unextended_sp() - hf.fp();
// Make sure that last_sp is already relativized.
assert((intptr_t*)hf.at_relative(frame::interpreter_frame_last_sp_offset) == hf.unextended_sp(), "");
relativize_one(vfp, hfp, frame::interpreter_frame_initial_sp_offset); // == block_top == block_bottom
relativize_one(vfp, hfp, frame::interpreter_frame_extended_sp_offset);
@@ -292,7 +292,9 @@ static inline void derelativize_one(intptr_t* const fp, int offset) {
inline void ThawBase::derelativize_interpreted_frame_metadata(const frame& hf, const frame& f) {
intptr_t* vfp = f.fp();
derelativize_one(vfp, frame::interpreter_frame_last_sp_offset);
// Make sure that last_sp is kept relativized.
assert((intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset) == f.unextended_sp(), "");
derelativize_one(vfp, frame::interpreter_frame_initial_sp_offset);
derelativize_one(vfp, frame::interpreter_frame_extended_sp_offset);
}

View File

@@ -125,7 +125,8 @@ inline intptr_t* ContinuationHelper::InterpretedFrame::frame_top(const frame& f,
assert(res == (intptr_t*)f.interpreter_frame_monitor_end() - expression_stack_sz, "");
assert(res >= f.unextended_sp(),
"res: " INTPTR_FORMAT " initial_sp: " INTPTR_FORMAT " last_sp: " INTPTR_FORMAT " unextended_sp: " INTPTR_FORMAT " expression_stack_size: %d",
p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at(frame::interpreter_frame_last_sp_offset), p2i(f.unextended_sp()), expression_stack_sz);
p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at_relative_or_null(frame::interpreter_frame_last_sp_offset),
p2i(f.unextended_sp()), expression_stack_sz);
return res;
}

View File

@@ -165,7 +165,7 @@ void DowncallStubGenerator::generate() {
assert(_abi._shadow_space_bytes == 0, "not expecting shadow space on RISCV64");
allocated_frame_size += arg_shuffle.out_arg_bytes();
bool should_save_return_value = !_needs_return_buffer && _needs_transition;
bool should_save_return_value = !_needs_return_buffer;
RegSpiller out_reg_spiller(_output_registers);
int spill_offset = -1;

View File

@@ -331,7 +331,9 @@ void frame::interpreter_frame_set_monitor_end(BasicObjectLock* value) {
// Used by template based interpreter deoptimization
void frame::interpreter_frame_set_last_sp(intptr_t* last_sp) {
*((intptr_t**)addr_at(interpreter_frame_last_sp_offset)) = last_sp;
assert(is_interpreted_frame(), "interpreted frame expected");
// set relativized last_sp
ptr_at_put(interpreter_frame_last_sp_offset, last_sp != nullptr ? (last_sp - fp()) : 0);
}
void frame::interpreter_frame_set_extended_sp(intptr_t* sp) {
@@ -478,7 +480,7 @@ bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
// do some validation of frame elements
// first the method
Method* m = *interpreter_frame_method_addr();
Method* m = safe_interpreter_frame_method();
// validate the method we'd find in this potential sender
if (!Method::is_valid_method(m)) {
return false;

View File

@@ -253,7 +253,9 @@ inline intptr_t* frame::interpreter_frame_locals() const {
}
inline intptr_t* frame::interpreter_frame_last_sp() const {
return (intptr_t*)at(interpreter_frame_last_sp_offset);
intptr_t n = *addr_at(interpreter_frame_last_sp_offset);
assert(n <= 0, "n: " INTPTR_FORMAT, n);
return n != 0 ? &fp()[n] : nullptr;
}
inline intptr_t* frame::interpreter_frame_bcp_addr() const {

View File

@@ -36,6 +36,7 @@
#include "oops/markWord.hpp"
#include "oops/method.hpp"
#include "oops/methodData.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/jvmtiThreadState.hpp"
@@ -490,7 +491,9 @@ void InterpreterMacroAssembler::prepare_to_jump_from_interpreted() {
// set sender sp
mv(x19_sender_sp, sp);
// record last_sp
sd(esp, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
sub(t0, esp, fp);
srai(t0, t0, Interpreter::logStackElementSize);
sd(t0, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
}
// Jump to from_interpreted entry of a call unless single stepping is possible
@@ -1986,13 +1989,30 @@ void InterpreterMacroAssembler::load_resolved_indy_entry(Register cache, Registe
get_cache_index_at_bcp(index, cache, 1, sizeof(u4));
// Get address of invokedynamic array
ld(cache, Address(xcpool, in_bytes(ConstantPoolCache::invokedynamic_entries_offset())));
// Scale the index to be the entry index * sizeof(ResolvedInvokeDynamicInfo)
// Scale the index to be the entry index * sizeof(ResolvedIndyEntry)
slli(index, index, log2i_exact(sizeof(ResolvedIndyEntry)));
add(cache, cache, Array<ResolvedIndyEntry>::base_offset_in_bytes());
add(cache, cache, index);
la(cache, Address(cache, 0));
}
void InterpreterMacroAssembler::load_field_entry(Register cache, Register index, int bcp_offset) {
// Get index out of bytecode pointer
get_cache_index_at_bcp(index, cache, bcp_offset, sizeof(u2));
// Take shortcut if the size is a power of 2
if (is_power_of_2(sizeof(ResolvedFieldEntry))) {
slli(index, index, log2i_exact(sizeof(ResolvedFieldEntry))); // Scale index by power of 2
} else {
mv(cache, sizeof(ResolvedFieldEntry));
mul(index, index, cache); // Scale the index to be the entry index * sizeof(ResolvedFieldEntry)
}
// Get address of field entries array
ld(cache, Address(xcpool, ConstantPoolCache::field_entries_offset()));
add(cache, cache, Array<ResolvedFieldEntry>::base_offset_in_bytes());
add(cache, cache, index);
la(cache, Address(cache, 0));
}
void InterpreterMacroAssembler::get_method_counters(Register method,
Register mcs, Label& skip) {
Label has_counters;

View File

@@ -300,6 +300,7 @@ class InterpreterMacroAssembler: public MacroAssembler {
}
void load_resolved_indy_entry(Register cache, Register index);
void load_field_entry(Register cache, Register index, int bcp_offset = 1);
#ifdef ASSERT
void verify_access_flags(Register access_flags, uint32_t flag,

View File

@@ -1654,6 +1654,28 @@ void MacroAssembler::xorrw(Register Rd, Register Rs1, Register Rs2) {
sign_extend(Rd, Rd, 32);
}
// Rd = Rs1 & (~Rs2)
void MacroAssembler::andn(Register Rd, Register Rs1, Register Rs2) {
if (UseZbb) {
Assembler::andn(Rd, Rs1, Rs2);
return;
}
notr(Rd, Rs2);
andr(Rd, Rs1, Rd);
}
// Rd = Rs1 | (~Rs2)
void MacroAssembler::orn(Register Rd, Register Rs1, Register Rs2) {
if (UseZbb) {
Assembler::orn(Rd, Rs1, Rs2);
return;
}
notr(Rd, Rs2);
orr(Rd, Rs1, Rd);
}
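Each fallback above composes the Zbb instruction from NOT plus AND/OR; in scalar form the semantics are simply:

#include <cstdint>

uint64_t andn_model(uint64_t rs1, uint64_t rs2) { return rs1 & ~rs2; }
uint64_t orn_model (uint64_t rs1, uint64_t rs2) { return rs1 | ~rs2; }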
// Note: load_unsigned_short used to be called load_unsigned_word.
int MacroAssembler::load_unsigned_short(Register dst, Address src) {
int off = offset();
@@ -1968,6 +1990,22 @@ void MacroAssembler::ror_imm(Register dst, Register src, uint32_t shift, Registe
orr(dst, dst, tmp);
}
// rotate left with shift bits, 32-bit version
void MacroAssembler::rolw_imm(Register dst, Register src, uint32_t shift, Register tmp) {
if (UseZbb) {
// no roliw available
roriw(dst, src, 32 - shift);
return;
}
assert_different_registers(dst, tmp);
assert_different_registers(src, tmp);
assert(shift < 32, "shift amount must be < 32");
srliw(tmp, src, 32 - shift);
slliw(dst, src, shift);
orr(dst, dst, tmp);
}
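rolw_imm leans on the fact that a 32-bit rotate left by s is the same permutation as a rotate right by 32 - s, which is why roriw can substitute when no roliw exists. A scalar check, valid for 0 < s < 32 as the assert requires:

#include <cstdint>

uint32_t rol32(uint32_t x, unsigned s) { return (x << s) | (x >> (32 - s)); }
uint32_t ror32(uint32_t x, unsigned s) { return (x >> s) | (x << (32 - s)); }
// rol32(x, s) == ror32(x, 32 - s) for every x and every 0 < s < 32.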
void MacroAssembler::andi(Register Rd, Register Rn, int64_t imm, Register tmp) {
if (is_simm12(imm)) {
and_imm12(Rd, Rn, imm);
@@ -3967,18 +4005,17 @@ void MacroAssembler::ctzc_bit(Register Rd, Register Rs, bool isLL, Register tmp1
void MacroAssembler::inflate_lo32(Register Rd, Register Rs, Register tmp1, Register tmp2) {
assert_different_registers(Rd, Rs, tmp1, tmp2);
mv(tmp1, 0xFF);
mv(Rd, zr);
for (int i = 0; i <= 3; i++) {
mv(tmp1, 0xFF000000); // first byte mask at lower word
andr(Rd, Rs, tmp1);
for (int i = 0; i < 2; i++) {
slli(Rd, Rd, wordSize);
srli(tmp1, tmp1, wordSize);
andr(tmp2, Rs, tmp1);
if (i) {
slli(tmp2, tmp2, i * 8);
}
orr(Rd, Rd, tmp2);
if (i != 3) {
slli(tmp1, tmp1, 8);
}
}
slli(Rd, Rd, wordSize);
andi(tmp2, Rs, 0xFF); // last byte mask at lower word
orr(Rd, Rd, tmp2);
}
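A scalar model of the rewritten inflate_lo32: each of the four low bytes of Rs lands in the low byte of one 16-bit lane of Rd, i.e. Latin-1 bytes B3B2B1B0 inflate to 0x00B3_00B2_00B1_00B0 (and inflate_hi32 now just shifts Rs right by 32 and reuses this). Illustrative only:

#include <cstdint>

uint64_t inflate_lo32_model(uint64_t rs) {
  uint64_t rd = 0;
  for (int i = 3; i >= 0; i--) {                // byte 3 ends in the top lane
    rd = (rd << 16) | ((rs >> (8 * i)) & 0xFF);
  }
  return rd;
}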
// This instruction reads adjacent 4 bytes from the upper half of source register,
@@ -3987,17 +4024,8 @@ void MacroAssembler::inflate_lo32(Register Rd, Register Rs, Register tmp1, Regis
// Rd: 00A700A600A500A4
void MacroAssembler::inflate_hi32(Register Rd, Register Rs, Register tmp1, Register tmp2) {
assert_different_registers(Rd, Rs, tmp1, tmp2);
mv(tmp1, 0xFF00000000);
mv(Rd, zr);
for (int i = 0; i <= 3; i++) {
andr(tmp2, Rs, tmp1);
orr(Rd, Rd, tmp2);
srli(Rd, Rd, 8);
if (i != 3) {
slli(tmp1, tmp1, 8);
}
}
srli(Rs, Rs, 32); // only upper 32 bits are needed
inflate_lo32(Rd, Rs, tmp1, tmp2);
}
// The size of the blocks erased by the zero_blocks stub. We must

View File

@@ -596,7 +596,9 @@ class MacroAssembler: public Assembler {
void NAME(Register Rs1, Register Rs2, const address dest) { \
assert_cond(dest != nullptr); \
int64_t offset = dest - pc(); \
guarantee(is_simm13(offset) && ((offset % 2) == 0), "offset is invalid."); \
guarantee(is_simm13(offset) && is_even(offset), \
"offset is invalid: is_simm_13: %s offset: " INT64_FORMAT, \
BOOL_TO_STR(is_simm13(offset)), offset); \
Assembler::NAME(Rs1, Rs2, offset); \
} \
INSN_ENTRY_RELOC(void, NAME(Register Rs1, Register Rs2, address dest, relocInfo::relocType rtype)) \
@@ -761,6 +763,10 @@ public:
void orrw(Register Rd, Register Rs1, Register Rs2);
void xorrw(Register Rd, Register Rs1, Register Rs2);
// logic with negate
void andn(Register Rd, Register Rs1, Register Rs2);
void orn(Register Rd, Register Rs1, Register Rs2);
// revb
void revb_h_h(Register Rd, Register Rs, Register tmp = t0); // reverse bytes in halfword in lower 16 bits, sign-extend
void revb_w_w(Register Rd, Register Rs, Register tmp1 = t0, Register tmp2 = t1); // reverse bytes in lower word, sign-extend
@@ -772,6 +778,7 @@ public:
void revb(Register Rd, Register Rs, Register tmp1 = t0, Register tmp2 = t1); // reverse bytes in doubleword
void ror_imm(Register dst, Register src, uint32_t shift, Register tmp = t0);
void rolw_imm(Register dst, Register src, uint32_t, Register tmp = t0);
void andi(Register Rd, Register Rn, int64_t imm, Register tmp = t0);
void orptr(Address adr, RegisterOrConstant src, Register tmp1 = t0, Register tmp2 = t1);

View File

@@ -1907,6 +1907,11 @@ bool Matcher::match_rule_supported(int opcode) {
case Op_CountTrailingZerosI:
case Op_CountTrailingZerosL:
return UseZbb;
case Op_FmaF:
case Op_FmaD:
case Op_FmaVF:
case Op_FmaVD:
return UseFMA;
}
return true; // By default, match rules are supported.
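The Fma nodes (scalar and vector) are only matched when UseFMA is on, because the fused instructions round once rather than twice. For reference, the RISC-V scalar FMA family maps onto std::fma as follows (double shown; the .s forms are identical with float):
#include <cmath>
// Reference semantics of the instructions selected by the patterns below.
double fmadd_d (double a, double b, double c) { return std::fma( a, b,  c); } //  a*b + c
double fmsub_d (double a, double b, double c) { return std::fma( a, b, -c); } //  a*b - c
double fnmsub_d(double a, double b, double c) { return std::fma(-a, b,  c); } // -(a*b) + c
double fnmadd_d(double a, double b, double c) { return std::fma(-a, b, -c); } // -(a*b) - c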
@@ -7271,13 +7276,13 @@ instruct mulD_reg_reg(fRegD dst, fRegD src1, fRegD src2) %{
// src1 * src2 + src3
instruct maddF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF src3 (Binary src1 src2)));
ins_cost(FMUL_SINGLE_COST);
format %{ "fmadd.s $dst, $src1, $src2, $src3\t#@maddF_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmadd_s(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7289,13 +7294,13 @@ instruct maddF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
// src1 * src2 + src3
instruct maddD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD src3 (Binary src1 src2)));
ins_cost(FMUL_DOUBLE_COST);
format %{ "fmadd.d $dst, $src1, $src2, $src3\t#@maddD_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmadd_d(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7307,13 +7312,13 @@ instruct maddD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
// src1 * src2 - src3
instruct msubF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF (NegF src3) (Binary src1 src2)));
ins_cost(FMUL_SINGLE_COST);
format %{ "fmsub.s $dst, $src1, $src2, $src3\t#@msubF_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsub_s(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7325,13 +7330,13 @@ instruct msubF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
// src1 * src2 - src3
instruct msubD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD (NegD src3) (Binary src1 src2)));
ins_cost(FMUL_DOUBLE_COST);
format %{ "fmsub.d $dst, $src1, $src2, $src3\t#@msubD_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fmsub_d(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7341,16 +7346,16 @@ instruct msubD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 + src3
// src1 * (-src2) + src3
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct nmsubF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF src3 (Binary (NegF src1) src2)));
match(Set dst (FmaF src3 (Binary src1 (NegF src2))));
ins_cost(FMUL_SINGLE_COST);
format %{ "fnmsub.s $dst, $src1, $src2, $src3\t#@nmsubF_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmsub_s(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7360,16 +7365,16 @@ instruct nmsubF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 + src3
// src1 * (-src2) + src3
// "(-src1) * src2 + src3" has been idealized to "src2 * (-src1) + src3"
instruct nmsubD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD src3 (Binary (NegD src1) src2)));
match(Set dst (FmaD src3 (Binary src1 (NegD src2))));
ins_cost(FMUL_DOUBLE_COST);
format %{ "fnmsub.d $dst, $src1, $src2, $src3\t#@nmsubD_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmsub_d(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7379,16 +7384,16 @@ instruct nmsubD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3
// src1 * (-src2) - src3
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct nmaddF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
predicate(UseFMA);
match(Set dst (FmaF (NegF src3) (Binary (NegF src1) src2)));
match(Set dst (FmaF (NegF src3) (Binary src1 (NegF src2))));
ins_cost(FMUL_SINGLE_COST);
format %{ "fnmadd.s $dst, $src1, $src2, $src3\t#@nmaddF_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmadd_s(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),
@@ -7398,16 +7403,16 @@ instruct nmaddF_reg_reg(fRegF dst, fRegF src1, fRegF src2, fRegF src3) %{
ins_pipe(pipe_class_default);
%}
// -src1 * src2 - src3
// src1 * (-src2) - src3
// "(-src1) * src2 - src3" has been idealized to "src2 * (-src1) - src3"
instruct nmaddD_reg_reg(fRegD dst, fRegD src1, fRegD src2, fRegD src3) %{
predicate(UseFMA);
match(Set dst (FmaD (NegD src3) (Binary (NegD src1) src2)));
match(Set dst (FmaD (NegD src3) (Binary src1 (NegD src2))));
ins_cost(FMUL_DOUBLE_COST);
format %{ "fnmadd.d $dst, $src1, $src2, $src3\t#@nmaddD_reg_reg" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ fnmadd_d(as_FloatRegister($dst$$reg),
as_FloatRegister($src1$$reg),
as_FloatRegister($src2$$reg),

View File

@@ -1,6 +1,6 @@
//
// Copyright (c) 2020, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2020, Arm Limited. All rights reserved.
// Copyright (c) 2020, 2023, Oracle and/or its affiliates. All rights reserved.
// Copyright (c) 2020, 2023, Arm Limited. All rights reserved.
// Copyright (c) 2020, 2022, Huawei Technologies Co., Ltd. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
@@ -693,14 +693,14 @@ instruct vmin_fp_masked(vReg dst_src1, vReg src2, vRegMask vmask, vReg tmp1, vRe
// vector fmla
// dst_src1 = dst_src1 + src2 * src3
// dst_src1 = src2 * src3 + dst_src1
instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 src3)));
ins_cost(VEC_COST);
format %{ "vfmla $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
__ vfmacc_vv(as_VectorRegister($dst_src1$$reg),
@@ -713,11 +713,11 @@ instruct vfmla(vReg dst_src1, vReg src2, vReg src3) %{
// dst_src1 = dst_src1 * src2 + src3
instruct vfmadd_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary src3 v0)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary src3 v0)));
format %{ "vfmadd_masked $dst_src1, $dst_src1, $src2, $src3, $v0" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
__ vfmadd_vv(as_VectorRegister($dst_src1$$reg), as_VectorRegister($src2$$reg),
@@ -728,15 +728,14 @@ instruct vfmadd_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
// vector fmls
// dst_src1 = dst_src1 + -src2 * src3
// dst_src1 = dst_src1 + src2 * -src3
// dst_src1 = src2 * (-src3) + dst_src1
// "(-src2) * src3 + dst_src1" has been idealized to "src3 * (-src2) + dst_src1"
instruct vfmlsF(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF dst_src1 (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVF dst_src1 (Binary src2 (NegVF src3))));
ins_cost(VEC_COST);
format %{ "vfmlsF $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_FLOAT, Matcher::vector_length(this));
__ vfnmsac_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -744,15 +743,14 @@ instruct vfmlsF(vReg dst_src1, vReg src2, vReg src3) %{
ins_pipe(pipe_slow);
%}
// dst_src1 = dst_src1 + -src2 * src3
// dst_src1 = dst_src1 + src2 * -src3
// dst_src1 = src2 * (-src3) + dst_src1
// "(-src2) * src3 + dst_src1" has been idealized to "src3 * (-src2) + dst_src1"
instruct vfmlsD(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVD dst_src1 (Binary (NegVD src2) src3)));
match(Set dst_src1 (FmaVD dst_src1 (Binary src2 (NegVD src3))));
ins_cost(VEC_COST);
format %{ "vfmlsD $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_DOUBLE, Matcher::vector_length(this));
__ vfnmsac_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -762,13 +760,13 @@ instruct vfmlsD(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmsub - predicated
// dst_src1 = dst_src1 * -src2 + src3
// dst_src1 = dst_src1 * (-src2) + src3
instruct vfnmsub_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary src3 v0)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary src3 v0)));
format %{ "vfnmsub_masked $dst_src1, $dst_src1, $src2, $src3, $v0" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
__ vfnmsub_vv(as_VectorRegister($dst_src1$$reg), as_VectorRegister($src2$$reg),
@@ -779,15 +777,14 @@ instruct vfnmsub_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
// vector fnmla
// dst_src1 = -dst_src1 + -src2 * src3
// dst_src1 = -dst_src1 + src2 * -src3
// dst_src1 = src2 * (-src3) - dst_src1
// "(-src2) * src3 - dst_src1" has been idealized to "src3 * (-src2) - dst_src1"
instruct vfnmlaF(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary (NegVF src2) src3)));
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 (NegVF src3))));
ins_cost(VEC_COST);
format %{ "vfnmlaF $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_FLOAT, Matcher::vector_length(this));
__ vfnmacc_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -795,15 +792,14 @@ instruct vfnmlaF(vReg dst_src1, vReg src2, vReg src3) %{
ins_pipe(pipe_slow);
%}
// dst_src1 = -dst_src1 + -src2 * src3
// dst_src1 = -dst_src1 + src2 * -src3
// dst_src1 = src2 * (-src3) - dst_src1
// "(-src2) * src3 - dst_src1" has been idealized to "src3 * (-src2) - dst_src1"
instruct vfnmlaD(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary (NegVD src2) src3)));
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 (NegVD src3))));
ins_cost(VEC_COST);
format %{ "vfnmlaD $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_DOUBLE, Matcher::vector_length(this));
__ vfnmacc_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -813,13 +809,13 @@ instruct vfnmlaD(vReg dst_src1, vReg src2, vReg src3) %{
// vector fnmadd - predicated
// dst_src1 = -src3 + dst_src1 * -src2
// dst_src1 = dst_src1 * (-src2) - src3
instruct vfnmadd_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (Binary dst_src1 (NegVF src2)) (Binary (NegVF src3) v0)));
match(Set dst_src1 (FmaVD (Binary dst_src1 (NegVD src2)) (Binary (NegVD src3) v0)));
format %{ "vfnmadd_masked $dst_src1, $dst_src1, $src2, $src3, $v0" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
__ vfnmadd_vv(as_VectorRegister($dst_src1$$reg), as_VectorRegister($src2$$reg),
@@ -830,13 +826,13 @@ instruct vfnmadd_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
// vector fnmls
// dst_src1 = -dst_src1 + src2 * src3
// dst_src1 = src2 * src3 - dst_src1
instruct vfnmlsF(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (NegVF dst_src1) (Binary src2 src3)));
ins_cost(VEC_COST);
format %{ "vfnmlsF $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_FLOAT, Matcher::vector_length(this));
__ vfmsac_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -846,11 +842,11 @@ instruct vfnmlsF(vReg dst_src1, vReg src2, vReg src3) %{
// dst_src1 = -dst_src1 + src2 * src3
instruct vfnmlsD(vReg dst_src1, vReg src2, vReg src3) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVD (NegVD dst_src1) (Binary src2 src3)));
ins_cost(VEC_COST);
format %{ "vfnmlsD $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ vsetvli_helper(T_DOUBLE, Matcher::vector_length(this));
__ vfmsac_vv(as_VectorRegister($dst_src1$$reg),
as_VectorRegister($src2$$reg), as_VectorRegister($src3$$reg));
@@ -860,13 +856,13 @@ instruct vfnmlsD(vReg dst_src1, vReg src2, vReg src3) %{
// vector vfmsub - predicated
// dst_src1 = -src3 + dst_src1 * src2
// dst_src1 = dst_src1 * src2 - src3
instruct vfmsub_masked(vReg dst_src1, vReg src2, vReg src3, vRegMask_V0 v0) %{
predicate(UseFMA);
match(Set dst_src1 (FmaVF (Binary dst_src1 src2) (Binary (NegVF src3) v0)));
match(Set dst_src1 (FmaVD (Binary dst_src1 src2) (Binary (NegVD src3) v0)));
format %{ "vfmsub_masked $dst_src1, $dst_src1, $src2, $src3, $v0" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
__ vfmsub_vv(as_VectorRegister($dst_src1$$reg), as_VectorRegister($src2$$reg),
@@ -884,7 +880,7 @@ instruct vmla(vReg dst_src1, vReg src2, vReg src3) %{
match(Set dst_src1 (AddVI dst_src1 (MulVI src2 src3)));
match(Set dst_src1 (AddVL dst_src1 (MulVL src2 src3)));
ins_cost(VEC_COST);
format %{ "vmla $dst_src1, $dst_src1, src2, src3" %}
format %{ "vmla $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));
@@ -920,7 +916,7 @@ instruct vmls(vReg dst_src1, vReg src2, vReg src3) %{
match(Set dst_src1 (SubVI dst_src1 (MulVI src2 src3)));
match(Set dst_src1 (SubVL dst_src1 (MulVL src2 src3)));
ins_cost(VEC_COST);
format %{ "vmls $dst_src1, $dst_src1, src2, src3" %}
format %{ "vmls $dst_src1, $dst_src1, $src2, $src3" %}
ins_encode %{
BasicType bt = Matcher::vector_element_basic_type(this);
__ vsetvli_helper(bt, Matcher::vector_length(this));

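The vector FMA patterns in this file select among the RVV fused ops, which differ only in which operand acts as the accumulator and which terms are negated. Per-element reference semantics, modeled with std::fma (vd is the destination/accumulator):
#include <cmath>
// Per-element semantics of the RVV fused multiply-adds used above.
double vfmacc (double vd, double vs1, double vs2) { return std::fma( vs1, vs2,  vd); } //  (vs1*vs2) + vd
double vfnmsac(double vd, double vs1, double vs2) { return std::fma(-vs1, vs2,  vd); } // -(vs1*vs2) + vd
double vfmsac (double vd, double vs1, double vs2) { return std::fma( vs1, vs2, -vd); } //  (vs1*vs2) - vd
double vfnmacc(double vd, double vs1, double vs2) { return std::fma(-vs1, vs2, -vd); } // -(vs1*vs2) - vd
double vfmadd (double vd, double vs1, double vs2) { return std::fma( vd, vs1,  vs2); } //  (vd*vs1) + vs2
double vfnmsub(double vd, double vs1, double vs2) { return std::fma(-vd, vs1,  vs2); } // -(vd*vs1) + vs2
double vfmsub (double vd, double vs1, double vs2) { return std::fma( vd, vs1, -vs2); } //  (vd*vs1) - vs2
double vfnmadd(double vd, double vs1, double vs2) { return std::fma(-vd, vs1, -vs2); } // -(vd*vs1) - vs2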
View File

@@ -2275,24 +2275,21 @@ class StubGenerator: public StubCodeGenerator {
}
// code for comparing 8 characters of strings with Latin1 and Utf16 encoding
void compare_string_8_x_LU(Register tmpL, Register tmpU, Label &DIFF1,
Label &DIFF2) {
const Register strU = x12, curU = x7, strL = x29, tmp = x30;
__ ld(tmpL, Address(strL));
__ addi(strL, strL, 8);
void compare_string_8_x_LU(Register tmpL, Register tmpU, Register strL, Register strU, Label& DIFF) {
const Register tmp = x30, tmpLval = x12;
__ ld(tmpLval, Address(strL));
__ addi(strL, strL, wordSize);
__ ld(tmpU, Address(strU));
__ addi(strU, strU, 8);
__ inflate_lo32(tmp, tmpL);
__ mv(t0, tmp);
__ xorr(tmp, curU, t0);
__ bnez(tmp, DIFF2);
__ addi(strU, strU, wordSize);
__ inflate_lo32(tmpL, tmpLval);
__ xorr(tmp, tmpU, tmpL);
__ bnez(tmp, DIFF);
__ ld(curU, Address(strU));
__ addi(strU, strU, 8);
__ inflate_hi32(tmp, tmpL);
__ mv(t0, tmp);
__ xorr(tmp, tmpU, t0);
__ bnez(tmp, DIFF1);
__ ld(tmpU, Address(strU));
__ addi(strU, strU, wordSize);
__ inflate_hi32(tmpL, tmpLval);
__ xorr(tmp, tmpU, tmpL);
__ bnez(tmp, DIFF);
}
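Each call of the rewritten helper compares 8 Latin-1 bytes against the corresponding 8 UTF-16 characters by inflating the Latin-1 side and xoring 64 bits at a time. A C++ sketch of one step (assumes a little-endian host, as RISC-V is):
#include <cstdint>
#include <cstring>
static uint64_t inflate4(uint64_t bytes) {            // the inflate_lo32 model
  uint64_t r = 0;
  for (int i = 3; i >= 0; i--) r = (r << 16) | ((bytes >> (8 * i)) & 0xFF);
  return r;
}
// One compare_string_8_x_LU step: returns the xor of the first differing
// 4-character group, or 0 when all 8 characters match.
static uint64_t compare_8_lu(const uint8_t*& strL, const uint16_t*& strU) {
  uint64_t lval, u;
  std::memcpy(&lval, strL, 8); strL += 8;             // ld tmpLval; addi strL
  std::memcpy(&u, strU, 8);    strU += 4;             // ld tmpU; addi strU
  if (uint64_t d = u ^ inflate4(lval)) return d;      // xorr; bnez -> DIFF
  std::memcpy(&u, strU, 8);    strU += 4;             // second ld tmpU
  return u ^ inflate4(lval >> 32);                    // inflate_hi32 path
}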
// x10 = result
@@ -2307,11 +2304,9 @@ class StubGenerator: public StubCodeGenerator {
__ align(CodeEntryAlignment);
StubCodeMark mark(this, "StubRoutines", isLU ? "compare_long_string_different_encoding LU" : "compare_long_string_different_encoding UL");
address entry = __ pc();
Label SMALL_LOOP, TAIL, TAIL_LOAD_16, LOAD_LAST, DIFF1, DIFF2,
DONE, CALCULATE_DIFFERENCE;
const Register result = x10, str1 = x11, cnt1 = x12, str2 = x13, cnt2 = x14,
tmp1 = x28, tmp2 = x29, tmp3 = x30, tmp4 = x7, tmp5 = x31;
RegSet spilled_regs = RegSet::of(tmp4, tmp5);
Label SMALL_LOOP, TAIL, LOAD_LAST, DONE, CALCULATE_DIFFERENCE;
const Register result = x10, str1 = x11, str2 = x13, cnt2 = x14,
tmp1 = x28, tmp2 = x29, tmp3 = x30, tmp4 = x12;
// cnt2 == number of characters left to compare
// Check already loaded first 4 symbols
@@ -2319,77 +2314,81 @@ class StubGenerator: public StubCodeGenerator {
__ mv(isLU ? tmp1 : tmp2, tmp3);
__ addi(str1, str1, isLU ? wordSize / 2 : wordSize);
__ addi(str2, str2, isLU ? wordSize : wordSize / 2);
__ sub(cnt2, cnt2, 8); // Already loaded 4 symbols. Last 4 is special case.
__ push_reg(spilled_regs, sp);
__ sub(cnt2, cnt2, wordSize / 2); // Already loaded 4 symbols
if (isLU) {
__ add(str1, str1, cnt2);
__ shadd(str2, cnt2, str2, t0, 1);
} else {
__ shadd(str1, cnt2, str1, t0, 1);
__ add(str2, str2, cnt2);
}
__ xorr(tmp3, tmp1, tmp2);
__ mv(tmp5, tmp2);
__ bnez(tmp3, CALCULATE_DIFFERENCE);
Register strU = isLU ? str2 : str1,
strL = isLU ? str1 : str2,
tmpU = isLU ? tmp5 : tmp1, // where to keep U for comparison
tmpL = isLU ? tmp1 : tmp5; // where to keep L for comparison
tmpU = isLU ? tmp2 : tmp1, // where to keep U for comparison
tmpL = isLU ? tmp1 : tmp2; // where to keep L for comparison
__ sub(tmp2, strL, cnt2); // strL pointer to load from
__ slli(t0, cnt2, 1);
__ sub(cnt1, strU, t0); // strU pointer to load from
// to make the main loop 8-byte aligned on strL, load another 4 bytes from strL first
// cnt2 is >= 68 here, so there is no need to check it for >= 0
__ lwu(tmpL, Address(strL));
__ addi(strL, strL, wordSize / 2);
__ ld(tmpU, Address(strU));
__ addi(strU, strU, wordSize);
__ inflate_lo32(tmp3, tmpL);
__ mv(tmpL, tmp3);
__ xorr(tmp3, tmpU, tmpL);
__ bnez(tmp3, CALCULATE_DIFFERENCE);
__ addi(cnt2, cnt2, -wordSize / 2);
__ ld(tmp4, Address(cnt1));
__ addi(cnt1, cnt1, 8);
__ beqz(cnt2, LOAD_LAST); // no characters left except last load
__ sub(cnt2, cnt2, 16);
// we are now 8-bytes aligned on strL
__ sub(cnt2, cnt2, wordSize * 2);
__ bltz(cnt2, TAIL);
__ bind(SMALL_LOOP); // smaller loop
__ sub(cnt2, cnt2, 16);
compare_string_8_x_LU(tmpL, tmpU, DIFF1, DIFF2);
compare_string_8_x_LU(tmpL, tmpU, DIFF1, DIFF2);
__ sub(cnt2, cnt2, wordSize * 2);
compare_string_8_x_LU(tmpL, tmpU, strL, strU, CALCULATE_DIFFERENCE);
compare_string_8_x_LU(tmpL, tmpU, strL, strU, CALCULATE_DIFFERENCE);
__ bgez(cnt2, SMALL_LOOP);
__ addi(t0, cnt2, 16);
__ beqz(t0, LOAD_LAST);
__ bind(TAIL); // 1..15 characters left until last load (last 4 characters)
// Address of 8 bytes before last 4 characters in UTF-16 string
__ shadd(cnt1, cnt2, cnt1, t0, 1);
// Address of 16 bytes before last 4 characters in Latin1 string
__ add(tmp2, tmp2, cnt2);
__ ld(tmp4, Address(cnt1, -8));
// last 16 characters before last load
compare_string_8_x_LU(tmpL, tmpU, DIFF1, DIFF2);
compare_string_8_x_LU(tmpL, tmpU, DIFF1, DIFF2);
__ j(LOAD_LAST);
__ bind(DIFF2);
__ mv(tmpU, tmp4);
__ bind(DIFF1);
__ mv(tmpL, t0);
__ j(CALCULATE_DIFFERENCE);
__ bind(LOAD_LAST);
// Last 4 UTF-16 characters are already pre-loaded into tmp4 by compare_string_8_x_LU.
// No need to load it again
__ mv(tmpU, tmp4);
__ ld(tmpL, Address(strL));
__ addi(t0, cnt2, wordSize * 2);
__ beqz(t0, DONE);
__ bind(TAIL); // 1..15 characters left
// Aligned access. Load bytes in portions - 4, 2, 1.
__ addi(t0, cnt2, wordSize);
__ addi(cnt2, cnt2, wordSize * 2); // amount of characters left to process
__ bltz(t0, LOAD_LAST);
// at least 8 characters remain, so we can do one more compare_string_8_x_LU
compare_string_8_x_LU(tmpL, tmpU, strL, strU, CALCULATE_DIFFERENCE);
__ addi(cnt2, cnt2, -wordSize);
__ beqz(cnt2, DONE); // no characters left
__ bind(LOAD_LAST); // cnt2 = 1..7 characters left
__ addi(cnt2, cnt2, -wordSize); // cnt2 is now an offset in strL which points to last 8 bytes
__ slli(t0, cnt2, 1); // t0 is now an offset in strU which points to last 16 bytes
__ add(strL, strL, cnt2); // Address of last 8 bytes in Latin1 string
__ add(strU, strU, t0); // Address of last 16 bytes in UTF-16 string
__ load_int_misaligned(tmpL, Address(strL), t0, false);
__ load_long_misaligned(tmpU, Address(strU), t0, 2);
__ inflate_lo32(tmp3, tmpL);
__ mv(tmpL, tmp3);
__ xorr(tmp3, tmpU, tmpL);
__ beqz(tmp3, DONE);
__ bnez(tmp3, CALCULATE_DIFFERENCE);
__ addi(strL, strL, wordSize / 2); // Address of last 4 bytes in Latin1 string
__ addi(strU, strU, wordSize); // Address of last 8 bytes in UTF-16 string
__ load_int_misaligned(tmpL, Address(strL), t0, false);
__ load_long_misaligned(tmpU, Address(strU), t0, 2);
__ inflate_lo32(tmp3, tmpL);
__ mv(tmpL, tmp3);
__ xorr(tmp3, tmpU, tmpL);
__ bnez(tmp3, CALCULATE_DIFFERENCE);
__ j(DONE); // no characters left
// Find the first different characters in the longwords and
// compute their difference.
__ bind(CALCULATE_DIFFERENCE);
__ ctzc_bit(tmp4, tmp3);
__ srl(tmp1, tmp1, tmp4);
__ srl(tmp5, tmp5, tmp4);
__ srl(tmp2, tmp2, tmp4);
__ andi(tmp1, tmp1, 0xFFFF);
__ andi(tmp5, tmp5, 0xFFFF);
__ sub(result, tmp1, tmp5);
__ andi(tmp2, tmp2, 0xFFFF);
__ sub(result, tmp1, tmp2);
__ bind(DONE);
__ pop_reg(spilled_regs, sp);
__ ret();
return entry;
}
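With the two DIFF labels folded into CALCULATE_DIFFERENCE, the difference is computed directly from the xor of the mismatching groups: find the lowest set bit, round down to the containing 16-bit character (which is what ctzc_bit does for UTF-16 data), shift both operands, and subtract. Roughly:
#include <cstdint>
// Sketch of CALCULATE_DIFFERENCE: tmp1 and tmp2 hold the two inflated
// 4-character groups and tmp1 ^ tmp2 is known to be non-zero.
static int char_difference(uint64_t tmp1, uint64_t tmp2) {
  unsigned shift = __builtin_ctzll(tmp1 ^ tmp2) & ~15u;  // ctzc_bit
  return (int)((tmp1 >> shift) & 0xFFFF) - (int)((tmp2 >> shift) & 0xFFFF);
}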
@@ -2502,9 +2501,9 @@ class StubGenerator: public StubCodeGenerator {
__ xorr(tmp4, tmp1, tmp2);
__ bnez(tmp4, DIFF);
__ add(str1, str1, cnt2);
__ ld(tmp5, Address(str1));
__ load_long_misaligned(tmp5, Address(str1), tmp3, isLL ? 1 : 2);
__ add(str2, str2, cnt2);
__ ld(cnt1, Address(str2));
__ load_long_misaligned(cnt1, Address(str2), tmp3, isLL ? 1 : 2);
__ xorr(tmp4, tmp5, cnt1);
__ beqz(tmp4, LENGTH_DIFF);
// Find the first different characters in the longwords and
@@ -3914,6 +3913,370 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
// Set of L registers that correspond to a contiguous memory area.
// Each 64-bit register typically corresponds to 2 32-bit integers.
template <uint L>
class RegCache {
private:
MacroAssembler *_masm;
Register _regs[L];
public:
RegCache(MacroAssembler *masm, RegSet rs): _masm(masm) {
assert(rs.size() == L, "%u registers are used to cache %u 4-byte words", rs.size(), 2 * L);
auto it = rs.begin();
for (auto &r: _regs) {
r = *it;
++it;
}
}
void gen_loads(Register base) {
for (uint i = 0; i < L; i += 1) {
__ ld(_regs[i], Address(base, 8 * i));
}
}
// Generate code extracting the i-th unsigned word (4 bytes).
void get_u32(Register dest, uint i, Register rmask32) {
assert(i < 2 * L, "invalid i: %u", i);
if (i % 2 == 0) {
__ andr(dest, _regs[i / 2], rmask32);
} else {
__ srli(dest, _regs[i / 2], 32);
}
}
};
typedef RegCache<8> BufRegCache;
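The MD5 stub below instantiates this with L = 8, caching the whole 64-byte input block in eight 64-bit registers so each message word is read from memory exactly once per block. A model of get_u32:
#include <cstdint>
// Model of RegCache::get_u32: word k lives in register k/2; even words are
// masked out of the low half (andr with rmask32), odd words shifted down.
static uint32_t get_u32_model(const uint64_t regs[8], unsigned k) {
  return (k % 2 == 0) ? (uint32_t)(regs[k / 2] & 0xFFFFFFFFu)
                      : (uint32_t)(regs[k / 2] >> 32);
}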
// a += rtmp1 + x + ac;
// a = Integer.rotateLeft(a, s) + b;
void m5_FF_GG_HH_II_epilogue(BufRegCache& reg_cache,
Register a, Register b, Register c, Register d,
int k, int s, int t,
Register rtmp1, Register rtmp2, Register rmask32) {
// rtmp1 = rtmp1 + x + ac
reg_cache.get_u32(rtmp2, k, rmask32);
__ addw(rtmp1, rtmp1, rtmp2);
__ mv(rtmp2, t);
__ addw(rtmp1, rtmp1, rtmp2);
// a += rtmp1 + x + ac
__ addw(a, a, rtmp1);
// a = Integer.rotateLeft(a, s) + b;
__ rolw_imm(a, a, s, rtmp1);
__ addw(a, a, b);
}
// a += ((b & c) | ((~b) & d)) + x + ac;
// a = Integer.rotateLeft(a, s) + b;
void md5_FF(BufRegCache& reg_cache,
Register a, Register b, Register c, Register d,
int k, int s, int t,
Register rtmp1, Register rtmp2, Register rmask32) {
// rtmp1 = b & c
__ andr(rtmp1, b, c);
// rtmp2 = (~b) & d
__ andn(rtmp2, d, b);
// rtmp1 = (b & c) | ((~b) & d)
__ orr(rtmp1, rtmp1, rtmp2);
m5_FF_GG_HH_II_epilogue(reg_cache, a, b, c, d, k, s, t,
rtmp1, rtmp2, rmask32);
}
// a += ((b & d) | (c & (~d))) + x + ac;
// a = Integer.rotateLeft(a, s) + b;
void md5_GG(BufRegCache& reg_cache,
Register a, Register b, Register c, Register d,
int k, int s, int t,
Register rtmp1, Register rtmp2, Register rmask32) {
// rtmp1 = b & d
__ andr(rtmp1, b, d);
// rtmp2 = c & (~d)
__ andn(rtmp2, c, d);
// rtmp1 = (b & d) | (c & (~d))
__ orr(rtmp1, rtmp1, rtmp2);
m5_FF_GG_HH_II_epilogue(reg_cache, a, b, c, d, k, s, t,
rtmp1, rtmp2, rmask32);
}
// a += ((b ^ c) ^ d) + x + ac;
// a = Integer.rotateLeft(a, s) + b;
void md5_HH(BufRegCache& reg_cache,
Register a, Register b, Register c, Register d,
int k, int s, int t,
Register rtmp1, Register rtmp2, Register rmask32) {
// rtmp1 = (b ^ c) ^ d
__ xorr(rtmp1, b, c);
__ xorr(rtmp1, rtmp1, d);
m5_FF_GG_HH_II_epilogue(reg_cache, a, b, c, d, k, s, t,
rtmp1, rtmp2, rmask32);
}
// a += (c ^ (b | (~d))) + x + ac;
// a = Integer.rotateLeft(a, s) + b;
void md5_II(BufRegCache& reg_cache,
Register a, Register b, Register c, Register d,
int k, int s, int t,
Register rtmp1, Register rtmp2, Register rmask32) {
// rtmp1 = c ^ (b | (~d))
__ orn(rtmp1, b, d);
__ xorr(rtmp1, c, rtmp1);
m5_FF_GG_HH_II_epilogue(reg_cache, a, b, c, d, k, s, t,
rtmp1, rtmp2, rmask32);
}
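The four helpers compute the standard RFC 1321 round functions, with Zbb's andn/orn folding each negation into the neighboring and/or. Reference definitions:
#include <cstdint>
// RFC 1321 round functions as computed by md5_FF/GG/HH/II above.
static inline uint32_t F(uint32_t b, uint32_t c, uint32_t d) { return (b & c) | (~b & d); } // andr, andn, orr
static inline uint32_t G(uint32_t b, uint32_t c, uint32_t d) { return (b & d) | (c & ~d); } // andr, andn, orr
static inline uint32_t H(uint32_t b, uint32_t c, uint32_t d) { return b ^ c ^ d; }          // xorr, xorr
static inline uint32_t I(uint32_t b, uint32_t c, uint32_t d) { return c ^ (b | ~d); }       // orn, xorr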
// Arguments:
//
// Inputs:
// c_rarg0 - byte[] source+offset
// c_rarg1 - int[] MD5 state
// c_rarg2 - int offset (multi_block == True)
// c_rarg3 - int limit (multi_block == True)
//
// Registers:
// x0 zero (zero)
// x1 ra (return address)
// x2 sp (stack pointer)
// x3 gp (global pointer)
// x4 tp (thread pointer)
// x5 t0 state0
// x6 t1 state1
// x7 t2 state2
// x8 f0/s0 (frame pointer)
// x9 s1 state3 [saved-reg]
// x10 a0 rtmp1 / c_rarg0
// x11 a1 rtmp2 / c_rarg1
// x12 a2 a / c_rarg2
// x13 a3 b / c_rarg3
// x14 a4 c
// x15 a5 d
// x16 a6 buf
// x17 a7 state
// x18 s2 ofs [saved-reg] (multi_block == True)
// x19 s3 limit [saved-reg] (multi_block == True)
// x20 s4
// x21 s5
// x22 s6 mask32 [saved-reg]
// x23 s7
// x24 s8 buf0 [saved-reg]
// x25 s9 buf1 [saved-reg]
// x26 s10 buf2 [saved-reg]
// x27 s11 buf3 [saved-reg]
// x28 t3 buf4
// x29 t4 buf5
// x30 t5 buf6
// x31 t6 buf7
address generate_md5_implCompress(bool multi_block, const char *name) {
__ align(CodeEntryAlignment);
StubCodeMark mark(this, "StubRoutines", name);
address start = __ pc();
// rotation constants
const int S11 = 7;
const int S12 = 12;
const int S13 = 17;
const int S14 = 22;
const int S21 = 5;
const int S22 = 9;
const int S23 = 14;
const int S24 = 20;
const int S31 = 4;
const int S32 = 11;
const int S33 = 16;
const int S34 = 23;
const int S41 = 6;
const int S42 = 10;
const int S43 = 15;
const int S44 = 21;
Register buf_arg = c_rarg0; // a0
Register state_arg = c_rarg1; // a1
Register ofs_arg = c_rarg2; // a2
Register limit_arg = c_rarg3; // a3
// we'll copy the args to these registers to free up a0-a3
// to use for other values manipulated by instructions
// that can be compressed
Register buf = x16; // a6
Register state = x17; // a7
Register ofs = x18; // s2
Register limit = x19; // s3
// using x12->15 to allow compressed instructions
Register a = x12; // a2
Register b = x13; // a3
Register c = x14; // a4
Register d = x15; // a5
Register state0 = x5; // t0
Register state1 = x6; // t1
Register state2 = x7; // t2
Register state3 = x9; // s1
// using x9->x11 to allow compressed instructions
Register rtmp1 = x10; // a0
Register rtmp2 = x11; // a1
const int64_t MASK_32 = 0xffffffff;
Register rmask32 = x22; // s6
RegSet reg_cache_saved_regs = RegSet::of(x24, x25, x26, x27); // s8, s9, s10, s11
RegSet reg_cache_regs;
reg_cache_regs += reg_cache_saved_regs;
reg_cache_regs += RegSet::of(x28, x29, x30, x31); // t3, t4, t5, t6
BufRegCache reg_cache(_masm, reg_cache_regs);
RegSet saved_regs;
if (multi_block) {
saved_regs += RegSet::of(ofs, limit);
}
saved_regs += RegSet::of(state3, rmask32);
saved_regs += reg_cache_saved_regs;
__ push_reg(saved_regs, sp);
__ mv(buf, buf_arg);
__ mv(state, state_arg);
if (multi_block) {
__ mv(ofs, ofs_arg);
__ mv(limit, limit_arg);
}
__ mv(rmask32, MASK_32);
// to minimize the number of memory operations:
// read the 4 state 4-byte values in pairs, with a single ld,
// and split them into 2 registers
__ ld(state0, Address(state));
__ srli(state1, state0, 32);
__ andr(state0, state0, rmask32);
__ ld(state2, Address(state, 8));
__ srli(state3, state2, 32);
__ andr(state2, state2, rmask32);
Label md5_loop;
__ BIND(md5_loop);
reg_cache.gen_loads(buf);
__ mv(a, state0);
__ mv(b, state1);
__ mv(c, state2);
__ mv(d, state3);
// Round 1
md5_FF(reg_cache, a, b, c, d, 0, S11, 0xd76aa478, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, d, a, b, c, 1, S12, 0xe8c7b756, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, c, d, a, b, 2, S13, 0x242070db, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, b, c, d, a, 3, S14, 0xc1bdceee, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, a, b, c, d, 4, S11, 0xf57c0faf, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, d, a, b, c, 5, S12, 0x4787c62a, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, c, d, a, b, 6, S13, 0xa8304613, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, b, c, d, a, 7, S14, 0xfd469501, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, a, b, c, d, 8, S11, 0x698098d8, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, d, a, b, c, 9, S12, 0x8b44f7af, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, c, d, a, b, 10, S13, 0xffff5bb1, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, b, c, d, a, 11, S14, 0x895cd7be, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, a, b, c, d, 12, S11, 0x6b901122, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, d, a, b, c, 13, S12, 0xfd987193, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, c, d, a, b, 14, S13, 0xa679438e, rtmp1, rtmp2, rmask32);
md5_FF(reg_cache, b, c, d, a, 15, S14, 0x49b40821, rtmp1, rtmp2, rmask32);
// Round 2
md5_GG(reg_cache, a, b, c, d, 1, S21, 0xf61e2562, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, d, a, b, c, 6, S22, 0xc040b340, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, c, d, a, b, 11, S23, 0x265e5a51, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, b, c, d, a, 0, S24, 0xe9b6c7aa, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, a, b, c, d, 5, S21, 0xd62f105d, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, d, a, b, c, 10, S22, 0x02441453, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, c, d, a, b, 15, S23, 0xd8a1e681, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, b, c, d, a, 4, S24, 0xe7d3fbc8, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, a, b, c, d, 9, S21, 0x21e1cde6, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, d, a, b, c, 14, S22, 0xc33707d6, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, c, d, a, b, 3, S23, 0xf4d50d87, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, b, c, d, a, 8, S24, 0x455a14ed, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, a, b, c, d, 13, S21, 0xa9e3e905, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, d, a, b, c, 2, S22, 0xfcefa3f8, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, c, d, a, b, 7, S23, 0x676f02d9, rtmp1, rtmp2, rmask32);
md5_GG(reg_cache, b, c, d, a, 12, S24, 0x8d2a4c8a, rtmp1, rtmp2, rmask32);
// Round 3
md5_HH(reg_cache, a, b, c, d, 5, S31, 0xfffa3942, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, d, a, b, c, 8, S32, 0x8771f681, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, c, d, a, b, 11, S33, 0x6d9d6122, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, b, c, d, a, 14, S34, 0xfde5380c, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, a, b, c, d, 1, S31, 0xa4beea44, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, d, a, b, c, 4, S32, 0x4bdecfa9, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, c, d, a, b, 7, S33, 0xf6bb4b60, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, b, c, d, a, 10, S34, 0xbebfbc70, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, a, b, c, d, 13, S31, 0x289b7ec6, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, d, a, b, c, 0, S32, 0xeaa127fa, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, c, d, a, b, 3, S33, 0xd4ef3085, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, b, c, d, a, 6, S34, 0x04881d05, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, a, b, c, d, 9, S31, 0xd9d4d039, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, d, a, b, c, 12, S32, 0xe6db99e5, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, c, d, a, b, 15, S33, 0x1fa27cf8, rtmp1, rtmp2, rmask32);
md5_HH(reg_cache, b, c, d, a, 2, S34, 0xc4ac5665, rtmp1, rtmp2, rmask32);
// Round 4
md5_II(reg_cache, a, b, c, d, 0, S41, 0xf4292244, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, d, a, b, c, 7, S42, 0x432aff97, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, c, d, a, b, 14, S43, 0xab9423a7, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, b, c, d, a, 5, S44, 0xfc93a039, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, a, b, c, d, 12, S41, 0x655b59c3, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, d, a, b, c, 3, S42, 0x8f0ccc92, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, c, d, a, b, 10, S43, 0xffeff47d, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, b, c, d, a, 1, S44, 0x85845dd1, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, a, b, c, d, 8, S41, 0x6fa87e4f, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, d, a, b, c, 15, S42, 0xfe2ce6e0, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, c, d, a, b, 6, S43, 0xa3014314, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, b, c, d, a, 13, S44, 0x4e0811a1, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, a, b, c, d, 4, S41, 0xf7537e82, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, d, a, b, c, 11, S42, 0xbd3af235, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, c, d, a, b, 2, S43, 0x2ad7d2bb, rtmp1, rtmp2, rmask32);
md5_II(reg_cache, b, c, d, a, 9, S44, 0xeb86d391, rtmp1, rtmp2, rmask32);
__ addw(state0, state0, a);
__ addw(state1, state1, b);
__ addw(state2, state2, c);
__ addw(state3, state3, d);
if (multi_block) {
__ addi(buf, buf, 64);
__ addi(ofs, ofs, 64);
// if (ofs <= limit) goto md5_loop
__ bge(limit, ofs, md5_loop);
__ mv(c_rarg0, ofs); // return ofs
}
// to minimize the number of memory operations:
// write back the 4 state 4-byte values in pairs, with a single sd
__ andr(state0, state0, rmask32);
__ slli(state1, state1, 32);
__ orr(state0, state0, state1);
__ sd(state0, Address(state));
__ andr(state2, state2, rmask32);
__ slli(state3, state3, 32);
__ orr(state2, state2, state3);
__ sd(state2, Address(state, 8));
__ pop_reg(saved_regs, sp);
__ ret();
return (address) start;
}
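The prologue and epilogue halve the state memory traffic by moving the four 32-bit state words as two 64-bit accesses; the packing is plain little-endian word pairing:
#include <cstdint>
// Pack/unpack two 32-bit MD5 state words per 64-bit access (little-endian).
static void unpack_pair(uint64_t pair, uint32_t& lo, uint32_t& hi) {
  lo = (uint32_t)pair;          // andr(state0, state0, rmask32)
  hi = (uint32_t)(pair >> 32);  // srli(state1, state0, 32)
}
static uint64_t pack_pair(uint32_t lo, uint32_t hi) {
  return (uint64_t)lo | ((uint64_t)hi << 32);  // slli + orr
}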
#if INCLUDE_JFR
static void jfr_prologue(address the_pc, MacroAssembler* _masm, Register thread) {
@@ -4128,6 +4491,11 @@ class StubGenerator: public StubCodeGenerator {
generate_compare_long_strings();
generate_string_indexof_stubs();
if (UseMD5Intrinsics) {
StubRoutines::_md5_implCompress = generate_md5_implCompress(false, "md5_implCompress");
StubRoutines::_md5_implCompressMB = generate_md5_implCompress(true, "md5_implCompressMB");
}
#endif // COMPILER2_OR_JVMCI
}

View File

@@ -426,7 +426,8 @@ address TemplateInterpreterGenerator::generate_return_entry_for(TosState state,
address entry = __ pc();
// Restore stack bottom in case i2c adjusted stack
__ ld(esp, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ld(t0, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ shadd(esp, t0, fp, t0, LogBytesPerWord);
// and null it as marker that esp is now tos until next java call
__ sd(zr, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ restore_bcp();
@@ -483,7 +484,8 @@ address TemplateInterpreterGenerator::generate_deopt_entry_for(TosState state,
__ restore_sp_after_call(); // Restore SP to extended SP
// Restore expression stack pointer
__ ld(esp, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ld(t0, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ shadd(esp, t0, fp, t0, LogBytesPerWord);
// null last_sp until next java call
__ sd(zr, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
@@ -1604,7 +1606,8 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
/* notify_jvmdi */ false);
// Restore the last_sp and null it out
__ ld(esp, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ ld(t0, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ shadd(esp, t0, fp, t0, LogBytesPerWord);
__ sd(zr, Address(fp, frame::interpreter_frame_last_sp_offset * wordSize));
__ restore_bcp();

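All three return/deopt/throw paths now decode interpreter_frame_last_sp as a slot count relative to fp rather than as an absolute pointer (presumably the frame relativization needed once interpreter frames can be moved, e.g. for continuations). The ld-plus-shadd pair is equivalent to:
#include <cstdint>
// Sketch: the last_sp slot holds a word offset relative to fp;
// shadd(esp, t0, fp, t0, LogBytesPerWord) computes fp + (t0 << 3) in bytes.
static intptr_t* decode_last_sp(intptr_t* fp, intptr_t last_sp_in_words) {
  return fp + last_sp_in_words;  // pointer arithmetic scales by wordSize
}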
View File

@@ -38,6 +38,7 @@
#include "oops/methodData.hpp"
#include "oops/objArrayKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/methodHandles.hpp"
@@ -169,7 +170,16 @@ void TemplateTable::patch_bytecode(Bytecodes::Code bc, Register bc_reg,
// additional, required work.
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
assert(load_bc_into_bc_reg, "we use bc_reg as temp");
__ get_cache_and_index_and_bytecode_at_bcp(temp_reg, bc_reg, temp_reg, byte_no, 1);
__ load_field_entry(temp_reg, bc_reg);
if (byte_no == f1_byte) {
__ la(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::get_code_offset())));
} else {
__ la(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::put_code_offset())));
}
// Load-acquire the bytecode to match store-release in ResolvedFieldEntry::fill_in()
__ membar(MacroAssembler::AnyAny);
__ lbu(temp_reg, Address(temp_reg, 0));
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ mv(bc_reg, bc);
__ beqz(temp_reg, L_patch_done);
break;
@@ -2104,6 +2114,19 @@ void TemplateTable::_return(TosState state) {
__ membar(MacroAssembler::StoreStore);
}
if (_desc->bytecode() != Bytecodes::_return_register_finalizer) {
Label no_safepoint;
__ ld(t0, Address(xthread, JavaThread::polling_word_offset()));
__ test_bit(t0, t0, exact_log2(SafepointMechanism::poll_bit()));
__ beqz(t0, no_safepoint);
__ push(state);
__ push_cont_fastpath(xthread);
__ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::at_safepoint));
__ pop_cont_fastpath(xthread);
__ pop(state);
__ bind(no_safepoint);
}
// Narrow result if state is itos but result type is smaller.
// Need to narrow in the return bytecode rather than in generate_return_entry
// since compiled code callers expect the result to already be narrowed.
@@ -2155,11 +2178,6 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
Label resolved, clinit_barrier_slow;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ get_cache_and_index_and_bytecode_at_bcp(Rcache, index, temp, byte_no, 1, index_size);
@@ -2188,6 +2206,71 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
}
}
void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
Register Rcache,
Register index) {
const Register temp = x9;
assert_different_registers(Rcache, index, temp);
Label resolved;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ load_field_entry(Rcache, index);
if (byte_no == f1_byte) {
__ la(temp, Address(Rcache, in_bytes(ResolvedFieldEntry::get_code_offset())));
} else {
__ la(temp, Address(Rcache, in_bytes(ResolvedFieldEntry::put_code_offset())));
}
// Load-acquire the bytecode to match store-release in ResolvedFieldEntry::fill_in()
__ membar(MacroAssembler::AnyAny);
__ lbu(temp, Address(temp, 0));
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ mv(t0, (int) code); // have we resolved this bytecode?
__ beq(temp, t0, resolved);
// resolve first time through
address entry = CAST_FROM_FN_PTR(address, InterpreterRuntime::resolve_from_cache);
__ mv(temp, (int) code);
__ call_VM(noreg, entry, temp);
// Update registers with resolved info
__ load_field_entry(Rcache, index);
__ bind(resolved);
}
void TemplateTable::load_resolved_field_entry(Register obj,
Register cache,
Register tos_state,
Register offset,
Register flags,
bool is_static = false) {
assert_different_registers(cache, tos_state, flags, offset);
// Field offset
__ load_sized_value(offset, Address(cache, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// Flags
__ load_unsigned_byte(flags, Address(cache, in_bytes(ResolvedFieldEntry::flags_offset())));
// TOS state
__ load_unsigned_byte(tos_state, Address(cache, in_bytes(ResolvedFieldEntry::type_offset())));
// Klass overwrite register
if (is_static) {
__ ld(obj, Address(cache, ResolvedFieldEntry::field_holder_offset()));
const int mirror_offset = in_bytes(Klass::java_mirror_offset());
__ ld(obj, Address(obj, mirror_offset));
__ resolve_oop_handle(obj, x15, t1);
}
}
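These loads pull individual members out of the ResolvedFieldEntry that replaces the old ConstantPoolCacheEntry for field bytecodes. An illustrative shape of the entry (the authoritative definition is in oops/resolvedFieldEntry.hpp; offsets and widths here are only indicative):
#include <cstdint>
// Illustrative ResolvedFieldEntry layout matching the accesses above.
struct ResolvedFieldEntrySketch {
  void*    field_holder;        // ld; for statics the java_mirror is then loaded
  int32_t  field_offset;        // load_sized_value, 4 bytes, signed
  uint16_t field_index;
  uint8_t  tos_state;           // load_unsigned_byte (type_offset)
  uint8_t  flags;               // load_unsigned_byte; tested via is_volatile_shift
  uint8_t  get_code, put_code;  // resolved bytecodes, read with load-acquire
};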
// The Rcache and index registers must be set before call
// N.B. unlike x86, cache already includes the index offset
void TemplateTable::load_field_cp_cache_entry(Register obj,
@@ -2343,8 +2426,7 @@ void TemplateTable::jvmti_post_field_access(Register cache, Register index,
__ beqz(x10, L1);
__ get_cache_and_index_at_bcp(c_rarg2, c_rarg3, 1);
__ la(c_rarg2, Address(c_rarg2, in_bytes(ConstantPoolCache::base_offset())));
__ load_field_entry(c_rarg2, index);
if (is_static) {
__ mv(c_rarg1, zr); // null object reference
@@ -2354,11 +2436,10 @@ void TemplateTable::jvmti_post_field_access(Register cache, Register index,
}
// c_rarg1: object pointer or null
// c_rarg2: cache entry pointer
// c_rarg3: jvalue object on the stack
__ call_VM(noreg, CAST_FROM_FN_PTR(address,
InterpreterRuntime::post_field_access),
c_rarg1, c_rarg2, c_rarg3);
__ get_cache_and_index_at_bcp(cache, index, 1);
c_rarg1, c_rarg2);
__ load_field_entry(cache, index);
__ bind(L1);
}
}
@@ -2370,17 +2451,17 @@ void TemplateTable::pop_and_check_object(Register r) {
}
void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteControl rc) {
const Register cache = x12;
const Register index = x13;
const Register cache = x14;
const Register obj = x14;
const Register index = x13;
const Register tos_state = x13;
const Register off = x9;
const Register flags = x10;
const Register raw_flags = x16;
const Register flags = x16;
const Register bc = x14; // uses same reg as obj, so don't mix them
resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_access(cache, index, is_static, false);
load_field_cp_cache_entry(obj, cache, index, off, raw_flags, is_static);
load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
if (!is_static) {
// obj is on the stack
@@ -2393,12 +2474,8 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
Label Done, notByte, notBool, notInt, notShort, notChar,
notLong, notFloat, notObj, notDouble;
__ slli(flags, raw_flags, XLEN - (ConstantPoolCacheEntry::tos_state_shift +
ConstantPoolCacheEntry::tos_state_bits));
__ srli(flags, flags, XLEN - ConstantPoolCacheEntry::tos_state_bits);
assert(btos == 0, "change code, btos != 0");
__ bnez(flags, notByte);
__ bnez(tos_state, notByte);
// Don't rewrite getstatic, only getfield
if (is_static) {
@@ -2415,7 +2492,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notByte);
__ sub(t0, flags, (u1)ztos);
__ sub(t0, tos_state, (u1)ztos);
__ bnez(t0, notBool);
// ztos (same code as btos)
@@ -2429,7 +2506,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notBool);
__ sub(t0, flags, (u1)atos);
__ sub(t0, tos_state, (u1)atos);
__ bnez(t0, notObj);
// atos
do_oop_load(_masm, field, x10, IN_HEAP);
@@ -2440,7 +2517,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notObj);
__ sub(t0, flags, (u1)itos);
__ sub(t0, tos_state, (u1)itos);
__ bnez(t0, notInt);
// itos
__ access_load_at(T_INT, IN_HEAP, x10, field, noreg, noreg);
@@ -2453,7 +2530,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notInt);
__ sub(t0, flags, (u1)ctos);
__ sub(t0, tos_state, (u1)ctos);
__ bnez(t0, notChar);
// ctos
__ access_load_at(T_CHAR, IN_HEAP, x10, field, noreg, noreg);
@@ -2465,7 +2542,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notChar);
__ sub(t0, flags, (u1)stos);
__ sub(t0, tos_state, (u1)stos);
__ bnez(t0, notShort);
// stos
__ access_load_at(T_SHORT, IN_HEAP, x10, field, noreg, noreg);
@@ -2477,7 +2554,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notShort);
__ sub(t0, flags, (u1)ltos);
__ sub(t0, tos_state, (u1)ltos);
__ bnez(t0, notLong);
// ltos
__ access_load_at(T_LONG, IN_HEAP, x10, field, noreg, noreg);
@@ -2489,7 +2566,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ j(Done);
__ bind(notLong);
__ sub(t0, flags, (u1)ftos);
__ sub(t0, tos_state, (u1)ftos);
__ bnez(t0, notFloat);
// ftos
__ access_load_at(T_FLOAT, IN_HEAP, noreg /* ftos */, field, noreg, noreg);
@@ -2502,7 +2579,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(notFloat);
#ifdef ASSERT
__ sub(t0, flags, (u1)dtos);
__ sub(t0, tos_state, (u1)dtos);
__ bnez(t0, notDouble);
#endif
// dtos
@@ -2522,7 +2599,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(Done);
Label notVolatile;
__ test_bit(t0, raw_flags, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, flags, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
@@ -2546,8 +2623,6 @@ void TemplateTable::getstatic(int byte_no)
void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is_static) {
transition(vtos, vtos);
ByteSize cp_base_offset = ConstantPoolCache::base_offset();
if (JvmtiExport::can_post_field_modification()) {
// Check to see if a field modification watch has been set before
// we take the time to call into the VM.
@@ -2561,7 +2636,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
});
__ beqz(x10, L1);
__ get_cache_and_index_at_bcp(c_rarg2, t0, 1);
__ mv(c_rarg2, cache);
if (is_static) {
// Life is simple. Null out the object pointer.
@@ -2571,11 +2646,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
// the object. We don't know the size of the value, though; it
// could be one or two words depending on its type. As a result,
// we must find the type to determine where the object is.
__ lwu(c_rarg3, Address(c_rarg2,
in_bytes(cp_base_offset +
ConstantPoolCacheEntry::flags_offset())));
__ srli(c_rarg3, c_rarg3, ConstantPoolCacheEntry::tos_state_shift);
ConstantPoolCacheEntry::verify_tos_state_shift();
__ load_unsigned_byte(c_rarg3, Address(c_rarg2, in_bytes(ResolvedFieldEntry::type_offset())));
Label nope2, done, ok;
__ ld(c_rarg1, at_tos_p1()); // initially assume a one word jvalue
__ sub(t0, c_rarg3, ltos);
@@ -2586,8 +2657,6 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
__ ld(c_rarg1, at_tos_p2()); // ltos (two word jvalue);
__ bind(nope2);
}
// cache entry pointer
__ add(c_rarg2, c_rarg2, in_bytes(cp_base_offset));
// object (tos)
__ mv(c_rarg3, esp);
// c_rarg1: object pointer set up above (null if static)
@@ -2597,7 +2666,7 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
CAST_FROM_FN_PTR(address,
InterpreterRuntime::post_field_modification),
c_rarg1, c_rarg2, c_rarg3);
__ get_cache_and_index_at_bcp(cache, index, 1);
__ load_field_entry(cache, index);
__ bind(L1);
}
}
@@ -2605,23 +2674,24 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteControl rc) {
transition(vtos, vtos);
const Register cache = x12;
const Register index = x13;
const Register obj = x12;
const Register off = x9;
const Register flags = x10;
const Register bc = x14;
const Register cache = x12;
const Register index = x13;
const Register tos_state = x13;
const Register obj = x12;
const Register off = x9;
const Register flags = x10;
const Register bc = x14;
resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_mod(cache, index, is_static);
load_field_cp_cache_entry(obj, cache, index, off, flags, is_static);
load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
Label Done;
__ mv(x15, flags);
{
Label notVolatile;
__ test_bit(t0, x15, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, x15, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::StoreStore | MacroAssembler::LoadStore);
__ bind(notVolatile);
@@ -2630,12 +2700,8 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
Label notByte, notBool, notInt, notShort, notChar,
notLong, notFloat, notObj, notDouble;
__ slli(flags, flags, XLEN - (ConstantPoolCacheEntry::tos_state_shift +
ConstantPoolCacheEntry::tos_state_bits));
__ srli(flags, flags, XLEN - ConstantPoolCacheEntry::tos_state_bits);
assert(btos == 0, "change code, btos != 0");
__ bnez(flags, notByte);
__ bnez(tos_state, notByte);
// Don't rewrite putstatic, only putfield
if (is_static) {
@@ -2659,7 +2725,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notByte);
__ sub(t0, flags, (u1)ztos);
__ sub(t0, tos_state, (u1)ztos);
__ bnez(t0, notBool);
// ztos
@@ -2679,7 +2745,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notBool);
__ sub(t0, flags, (u1)atos);
__ sub(t0, tos_state, (u1)atos);
__ bnez(t0, notObj);
// atos
@@ -2700,7 +2766,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notObj);
__ sub(t0, flags, (u1)itos);
__ sub(t0, tos_state, (u1)itos);
__ bnez(t0, notInt);
// itos
@@ -2720,7 +2786,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notInt);
__ sub(t0, flags, (u1)ctos);
__ sub(t0, tos_state, (u1)ctos);
__ bnez(t0, notChar);
// ctos
@@ -2740,7 +2806,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notChar);
__ sub(t0, flags, (u1)stos);
__ sub(t0, tos_state, (u1)stos);
__ bnez(t0, notShort);
// stos
@@ -2760,7 +2826,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notShort);
__ sub(t0, flags, (u1)ltos);
__ sub(t0, tos_state, (u1)ltos);
__ bnez(t0, notLong);
// ltos
@@ -2780,7 +2846,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
}
__ bind(notLong);
__ sub(t0, flags, (u1)ftos);
__ sub(t0, tos_state, (u1)ftos);
__ bnez(t0, notFloat);
// ftos
@@ -2801,7 +2867,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(notFloat);
#ifdef ASSERT
__ sub(t0, flags, (u1)dtos);
__ sub(t0, tos_state, (u1)dtos);
__ bnez(t0, notDouble);
#endif
@@ -2831,7 +2897,7 @@ void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteContr
{
Label notVolatile;
__ test_bit(t0, x15, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, x15, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::StoreLoad | MacroAssembler::StoreStore);
__ bind(notVolatile);
@@ -2884,7 +2950,7 @@ void TemplateTable::jvmti_post_fast_field_mod() {
}
__ mv(c_rarg3, esp); // points to jvalue on the stack
// access constant pool cache entry
__ get_cache_entry_pointer_at_bcp(c_rarg2, x10, 1);
__ load_field_entry(c_rarg2, x10);
__ verify_oop(x9);
// x9: object pointer copied above
// c_rarg2: cache entry pointer
@@ -2918,21 +2984,18 @@ void TemplateTable::fast_storefield(TosState state) {
jvmti_post_fast_field_mod();
// access constant pool cache
__ get_cache_and_index_at_bcp(x12, x11, 1);
__ load_field_entry(x12, x11);
__ push_reg(x10);
// X11: field offset, X12: field holder, X13: flags (TOS state lands in X10, which the pop below discards)
load_resolved_field_entry(x12, x12, x10, x11, x13);
__ pop_reg(x10);
// Must prevent reordering of the following cp cache loads with bytecode load
__ membar(MacroAssembler::LoadLoad);
// test for volatile with x13
__ lwu(x13, Address(x12, in_bytes(base +
ConstantPoolCacheEntry::flags_offset())));
// replace index with field offset from cache entry
__ ld(x11, Address(x12, in_bytes(base + ConstantPoolCacheEntry::f2_offset())));
{
Label notVolatile;
__ test_bit(t0, x13, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, x13, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::StoreStore | MacroAssembler::LoadStore);
__ bind(notVolatile);
@@ -2980,7 +3043,7 @@ void TemplateTable::fast_storefield(TosState state) {
{
Label notVolatile;
__ test_bit(t0, x13, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, x13, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::StoreLoad | MacroAssembler::StoreStore);
__ bind(notVolatile);
@@ -3002,7 +3065,7 @@ void TemplateTable::fast_accessfield(TosState state) {
});
__ beqz(x12, L1);
// access constant pool cache entry
__ get_cache_entry_pointer_at_bcp(c_rarg2, t1, 1);
__ load_field_entry(c_rarg2, t1);
__ verify_oop(x10);
__ push_ptr(x10); // save object pointer before call_VM() clobbers it
__ mv(c_rarg1, x10);
@@ -3017,15 +3080,13 @@ void TemplateTable::fast_accessfield(TosState state) {
}
// access constant pool cache
__ get_cache_and_index_at_bcp(x12, x11, 1);
__ load_field_entry(x12, x11);
// Must prevent reordering of the following cp cache loads with bytecode load
__ membar(MacroAssembler::LoadLoad);
__ ld(x11, Address(x12, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::f2_offset())));
__ lwu(x13, Address(x12, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::flags_offset())));
__ load_sized_value(x11, Address(x12, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
__ load_unsigned_byte(x13, Address(x12, in_bytes(ResolvedFieldEntry::flags_offset())));
// x10: object
__ verify_oop(x10);
@@ -3066,7 +3127,7 @@ void TemplateTable::fast_accessfield(TosState state) {
}
{
Label notVolatile;
__ test_bit(t0, x13, ConstantPoolCacheEntry::is_volatile_shift);
__ test_bit(t0, x13, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
@@ -3079,9 +3140,8 @@ void TemplateTable::fast_xaccess(TosState state) {
// get receiver
__ ld(x10, aaddress(0));
// access constant pool cache
__ get_cache_and_index_at_bcp(x12, x13, 2);
__ ld(x11, Address(x12, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::f2_offset())));
__ load_field_entry(x12, x13, 2);
__ load_sized_value(x11, Address(x12, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// make sure exception is reported in correct bcp range (getfield is
// next instruction)
@@ -3108,9 +3168,8 @@ void TemplateTable::fast_xaccess(TosState state) {
{
Label notVolatile;
__ lwu(x13, Address(x12, in_bytes(ConstantPoolCache::base_offset() +
ConstantPoolCacheEntry::flags_offset())));
__ test_bit(t0, x13, ConstantPoolCacheEntry::is_volatile_shift);
__ load_unsigned_byte(x13, Address(x12, in_bytes(ResolvedFieldEntry::flags_offset())));
__ test_bit(t0, x13, ResolvedFieldEntry::is_volatile_shift);
__ beqz(t0, notVolatile);
__ membar(MacroAssembler::LoadLoad | MacroAssembler::LoadStore);
__ bind(notVolatile);
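The hunks above swap the packed 32-bit ConstantPoolCacheEntry flags word for the one-byte flags field of a ResolvedFieldEntry, so the volatile check becomes a single-bit probe of a byte loaded with load_unsigned_byte. A minimal standalone C++ sketch of that flag-byte pattern (the struct, field names, and bit positions are illustrative, not the HotSpot layout):

#include <cassert>
#include <cstdint>

// Illustrative packed flag byte; only the bit-test idiom mirrors the code above.
struct FieldFlags {
  static constexpr int is_final_shift    = 0;  // assumed bit positions
  static constexpr int is_volatile_shift = 1;
  uint8_t bits;
  bool is_volatile() const { return (bits >> is_volatile_shift) & 1; }
};

int main() {
  FieldFlags f{static_cast<uint8_t>(1u << FieldFlags::is_volatile_shift)};
  assert(f.is_volatile());
  return 0;
}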

View File

@@ -171,9 +171,8 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseCRC32CIntrinsics, false);
}
- if (UseMD5Intrinsics) {
- warning("MD5 intrinsics are not available on this CPU.");
- FLAG_SET_DEFAULT(UseMD5Intrinsics, false);
+ if (FLAG_IS_DEFAULT(UseMD5Intrinsics)) {
+ FLAG_SET_DEFAULT(UseMD5Intrinsics, true);
}
if (UseRVV) {
@@ -268,8 +267,8 @@ void VM_Version::c2_initialize() {
if (MaxVectorSize > _initial_vector_length) {
warning("Current system only supports max RVV vector length %d. Set MaxVectorSize to %d",
_initial_vector_length, _initial_vector_length);
- MaxVectorSize = _initial_vector_length;
}
+ MaxVectorSize = _initial_vector_length;
} else {
vm_exit_during_initialization(err_msg("Unsupported MaxVectorSize: %d", (int)MaxVectorSize));
}
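Both hunks above hinge on the distinction between a flag's default value and a value the user set on the command line: MD5 intrinsics are now switched on only when UseMD5Intrinsics is still at its default, and the MaxVectorSize clamp happens unconditionally while the warning fires only for an unsatisfiable request. A small self-contained sketch of that default-versus-user-set pattern (the IntFlag type below is an illustrative stand-in, not the FLAG_IS_DEFAULT/FLAG_SET_DEFAULT machinery):

#include <cstdio>

// Illustrative stand-in for a VM flag that remembers whether the user set it.
struct IntFlag {
  int  value;
  bool is_default;
};

static void apply_hw_limit(IntFlag& max_vector_size, int hw_limit) {
  if (max_vector_size.value > hw_limit) {
    std::printf("warning: max vector length is %d, clamping\n", hw_limit);
  }
  max_vector_size.value = hw_limit;  // clamp happens either way, as in the hunk
}

static void enable_if_default(IntFlag& flag) {
  if (flag.is_default) {  // mirrors FLAG_IS_DEFAULT(UseMD5Intrinsics)
    flag.value = 1;       // mirrors FLAG_SET_DEFAULT(..., true)
  }
}

int main() {
  IntFlag mvs{512, /*is_default=*/false};  // user asked for more than the hardware has
  apply_hw_limit(mvs, 256);
  IntFlag md5{0, /*is_default=*/true};
  enable_if_default(md5);
  return 0;
}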

View File

@@ -23,8 +23,76 @@
*/
#include "precompiled.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "code/codeBlob.hpp"
#include "code/codeCache.hpp"
#include "code/vmreg.inline.hpp"
#include "compiler/oopMap.hpp"
#include "logging/logStream.hpp"
#include "memory/resourceArea.hpp"
#include "prims/downcallLinker.hpp"
#include "utilities/debug.hpp"
#include "runtime/globals.hpp"
#include "runtime/stubCodeGenerator.hpp"
#define __ _masm->
class DowncallStubGenerator : public StubCodeGenerator {
BasicType* _signature;
int _num_args;
BasicType _ret_bt;
const ABIDescriptor& _abi;
const GrowableArray<VMStorage>& _input_registers;
const GrowableArray<VMStorage>& _output_registers;
bool _needs_return_buffer;
int _captured_state_mask;
bool _needs_transition;
int _frame_complete;
int _frame_size_slots;
OopMapSet* _oop_maps;
public:
DowncallStubGenerator(CodeBuffer* buffer,
BasicType* signature,
int num_args,
BasicType ret_bt,
const ABIDescriptor& abi,
const GrowableArray<VMStorage>& input_registers,
const GrowableArray<VMStorage>& output_registers,
bool needs_return_buffer,
int captured_state_mask,
bool needs_transition)
:StubCodeGenerator(buffer, PrintMethodHandleStubs),
_signature(signature),
_num_args(num_args),
_ret_bt(ret_bt),
_abi(abi),
_input_registers(input_registers),
_output_registers(output_registers),
_needs_return_buffer(needs_return_buffer),
_captured_state_mask(captured_state_mask),
_needs_transition(needs_transition),
_frame_complete(0),
_frame_size_slots(0),
_oop_maps(nullptr) {
}
void generate();
int frame_complete() const {
return _frame_complete;
}
int framesize() const {
return (_frame_size_slots >> (LogBytesPerWord - LogBytesPerInt));
}
OopMapSet* oop_maps() const {
return _oop_maps;
}
};
static const int native_invoker_code_base_size = 512;
static const int native_invoker_size_per_args = 8;
RuntimeStub* DowncallLinker::make_downcall_stub(BasicType* signature,
int num_args,
@@ -35,6 +103,197 @@ RuntimeStub* DowncallLinker::make_downcall_stub(BasicType* signature,
bool needs_return_buffer,
int captured_state_mask,
bool needs_transition) {
- Unimplemented();
- return nullptr;
int code_size = native_invoker_code_base_size + (num_args * native_invoker_size_per_args);
int locs_size = 1; // must be non-zero
CodeBuffer code("nep_invoker_blob", code_size, locs_size);
DowncallStubGenerator g(&code, signature, num_args, ret_bt, abi,
input_registers, output_registers,
needs_return_buffer, captured_state_mask,
needs_transition);
g.generate();
code.log_section_sizes("nep_invoker_blob");
RuntimeStub* stub =
RuntimeStub::new_runtime_stub("nep_invoker_blob",
&code,
g.frame_complete(),
g.framesize(),
g.oop_maps(), false);
#ifndef PRODUCT
LogTarget(Trace, foreign, downcall) lt;
if (lt.is_enabled()) {
ResourceMark rm;
LogStream ls(lt);
stub->print_on(&ls);
}
#endif
return stub;
}
void DowncallStubGenerator::generate() {
Register call_target_address = Z_R1_scratch,
tmp = Z_R0_scratch;
VMStorage shuffle_reg = _abi._scratch1;
JavaCallingConvention in_conv;
NativeCallingConvention out_conv(_input_registers);
ArgumentShuffle arg_shuffle(_signature, _num_args, _signature, _num_args, &in_conv, &out_conv, shuffle_reg);
#ifndef PRODUCT
LogTarget(Trace, foreign, downcall) lt;
if (lt.is_enabled()) {
ResourceMark rm;
LogStream ls(lt);
arg_shuffle.print_on(&ls);
}
#endif
assert(_abi._shadow_space_bytes == frame::z_abi_160_size, "expected space according to ABI");
int allocated_frame_size = _abi._shadow_space_bytes;
allocated_frame_size += arg_shuffle.out_arg_bytes();
assert(!_needs_return_buffer, "unexpected needs_return_buffer");
RegSpiller out_reg_spiller(_output_registers);
int spill_offset = allocated_frame_size;
allocated_frame_size += BytesPerWord;
StubLocations locs;
locs.set(StubLocations::TARGET_ADDRESS, _abi._scratch2);
if (_captured_state_mask != 0) {
__ block_comment("{ _captured_state_mask is set");
locs.set_frame_data(StubLocations::CAPTURED_STATE_BUFFER, allocated_frame_size);
allocated_frame_size += BytesPerWord;
__ block_comment("} _captured_state_mask is set");
}
allocated_frame_size = align_up(allocated_frame_size, StackAlignmentInBytes);
_frame_size_slots = allocated_frame_size >> LogBytesPerInt;
_oop_maps = _needs_transition ? new OopMapSet() : nullptr;
address start = __ pc();
__ save_return_pc();
__ push_frame(allocated_frame_size, Z_R11); // Create a new frame for the wrapper.
_frame_complete = __ pc() - start; // frame build complete.
if (_needs_transition) {
__ block_comment("{ thread java2native");
__ get_PC(Z_R1_scratch);
address the_pc = __ pc();
__ set_last_Java_frame(Z_SP, Z_R1_scratch);
OopMap* map = new OopMap(_frame_size_slots, 0);
_oop_maps->add_gc_map(the_pc - start, map);
// State transition
__ set_thread_state(_thread_in_native);
__ block_comment("} thread java2native");
}
__ block_comment("{ argument shuffle");
arg_shuffle.generate(_masm, shuffle_reg, frame::z_jit_out_preserve_size, _abi._shadow_space_bytes, locs);
__ block_comment("} argument shuffle");
__ call(as_Register(locs.get(StubLocations::TARGET_ADDRESS)));
//////////////////////////////////////////////////////////////////////////////
if (_captured_state_mask != 0) {
__ block_comment("{ save thread local");
out_reg_spiller.generate_spill(_masm, spill_offset);
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, DowncallLinker::capture_state));
__ z_lg(Z_ARG1, Address(Z_SP, locs.data_offset(StubLocations::CAPTURED_STATE_BUFFER)));
__ load_const_optimized(Z_ARG2, _captured_state_mask);
__ call(call_target_address);
out_reg_spiller.generate_fill(_masm, spill_offset);
__ block_comment("} save thread local");
}
//////////////////////////////////////////////////////////////////////////////
Label L_after_safepoint_poll;
Label L_safepoint_poll_slow_path;
Label L_reguard;
Label L_after_reguard;
if (_needs_transition) {
__ block_comment("{ thread native2java");
__ set_thread_state(_thread_in_native_trans);
if (!UseSystemMemoryBarrier) {
__ z_fence(); // Order state change wrt. safepoint poll.
}
__ safepoint_poll(L_safepoint_poll_slow_path, tmp);
__ load_and_test_int(tmp, Address(Z_thread, JavaThread::suspend_flags_offset()));
__ z_brne(L_safepoint_poll_slow_path);
__ bind(L_after_safepoint_poll);
// change thread state
__ set_thread_state(_thread_in_Java);
__ block_comment("reguard stack check");
__ z_cli(Address(Z_thread, JavaThread::stack_guard_state_offset() + in_ByteSize(sizeof(StackOverflow::StackGuardState) - 1)),
StackOverflow::stack_guard_yellow_reserved_disabled);
__ z_bre(L_reguard);
__ bind(L_after_reguard);
__ reset_last_Java_frame();
__ block_comment("} thread native2java");
}
__ pop_frame();
__ restore_return_pc(); // This is the way back to the caller.
__ z_br(Z_R14);
//////////////////////////////////////////////////////////////////////////////
if (_needs_transition) {
__ block_comment("{ L_safepoint_poll_slow_path");
__ bind(L_safepoint_poll_slow_path);
// Need to save the native result registers around any runtime calls.
out_reg_spiller.generate_spill(_masm, spill_offset);
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, JavaThread::check_special_condition_for_native_trans));
__ z_lgr(Z_ARG1, Z_thread);
__ call(call_target_address);
out_reg_spiller.generate_fill(_masm, spill_offset);
__ z_bru(L_after_safepoint_poll);
__ block_comment("} L_safepoint_poll_slow_path");
//////////////////////////////////////////////////////////////////////////////
__ block_comment("{ L_reguard");
__ bind(L_reguard);
// Need to save the native result registers around any runtime calls.
out_reg_spiller.generate_spill(_masm, spill_offset);
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, SharedRuntime::reguard_yellow_pages));
__ call(call_target_address);
out_reg_spiller.generate_fill(_masm, spill_offset);
__ z_bru(L_after_reguard);
__ block_comment("} L_reguard");
}
//////////////////////////////////////////////////////////////////////////////
__ flush();
}
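The frame-size bookkeeping in generate() is plain accumulation: start from the ABI shadow area, add the shuffled outgoing-argument bytes, reserve spill and capture-buffer slots, then round up to the stack alignment. A compilable sketch of just that arithmetic, with made-up sizes (only the align_up pattern mirrors the code):

#include <cassert>

// Minimal sketch of the frame-size bookkeeping above, assuming a 160-byte ABI
// shadow area and 8-byte stack alignment; names and sizes are illustrative.
constexpr int align_up(int value, int alignment) {
  return (value + alignment - 1) & ~(alignment - 1);
}

int main() {
  const int shadow_space_bytes = 160;  // ABI-mandated register save area
  const int out_arg_bytes      = 24;   // stack-passed arguments (example)
  const int spill_bytes        = 8;    // one spilled return register
  int frame = shadow_space_bytes + out_arg_bytes + spill_bytes;
  frame = align_up(frame, 8);          // StackAlignmentInBytes on s390 is 8
  assert(frame == 192);
  return 0;
}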

View File

@@ -23,34 +23,209 @@
*/
#include "precompiled.hpp"
#include "code/vmreg.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "code/vmreg.inline.hpp"
#include "runtime/jniHandles.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "oops/typeArrayOop.inline.hpp"
#include "oops/oopCast.inline.hpp"
#include "prims/foreignGlobals.hpp"
#include "utilities/debug.hpp"
#include "prims/foreignGlobals.inline.hpp"
#include "prims/vmstorage.hpp"
#include "utilities/formatBuffer.hpp"
class MacroAssembler;
#define __ masm->
bool ABIDescriptor::is_volatile_reg(Register reg) const {
return _integer_volatile_registers.contains(reg);
}
bool ABIDescriptor::is_volatile_reg(FloatRegister reg) const {
return _float_argument_registers.contains(reg)
|| _float_additional_volatile_registers.contains(reg);
}
bool ForeignGlobals::is_foreign_linker_supported() {
- return false;
+ return true;
}
const ABIDescriptor ForeignGlobals::parse_abi_descriptor(jobject jabi) {
- Unimplemented();
- return {};
oop abi_oop = JNIHandles::resolve_non_null(jabi);
ABIDescriptor abi;
objArrayOop inputStorage = jdk_internal_foreign_abi_ABIDescriptor::inputStorage(abi_oop);
parse_register_array(inputStorage, StorageType::INTEGER, abi._integer_argument_registers, as_Register);
parse_register_array(inputStorage, StorageType::FLOAT, abi._float_argument_registers, as_FloatRegister);
objArrayOop outputStorage = jdk_internal_foreign_abi_ABIDescriptor::outputStorage(abi_oop);
parse_register_array(outputStorage, StorageType::INTEGER, abi._integer_return_registers, as_Register);
parse_register_array(outputStorage, StorageType::FLOAT, abi._float_return_registers, as_FloatRegister);
objArrayOop volatileStorage = jdk_internal_foreign_abi_ABIDescriptor::volatileStorage(abi_oop);
parse_register_array(volatileStorage, StorageType::INTEGER, abi._integer_volatile_registers, as_Register);
parse_register_array(volatileStorage, StorageType::FLOAT, abi._float_additional_volatile_registers, as_FloatRegister);
abi._stack_alignment_bytes = jdk_internal_foreign_abi_ABIDescriptor::stackAlignment(abi_oop);
abi._shadow_space_bytes = jdk_internal_foreign_abi_ABIDescriptor::shadowSpace(abi_oop);
abi._scratch1 = parse_vmstorage(jdk_internal_foreign_abi_ABIDescriptor::scratch1(abi_oop));
abi._scratch2 = parse_vmstorage(jdk_internal_foreign_abi_ABIDescriptor::scratch2(abi_oop));
return abi;
}
int RegSpiller::pd_reg_size(VMStorage reg) {
- Unimplemented();
- return -1;
if (reg.type() == StorageType::INTEGER || reg.type() == StorageType::FLOAT) {
return 8;
}
return 0; // stack and BAD
}
void RegSpiller::pd_store_reg(MacroAssembler* masm, int offset, VMStorage reg) {
- Unimplemented();
if (reg.type() == StorageType::INTEGER) {
__ reg2mem_opt(as_Register(reg), Address(Z_SP, offset), true);
} else if (reg.type() == StorageType::FLOAT) {
__ freg2mem_opt(as_FloatRegister(reg), Address(Z_SP, offset), true);
} else {
// stack and BAD
}
}
void RegSpiller::pd_load_reg(MacroAssembler* masm, int offset, VMStorage reg) {
- Unimplemented();
if (reg.type() == StorageType::INTEGER) {
__ mem2reg_opt(as_Register(reg), Address(Z_SP, offset), true);
} else if (reg.type() == StorageType::FLOAT) {
__ mem2freg_opt(as_FloatRegister(reg), Address(Z_SP, offset), true);
} else {
// stack and BAD
}
}
static int reg2offset(VMStorage vms, int stk_bias) {
assert(!vms.is_reg(), "wrong usage");
return vms.index_or_offset() + stk_bias;
}
static void move_reg(MacroAssembler* masm, int out_stk_bias,
VMStorage from_reg, VMStorage to_reg) {
int out_bias = 0;
switch (to_reg.type()) {
case StorageType::INTEGER:
if (to_reg.segment_mask() == REG64_MASK && from_reg.segment_mask() == REG32_MASK ) {
// see CCallingConventionRequiresIntsAsLongs
__ z_lgfr(as_Register(to_reg), as_Register(from_reg));
} else {
__ lgr_if_needed(as_Register(to_reg), as_Register(from_reg));
}
break;
case StorageType::STACK:
out_bias = out_stk_bias; //fallthrough
case StorageType::FRAME_DATA: {
// Integer types always get a 64 bit slot in C.
if (from_reg.segment_mask() == REG32_MASK) {
// see CCallingConventionRequiresIntsAsLongs
__ z_lgfr(as_Register(from_reg), as_Register(from_reg));
}
switch (to_reg.stack_size()) {
case 8: __ reg2mem_opt(as_Register(from_reg), Address(Z_SP, reg2offset(to_reg, out_bias)), true); break;
case 4: __ reg2mem_opt(as_Register(from_reg), Address(Z_SP, reg2offset(to_reg, out_bias)), false); break;
default: ShouldNotReachHere();
}
} break;
default: ShouldNotReachHere();
}
}
static void move_float(MacroAssembler* masm, int out_stk_bias,
VMStorage from_reg, VMStorage to_reg) {
switch (to_reg.type()) {
case StorageType::FLOAT:
if (from_reg.segment_mask() == REG64_MASK)
__ move_freg_if_needed(as_FloatRegister(to_reg), T_DOUBLE, as_FloatRegister(from_reg), T_DOUBLE);
else
__ move_freg_if_needed(as_FloatRegister(to_reg), T_FLOAT, as_FloatRegister(from_reg), T_FLOAT);
break;
case StorageType::STACK:
if (from_reg.segment_mask() == REG64_MASK) {
assert(to_reg.stack_size() == 8, "size should match");
__ freg2mem_opt(as_FloatRegister(from_reg), Address(Z_SP, reg2offset(to_reg, out_stk_bias)), true);
} else {
assert(to_reg.stack_size() == 4, "size should match");
__ freg2mem_opt(as_FloatRegister(from_reg), Address(Z_SP, reg2offset(to_reg, out_stk_bias)), false);
}
break;
default: ShouldNotReachHere();
}
}
static void move_stack(MacroAssembler* masm, Register tmp_reg, int in_stk_bias, int out_stk_bias,
VMStorage from_reg, VMStorage to_reg) {
int out_bias = 0;
Address from_addr(Z_R11, reg2offset(from_reg, in_stk_bias));
switch (to_reg.type()) {
case StorageType::INTEGER:
switch (from_reg.stack_size()) {
case 8: __ mem2reg_opt(as_Register(to_reg), from_addr, true);break;
case 4: __ mem2reg_opt(as_Register(to_reg), from_addr, false);break;
default: ShouldNotReachHere();
}
break;
case StorageType::FLOAT:
switch (from_reg.stack_size()) {
case 8: __ mem2freg_opt(as_FloatRegister(to_reg), from_addr, true);break;
case 4: __ mem2freg_opt(as_FloatRegister(to_reg), from_addr, false);break;
default: ShouldNotReachHere();
}
break;
case StorageType::STACK:
out_bias = out_stk_bias; // fallthrough
case StorageType::FRAME_DATA: {
switch (from_reg.stack_size()) {
case 8: __ mem2reg_opt(tmp_reg, from_addr, true); break;
case 4: if (to_reg.stack_size() == 8) {
__ mem2reg_signed_opt(tmp_reg, from_addr);
} else {
__ mem2reg_opt(tmp_reg, from_addr, false);
}
break;
default: ShouldNotReachHere();
}
switch (to_reg.stack_size()) {
case 8: __ reg2mem_opt(tmp_reg, Address (Z_SP, reg2offset(to_reg, out_bias)), true); break;
case 4: __ reg2mem_opt(tmp_reg, Address (Z_SP, reg2offset(to_reg, out_bias)), false); break;
default: ShouldNotReachHere();
}
} break;
default: ShouldNotReachHere();
}
}
void ArgumentShuffle::pd_generate(MacroAssembler* masm, VMStorage tmp, int in_stk_bias, int out_stk_bias, const StubLocations& locs) const {
- Unimplemented();
Register tmp_reg = as_Register(tmp);
for (int i = 0; i < _moves.length(); i++) {
Move move = _moves.at(i);
VMStorage from_reg = move.from;
VMStorage to_reg = move.to;
// replace any placeholders
if (from_reg.type() == StorageType::PLACEHOLDER) {
from_reg = locs.get(from_reg);
}
if (to_reg.type() == StorageType::PLACEHOLDER) {
to_reg = locs.get(to_reg);
}
switch (from_reg.type()) {
case StorageType::INTEGER:
move_reg(masm, out_stk_bias, from_reg, to_reg);
break;
case StorageType::FLOAT:
move_float(masm, out_stk_bias, from_reg, to_reg);
break;
case StorageType::STACK:
move_stack(masm, tmp_reg, in_stk_bias, out_stk_bias, from_reg, to_reg);
break;
default: ShouldNotReachHere();
}
}
}
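pd_generate above routes every move through one of three helpers chosen by the source storage kind, and each helper switches again on the destination, which keeps every register/stack combination in one obvious place. A toy version of that two-level dispatch (the Kind enum and printed mnemonics are illustrative, not the VMStorage API):

#include <cstdio>

enum class Kind { Integer, Float, Stack };

// First dispatch on the source kind, then on the destination, as in the
// move_reg/move_float/move_stack helpers above.
static void emit_move(Kind from, Kind to) {
  switch (from) {
    case Kind::Integer: std::printf(to == Kind::Stack ? "reg->mem\n"  : "reg->reg\n");   break;
    case Kind::Float:   std::printf(to == Kind::Stack ? "freg->mem\n" : "freg->freg\n"); break;
    case Kind::Stack:   std::printf(to == Kind::Stack ? "mem->mem via tmp\n" : "mem->reg\n"); break;
  }
}

int main() {
  emit_move(Kind::Integer, Kind::Stack);
  emit_move(Kind::Stack, Kind::Stack);
  return 0;
}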

View File

@@ -24,6 +24,23 @@
#ifndef CPU_S390_VM_FOREIGN_GLOBALS_S390_HPP
#define CPU_S390_VM_FOREIGN_GLOBALS_S390_HPP
- class ABIDescriptor {};
+ struct ABIDescriptor {
GrowableArray<Register> _integer_argument_registers;
GrowableArray<Register> _integer_return_registers;
GrowableArray<FloatRegister> _float_argument_registers;
GrowableArray<FloatRegister> _float_return_registers;
GrowableArray<Register> _integer_volatile_registers;
GrowableArray<FloatRegister> _float_additional_volatile_registers;
int32_t _stack_alignment_bytes;
int32_t _shadow_space_bytes;
VMStorage _scratch1;
VMStorage _scratch2;
bool is_volatile_reg(Register reg) const;
bool is_volatile_reg(FloatRegister reg) const;
};
#endif // CPU_S390_VM_FOREIGN_GLOBALS_S390_HPP

View File

@@ -218,13 +218,32 @@ frame frame::sender_for_entry_frame(RegisterMap *map) const {
}
UpcallStub::FrameData* UpcallStub::frame_data_for_frame(const frame& frame) const {
- ShouldNotCallThis();
- return nullptr;
assert(frame.is_upcall_stub_frame(), "wrong frame");
// need unextended_sp here, since normal sp is wrong for interpreter callees
return reinterpret_cast<UpcallStub::FrameData*>(
reinterpret_cast<address>(frame.unextended_sp()) + in_bytes(_frame_data_offset));
}
bool frame::upcall_stub_frame_is_first() const {
- ShouldNotCallThis();
- return false;
assert(is_upcall_stub_frame(), "must be optimized entry frame");
UpcallStub* blob = _cb->as_upcall_stub();
JavaFrameAnchor* jfa = blob->jfa_for_frame(*this);
return jfa->last_Java_sp() == nullptr;
}
frame frame::sender_for_upcall_stub_frame(RegisterMap* map) const {
assert(map != nullptr, "map must be set");
UpcallStub* blob = _cb->as_upcall_stub();
// Java frame called from C; skip all C frames and return top C
// frame of that chunk as the sender
JavaFrameAnchor* jfa = blob->jfa_for_frame(*this);
assert(!upcall_stub_frame_is_first(), "must have a frame anchor to go back to");
assert(jfa->last_Java_sp() > sp(), "must be above this frame on stack");
map->clear();
assert(map->include_argument_oops(), "should be set by clear");
frame fr(jfa->last_Java_sp(), jfa->last_Java_pc());
return fr;
}
frame frame::sender_for_interpreter_frame(RegisterMap *map) const {

View File

@@ -350,12 +350,10 @@ inline frame frame::sender(RegisterMap* map) const {
// update it accordingly.
map->set_include_argument_oops(false);
- if (is_entry_frame()) {
- return sender_for_entry_frame(map);
- }
- if (is_interpreted_frame()) {
- return sender_for_interpreter_frame(map);
- }
+ if (is_entry_frame()) return sender_for_entry_frame(map);
+ if (is_upcall_stub_frame()) return sender_for_upcall_stub_frame(map);
+ if (is_interpreted_frame()) return sender_for_interpreter_frame(map);
assert(_cb == CodeCache::find_blob(pc()),"Must be the same");
if (_cb != nullptr) return sender_for_compiled_frame(map);

View File

@@ -28,7 +28,7 @@
#define ShortenBranches true
- const int StackAlignmentInBytes = 16;
+ const int StackAlignmentInBytes = 8;
// All faults on s390x give the address only on page granularity.
// Set Pdsegfault_address to minimum one page address.

View File

@@ -349,7 +349,16 @@ address MethodHandles::generate_method_handle_interpreter_entry(MacroAssembler*
void MethodHandles::jump_to_native_invoker(MacroAssembler* _masm, Register nep_reg, Register temp_target) {
BLOCK_COMMENT("jump_to_native_invoker {");
- __ should_not_reach_here();
+ assert(nep_reg != noreg, "required register");
+ // Load the invoker, as NEP -> .invoker
+ __ verify_oop(nep_reg);
+ __ z_lg(temp_target, Address(nep_reg,
+ NONZERO(jdk_internal_foreign_abi_NativeEntryPoint::downcall_stub_address_offset_in_bytes())));
+ __ z_br(temp_target);
BLOCK_COMMENT("} jump_to_native_invoker");
}
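The s390 implementation of jump_to_native_invoker is a load-and-branch: fetch the downcall stub address out of the NativeEntryPoint object, then tail-jump to it. The same shape in plain C++, assuming a 64-bit platform (the struct and stub below are invented stand-ins for the Java-side NativeEntryPoint field):

#include <cassert>
#include <cstdint>

// Hypothetical mirror of the field the z_lg above reads.
struct NativeEntryPoint { uint64_t downcall_stub_address; };

static int the_stub() { return 42; }  // illustrative downcall stub

int main() {
  NativeEntryPoint nep{reinterpret_cast<uint64_t>(&the_stub)};
  // Load the address from the object, then branch to it (the z_lg/z_br pair).
  auto target = reinterpret_cast<int (*)()>(nep.downcall_stub_address);
  assert(target() == 42);
  return 0;
}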

View File

@@ -1505,6 +1505,9 @@ bool Matcher::match_rule_supported(int opcode) {
case Op_PopCountL:
// PopCount supported by H/W from z/Architecture G5 (z196) on.
return (UsePopCountInstruction && VM_Version::has_PopCount());
case Op_FmaF:
case Op_FmaD:
return UseFMA;
}
return true; // Per default match rules are supported.
@@ -7160,6 +7163,7 @@ instruct maddF_reg_reg(regF dst, regF src1, regF src2) %{
size(4);
format %{ "MAEBR $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_maebr($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister);
%}
ins_pipe(pipe_class_dummy);
@@ -7173,6 +7177,7 @@ instruct maddD_reg_reg(regD dst, regD src1, regD src2) %{
size(4);
format %{ "MADBR $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_madbr($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister);
%}
ins_pipe(pipe_class_dummy);
@@ -7186,6 +7191,7 @@ instruct msubF_reg_reg(regF dst, regF src1, regF src2) %{
size(4);
format %{ "MSEBR $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_msebr($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister);
%}
ins_pipe(pipe_class_dummy);
@@ -7199,6 +7205,7 @@ instruct msubD_reg_reg(regD dst, regD src1, regD src2) %{
size(4);
format %{ "MSDBR $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_msdbr($dst$$FloatRegister, $src1$$FloatRegister, $src2$$FloatRegister);
%}
ins_pipe(pipe_class_dummy);
@@ -7212,6 +7219,7 @@ instruct maddF_reg_mem(regF dst, regF src1, memoryRX src2) %{
size(6);
format %{ "MAEB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_maeb($dst$$FloatRegister, $src1$$FloatRegister,
Address(reg_to_register_object($src2$$base), $src2$$index$$Register, $src2$$disp));
%}
@@ -7226,6 +7234,7 @@ instruct maddD_reg_mem(regD dst, regD src1, memoryRX src2) %{
size(6);
format %{ "MADB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_madb($dst$$FloatRegister, $src1$$FloatRegister,
Address(reg_to_register_object($src2$$base), $src2$$index$$Register, $src2$$disp));
%}
@@ -7240,6 +7249,7 @@ instruct msubF_reg_mem(regF dst, regF src1, memoryRX src2) %{
size(6);
format %{ "MSEB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_mseb($dst$$FloatRegister, $src1$$FloatRegister,
Address(reg_to_register_object($src2$$base), $src2$$index$$Register, $src2$$disp));
%}
@@ -7254,6 +7264,7 @@ instruct msubD_reg_mem(regD dst, regD src1, memoryRX src2) %{
size(6);
format %{ "MSDB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_msdb($dst$$FloatRegister, $src1$$FloatRegister,
Address(reg_to_register_object($src2$$base), $src2$$index$$Register, $src2$$disp));
%}
@@ -7268,6 +7279,7 @@ instruct maddF_mem_reg(regF dst, memoryRX src1, regF src2) %{
size(6);
format %{ "MAEB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_maeb($dst$$FloatRegister, $src2$$FloatRegister,
Address(reg_to_register_object($src1$$base), $src1$$index$$Register, $src1$$disp));
%}
@@ -7282,6 +7294,7 @@ instruct maddD_mem_reg(regD dst, memoryRX src1, regD src2) %{
size(6);
format %{ "MADB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_madb($dst$$FloatRegister, $src2$$FloatRegister,
Address(reg_to_register_object($src1$$base), $src1$$index$$Register, $src1$$disp));
%}
@@ -7296,6 +7309,7 @@ instruct msubF_mem_reg(regF dst, memoryRX src1, regF src2) %{
size(6);
format %{ "MSEB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_mseb($dst$$FloatRegister, $src2$$FloatRegister,
Address(reg_to_register_object($src1$$base), $src1$$index$$Register, $src1$$disp));
%}
@@ -7310,6 +7324,7 @@ instruct msubD_mem_reg(regD dst, memoryRX src1, regD src2) %{
size(6);
format %{ "MSDB $dst, $src1, $src2" %}
ins_encode %{
assert(UseFMA, "Needs FMA instructions support.");
__ z_msdb($dst$$FloatRegister, $src2$$FloatRegister,
Address(reg_to_register_object($src1$$base), $src1$$index$$Register, $src1$$disp));
%}
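The new asserts guard the z/Architecture fused multiply-add forms (MAEBR/MADBR and friends), which compute a*b +/- c with a single rounding; that is the contract C++ exposes as std::fma, and the single rounding is observable:

#include <cassert>
#include <cmath>

int main() {
  double a = 1.0 + 0x1p-52;  // 1 + one ulp
  double b = 1.0 - 0x1p-52;
  // a*b = 1 - 2^-104 exactly; fma rounds once, the unfused form rounds a*b first.
  double fused   = std::fma(a, b, -1.0);  // exact: -0x1p-104
  double unfused = a * b - 1.0;           // a*b rounds to 1.0, so this is 0.0
  assert(unfused == 0.0);
  assert(fused != 0.0);
  return 0;
}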

View File

@@ -22,15 +22,287 @@
*/
#include "precompiled.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "logging/logStream.hpp"
#include "memory/resourceArea.hpp"
#include "prims/upcallLinker.hpp"
#include "utilities/debug.hpp"
#include "runtime/sharedRuntime.hpp"
#include "runtime/signature.hpp"
#include "runtime/stubRoutines.hpp"
#include "utilities/formatBuffer.hpp"
#include "utilities/globalDefinitions.hpp"
#define __ _masm->
// for callee saved regs, according to the caller's ABI
static int compute_reg_save_area_size(const ABIDescriptor& abi) {
int size = 0;
for (int i = 0; i < Register::number_of_registers; i++) {
Register reg = as_Register(i);
// Z_SP saved/restored by prologue/epilogue
if (reg == Z_SP) continue;
if (!abi.is_volatile_reg(reg)) {
size += 8; // bytes
}
}
for (int i = 0; i < FloatRegister::number_of_registers; i++) {
FloatRegister reg = as_FloatRegister(i);
if (!abi.is_volatile_reg(reg)) {
size += 8; // bytes
}
}
return size;
}
static void preserve_callee_saved_registers(MacroAssembler* _masm, const ABIDescriptor& abi, int reg_save_area_offset) {
// 1. iterate all registers in the architecture
// - check if they are volatile or not for the given abi
// - if NOT, we need to save it here
int offset = reg_save_area_offset;
__ block_comment("{ preserve_callee_saved_regs ");
for (int i = 0; i < Register::number_of_registers; i++) {
Register reg = as_Register(i);
// Z_SP saved/restored by prologue/epilogue
if (reg == Z_SP) continue;
if (!abi.is_volatile_reg(reg)) {
__ z_stg(reg, Address(Z_SP, offset));
offset += 8;
}
}
for (int i = 0; i < FloatRegister::number_of_registers; i++) {
FloatRegister reg = as_FloatRegister(i);
if (!abi.is_volatile_reg(reg)) {
__ z_std(reg, Address(Z_SP, offset));
offset += 8;
}
}
__ block_comment("} preserve_callee_saved_regs ");
}
static void restore_callee_saved_registers(MacroAssembler* _masm, const ABIDescriptor& abi, int reg_save_area_offset) {
// 1. iterate all registers in the architecture
// - check if they are volatile or not for the given abi
// - if NOT, we need to restore it here
int offset = reg_save_area_offset;
__ block_comment("{ restore_callee_saved_regs ");
for (int i = 0; i < Register::number_of_registers; i++) {
Register reg = as_Register(i);
// Z_SP saved/restored by prologue/epilogue
if (reg == Z_SP) continue;
if (!abi.is_volatile_reg(reg)) {
__ z_lg(reg, Address(Z_SP, offset));
offset += 8;
}
}
for (int i = 0; i < FloatRegister::number_of_registers; i++) {
FloatRegister reg = as_FloatRegister(i);
if (!abi.is_volatile_reg(reg)) {
__ z_ld(reg, Address(Z_SP, offset));
offset += 8;
}
}
__ block_comment("} restore_callee_saved_regs ");
}
static const int upcall_stub_code_base_size = 1024; // depends on GC (resolve_jobject)
static const int upcall_stub_size_per_arg = 16; // arg save & restore + move
address UpcallLinker::make_upcall_stub(jobject receiver, Method* entry,
BasicType* in_sig_bt, int total_in_args,
BasicType* out_sig_bt, int total_out_args,
BasicType ret_type,
jobject jabi, jobject jconv,
bool needs_return_buffer, int ret_buf_size) {
- ShouldNotCallThis();
- return nullptr;
ResourceMark rm;
const ABIDescriptor abi = ForeignGlobals::parse_abi_descriptor(jabi);
const CallRegs call_regs = ForeignGlobals::parse_call_regs(jconv);
int code_size = upcall_stub_code_base_size + (total_in_args * upcall_stub_size_per_arg);
CodeBuffer buffer("upcall_stub", code_size, /* locs_size = */ 0);
Register call_target_address = Z_R1_scratch;
VMStorage shuffle_reg = abi._scratch1;
JavaCallingConvention out_conv;
NativeCallingConvention in_conv(call_regs._arg_regs);
ArgumentShuffle arg_shuffle(in_sig_bt, total_in_args, out_sig_bt, total_out_args, &in_conv, &out_conv, shuffle_reg);
// The Java call uses the JIT ABI, but we also call C.
int out_arg_area = MAX2(frame::z_jit_out_preserve_size + arg_shuffle.out_arg_bytes(), (int)frame::z_abi_160_size);
#ifndef PRODUCT
LogTarget(Trace, foreign, upcall) lt;
if (lt.is_enabled()) {
ResourceMark rm;
LogStream ls(lt);
arg_shuffle.print_on(&ls);
}
#endif
int reg_save_area_size = compute_reg_save_area_size(abi);
RegSpiller arg_spiller(call_regs._arg_regs);
RegSpiller result_spiller(call_regs._ret_regs);
int res_save_area_offset = out_arg_area;
int arg_save_area_offset = res_save_area_offset + result_spiller.spill_size_bytes();
int reg_save_area_offset = arg_save_area_offset + arg_spiller.spill_size_bytes();
int frame_data_offset = reg_save_area_offset + reg_save_area_size;
int frame_bottom_offset = frame_data_offset + sizeof(UpcallStub::FrameData);
int frame_size = align_up(frame_bottom_offset, StackAlignmentInBytes);
StubLocations locs;
// The space we have allocated will look like:
//
//
// FP-> | |
// |---------------------| = frame_bottom_offset = frame_size
// | |
// | FrameData |
// |---------------------| = frame_data_offset
// | |
// | reg_save_area |
// |---------------------| = reg_save_are_offset
// | |
// | arg_save_area |
// |---------------------| = arg_save_are_offset
// | |
// | res_save_area |
// |---------------------| = res_save_are_offset
// | |
// SP-> | out_arg_area | needs to be at end for shadow space
//
//
//////////////////////////////////////////////////////////////////////////////
MacroAssembler* _masm = new MacroAssembler(&buffer);
address start = __ pc();
__ save_return_pc();
assert((abi._stack_alignment_bytes % StackAlignmentInBytes) == 0, "must be 8 byte aligned");
// allocate frame (frame_size is also aligned, so stack is still aligned)
__ push_frame(frame_size);
// we have to always spill args since we need to do a call to get the thread
// (and maybe attach it).
arg_spiller.generate_spill(_masm, arg_save_area_offset);
// Java methods won't preserve them, so save them here:
preserve_callee_saved_registers(_masm, abi, reg_save_area_offset);
__ block_comment("{ on_entry");
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, UpcallLinker::on_entry));
__ z_aghik(Z_ARG1, Z_SP, frame_data_offset);
__ call(call_target_address);
__ z_lgr(Z_thread, Z_RET);
__ block_comment("} on_entry");
arg_spiller.generate_fill(_masm, arg_save_area_offset);
__ block_comment("{ argument shuffle");
arg_shuffle.generate(_masm, shuffle_reg, abi._shadow_space_bytes, frame::z_jit_out_preserve_size, locs);
__ block_comment("} argument shuffle");
__ block_comment("{ receiver ");
__ load_const_optimized(Z_ARG1, (intptr_t)receiver);
__ resolve_jobject(Z_ARG1, Z_tmp_1, Z_tmp_2);
__ block_comment("} receiver ");
__ load_const_optimized(Z_method, (intptr_t)entry);
__ z_stg(Z_method, Address(Z_thread, in_bytes(JavaThread::callee_target_offset())));
__ z_lg(call_target_address, Address(Z_method, in_bytes(Method::from_compiled_offset())));
__ call(call_target_address);
// return value shuffle
assert(!needs_return_buffer, "unexpected needs_return_buffer");
// CallArranger can pick a return type that goes in the same reg for both CCs.
if (call_regs._ret_regs.length() > 0) { // 0 or 1
VMStorage ret_reg = call_regs._ret_regs.at(0);
// Check if the return reg is as expected.
switch (ret_type) {
case T_BOOLEAN:
case T_BYTE:
case T_SHORT:
case T_CHAR:
case T_INT:
__ z_lgfr(Z_RET, Z_RET); // Clear garbage in high half.
// fallthrough
case T_LONG:
assert(as_Register(ret_reg) == Z_RET, "unexpected result register");
break;
case T_FLOAT:
case T_DOUBLE:
assert(as_FloatRegister(ret_reg) == Z_FRET, "unexpected result register");
break;
default:
fatal("unexpected return type: %s", type2name(ret_type));
}
}
result_spiller.generate_spill(_masm, res_save_area_offset);
__ block_comment("{ on_exit");
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, UpcallLinker::on_exit));
__ z_aghik(Z_ARG1, Z_SP, frame_data_offset);
__ call(call_target_address);
__ block_comment("} on_exit");
restore_callee_saved_registers(_masm, abi, reg_save_area_offset);
result_spiller.generate_fill(_masm, res_save_area_offset);
__ pop_frame();
__ restore_return_pc();
__ z_br(Z_R14);
//////////////////////////////////////////////////////////////////////////////
__ block_comment("{ exception handler");
intptr_t exception_handler_offset = __ pc() - start;
// Native caller has no idea how to handle exceptions,
// so we just crash here. Up to callee to catch exceptions.
__ verify_oop(Z_ARG1);
__ load_const_optimized(call_target_address, CAST_FROM_FN_PTR(uint64_t, UpcallLinker::handle_uncaught_exception));
__ call_c(call_target_address);
__ should_not_reach_here();
__ block_comment("} exception handler");
_masm->flush();
#ifndef PRODUCT
stringStream ss;
ss.print("upcall_stub_%s", entry->signature()->as_C_string());
const char* name = _masm->code_string(ss.as_string());
#else // PRODUCT
const char* name = "upcall_stub";
#endif // PRODUCT
buffer.log_section_sizes(name);
UpcallStub* blob
= UpcallStub::create(name,
&buffer,
exception_handler_offset,
receiver,
in_ByteSize(frame_data_offset));
#ifndef PRODUCT
if (lt.is_enabled()) {
ResourceMark rm;
LogStream ls(lt);
blob->print_on(&ls);
}
#endif
return blob->code_begin();
}
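The offsets in the stack diagram above are derived by simple accumulation from the out-arg area upward, with a final alignment round-up. A compilable sketch of that chain using invented sizes (only the accumulation order mirrors the code):

#include <cassert>

constexpr int align_up(int v, int a) { return (v + a - 1) & ~(a - 1); }

int main() {
  // Example sizes; the real values come from the spillers and ABI parsing.
  const int out_arg_area        = 160;
  const int res_spill_bytes     = 8;
  const int arg_spill_bytes     = 40;
  const int reg_save_area_bytes = 96;
  const int frame_data_bytes    = 64;
  int res_save_area_offset = out_arg_area;                            // bottom: shadow/out args
  int arg_save_area_offset = res_save_area_offset + res_spill_bytes;
  int reg_save_area_offset = arg_save_area_offset + arg_spill_bytes;
  int frame_data_offset    = reg_save_area_offset + reg_save_area_bytes;
  int frame_size = align_up(frame_data_offset + frame_data_bytes, 8); // StackAlignmentInBytes
  assert(frame_size == 368);
  return 0;
}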

View File

@@ -707,7 +707,7 @@ void VM_Version::print_features_internal(const char* text, bool print_anyway) {
}
if (ContendedPaddingWidth > 0) {
tty->cr();
tty->print_cr("ContendedPaddingWidth " INTX_FORMAT, ContendedPaddingWidth);
tty->print_cr("ContendedPaddingWidth %d", ContendedPaddingWidth);
}
}
}

View File

@@ -29,24 +29,79 @@
#include "asm/register.hpp"
enum class StorageType : int8_t {
- STACK = 0,
- PLACEHOLDER = 1,
- // special locations used only by native code
- FRAME_DATA = PLACEHOLDER + 1,
+ INTEGER = 0,
+ FLOAT = 1,
+ STACK = 2,
+ PLACEHOLDER = 3,
+ // special locations used only by native code
+ FRAME_DATA = 4,
INVALID = -1
};
// need to define this before constructing VMStorage (below)
constexpr inline bool VMStorage::is_reg(StorageType type) {
- return false;
+ return type == StorageType::INTEGER || type == StorageType::FLOAT;
}
constexpr inline StorageType VMStorage::stack_type() { return StorageType::STACK; }
constexpr inline StorageType VMStorage::placeholder_type() { return StorageType::PLACEHOLDER; }
constexpr inline StorageType VMStorage::frame_data_type() { return StorageType::FRAME_DATA; }
// Needs to be consistent with S390Architecture.java.
constexpr uint16_t REG32_MASK = 0b0000000000000001;
constexpr uint16_t REG64_MASK = 0b0000000000000011;
inline Register as_Register(VMStorage vms) {
assert(vms.type() == StorageType::INTEGER, "not the right type");
return ::as_Register(vms.index());
}
inline FloatRegister as_FloatRegister(VMStorage vms) {
assert(vms.type() == StorageType::FLOAT, "not the right type");
return ::as_FloatRegister(vms.index());
}
inline VMStorage as_VMStorage(Register reg, uint16_t segment_mask = REG64_MASK) {
return VMStorage::reg_storage(StorageType::INTEGER, segment_mask, reg->encoding());
}
inline VMStorage as_VMStorage(FloatRegister reg, uint16_t segment_mask = REG64_MASK) {
return VMStorage::reg_storage(StorageType::FLOAT, segment_mask, reg->encoding());
}
inline VMStorage as_VMStorage(VMReg reg, BasicType bt) {
if (reg->is_Register()) {
uint16_t segment_mask = 0;
switch (bt) {
case T_BOOLEAN:
case T_CHAR :
case T_BYTE :
case T_SHORT :
case T_INT : segment_mask = REG32_MASK; break;
default : segment_mask = REG64_MASK; break;
}
return as_VMStorage(reg->as_Register(), segment_mask);
} else if (reg->is_FloatRegister()) {
// FP regs always use double format. However, we need the correct format for loads /stores.
return as_VMStorage(reg->as_FloatRegister(), (bt == T_FLOAT) ? REG32_MASK : REG64_MASK);
} else if (reg->is_stack()) {
uint16_t size = 0;
switch (bt) {
case T_BOOLEAN:
case T_CHAR :
case T_BYTE :
case T_SHORT :
case T_INT :
case T_FLOAT : size = 4; break;
default : size = 8; break;
}
return VMStorage(StorageType::STACK, size,
checked_cast<uint16_t>(reg->reg2stack() * VMRegImpl::stack_slot_size));
} else if (!reg->is_valid()) {
return VMStorage::invalid();
}
ShouldNotReachHere();
return VMStorage::invalid();
}
#endif // CPU_S390_VMSTORAGE_S390_INLINE_HPP
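The REG32_MASK/REG64_MASK constants above record which 32-bit halves of a 64-bit register a value occupies, so int-sized Java arguments can be sign-extended before a C call (see the z_lgfr in foreignGlobals_s390.cpp). A small sketch of the mask selection by Java basic type (the char-coded descriptors below stand in for BasicType):

#include <cassert>
#include <cstdint>

// 0b01: value lives in the low 32-bit half only; 0b11: both halves.
constexpr uint16_t REG32 = 0b01;
constexpr uint16_t REG64 = 0b11;

static uint16_t mask_for_java_type(char descriptor) {
  switch (descriptor) {
    case 'Z': case 'B': case 'C': case 'S': case 'I': return REG32;  // sub-int and int
    default:                                          return REG64;  // long, references
  }
}

int main() {
  assert(mask_for_java_type('I') == REG32);
  assert(mask_for_java_type('J') == REG64);
  return 0;
}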

View File

@@ -146,5 +146,6 @@ IntelJccErratumAlignment::~IntelJccErratumAlignment() {
return;
}
assert(pc() - _start_pc > 0, "No instruction aligned");
assert(!IntelJccErratum::is_crossing_or_ending_at_32_byte_boundary(_start_pc, pc()), "Invalid jcc_size estimate");
}

View File

@@ -79,8 +79,8 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
intptr_t *sp, *fp; // sp is really our unextended_sp
if (FKind::interpreted) {
- assert((intptr_t*)f.at(frame::interpreter_frame_last_sp_offset) == nullptr
- || f.unextended_sp() == (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset), "");
+ assert((intptr_t*)f.at_relative_or_null(frame::interpreter_frame_last_sp_offset) == nullptr
+ || f.unextended_sp() == (intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset), "");
intptr_t locals_offset = *f.addr_at(frame::interpreter_frame_locals_offset);
// If the caller.is_empty(), i.e. we're freezing into an empty chunk, then we set
// the chunk's argsize in finalize_freeze and make room for it above the unextended_sp
@@ -120,7 +120,7 @@ frame FreezeBase::new_heap_frame(frame& f, frame& caller) {
void FreezeBase::adjust_interpreted_frame_unextended_sp(frame& f) {
assert((f.at(frame::interpreter_frame_last_sp_offset) != 0) || (f.unextended_sp() == f.sp()), "");
- intptr_t* real_unextended_sp = (intptr_t*)f.at(frame::interpreter_frame_last_sp_offset);
+ intptr_t* real_unextended_sp = (intptr_t*)f.at_relative_or_null(frame::interpreter_frame_last_sp_offset);
if (real_unextended_sp != nullptr) {
f.set_unextended_sp(real_unextended_sp); // can be null at a safepoint
}
@@ -141,8 +141,8 @@ inline void FreezeBase::relativize_interpreted_frame_metadata(const frame& f, co
|| (f.unextended_sp() == f.sp()), "");
assert(f.fp() > (intptr_t*)f.at(frame::interpreter_frame_initial_sp_offset), "");
- // at(frame::interpreter_frame_last_sp_offset) can be null at safepoint preempts
- *hf.addr_at(frame::interpreter_frame_last_sp_offset) = hf.unextended_sp() - hf.fp();
+ // Make sure that last_sp is already relativized.
+ assert((intptr_t*)hf.at_relative(frame::interpreter_frame_last_sp_offset) == hf.unextended_sp(), "");
// Make sure that locals is already relativized.
assert((*hf.addr_at(frame::interpreter_frame_locals_offset) == frame::sender_sp_offset + f.interpreter_frame_method()->max_locals() - 1), "");
@@ -282,7 +282,9 @@ static inline void derelativize_one(intptr_t* const fp, int offset) {
inline void ThawBase::derelativize_interpreted_frame_metadata(const frame& hf, const frame& f) {
intptr_t* vfp = f.fp();
- derelativize_one(vfp, frame::interpreter_frame_last_sp_offset);
+ // Make sure that last_sp is kept relativized.
+ assert((intptr_t*)f.at_relative(frame::interpreter_frame_last_sp_offset) == f.unextended_sp(), "");
derelativize_one(vfp, frame::interpreter_frame_initial_sp_offset);
}

View File

@@ -125,7 +125,8 @@ inline intptr_t* ContinuationHelper::InterpretedFrame::frame_top(const frame& f,
assert(res == (intptr_t*)f.interpreter_frame_monitor_end() - expression_stack_sz, "");
assert(res >= f.unextended_sp(),
"res: " INTPTR_FORMAT " initial_sp: " INTPTR_FORMAT " last_sp: " INTPTR_FORMAT " unextended_sp: " INTPTR_FORMAT " expression_stack_size: %d",
- p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at(frame::interpreter_frame_last_sp_offset), p2i(f.unextended_sp()), expression_stack_sz);
+ p2i(res), p2i(f.addr_at(frame::interpreter_frame_initial_sp_offset)), f.at_relative_or_null(frame::interpreter_frame_last_sp_offset),
+ p2i(f.unextended_sp()), expression_stack_sz);
return res;
}

View File

@@ -166,7 +166,7 @@ void DowncallStubGenerator::generate() {
allocated_frame_size += arg_shuffle.out_arg_bytes();
// when we don't use a return buffer we need to spill the return value around our slow path calls
- bool should_save_return_value = !_needs_return_buffer && _needs_transition;
+ bool should_save_return_value = !_needs_return_buffer;
RegSpiller out_reg_spiller(_output_registers);
int spill_rsp_offset = -1;
@@ -201,7 +201,9 @@ void DowncallStubGenerator::generate() {
__ enter();
// return address and rbp are already in place
- __ subptr(rsp, allocated_frame_size); // prolog
+ if (allocated_frame_size > 0) {
+ __ subptr(rsp, allocated_frame_size); // prolog
+ }
_frame_complete = __ pc() - start;

View File

@@ -352,7 +352,9 @@ void frame::interpreter_frame_set_monitor_end(BasicObjectLock* value) {
// Used by template based interpreter deoptimization
void frame::interpreter_frame_set_last_sp(intptr_t* sp) {
- *((intptr_t**)addr_at(interpreter_frame_last_sp_offset)) = sp;
+ assert(is_interpreted_frame(), "interpreted frame expected");
+ // set relativized last_sp
+ ptr_at_put(interpreter_frame_last_sp_offset, sp != nullptr ? (sp - fp()) : 0);
}
frame frame::sender_for_entry_frame(RegisterMap* map) const {
@@ -496,7 +498,7 @@ bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
// do some validation of frame elements
// first the method
- Method* m = *interpreter_frame_method_addr();
+ Method* m = safe_interpreter_frame_method();
// validate the method we'd find in this potential sender
if (!Method::is_valid_method(m)) return false;

View File

@@ -250,7 +250,9 @@ inline intptr_t* frame::interpreter_frame_locals() const {
}
inline intptr_t* frame::interpreter_frame_last_sp() const {
- return (intptr_t*)at(interpreter_frame_last_sp_offset);
+ intptr_t n = *addr_at(interpreter_frame_last_sp_offset);
+ assert(n <= 0, "n: " INTPTR_FORMAT, n);
+ return n != 0 ? &fp()[n] : nullptr;
}
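The frame hunks above stop storing last_sp as an absolute pointer and keep it as a word offset relative to fp (always <= 0, with 0 reserved for "no last_sp"), which stays valid when an interpreter frame is copied into a continuation chunk at a different address. A standalone sketch of the encode/decode arithmetic (the array and slot index are invented):

#include <cassert>
#include <cstdint>

int main() {
  intptr_t frame[8] = {};
  intptr_t* fp = &frame[6];          // frame pointer (stack grows down)
  intptr_t* sp = &frame[2];          // some last_sp below fp
  const int last_sp_slot = 0;        // slot index relative to fp (illustrative)
  fp[last_sp_slot] = sp - fp;        // relativize: store a negative word count
  intptr_t n = fp[last_sp_slot];
  assert(n <= 0);
  intptr_t* decoded = (n != 0) ? &fp[n] : nullptr;  // derelativize on read
  assert(decoded == sp);             // survives even if the frame were memcpy'd
  return 0;
}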
inline intptr_t* frame::interpreter_frame_bcp_addr() const {

View File

@@ -356,7 +356,7 @@ static void emit_store_fast_path_check_c2(MacroAssembler* masm, Address ref_addr
// This is a JCC erratum mitigation wrapper for calling the inner check
int size = store_fast_path_check_size(masm, ref_addr, is_atomic, medium_path);
// Emit JCC erratum mitigation nops with the right size
- IntelJccErratumAlignment(*masm, size);
+ IntelJccErratumAlignment intel_alignment(*masm, size);
// Emit the JCC erratum mitigation guarded code
emit_store_fast_path_check(masm, ref_addr, is_atomic, medium_path);
#endif

View File

@@ -74,8 +74,10 @@ static void z_load_barrier(MacroAssembler& _masm, const MachNode* node, Address
return;
}
ZLoadBarrierStubC2* const stub = ZLoadBarrierStubC2::create(node, ref_addr, ref);
- IntelJccErratumAlignment(_masm, 6);
- __ jcc(Assembler::above, *stub->entry());
+ {
+ IntelJccErratumAlignment intel_alignment(_masm, 6);
+ __ jcc(Assembler::above, *stub->entry());
+ }
__ bind(*stub->continuation());
}
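Both x86 hunks above fix the same C++ pitfall: IntelJccErratumAlignment(_masm, 6); constructs an unnamed temporary whose destructor runs at the end of that full expression, so the RAII scope never covers the following jcc; naming the object (or adding a block) extends its lifetime to the end of the enclosing scope. A generic illustration with an invented Guard type:

#include <cstdio>

struct Guard {
  Guard()  { std::printf("enter scope\n"); }
  ~Guard() { std::printf("leave scope\n"); }
};

int main() {
  Guard();               // unnamed temporary: destroyed immediately, guards nothing
  std::printf("unguarded work\n");
  {
    Guard g;             // named object: lives to the end of the enclosing block
    std::printf("guarded work\n");
  }
  return 0;
}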

View File

@@ -32,6 +32,7 @@
#include "oops/markWord.hpp"
#include "oops/methodData.hpp"
#include "oops/method.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/jvmtiThreadState.hpp"
@@ -794,7 +795,10 @@ void InterpreterMacroAssembler::prepare_to_jump_from_interpreted() {
// set sender sp
lea(_bcp_register, Address(rsp, wordSize));
// record last_sp
- movptr(Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize), _bcp_register);
+ mov(rcx, _bcp_register);
+ subptr(rcx, rbp);
+ sarptr(rcx, LogBytesPerWord);
+ movptr(Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize), rcx);
}
@@ -2115,7 +2119,22 @@ void InterpreterMacroAssembler::load_resolved_indy_entry(Register cache, Registe
if (is_power_of_2(sizeof(ResolvedIndyEntry))) {
shll(index, log2i_exact(sizeof(ResolvedIndyEntry))); // Scale index by power of 2
} else {
- imull(index, index, sizeof(ResolvedIndyEntry)); // Scale the index to be the entry index * sizeof(ResolvedInvokeDynamicInfo)
+ imull(index, index, sizeof(ResolvedIndyEntry)); // Scale the index to be the entry index * sizeof(ResolvedIndyEntry)
}
lea(cache, Address(cache, index, Address::times_1, Array<ResolvedIndyEntry>::base_offset_in_bytes()));
}
void InterpreterMacroAssembler::load_field_entry(Register cache, Register index, int bcp_offset) {
// Get index out of bytecode pointer
movptr(cache, Address(rbp, frame::interpreter_frame_cache_offset * wordSize));
get_cache_index_at_bcp(index, bcp_offset, sizeof(u2));
movptr(cache, Address(cache, ConstantPoolCache::field_entries_offset()));
// Take shortcut if the size is a power of 2
if (is_power_of_2(sizeof(ResolvedFieldEntry))) {
shll(index, log2i_exact(sizeof(ResolvedFieldEntry))); // Scale index by power of 2
} else {
imull(index, index, sizeof(ResolvedFieldEntry)); // Scale the index to be the entry index * sizeof(ResolvedFieldEntry)
}
lea(cache, Address(cache, index, Address::times_1, Array<ResolvedFieldEntry>::base_offset_in_bytes()));
}
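load_field_entry and load_resolved_indy_entry above scale a u2 index by the entry size, taking a shift when sizeof(ResolvedFieldEntry) happens to be a power of two and an imul otherwise. The same shortcut in portable C++ (the entry sizes below are examples, not the real sizeof values):

#include <cassert>

constexpr bool is_power_of_2(unsigned x) { return x && !(x & (x - 1)); }
constexpr int log2_exact(unsigned x) { int n = 0; while (x > 1) { x >>= 1; ++n; } return n; }

// Shift when possible (cheap), multiply otherwise, as in the hunk above.
static unsigned scale_index(unsigned index, unsigned entry_size) {
  return is_power_of_2(entry_size) ? (index << log2_exact(entry_size))
                                   : index * entry_size;
}

int main() {
  assert(scale_index(3, 16) == 48);  // shift path
  assert(scale_index(3, 12) == 36);  // multiply path
  return 0;
}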

View File

@@ -307,7 +307,7 @@ class InterpreterMacroAssembler: public MacroAssembler {
void profile_parameters_type(Register mdp, Register tmp1, Register tmp2);
void load_resolved_indy_entry(Register cache, Register index);
void load_field_entry(Register cache, Register index, int bcp_offset = 1);
};
#endif // CPU_X86_INTERP_MASM_X86_HPP

View File

@@ -206,7 +206,8 @@ address TemplateInterpreterGenerator::generate_return_entry_for(TosState state,
#endif // _LP64
// Restore stack bottom in case i2c adjusted stack
- __ movptr(rsp, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
+ __ movptr(rcx, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
+ __ lea(rsp, Address(rbp, rcx, Address::times_ptr));
// and null it as marker that esp is now tos until next java call
__ movptr(Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize), NULL_WORD);
@@ -1616,6 +1617,7 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
#ifndef _LP64
__ mov(rax, rsp);
__ movptr(rbx, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
__ lea(rbx, Address(rbp, rbx, Address::times_ptr));
__ get_thread(thread);
// PC must point into interpreter here
__ set_last_Java_frame(thread, noreg, rbp, __ pc(), noreg);
@@ -1624,6 +1626,7 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
#else
__ mov(c_rarg1, rsp);
__ movptr(c_rarg2, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
__ lea(c_rarg2, Address(rbp, c_rarg2, Address::times_ptr));
// PC must point into interpreter here
__ set_last_Java_frame(noreg, rbp, __ pc(), rscratch1);
__ super_call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::popframe_move_outgoing_args), r15_thread, c_rarg1, c_rarg2);
@@ -1631,7 +1634,8 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
__ reset_last_Java_frame(thread, true);
// Restore the last_sp and null it out
- __ movptr(rsp, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
+ __ movptr(rcx, Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize));
+ __ lea(rsp, Address(rbp, rcx, Address::times_ptr));
__ movptr(Address(rbp, frame::interpreter_frame_last_sp_offset * wordSize), NULL_WORD);
__ restore_bcp();

View File

@@ -36,6 +36,7 @@
#include "oops/methodData.hpp"
#include "oops/objArrayKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/resolvedFieldEntry.hpp"
#include "oops/resolvedIndyEntry.hpp"
#include "prims/jvmtiExport.hpp"
#include "prims/methodHandles.hpp"
@@ -197,7 +198,13 @@ void TemplateTable::patch_bytecode(Bytecodes::Code bc, Register bc_reg,
// additional, required work.
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
assert(load_bc_into_bc_reg, "we use bc_reg as temp");
- __ get_cache_and_index_and_bytecode_at_bcp(temp_reg, bc_reg, temp_reg, byte_no, 1);
+ __ load_field_entry(temp_reg, bc_reg);
+ if (byte_no == f1_byte) {
+ __ load_unsigned_byte(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::get_code_offset())));
+ } else {
+ __ load_unsigned_byte(temp_reg, Address(temp_reg, in_bytes(ResolvedFieldEntry::put_code_offset())));
+ }
__ movl(bc_reg, bc);
__ cmpl(temp_reg, (int) 0);
__ jcc(Assembler::zero, L_patch_done); // don't patch
@@ -2656,11 +2663,6 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
Label resolved;
Bytecodes::Code code = bytecode();
- switch (code) {
- case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
- case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
- default: break;
- }
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ get_cache_and_index_and_bytecode_at_bcp(cache, index, temp, byte_no, 1, index_size);
@@ -2691,6 +2693,68 @@ void TemplateTable::resolve_cache_and_index(int byte_no,
}
}
void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
Register cache,
Register index) {
const Register temp = rbx;
assert_different_registers(cache, index, temp);
Label resolved;
Bytecodes::Code code = bytecode();
switch (code) {
case Bytecodes::_nofast_getfield: code = Bytecodes::_getfield; break;
case Bytecodes::_nofast_putfield: code = Bytecodes::_putfield; break;
default: break;
}
assert(byte_no == f1_byte || byte_no == f2_byte, "byte_no out of range");
__ load_field_entry(cache, index);
if (byte_no == f1_byte) {
__ load_unsigned_byte(temp, Address(cache, in_bytes(ResolvedFieldEntry::get_code_offset())));
} else {
__ load_unsigned_byte(temp, Address(cache, in_bytes(ResolvedFieldEntry::put_code_offset())));
}
__ cmpl(temp, code); // have we resolved this bytecode?
__ jcc(Assembler::equal, resolved);
// resolve first time through
address entry = CAST_FROM_FN_PTR(address, InterpreterRuntime::resolve_from_cache);
__ movl(temp, code);
__ call_VM(noreg, entry, temp);
// Update registers with resolved info
__ load_field_entry(cache, index);
__ bind(resolved);
}
void TemplateTable::load_resolved_field_entry(Register obj,
Register cache,
Register tos_state,
Register offset,
Register flags,
bool is_static = false) {
assert_different_registers(cache, tos_state, flags, offset);
// Field offset
__ load_sized_value(offset, Address(cache, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// Flags
__ load_unsigned_byte(flags, Address(cache, in_bytes(ResolvedFieldEntry::flags_offset())));
// TOS state
__ load_unsigned_byte(tos_state, Address(cache, in_bytes(ResolvedFieldEntry::type_offset())));
// Klass overwrite register
if (is_static) {
__ movptr(obj, Address(cache, ResolvedFieldEntry::field_holder_offset()));
const int mirror_offset = in_bytes(Klass::java_mirror_offset());
__ movptr(obj, Address(obj, mirror_offset));
__ resolve_oop_handle(obj, rscratch2);
}
}
// The cache and index registers must be set before call
void TemplateTable::load_field_cp_cache_entry(Register obj,
Register cache,
@@ -2838,9 +2902,7 @@ void TemplateTable::jvmti_post_field_access(Register cache,
__ jcc(Assembler::zero, L1);
// cache entry pointer
- __ addptr(cache, in_bytes(ConstantPoolCache::base_offset()));
- __ shll(index, LogBytesPerWord);
- __ addptr(cache, index);
+ __ load_field_entry(cache, index);
if (is_static) {
__ xorptr(rax, rax); // null object reference
} else {
@@ -2851,8 +2913,9 @@ void TemplateTable::jvmti_post_field_access(Register cache,
// rax,: object pointer or null
// cache: cache entry pointer
__ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::post_field_access),
- rax, cache);
- __ get_cache_and_index_at_bcp(cache, index, 1);
+ rax, cache);
+ __ load_field_entry(cache, index);
__ bind(L1);
}
}
@@ -2866,16 +2929,17 @@ void TemplateTable::pop_and_check_object(Register r) {
void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteControl rc) {
transition(vtos, vtos);
- const Register obj = LP64_ONLY(c_rarg3) NOT_LP64(rcx);
const Register cache = rcx;
const Register index = rdx;
+ const Register obj = LP64_ONLY(c_rarg3) NOT_LP64(rcx);
const Register off = rbx;
- const Register flags = rax;
+ const Register tos_state = rax;
+ const Register flags = rdx;
const Register bc = LP64_ONLY(c_rarg3) NOT_LP64(rcx); // uses same reg as obj, so don't mix them
- resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
+ resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_access(cache, index, is_static, false);
- load_field_cp_cache_entry(obj, cache, index, off, flags, is_static);
+ load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
if (!is_static) pop_and_check_object(obj);
@@ -2883,13 +2947,11 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
Label Done, notByte, notBool, notInt, notShort, notChar, notLong, notFloat, notObj;
- __ shrl(flags, ConstantPoolCacheEntry::tos_state_shift);
- // Make sure we don't need to mask edx after the above shift
- assert(btos == 0, "change code, btos != 0");
- __ andl(flags, ConstantPoolCacheEntry::tos_state_mask);
+ __ testl(tos_state, tos_state);
__ jcc(Assembler::notZero, notByte);
// btos
__ access_load_at(T_BYTE, IN_HEAP, rax, field, noreg, noreg);
__ push(btos);
@@ -2900,7 +2962,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notByte);
- __ cmpl(flags, ztos);
+ __ cmpl(tos_state, ztos);
__ jcc(Assembler::notEqual, notBool);
// ztos (same code as btos)
@@ -2914,7 +2976,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notBool);
- __ cmpl(flags, atos);
+ __ cmpl(tos_state, atos);
__ jcc(Assembler::notEqual, notObj);
// atos
do_oop_load(_masm, field, rax);
@@ -2925,7 +2987,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notObj);
- __ cmpl(flags, itos);
+ __ cmpl(tos_state, itos);
__ jcc(Assembler::notEqual, notInt);
// itos
__ access_load_at(T_INT, IN_HEAP, rax, field, noreg, noreg);
@@ -2937,7 +2999,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notInt);
- __ cmpl(flags, ctos);
+ __ cmpl(tos_state, ctos);
__ jcc(Assembler::notEqual, notChar);
// ctos
__ access_load_at(T_CHAR, IN_HEAP, rax, field, noreg, noreg);
@@ -2949,7 +3011,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notChar);
- __ cmpl(flags, stos);
+ __ cmpl(tos_state, stos);
__ jcc(Assembler::notEqual, notShort);
// stos
__ access_load_at(T_SHORT, IN_HEAP, rax, field, noreg, noreg);
@@ -2961,7 +3023,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notShort);
- __ cmpl(flags, ltos);
+ __ cmpl(tos_state, ltos);
__ jcc(Assembler::notEqual, notLong);
// ltos
// Generate code as if volatile (x86_32). There just aren't enough registers to
@@ -2973,7 +3035,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ jmp(Done);
__ bind(notLong);
- __ cmpl(flags, ftos);
+ __ cmpl(tos_state, ftos);
__ jcc(Assembler::notEqual, notFloat);
// ftos
@@ -2988,7 +3050,7 @@ void TemplateTable::getfield_or_static(int byte_no, bool is_static, RewriteContr
__ bind(notFloat);
#ifdef ASSERT
Label notDouble;
- __ cmpl(flags, dtos);
+ __ cmpl(tos_state, dtos);
__ jcc(Assembler::notEqual, notDouble);
#endif
// dtos
@@ -3028,29 +3090,25 @@ void TemplateTable::getstatic(int byte_no) {
// The registers cache and index expected to be set before call.
// The function may destroy various registers, just not the cache and index registers.
void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is_static) {
- const Register robj = LP64_ONLY(c_rarg2) NOT_LP64(rax);
- const Register RBX = LP64_ONLY(c_rarg1) NOT_LP64(rbx);
- const Register RCX = LP64_ONLY(c_rarg3) NOT_LP64(rcx);
- const Register RDX = LP64_ONLY(rscratch1) NOT_LP64(rdx);
- ByteSize cp_base_offset = ConstantPoolCache::base_offset();
+ // Cache is rcx and index is rdx
+ const Register entry = LP64_ONLY(c_rarg2) NOT_LP64(rax); // ResolvedFieldEntry
+ const Register obj = LP64_ONLY(c_rarg1) NOT_LP64(rbx); // Object pointer
+ const Register value = LP64_ONLY(c_rarg3) NOT_LP64(rcx); // JValue object
if (JvmtiExport::can_post_field_modification()) {
// Check to see if a field modification watch has been set before
// we take the time to call into the VM.
Label L1;
- assert_different_registers(cache, index, rax);
+ assert_different_registers(cache, obj, rax);
__ mov32(rax, ExternalAddress((address)JvmtiExport::get_field_modification_count_addr()));
__ testl(rax, rax);
__ jcc(Assembler::zero, L1);
- __ get_cache_and_index_at_bcp(robj, RDX, 1);
+ __ mov(entry, cache);
if (is_static) {
// Life is simple. Null out the object pointer.
- __ xorl(RBX, RBX);
+ __ xorl(obj, obj);
} else {
// Life is harder. The stack holds the value on top, followed by
@@ -3060,53 +3118,44 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
#ifndef _LP64
Label two_word, valsize_known;
#endif
- __ movl(RCX, Address(robj, RDX,
- Address::times_ptr,
- in_bytes(cp_base_offset +
- ConstantPoolCacheEntry::flags_offset())));
- NOT_LP64(__ mov(rbx, rsp));
- __ shrl(RCX, ConstantPoolCacheEntry::tos_state_shift);
- // Make sure we don't need to mask rcx after the above shift
- ConstantPoolCacheEntry::verify_tos_state_shift();
+ __ load_unsigned_byte(value, Address(entry, in_bytes(ResolvedFieldEntry::type_offset())));
#ifdef _LP64
- __ movptr(c_rarg1, at_tos_p1()); // initially assume a one word jvalue
- __ cmpl(c_rarg3, ltos);
+ __ movptr(obj, at_tos_p1()); // initially assume a one word jvalue
+ __ cmpl(value, ltos);
__ cmovptr(Assembler::equal,
- c_rarg1, at_tos_p2()); // ltos (two word jvalue)
- __ cmpl(c_rarg3, dtos);
+ obj, at_tos_p2()); // ltos (two word jvalue)
+ __ cmpl(value, dtos);
__ cmovptr(Assembler::equal,
- c_rarg1, at_tos_p2()); // dtos (two word jvalue)
+ obj, at_tos_p2()); // dtos (two word jvalue)
#else
__ cmpl(rcx, ltos);
__ mov(obj, rsp);
__ cmpl(value, ltos);
__ jccb(Assembler::equal, two_word);
__ cmpl(rcx, dtos);
__ cmpl(value, dtos);
__ jccb(Assembler::equal, two_word);
__ addptr(rbx, Interpreter::expr_offset_in_bytes(1)); // one word jvalue (not ltos, dtos)
__ addptr(obj, Interpreter::expr_offset_in_bytes(1)); // one word jvalue (not ltos, dtos)
__ jmpb(valsize_known);
__ bind(two_word);
__ addptr(rbx, Interpreter::expr_offset_in_bytes(2)); // two words jvalue
__ addptr(obj, Interpreter::expr_offset_in_bytes(2)); // two words jvalue
__ bind(valsize_known);
// setup object pointer
__ movptr(rbx, Address(rbx, 0));
__ movptr(obj, Address(obj, 0));
#endif
}
- // cache entry pointer
- __ addptr(robj, in_bytes(cp_base_offset));
- __ shll(RDX, LogBytesPerWord);
- __ addptr(robj, RDX);
// object (tos)
- __ mov(RCX, rsp);
- // c_rarg1: object pointer set up above (null if static)
- // c_rarg2: cache entry pointer
- // c_rarg3: jvalue object on the stack
+ __ mov(value, rsp);
+ // obj: object pointer set up above (null if static)
+ // cache: field entry pointer
+ // value: jvalue object on the stack
__ call_VM(noreg,
- CAST_FROM_FN_PTR(address,
- InterpreterRuntime::post_field_modification),
- RBX, robj, RCX);
- __ get_cache_and_index_at_bcp(cache, index, 1);
+ CAST_FROM_FN_PTR(address,
+ InterpreterRuntime::post_field_modification),
+ obj, entry, value);
+ // Reload field entry
+ __ load_field_entry(cache, index);
__ bind(L1);
}
}
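
The jvalue-locating logic above is easier to see outside assembly: the value sits on top of the expression stack, and it spans two words for long/double and one word otherwise, which decides where the jvalue pointer passed to post_field_modification must point. A minimal standalone C++ sketch of that rule; the names are illustrative, not HotSpot's:

#include <cstdio>

// Schematic TOS states; HotSpot's real TosState enum lives in
// utilities/globalDefinitions.hpp and these values are illustrative only.
enum TosStateModel { btos_m, ztos_m, ctos_m, stos_m, itos_m,
                     ltos_m, ftos_m, dtos_m, atos_m };

// One-word values leave the jvalue at the top expression-stack slot;
// ltos/dtos values span two words, so the jvalue starts one slot deeper.
// This mirrors the at_tos_p1()/at_tos_p2() choice made above.
static int jvalue_depth_in_slots(TosStateModel ts) {
  return (ts == ltos_m || ts == dtos_m) ? 2 : 1;
}

int main() {
  std::printf("itos -> %d slot(s)\n", jvalue_depth_in_slots(itos_m)); // 1
  std::printf("ltos -> %d slot(s)\n", jvalue_depth_in_slots(ltos_m)); // 2
  return 0;
}
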
@@ -3114,42 +3163,41 @@ void TemplateTable::jvmti_post_field_mod(Register cache, Register index, bool is
void TemplateTable::putfield_or_static(int byte_no, bool is_static, RewriteControl rc) {
transition(vtos, vtos);
+ const Register obj = rcx;
const Register cache = rcx;
const Register index = rdx;
- const Register obj = rcx;
+ const Register tos_state = rdx;
const Register off = rbx;
const Register flags = rax;
- resolve_cache_and_index(byte_no, cache, index, sizeof(u2));
+ resolve_cache_and_index_for_field(byte_no, cache, index);
jvmti_post_field_mod(cache, index, is_static);
- load_field_cp_cache_entry(obj, cache, index, off, flags, is_static);
+ load_resolved_field_entry(obj, cache, tos_state, off, flags, is_static);
// [jk] not needed currently
// volatile_barrier(Assembler::Membar_mask_bits(Assembler::LoadStore |
// Assembler::StoreStore));
Label notVolatile, Done;
- __ movl(rdx, flags);
- __ shrl(rdx, ConstantPoolCacheEntry::is_volatile_shift);
- __ andl(rdx, 0x1);
// Check for volatile store
- __ testl(rdx, rdx);
+ __ andl(flags, (1 << ResolvedFieldEntry::is_volatile_shift));
+ __ testl(flags, flags);
__ jcc(Assembler::zero, notVolatile);
- putfield_or_static_helper(byte_no, is_static, rc, obj, off, flags);
+ putfield_or_static_helper(byte_no, is_static, rc, obj, off, tos_state);
volatile_barrier(Assembler::Membar_mask_bits(Assembler::StoreLoad |
Assembler::StoreStore));
__ jmp(Done);
__ bind(notVolatile);
- putfield_or_static_helper(byte_no, is_static, rc, obj, off, flags);
+ putfield_or_static_helper(byte_no, is_static, rc, obj, off, tos_state);
__ bind(Done);
}
void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, RewriteControl rc,
- Register obj, Register off, Register flags) {
+ Register obj, Register off, Register tos_state) {
// field addresses
const Address field(obj, off, Address::times_1, 0*wordSize);
@@ -3161,10 +3209,8 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
const Register bc = LP64_ONLY(c_rarg3) NOT_LP64(rcx);
- __ shrl(flags, ConstantPoolCacheEntry::tos_state_shift);
- assert(btos == 0, "change code, btos != 0");
- __ andl(flags, ConstantPoolCacheEntry::tos_state_mask);
+ // Test TOS state
+ __ testl(tos_state, tos_state);
__ jcc(Assembler::notZero, notByte);
// btos
@@ -3179,7 +3225,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notByte);
- __ cmpl(flags, ztos);
+ __ cmpl(tos_state, ztos);
__ jcc(Assembler::notEqual, notBool);
// ztos
@@ -3194,7 +3240,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notBool);
- __ cmpl(flags, atos);
+ __ cmpl(tos_state, atos);
__ jcc(Assembler::notEqual, notObj);
// atos
@@ -3210,7 +3256,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notObj);
- __ cmpl(flags, itos);
+ __ cmpl(tos_state, itos);
__ jcc(Assembler::notEqual, notInt);
// itos
@@ -3225,7 +3271,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notInt);
- __ cmpl(flags, ctos);
+ __ cmpl(tos_state, ctos);
__ jcc(Assembler::notEqual, notChar);
// ctos
@@ -3240,7 +3286,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notChar);
- __ cmpl(flags, stos);
+ __ cmpl(tos_state, stos);
__ jcc(Assembler::notEqual, notShort);
// stos
@@ -3255,7 +3301,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notShort);
- __ cmpl(flags, ltos);
+ __ cmpl(tos_state, ltos);
__ jcc(Assembler::notEqual, notLong);
// ltos
@@ -3273,7 +3319,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
}
__ bind(notLong);
- __ cmpl(flags, ftos);
+ __ cmpl(tos_state, ftos);
__ jcc(Assembler::notEqual, notFloat);
// ftos
@@ -3290,7 +3336,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
__ bind(notFloat);
#ifdef ASSERT
Label notDouble;
- __ cmpl(flags, dtos);
+ __ cmpl(tos_state, dtos);
__ jcc(Assembler::notEqual, notDouble);
#endif
@@ -3360,8 +3406,8 @@ void TemplateTable::jvmti_post_fast_field_mod() {
}
__ mov(scratch, rsp); // points to jvalue on the stack
// access constant pool cache entry
- LP64_ONLY(__ get_cache_entry_pointer_at_bcp(c_rarg2, rax, 1));
- NOT_LP64(__ get_cache_entry_pointer_at_bcp(rax, rdx, 1));
+ LP64_ONLY(__ load_field_entry(c_rarg2, rax));
+ NOT_LP64(__ load_field_entry(rax, rdx));
__ verify_oop(rbx);
// rbx: object pointer copied above
// c_rarg2: cache entry pointer
@@ -3388,29 +3434,18 @@ void TemplateTable::jvmti_post_fast_field_mod() {
void TemplateTable::fast_storefield(TosState state) {
transition(state, vtos);
- ByteSize base = ConstantPoolCache::base_offset();
+ Register cache = rcx;
+ Label notVolatile, Done;
jvmti_post_fast_field_mod();
- // access constant pool cache
- __ get_cache_and_index_at_bcp(rcx, rbx, 1);
- // test for volatile with rdx but rdx is tos register for lputfield.
- __ movl(rdx, Address(rcx, rbx, Address::times_ptr,
- in_bytes(base +
- ConstantPoolCacheEntry::flags_offset())));
- // replace index with field offset from cache entry
- __ movptr(rbx, Address(rcx, rbx, Address::times_ptr,
- in_bytes(base + ConstantPoolCacheEntry::f2_offset())));
// [jk] not needed currently
// volatile_barrier(Assembler::Membar_mask_bits(Assembler::LoadStore |
// Assembler::StoreStore));
- Label notVolatile, Done;
- __ shrl(rdx, ConstantPoolCacheEntry::is_volatile_shift);
- __ andl(rdx, 0x1);
+ __ push(rax);
+ __ load_field_entry(rcx, rax);
+ load_resolved_field_entry(noreg, cache, rax, rbx, rdx);
+ // RBX: field offset, RCX: RAX: TOS, RDX: flags
+ __ andl(rdx, (1 << ResolvedFieldEntry::is_volatile_shift));
+ __ pop(rax);
// Get object from stack
pop_and_check_object(rcx);
@@ -3485,8 +3520,8 @@ void TemplateTable::fast_accessfield(TosState state) {
__ testl(rcx, rcx);
__ jcc(Assembler::zero, L1);
// access constant pool cache entry
- LP64_ONLY(__ get_cache_entry_pointer_at_bcp(c_rarg2, rcx, 1));
- NOT_LP64(__ get_cache_entry_pointer_at_bcp(rcx, rdx, 1));
+ LP64_ONLY(__ load_field_entry(c_rarg2, rcx));
+ NOT_LP64(__ load_field_entry(rcx, rdx));
__ verify_oop(rax);
__ push_ptr(rax); // save object pointer before call_VM() clobbers it
LP64_ONLY(__ mov(c_rarg1, rax));
@@ -3499,18 +3534,8 @@ void TemplateTable::fast_accessfield(TosState state) {
}
// access constant pool cache
- __ get_cache_and_index_at_bcp(rcx, rbx, 1);
- // replace index with field offset from cache entry
- // [jk] not needed currently
- // __ movl(rdx, Address(rcx, rbx, Address::times_8,
- // in_bytes(ConstantPoolCache::base_offset() +
- // ConstantPoolCacheEntry::flags_offset())));
- // __ shrl(rdx, ConstantPoolCacheEntry::is_volatile_shift);
- // __ andl(rdx, 0x1);
- //
- __ movptr(rbx, Address(rcx, rbx, Address::times_ptr,
- in_bytes(ConstantPoolCache::base_offset() +
- ConstantPoolCacheEntry::f2_offset())));
+ __ load_field_entry(rcx, rbx);
+ __ load_sized_value(rbx, Address(rcx, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// rax: object
__ verify_oop(rax);
@@ -3565,11 +3590,9 @@ void TemplateTable::fast_xaccess(TosState state) {
// get receiver
__ movptr(rax, aaddress(0));
// access constant pool cache
- __ get_cache_and_index_at_bcp(rcx, rdx, 2);
- __ movptr(rbx,
- Address(rcx, rdx, Address::times_ptr,
- in_bytes(ConstantPoolCache::base_offset() +
- ConstantPoolCacheEntry::f2_offset())));
+ __ load_field_entry(rcx, rdx, 2);
+ __ load_sized_value(rbx, Address(rcx, in_bytes(ResolvedFieldEntry::field_offset_offset())), sizeof(int), true /*is_signed*/);
// make sure exception is reported in correct bcp range (getfield is
// next instruction)
__ increment(rbcp);
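
All of the rewritten paths above read the same small record: a ResolvedFieldEntry carrying the field offset, a one-byte TOS state, and a flags byte with an is-volatile bit. A rough standalone C++ model of those reads follows; the field widths and the bit position are illustrative only, not HotSpot's actual ResolvedFieldEntry layout.

#include <cstdint>
#include <cstdio>

// Rough model of what the rewritten code reads through ResolvedFieldEntry
// accessors. Widths and the bit position are illustrative, not HotSpot's.
struct FieldEntryModel {
  int     field_offset; // what load_sized_value fetches above
  uint8_t tos_state;    // one byte, so load_unsigned_byte suffices
  uint8_t flags;        // carries an is-volatile bit, among others
};

static const int kIsVolatileShift = 0; // illustrative, not HotSpot's value

// Same idiom as the generated code: isolate the single bit, then test it.
static bool is_volatile(const FieldEntryModel& e) {
  return (e.flags & (1 << kIsVolatileShift)) != 0;
}

int main() {
  FieldEntryModel e = {16, 4, 1};
  std::printf("offset=%d volatile=%d\n", e.field_offset, (int)is_volatile(e));
  return 0;
}
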

View File

@@ -1805,7 +1805,7 @@ void VM_Version::get_processor_features() {
}
// Allocation prefetch settings
- intx cache_line_size = prefetch_data_size();
+ int cache_line_size = checked_cast<int>(prefetch_data_size());
if (FLAG_IS_DEFAULT(AllocatePrefetchStepSize) &&
(cache_line_size > AllocatePrefetchStepSize)) {
FLAG_SET_DEFAULT(AllocatePrefetchStepSize, cache_line_size);
@@ -1913,9 +1913,9 @@ void VM_Version::get_processor_features() {
}
}
if (AllocatePrefetchLines > 1) {
- log->print_cr(" at distance %d, %d lines of %d bytes", (int) AllocatePrefetchDistance, (int) AllocatePrefetchLines, (int) AllocatePrefetchStepSize);
+ log->print_cr(" at distance %d, %d lines of %d bytes", AllocatePrefetchDistance, AllocatePrefetchLines, AllocatePrefetchStepSize);
} else {
- log->print_cr(" at distance %d, one line of %d bytes", (int) AllocatePrefetchDistance, (int) AllocatePrefetchStepSize);
+ log->print_cr(" at distance %d, one line of %d bytes", AllocatePrefetchDistance, AllocatePrefetchStepSize);
}
}
@@ -3169,7 +3169,7 @@ bool VM_Version::is_intel_tsc_synched_at_init() {
return false;
}
- intx VM_Version::allocate_prefetch_distance(bool use_watermark_prefetch) {
+ int VM_Version::allocate_prefetch_distance(bool use_watermark_prefetch) {
// Hardware prefetching (distance/size in bytes):
// Pentium 3 - 64 / 32
// Pentium 4 - 256 / 128
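
The intx-to-int conversions in this file go through checked_cast, which narrows a value only if it survives the round trip. A minimal sketch of that idea (HotSpot's real helper in utilities/checkedCast.hpp differs in detail; this is only an illustration):

#include <cassert>
#include <cstdint>

// Sketch of a checked narrowing cast: convert, then assert that widening
// the result back reproduces the original value.
template <typename To, typename From>
static To checked_cast_sketch(From from) {
  To to = static_cast<To>(from);
  assert(static_cast<From>(to) == from && "checked_cast_sketch: value lost");
  return to;
}

int main() {
  intptr_t cache_line_size = 64;  // standing in for the intx value above
  int narrowed = checked_cast_sketch<int>(cache_line_size);
  return narrowed == 64 ? 0 : 1;
}
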

View File

@@ -750,7 +750,7 @@ public:
static bool supports_compare_and_exchange() { return true; }
- static intx allocate_prefetch_distance(bool use_watermark_prefetch);
+ static int allocate_prefetch_distance(bool use_watermark_prefetch);
// SSE2 and later processors implement a 'pause' instruction
// that can be used for efficient implementation of

View File

@@ -1566,6 +1566,8 @@ bool Matcher::match_rule_supported(int opcode) {
return false;
}
break;
+ case Op_FmaF:
+ case Op_FmaD:
case Op_FmaVD:
case Op_FmaVF:
if (!UseFMA) {
@@ -3960,11 +3962,11 @@ instruct onspinwait() %{
// a * b + c
instruct fmaD_reg(regD a, regD b, regD c) %{
- predicate(UseFMA);
match(Set c (FmaD c (Binary a b)));
format %{ "fmasd $a,$b,$c\t# $c = $a * $b + $c" %}
ins_cost(150);
ins_encode %{
+ assert(UseFMA, "Needs FMA instructions support.");
__ fmad($c$$XMMRegister, $a$$XMMRegister, $b$$XMMRegister, $c$$XMMRegister);
%}
ins_pipe( pipe_slow );
@@ -3972,11 +3974,11 @@ instruct fmaD_reg(regD a, regD b, regD c) %{
// a * b + c
instruct fmaF_reg(regF a, regF b, regF c) %{
- predicate(UseFMA);
match(Set c (FmaF c (Binary a b)));
format %{ "fmass $a,$b,$c\t# $c = $a * $b + $c" %}
ins_cost(150);
ins_encode %{
+ assert(UseFMA, "Needs FMA instructions support.");
__ fmaf($c$$XMMRegister, $a$$XMMRegister, $b$$XMMRegister, $c$$XMMRegister);
%}
ins_pipe( pipe_slow );
@@ -9864,6 +9866,7 @@ instruct vfma_reg_masked(vec dst, vec src2, vec src3, kReg mask) %{
match(Set dst (FmaVD (Binary dst src2) (Binary src3 mask)));
format %{ "vfma_masked $dst, $src2, $src3, $mask \t! vfma masked operation" %}
ins_encode %{
+ assert(UseFMA, "Needs FMA instructions support.");
int vlen_enc = vector_length_encoding(this);
BasicType bt = Matcher::vector_element_basic_type(this);
int opc = this->ideal_Opcode();
@@ -9878,6 +9881,7 @@ instruct vfma_mem_masked(vec dst, vec src2, memory src3, kReg mask) %{
match(Set dst (FmaVD (Binary dst src2) (Binary (LoadVector src3) mask)));
format %{ "vfma_masked $dst, $src2, $src3, $mask \t! vfma masked operation" %}
ins_encode %{
+ assert(UseFMA, "Needs FMA instructions support.");
int vlen_enc = vector_length_encoding(this);
BasicType bt = Matcher::vector_element_basic_type(this);
int opc = this->ideal_Opcode();
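
The fma rules above emit a single fused multiply-add, i.e. a*b+c rounded once. A short standalone check of why that differs from a separate multiply and add, using std::fma:

#include <cmath>
#include <cstdio>

// std::fma recovers the low-order bits that a separate multiply-then-add
// throws away, because the product is never rounded on its own.
int main() {
  double a = 1.0 + std::ldexp(1.0, -30);  // 1 + 2^-30, exactly representable
  double b = a;
  double prod = a * b;                    // rounded: the 2^-60 term is lost
  double err  = std::fma(a, b, -prod);    // exact residual of the product
  std::printf("rounded product  = %.17g\n", prod);
  std::printf("residual via fma = %g (expected 2^-60 ~ 8.67e-19)\n", err);
  return 0;
}
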

View File

@@ -605,7 +605,7 @@ int ZeroInterpreter::getter_entry(Method* method, intptr_t UNUSED, TRAPS) {
// Get the entry from the constant pool cache, and drop into
// the slow path if it has not been resolved
ConstantPoolCache* cache = method->constants()->cache();
- ConstantPoolCacheEntry* entry = cache->entry_at(index);
+ ResolvedFieldEntry* entry = cache->resolved_field_entry_at(index);
if (!entry->is_resolved(Bytecodes::_getfield)) {
return normal_entry(method, 0, THREAD);
}
@@ -622,7 +622,7 @@ int ZeroInterpreter::getter_entry(Method* method, intptr_t UNUSED, TRAPS) {
// If needed, allocate additional slot on stack: we already have one
// for receiver, and double/long need another one.
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case ltos:
case dtos:
stack->overflow_check(1, CHECK_0);
@@ -634,12 +634,12 @@ int ZeroInterpreter::getter_entry(Method* method, intptr_t UNUSED, TRAPS) {
}
// Read the field to stack(0)
- int offset = entry->f2_as_index();
+ int offset = entry->field_offset();
if (entry->is_volatile()) {
if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
OrderAccess::fence();
}
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case btos:
case ztos: SET_STACK_INT(object->byte_field_acquire(offset), 0); break;
case ctos: SET_STACK_INT(object->char_field_acquire(offset), 0); break;
@@ -653,7 +653,7 @@ int ZeroInterpreter::getter_entry(Method* method, intptr_t UNUSED, TRAPS) {
ShouldNotReachHere();
}
} else {
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case btos:
case ztos: SET_STACK_INT(object->byte_field(offset), 0); break;
case ctos: SET_STACK_INT(object->char_field(offset), 0); break;
@@ -696,7 +696,7 @@ int ZeroInterpreter::setter_entry(Method* method, intptr_t UNUSED, TRAPS) {
// Get the entry from the constant pool cache, and drop into
// the slow path if it has not been resolved
ConstantPoolCache* cache = method->constants()->cache();
- ConstantPoolCacheEntry* entry = cache->entry_at(index);
+ ResolvedFieldEntry* entry = cache->resolved_field_entry_at(index);
if (!entry->is_resolved(Bytecodes::_putfield)) {
return normal_entry(method, 0, THREAD);
}
@@ -707,7 +707,7 @@ int ZeroInterpreter::setter_entry(Method* method, intptr_t UNUSED, TRAPS) {
// Figure out where the receiver is. If there is a long/double
// operand on stack top, then receiver is two slots down.
oop object = nullptr;
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case ltos:
case dtos:
object = STACK_OBJECT(-2);
@@ -724,9 +724,9 @@ int ZeroInterpreter::setter_entry(Method* method, intptr_t UNUSED, TRAPS) {
}
// Store the stack(0) to field
- int offset = entry->f2_as_index();
+ int offset = entry->field_offset();
if (entry->is_volatile()) {
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case btos: object->release_byte_field_put(offset, STACK_INT(0)); break;
case ztos: object->release_byte_field_put(offset, STACK_INT(0) & 1); break; // only store LSB
case ctos: object->release_char_field_put(offset, STACK_INT(0)); break;
@@ -741,7 +741,7 @@ int ZeroInterpreter::setter_entry(Method* method, intptr_t UNUSED, TRAPS) {
}
OrderAccess::storeload();
} else {
- switch (entry->flag_state()) {
+ switch (entry->tos_state()) {
case btos: object->byte_field_put(offset, STACK_INT(0)); break;
case ztos: object->byte_field_put(offset, STACK_INT(0) & 1); break; // only store LSB
case ctos: object->char_field_put(offset, STACK_INT(0)); break;
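
For volatile fields, the Zero paths above pair acquiring loads with releasing stores, plus a full fence after a volatile store. A rough std::atomic analogue of that ordering, for illustration only; Zero itself goes through oopDesc field accessors and OrderAccess:

#include <atomic>
#include <cstdio>

std::atomic<int> field{0};

// ~ byte_field_acquire(): later memory operations cannot move before it.
static int volatile_like_load() {
  return field.load(std::memory_order_acquire);
}

// ~ release_byte_field_put() followed by OrderAccess::storeload():
// earlier writes are visible before the store, and the fence keeps
// subsequent loads from being reordered ahead of it.
static void volatile_like_store(int v) {
  field.store(v, std::memory_order_release);
  std::atomic_thread_fence(std::memory_order_seq_cst);
}

int main() {
  volatile_like_store(42);
  std::printf("%d\n", volatile_like_load());
  return 0;
}
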

View File

@@ -108,7 +108,7 @@ class AixAttachListener: AllStatic {
static bool is_shutdown() { return _shutdown; }
// write the given buffer to a socket
- static int write_fully(int s, char* buf, int len);
+ static int write_fully(int s, char* buf, size_t len);
static AixAttachOperation* dequeue();
};
@@ -276,7 +276,7 @@ AixAttachOperation* AixAttachListener::read_request(int s) {
// where <ver> is the protocol version (1), <cmd> is the command
// name ("load", "datadump", ...), and <arg> is an argument
int expected_str_count = 2 + AttachOperation::arg_count_max;
- const int max_len = (sizeof(ver_str) + 1) + (AttachOperation::name_length_max + 1) +
+ const size_t max_len = (sizeof(ver_str) + 1) + (AttachOperation::name_length_max + 1) +
AttachOperation::arg_count_max*(AttachOperation::arg_length_max + 1);
char buf[max_len];
@@ -285,15 +285,15 @@ AixAttachOperation* AixAttachListener::read_request(int s) {
// Read until all (expected) strings have been read, the buffer is
// full, or EOF.
- int off = 0;
- int left = max_len;
+ size_t off = 0;
+ size_t left = max_len;
do {
- int n;
+ ssize_t n;
// Don't block on interrupts because this will
// hang in the clean-up when shutting down.
n = read(s, buf+off, left);
- assert(n <= left, "buffer was too small, impossible!");
+ assert(n <= checked_cast<ssize_t>(left), "buffer was too small, impossible!");
buf[max_len - 1] = '\0';
if (n == -1) {
return nullptr; // reset by peer or other error
@@ -414,9 +414,9 @@ AixAttachOperation* AixAttachListener::dequeue() {
}
// write the given buffer to the socket
- int AixAttachListener::write_fully(int s, char* buf, int len) {
+ int AixAttachListener::write_fully(int s, char* buf, size_t len) {
do {
- int n = ::write(s, buf, len);
+ ssize_t n = ::write(s, buf, len);
if (n == -1) {
if (errno != EINTR) return -1;
} else {
@@ -579,15 +579,6 @@ void AttachListener::pd_data_dump() {
os::signal_notify(SIGQUIT);
}
- AttachOperationFunctionInfo* AttachListener::pd_find_operation(const char* n) {
- return nullptr;
- }
- jint AttachListener::pd_set_flag(AttachOperation* op, outputStream* out) {
- out->print_cr("flag '%s' cannot be changed", op->arg(0));
- return JNI_ERR;
- }
void AttachListener::pd_detachall() {
// Cleanup server socket to detach clients.
listener_cleanup();
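
write_fully() above implements the usual POSIX pattern: loop until the buffer is drained, retry on EINTR, and advance past short writes. A self-contained sketch of the same pattern, not the JDK's code:

#include <cerrno>
#include <unistd.h>

// Keep calling write(2) until every byte is out. EINTR means "interrupted,
// nothing written, try again"; a short write means "advance and continue".
static int write_fully_sketch(int fd, const char* buf, size_t len) {
  while (len > 0) {
    ssize_t n = ::write(fd, buf, len);
    if (n == -1) {
      if (errno != EINTR) return -1;  // real error: give up
      continue;                       // interrupted: retry the same range
    }
    buf += n;                         // partial write: skip what was sent
    len -= static_cast<size_t>(n);
  }
  return 0;
}

int main() {
  const char msg[] = "hello\n";
  return write_fully_sketch(1 /*stdout*/, msg, sizeof(msg) - 1);
}
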

View File

@@ -29,13 +29,16 @@
#include <dlfcn.h>
#include <string.h>
#include "runtime/arguments.hpp"
+ #include "runtime/os.hpp"
dynamicOdm::dynamicOdm() {
- const char *libodmname = "/usr/lib/libodm.a(shr_64.o)";
- _libhandle = dlopen(libodmname, RTLD_MEMBER | RTLD_NOW);
+ const char* libodmname = "/usr/lib/libodm.a(shr_64.o)";
+ char ebuf[512];
+ void* _libhandle = os::dll_load(libodmname, ebuf, sizeof(ebuf));
if (!_libhandle) {
- trcVerbose("Couldn't open %s", libodmname);
+ trcVerbose("Cannot load %s (error %s)", libodmname, ebuf);
return;
}
_odm_initialize = (fun_odm_initialize )dlsym(_libhandle, "odm_initialize" );

View File

@@ -26,6 +26,7 @@
#include "libperfstat_aix.hpp"
#include "misc_aix.hpp"
+ #include "runtime/os.hpp"
#include <dlfcn.h>
@@ -71,11 +72,11 @@ static fun_perfstat_reset_t g_fun_perfstat_reset = nullptr;
static fun_wpar_getcid_t g_fun_wpar_getcid = nullptr;
bool libperfstat::init() {
// Dynamically load the libperfstat porting library.
- g_libhandle = dlopen("/usr/lib/libperfstat.a(shr_64.o)", RTLD_MEMBER | RTLD_NOW);
+ const char* libperfstat = "/usr/lib/libperfstat.a(shr_64.o)";
+ char ebuf[512];
+ g_libhandle = os::dll_load(libperfstat, ebuf, sizeof(ebuf));
if (!g_libhandle) {
- trcVerbose("Cannot load libperfstat.a (dlerror: %s)", dlerror());
+ trcVerbose("Cannot load %s (error: %s)", libperfstat, ebuf);
return false;
}

View File

@@ -79,6 +79,9 @@
#include "utilities/events.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/vmError.hpp"
+ #if INCLUDE_JFR
+ #include "jfr/jfrEvents.hpp"
+ #endif
// put OS-includes here (sorted alphabetically)
#ifdef AIX_XLC_GE_17
@@ -1098,8 +1101,6 @@ bool os::dll_address_to_library_name(address addr, char* buf,
return true;
}
- // Loads .dll/.so and in case of error it checks if .dll/.so was built
- // for the same architecture as Hotspot is running on.
void *os::dll_load(const char *filename, char *ebuf, int ebuflen) {
log_info(os)("attempting shared library load of %s", filename);
@@ -1114,13 +1115,34 @@ void *os::dll_load(const char *filename, char *ebuf, int ebuflen) {
return nullptr;
}
- // RTLD_LAZY is currently not implemented. The dl is loaded immediately with all its dependants.
- void * result= ::dlopen(filename, RTLD_LAZY);
+ #if INCLUDE_JFR
+ EventNativeLibraryLoad event;
+ event.set_name(filename);
+ #endif
+ // RTLD_LAZY has currently the same behavior as RTLD_NOW
+ // The dl is loaded immediately with all its dependants.
+ int dflags = RTLD_LAZY;
+ // check for filename ending with ')', it indicates we want to load
+ // a MEMBER module that is a member of an archive.
+ int flen = strlen(filename);
+ if (flen > 0 && filename[flen - 1] == ')') {
+ dflags |= RTLD_MEMBER;
+ }
+ void * result= ::dlopen(filename, dflags);
if (result != nullptr) {
Events::log_dll_message(nullptr, "Loaded shared library %s", filename);
// Reload dll cache. Don't do this in signal handling.
LoadedLibraries::reload();
log_info(os)("shared library load of %s was successful", filename);
+ #if INCLUDE_JFR
+ event.set_success(true);
+ event.set_errorMessage(nullptr);
+ event.commit();
+ #endif
return result;
} else {
// error analysis when dlopen fails
@@ -1134,6 +1156,12 @@ void *os::dll_load(const char *filename, char *ebuf, int ebuflen) {
}
Events::log_dll_message(nullptr, "Loading shared library %s failed, %s", filename, error_report);
log_info(os)("shared library load of %s failed, %s", filename, error_report);
+ #if INCLUDE_JFR
+ event.set_success(false);
+ event.set_errorMessage(error_report);
+ event.commit();
+ #endif
}
return nullptr;
}
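
The new member-module handling keys off a trailing ')' in the path, as in "/usr/lib/libodm.a(shr_64.o)" seen earlier in this diff. A trimmed sketch of just that flag computation; RTLD_MEMBER is AIX-specific, so it is stubbed here (with an arbitrary value) to keep the sketch portable:

#include <cstdio>
#include <cstring>

#ifndef RTLD_LAZY
#define RTLD_LAZY 0x1        // stub so the sketch compiles without <dlfcn.h>
#endif
#ifndef RTLD_MEMBER
#define RTLD_MEMBER 0x40000  // AIX-only flag; stub value for illustration
#endif

// A path ending in ')' names a member inside an archive and, on AIX,
// needs RTLD_MEMBER added to the dlopen flags.
static int dlopen_flags_for(const char* filename) {
  int dflags = RTLD_LAZY;
  size_t flen = std::strlen(filename);
  if (flen > 0 && filename[flen - 1] == ')') {
    dflags |= RTLD_MEMBER;
  }
  return dflags;
}

int main() {
  const char* member = "/usr/lib/libodm.a(shr_64.o)";
  std::printf("flags=0x%x\n", dlopen_flags_for(member));
  return 0;
}
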
@@ -2957,7 +2985,7 @@ int os::get_core_path(char* buffer, size_t bufferSize) {
jio_snprintf(buffer, bufferSize, "%s/core or core.%d",
p, current_process_id());
- return strlen(buffer);
+ return checked_cast<int>(strlen(buffer));
}
bool os::start_debugging(char *buf, int buflen) {
@@ -2995,7 +3023,7 @@ static inline time_t get_mtime(const char* filename) {
int os::compare_file_modified_times(const char* file1, const char* file2) {
time_t t1 = get_mtime(file1);
time_t t2 = get_mtime(file2);
- return t1 - t2;
+ return primitive_compare(t1, t2);
}
bool os::supports_map_sync() {
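
The compare_file_modified_times fix matters because returning t1 - t2 truncates a 64-bit time_t difference to int and can even flip its sign on overflow; a three-way compare cannot. A sketch of a primitive_compare-style helper (HotSpot's real one lives in its utilities; this is an illustration):

#include <ctime>

// Three-way compare that returns -1, 0, or 1 without doing arithmetic on
// the values themselves, so no narrowing or overflow is possible.
template <typename T>
static int primitive_compare_sketch(T a, T b) {
  return (a > b) - (a < b);
}

int main() {
  time_t newer = 2000000000, older = 1000000000;
  return primitive_compare_sketch(newer, older) == 1 ? 0 : 1;
}
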

Some files were not shown because too many files have changed in this diff.