JBR-5183 as dcevm-23 base

JBR-5183 - ref openjdk/8292818 - special access flags removed

JBR-5183 - add new DeoptimizationScope from openjdk

JBR-5183 clean DCEVM code separation in standard jdk code + typo fix

JBR-5464 Fix native method registration

JBR-5183 - fix compilation on win - using size_t

JBR-5183 - fix error: invalid use of incomplete type 'struct Atomic::StoreImpl

JBR-5183 - fix windows build

JBR-5183 - nullptr in VM_EnhancedRedefineClasses

JBR-5183 - fix compilation locking

JBR-5183 fix G1GC forward pointer check

JBR-5183 fix problem with _first_dead in serial GC

JBR-5183 fix bug from dcevm21 merge resolving

JBR-5183 use sorted static fields in class comparison

JBR-5183 do not use EnhancedRedefineClasses_lock

JBR-5183 fix assert in make_jmethod_id

JBR-5183 remove VM_ThreadsSuspendJVMTI

JBR-5183 fix dcevm21 issues after merge dcevm17 updates

JBR-5183 dcevm17 squashed commits

JBR-3111 Update class in all dictionaries where it was already defined

This patch keeps compatibility with standard redefinition, which does not
create a new Klass but modifies the existing one; the modified class is
then updated in all dictionaries containing it.

Add ClassLoaderDataGraph_lock to define a new class in enhanced
redefinition

ClassLoaderDataGraph locking was introduced into redefinition in
java.version > 11

JBR-3140 - support for modularized HotswapAgent

Add -XX:HotswapAgent=[disabled,fatjar.core]

Support for redefinition of well-known classes (java.*, jdk.*, sun.*)

Fix fastdebug compilation issues - cast_to_oop
JBR-3458: Skip dynamic proxy classes based on com.sun.proxy
JBR-3459: Fix race condition in ClassLoaderDataGraph::classes_do

InstanceKlass in ClassLoaderData can be uninitialized when
ClassLoaderDataGraph::classes_do is called. Using
ClassLoaderDataGraph::dictionary_classes_do is safe, but the problem
still persists with anonymous classes.

Fix compilation problems

Fix dcevm issues related to the refactoring of Thread to JavaThread
Fix init_method_MemberName after the Thread to JavaThread refactoring
Fix "implicit conversion of NULL constant to 'bool'"
Fix: pass the cl_info param to SystemDictionary::resolve_from_stream
Search for affected classes in all initialized classes in cld

Also fix the case when a lambda interface is redefined. The lambda class
is missing from the CLD dictionary since it is hidden as of Java 17
Fix compilation issue
Remove duplicated lambdaFormInvokers.cpp

JBR-3867 - update keys of jvmti TAG map after redefinition

jdwp keeps the class_ptr->class_ref relation in a jvmti tag: class_ptr
is used as the tag key and the tag value is a refnode. There are new
class_ptrs after redefinition, therefore the jdwp redefinition method
updates all affected keys in the tag map.
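
That re-keying step can be sketched with a plain hash map and hypothetical stand-in types (the real code goes through JvmtiTagMap, which is not shown here; `Klass` and `update_tag_keys` below are illustrative names, not HotSpot's):

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical stand-in for a class pointer used as a tag-map key.
struct Klass { int id; };

// Move every tag whose key is an old class pointer to the new pointer.
// The tag value (the refnode) is preserved; only the key changes.
void update_tag_keys(std::unordered_map<const Klass*, long>& tags,
                     const std::unordered_map<const Klass*, const Klass*>& old_to_new) {
  for (const auto& entry : old_to_new) {
    auto it = tags.find(entry.first);
    if (it != tags.end()) {
      long refnode = it->second;      // keep the value
      tags.erase(it);                 // drop the stale class_ptr key
      tags[entry.second] = refnode;   // re-insert under the new class_ptr
    }
  }
}
```
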
JBR-3867 - fix msvc compilation issue with non const array on stack
Attempt to fix JBR-3887
JBR-3937 Fix crashes in C1/C2 compilers

There is a race condition in enhanced redefinition with C1/C2. Therefore
the patch stops C1/C2 compilation before redefinition and releases it
after redefinition finishes. There is no performance impact, since dcevm
flushes the entire code cache anyway.
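
The stop/release idea reduces to a scoped guard; a minimal sketch, assuming a polled flag (the actual patch goes through HotSpot's compile broker, not a bare atomic, and these names are illustrative):

```cpp
#include <atomic>

// Global switch that (hypothetical) compiler threads poll before taking work.
std::atomic<bool> g_compilation_enabled{true};

// RAII guard: construction stops C1/C2 work, destruction releases it, so
// compilation is blocked exactly for the duration of a redefinition.
struct CompilerPause {
  CompilerPause()  { g_compilation_enabled.store(false, std::memory_order_release); }
  ~CompilerPause() { g_compilation_enabled.store(true,  std::memory_order_release); }
};

bool compiler_may_compile() {
  return g_compilation_enabled.load(std::memory_order_acquire);
}
```
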

Fix line ending CRLF->LF
G1 fixes, code cleanup
JBR-3867 - fix dcevm redefinition stops due to not-updated weak oops

Dcevm must also update oops in weak storage using WeakProcessor. Oop
storage is a new concept in Java 17.
JBR-4018 - fix zero variant compilation issues

JBR-3997 - fix _invokehandle and _invokedynamic race conditions

The old clearing mechanism of CpCacheEntry partially cleared _flags and
fully cleared _f1, but both values could later be used by the
interpreter for invocation, which ended in various kinds of crashes. To
prevent dcevm crashes, we keep the old _f1 and _flags values until they
are resolved again. We need a new flag 'is_f1_null_dcevm_shift'
indicating that _f1 is logically NULL (while _f1 keeps its old value).
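
The fix can be illustrated with a toy entry; field and flag names below mimic the description but are not HotSpot's actual cpCache layout:

```cpp
#include <atomic>
#include <cstdint>

// Toy constant-pool cache entry. Instead of wiping _f1 and part of _flags
// (which a concurrently invoking interpreter thread may still read), a
// "logically null" bit is set and the stale values stay intact until the
// entry is resolved again.
struct ToyCpCacheEntry {
  static constexpr uint32_t kF1NullBit = 1u << 31;  // stands in for is_f1_null_dcevm_shift

  std::atomic<uint32_t> _flags{0};
  void* _f1 = nullptr;

  void clear_dcevm() { _flags.fetch_or(kF1NullBit); }   // logical clear only
  bool is_f1_null() const {
    return _f1 == nullptr || (_flags.load() & kF1NullBit) != 0;
  }
  void resolve(void* f1, uint32_t flags) {              // re-resolution drops the bit
    _f1 = f1;
    _flags.store(flags & ~kF1NullBit);
  }
};
```
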

JBR-4053 - Fix fastdebug compilation issue
JBR-4125 - fix wrong addition of java.lang.Object as superclass
JBR-4110 - disable UseEmptySlotsInSupers

dcevm instance transformation expects increasing field offsets when the
fields of a class are iterated. This ordering is no longer valid if
UseEmptySlotsInSupers=true.
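
The broken invariant is easy to state: with empty-slot packing a subclass field may land in a gap before superclass fields, so offsets in iteration order are no longer monotone. A sketch of the check the transformation implicitly relies on (a standalone helper, not HotSpot code):

```cpp
#include <cstddef>
#include <vector>

// True iff field offsets are strictly increasing in iteration order --
// the assumption dcevm's instance transformation depends on.
bool offsets_strictly_increasing(const std::vector<int>& offsets) {
  for (std::size_t i = 1; i < offsets.size(); ++i) {
    if (offsets[i] <= offsets[i - 1]) return false;
  }
  return true;
}
```
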
JBR-4148 - removed meaningless copying of data to itself
JBR-4312 - fix crash when calling ResolvedMethodTable from ServiceThread

adjust_method_entries_dcevm incorrectly changed the hashes of resolved
method oops stored in ResolvedMethodTable. Now all oops of old methods
are first removed, then updated, and then added to the table again.
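
The underlying rule is the classic one for hashed tables: never mutate an element while it is still keyed in the table. A hedged sketch with a std::unordered_map standing in for ResolvedMethodTable (toy types; the key here is a name rather than a real hash):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Toy method whose identity (here: its name) feeds the table key.
struct Method { std::string name; };

using MethodTable = std::unordered_map<std::string, Method*>;

// Remove all affected entries first, then update them, then re-add --
// the same three-phase order the fix applies to ResolvedMethodTable.
void adjust_entries(MethodTable& table, std::vector<Method*>& old_methods,
                    const std::string& new_suffix) {
  for (Method* m : old_methods) table.erase(m->name);    // 1. remove
  for (Method* m : old_methods) m->name += new_suffix;   // 2. update (changes the key)
  for (Method* m : old_methods) table[m->name] = m;      // 3. re-add
}
```
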
JBR-4352 - fix AARCH64 compilation issues

- use correct INCLUDE_JFR condition for jfr code
- exclude jvmtiEnhancedRedefineClasses.cpp if INCLUDE_JVMTI=0
Remove version-numbers left over from the merge of dcevm17
JBR-4392 - use only loaded classes when collecting affected classes
JBR-4386 - disable AllowEnhancedClassRedefinition in jfr

JBR-5183 fix dcevm21 compilation issues

JBR-5183 pre-dcevm17 squashed commits

dcevm11 fixes

1. We need to set classRedefinitionCount on the new class, not the old class.

2. Fix crashes in MetadataOnStackMark::~MetadataOnStackMark

MetadataOnStackMark should not remove dcevm stuff. It was added
accidentally in dcevm9 and was never part of doit() in previous versions.

3. Fix problem with nested members

Reported at:
https://stackoverflow.com/questions/53370380/hotswapagent-incompatibleclasschangeerror-type-headerpanel1-is-not-a-nest-mem

4. Use init_mark_raw()

The method has changed since Java 8, which used init_mark().

5. Fix methodHandles and fieldHandles

6. Code cleanup

7. Fix force_forward in dead space

8. Fix check_class

9. increment_class_counter() using orig dcevm code

This is probably the cause of a SIGSEGV in
VM_EnhancedRedefineClasses::redefine_single_class->java_mirror()

10. Fix 11.0.7 compilation issues

11. Refactor ClearCpoolCacheAndUnpatch

12. Add non-nullable oop_store_not_null() method + handle NULL mem_name
in dmh

13. Use INCLUDE_CDS condition on "UseSharedSpaces" block from master

14. Add code cache flush optimization, but for now just flush the whole cache.

15. Cleanup

16. Use original code for adjust_method_entries in standard redefinition

17. Iterate old method versions only in dcevm

18. Revert code for !AllowEnhancedClassRedefinition

19. Code cleanup

20. Activate cpCache definition asserts for !dcevm

21. Skip GC runs for redefinitions without instance size change

23. dcevm15 - Cleanup code related to removed CMS

Fix class cast exception on redefinition of class A that is a superclass
of B, where B has an anonymous class C

Support for Lambda class redefinition

Fix "no original bytecode found" error if a method with a breakpoint is missing

Sometimes the IDE can deploy a class with an erroneous method; such a
method has no bytecode, but its breakpoint position can still exist.

Replace deleted method with Universe::throw_no_such_method_error

+ Change log level in advanced redefinition
- Change log level for "Comparing different class ver.." to debug
- Fix adjust_method_entries_dcevm logging levels and severity
Support for G1 gc

AllowEnhancedClassRedefinition is false (disabled) by default

Set HOTSPOT_VM_DISTRO=Dynamic Code Evolution

Clear dcevm code separation

Fix LoadedClassesClosure - fixes problems with remote debugging

dcevm15 - fix java15 compilation issues
dcevm15 - add ClassLoaderDataGraph_lock on
ClassLoaderDataGraph::classes_do

ClassLoaderDataGraph::classes_do needs a safepoint or a lock;
find_sorted_affected_classes does not run at a safepoint, therefore it
must be locked.
ClassLoaderDataGraph::rollback_redefinition needs a safepoint too
dcevm15 - fix Universe::root_oops_do

The removed ClassLoaderDataGraph::cld_do was the cause of crashes due to
multiple oop patching. In dcevm15, ClassLoaderDataGraph::cld_do had
replaced the previously used and since-removed SystemDictionary::oops_do
dcevm15 - check if has_nestmate_access_to has newest host class
dcevm15 - mark_as_scavengable only alive methods
dcevm15 - fix hidden classes

dcevm15 - DON'T clear F2 in CP cache after indy unevolving

It's not clear why it was cleared in dcevm7-11
Cleanup and review comments
Disable AllowEnhancedClassRedefinition in flight recorder

dcevm17 - fix compilation issues

Fix crash on GrowableArray allocation in C_HEAP
Rename confusing method name old_if_redefined to old_if_redefining
Check InstanceKlass::has_nestmate_access_to with active classes

Dcevm can leave the old host in a nested class if the nested class is
not redefined together with its host class
commit 993a1a6489 (parent f60a827d64)
Author: Vladimir Dvorak, 2018-11-15 00:09:39 +04:00
Committed by: Vitaly Provodin
91 changed files with 4779 additions and 85 deletions


@@ -93,7 +93,7 @@ ifneq ($(call check-jvm-feature, jvmti), true)
jvmtiEnvThreadState.cpp jvmtiTagMap.cpp jvmtiEventController.cpp \
evmCompat.cpp jvmtiEnter.xsl jvmtiExport.cpp \
jvmtiClassFileReconstituter.cpp jvmtiTagMapTable.cpp jvmtiAgent.cpp \
jvmtiAgentList.cpp jfrJvmtiAgent.cpp
jvmtiAgentList.cpp jfrJvmtiAgent.cpp jvmtiEnhancedRedefineClasses.cpp
endif
ifneq ($(call check-jvm-feature, jvmci), true)


@@ -211,6 +211,7 @@ void LambdaFormInvokers::regenerate_class(char* class_name, ClassFileStream& st,
class_name_sym,
cld,
cl_info,
false, // pick_newest
CHECK);
assert(result->java_mirror() != nullptr, "must be");


@@ -139,6 +139,7 @@ public:
juint prototype_header_offset();
uintptr_t prototype_header();
Klass* new_version() { return get_Klass()->new_version(); }
};
#endif // SHARE_CI_CIKLASS_HPP


@@ -75,7 +75,10 @@ GrowableArray<ciMetadata*>* ciObjectFactory::_shared_ci_metadata = nullptr;
ciSymbol* ciObjectFactory::_shared_ci_symbols[vmSymbols::number_of_symbols()];
int ciObjectFactory::_shared_ident_limit = 0;
volatile bool ciObjectFactory::_initialized = false;
volatile bool ciObjectFactory::_reinitialize_vm_klasses = false;
// TODO: review...
Arena* ciObjectFactory::_initial_arena = NULL;
// ------------------------------------------------------------------
// ciObjectFactory::ciObjectFactory
@@ -111,6 +114,9 @@ void ciObjectFactory::initialize() {
// compiler thread that initializes the initial ciObjectFactory which
// creates the shared ciObjects that all later ciObjectFactories use.
Arena* arena = new (mtCompiler) Arena(mtCompiler);
if (AllowEnhancedClassRedefinition) {
ciObjectFactory::_initial_arena = arena;
}
ciEnv initial(arena);
ciEnv* env = ciEnv::current();
env->_factory->init_shared_objects();
@@ -119,6 +125,36 @@ void ciObjectFactory::initialize() {
}
// (DCEVM) vm classes could be modified
void ciObjectFactory::reinitialize_vm_classes() {
ASSERT_IN_VM;
JavaThread* thread = JavaThread::current();
HandleMark handle_mark(thread);
// This Arena is long lived and exists in the resource mark of the
// compiler thread that initializes the initial ciObjectFactory which
// creates the shared ciObjects that all later ciObjectFactories use.
// Arena* arena = new (mtCompiler) Arena(mtCompiler);
ciEnv initial(ciObjectFactory::_initial_arena);
ciEnv* env = ciEnv::current();
env->_factory->do_reinitialize_vm_classes();
_reinitialize_vm_klasses = false;
}
// (DCEVM) vm classes could be modified
void ciObjectFactory::do_reinitialize_vm_classes() {
#define VM_CLASS_DEFN(name, ignore_s) \
if (ciEnv::_##name != NULL && ciEnv::_##name->new_version() != NULL) { \
int old_ident = ciEnv::_##name->ident(); \
ciEnv::_##name = get_metadata(vmClasses::name())->as_instance_klass(); \
ciEnv::_##name->compute_nonstatic_fields(); \
ciEnv::_##name->set_ident(old_ident); \
}
VM_CLASSES_DO(VM_CLASS_DEFN)
#undef VM_CLASS_DEFN
}
void ciObjectFactory::init_shared_objects() {
_next_ident = 1; // start numbering CI objects at 1
@@ -736,3 +772,28 @@ void ciObjectFactory::print() {
_unloaded_instances.length(),
_unloaded_klasses.length());
}
int ciObjectFactory::compare_cimetadata(ciMetadata** a, ciMetadata** b) {
Metadata* am = (*a)->constant_encoding();
Metadata* bm = (*b)->constant_encoding();
return ((am > bm) ? 1 : ((am == bm) ? 0 : -1));
}
// FIXME: review... Re-sorting the ciObject arrays after class redefinition
void ciObjectFactory::resort_shared_ci_metadata() {
if (_shared_ci_metadata == NULL) return;
_shared_ci_metadata->sort(ciObjectFactory::compare_cimetadata);
#ifdef ASSERT
if (CIObjectFactoryVerify) {
Metadata* last = NULL;
for (int j = 0; j< _shared_ci_metadata->length(); j++) {
Metadata* o = _shared_ci_metadata->at(j)->constant_encoding();
assert(last < o, "out of order");
last = o;
}
}
#endif // ASSERT
}


@@ -42,9 +42,11 @@ class ciObjectFactory : public ArenaObj {
private:
static volatile bool _initialized;
static volatile bool _reinitialize_vm_klasses;
static GrowableArray<ciMetadata*>* _shared_ci_metadata;
static ciSymbol* _shared_ci_symbols[];
static int _shared_ident_limit;
static Arena* _initial_arena;
Arena* _arena;
GrowableArray<ciMetadata*> _ci_metadata;
@@ -89,10 +91,15 @@ private:
ciInstance* get_unloaded_instance(ciInstanceKlass* klass);
static int compare_cimetadata(ciMetadata** a, ciMetadata** b);
void do_reinitialize_vm_classes();
public:
static bool is_initialized() { return _initialized; }
static bool is_reinitialize_vm_klasses() { return _reinitialize_vm_klasses; }
static void set_reinitialize_vm_klasses() { _reinitialize_vm_klasses = true; }
static void initialize();
static void reinitialize_vm_classes();
void init_shared_objects();
void remove_symbols();
@@ -148,6 +155,8 @@ public:
void print_contents();
void print();
static void resort_shared_ci_metadata();
};
#endif // SHARE_CI_CIOBJECTFACTORY_HPP


@@ -826,6 +826,9 @@ void ClassFileParser::parse_interfaces(const ClassFileStream* const stream,
false, CHECK);
}
// (DCEVM) pick newest
interf = (Klass *) maybe_newest(interf);
if (!interf->is_interface()) {
THROW_MSG(vmSymbols::java_lang_IncompatibleClassChangeError(),
err_msg("class %s can not implement %s, because it is not an interface (%s)",
@@ -3798,7 +3801,8 @@ const InstanceKlass* ClassFileParser::parse_super_class(ConstantPool* const cp,
// However, make sure it is not an array type.
bool is_array = false;
if (cp->tag_at(super_class_index).is_klass()) {
super_klass = InstanceKlass::cast(cp->resolved_klass_at(super_class_index));
// (DCEVM) pick newest
super_klass = InstanceKlass::cast(maybe_newest(cp->resolved_klass_at(super_class_index)));
if (need_verify)
is_array = super_klass->is_array_klass();
} else if (need_verify) {
@@ -3938,7 +3942,10 @@ void ClassFileParser::set_precomputed_flags(InstanceKlass* ik) {
if (!_has_empty_finalizer) {
if (_has_finalizer ||
(super != nullptr && super->has_finalizer())) {
ik->set_has_finalizer();
// FIXME - (DCEVM) this is condition from previous DCEVM version, however after reload a new finalize() method is not active
if (ik->old_version() == NULL || ik->old_version()->has_finalizer()) {
ik->set_has_finalizer();
}
}
}
@@ -5267,6 +5274,7 @@ ClassFileParser::ClassFileParser(ClassFileStream* stream,
ClassLoaderData* loader_data,
const ClassLoadInfo* cl_info,
Publicity pub_level,
const bool pick_newest,
TRAPS) :
_stream(stream),
_class_name(nullptr),
@@ -5326,7 +5334,8 @@ ClassFileParser::ClassFileParser(ClassFileStream* stream,
_has_contended_fields(false),
_has_finalizer(false),
_has_empty_finalizer(false),
_max_bootstrap_specifier_index(-1) {
_max_bootstrap_specifier_index(-1),
_pick_newest(pick_newest) {
_class_name = name != nullptr ? name : vmSymbols::unknown_class_name();
_class_name->increment_refcount();
@@ -5715,16 +5724,18 @@ void ClassFileParser::post_process_parsed_stream(const ClassFileStream* const st
CHECK);
}
Handle loader(THREAD, _loader_data->class_loader());
Klass* super_klass;
if (loader.is_null() && super_class_name == vmSymbols::java_lang_Object()) {
_super_klass = vmClasses::Object_klass();
super_klass = vmClasses::Object_klass();
} else {
_super_klass = (const InstanceKlass*)
super_klass = (InstanceKlass*)
SystemDictionary::resolve_super_or_fail(_class_name,
super_class_name,
loader,
true,
CHECK);
}
_super_klass = (InstanceKlass*) maybe_newest(super_klass);
}
if (_super_klass != nullptr) {


@@ -198,6 +198,9 @@ class ClassFileParser {
bool _has_empty_finalizer;
int _max_bootstrap_specifier_index; // detects BSS values
// (DCEVM) Enhanced class redefinition
const bool _pick_newest;
void parse_stream(const ClassFileStream* const stream, TRAPS);
void mangle_hidden_class_name(InstanceKlass* const ik);
@@ -485,6 +488,8 @@ class ClassFileParser {
TRAPS);
void update_class_name(Symbol* new_name);
// (DCEVM) Enhanced class redefinition
inline const Klass* maybe_newest(const Klass* klass) const { return klass != NULL && _pick_newest ? klass->newest_version() : klass; }
public:
ClassFileParser(ClassFileStream* stream,
@@ -492,6 +497,7 @@ class ClassFileParser {
ClassLoaderData* loader_data,
const ClassLoadInfo* cl_info,
Publicity pub_level,
const bool pick_newest,
TRAPS);
~ClassFileParser();
@@ -522,6 +528,7 @@ class ClassFileParser {
ClassLoaderData* loader_data() const { return _loader_data; }
const Symbol* class_name() const { return _class_name; }
const InstanceKlass* super_klass() const { return _super_klass; }
Array<InstanceKlass*>* local_interfaces() const { return _local_interfaces; }
ReferenceType super_reference_type() const;
bool is_instance_ref_klass() const;


@@ -1111,6 +1111,7 @@ InstanceKlass* ClassLoader::load_class(Symbol* name, PackageEntry* pkg_entry, bo
name,
loader_data,
cl_info,
false, // pick_newest
CHECK_NULL);
result->set_classpath_index(classpath_index);
return result;


@@ -651,6 +651,15 @@ Dictionary* ClassLoaderData::create_dictionary() {
return new Dictionary(this, size);
}
void ClassLoaderData::exchange_holders(ClassLoaderData* cld) {
oop holder_oop = _holder.peek();
_holder.replace(cld->_holder.peek());
cld->_holder.replace(holder_oop);
WeakHandle exchange = _holder;
_holder = cld->_holder;
cld->_holder = exchange;
}
// Tell the GC to keep this klass alive. Needed while iterating ClassLoaderDataGraph,
// and any runtime code that uses klasses.
oop ClassLoaderData::holder() const {


@@ -208,6 +208,8 @@ private:
// Resolving the holder keeps this CLD alive for the current GC cycle.
oop holder() const;
void keep_alive() const { (void)holder(); }
// (DCEVM)
void exchange_holders(ClassLoaderData* cld);
void classes_do(void f(Klass* const));


@@ -352,6 +352,33 @@ void ClassLoaderDataGraph::verify_dictionary() {
}
}
#define FOR_ALL_DICTIONARY(X) ClassLoaderDataGraphIterator iter; \
while (ClassLoaderData* X = iter.get_next()) \
if (X->dictionary() != nullptr)
// (DCEVM) - iterate over dict classes
void ClassLoaderDataGraph::dictionary_classes_do(KlassClosure* klass_closure) {
FOR_ALL_DICTIONARY(cld) {
cld->dictionary()->classes_do(klass_closure);
}
}
// (DCEVM) rollback redefined classes
void ClassLoaderDataGraph::rollback_redefinition() {
FOR_ALL_DICTIONARY(cld) {
cld->dictionary()->rollback_redefinition();
}
}
// (DCEVM) - iterate over all classes in all dictionaries
bool ClassLoaderDataGraph::dictionary_classes_do_update_klass(Thread* current, Symbol* name, InstanceKlass* k, InstanceKlass* old_klass) {
bool ok = false;
FOR_ALL_DICTIONARY(cld) {
ok = cld->dictionary()->update_klass(current, name, k, old_klass) || ok;
}
return ok;
}
void ClassLoaderDataGraph::print_dictionary(outputStream* st) {
ClassLoaderDataGraphIterator iter;
while (ClassLoaderData *cld = iter.get_next()) {


@@ -83,6 +83,7 @@ class ClassLoaderDataGraph : public AllStatic {
// for redefinition. These classes are removed during the next class unloading.
// Walking the ClassLoaderDataGraph also includes hidden classes.
static void classes_do(KlassClosure* klass_closure);
static void classes_do(void f(Klass* const));
static void methods_do(void f(Method*));
static void modules_do_keepalive(void f(ModuleEntry*));
@@ -100,6 +101,11 @@ class ClassLoaderDataGraph : public AllStatic {
// Called from VMOperation
static void walk_metadata_and_clean_metaspaces();
// (DCEVM) Enhanced class redefinition
static void dictionary_classes_do(KlassClosure* klass_closure);
static void rollback_redefinition();
static bool dictionary_classes_do_update_klass(Thread* current, Symbol* name, InstanceKlass* k, InstanceKlass* old_klass);
static void verify_dictionary();
static void print_dictionary(outputStream* st);
static void print_table_statistics(outputStream* st);


@@ -91,6 +91,31 @@ void Dictionary::classes_do(void f(InstanceKlass*)) {
_table->do_scan(Thread::current(), doit);
}
void Dictionary::classes_do_safepoint(void f(InstanceKlass*)) {
auto doit = [&] (DictionaryEntry** value) {
InstanceKlass* k = (*value)->instance_klass();
if (loader_data() == k->class_loader_data()) {
f(k);
}
return true;
};
_table->do_safepoint_scan(doit);
}
// (DCEVM) iterate over dict entry
void Dictionary::classes_do(KlassClosure* closure) {
auto doit = [&] (DictionaryEntry** value) {
InstanceKlass* k = (*value)->instance_klass();
if (loader_data() == k->class_loader_data()) {
closure->do_klass(k);
}
return true;
};
_table->do_scan(Thread::current(), doit);
}
// All classes, and their class loaders, including initiating class loaders
void Dictionary::all_entries_do(KlassClosure* closure) {
auto all_doit = [&] (InstanceKlass** value) {
@@ -182,6 +207,30 @@ InstanceKlass* Dictionary::find_class(Thread* current, Symbol* class_name) {
return result;
}
// (DCEVM) replace old_class by new class in dictionary
bool Dictionary::update_klass(Thread* current, Symbol* class_name, InstanceKlass* k, InstanceKlass* old_klass) {
DictionaryEntry* entry = get_entry(current, class_name);
if (entry != NULL) {
assert(entry->instance_klass() == old_klass, "should be old class");
entry->set_instance_klass(k);
return true;
}
return false;
}
// (DCEVM) rollback redefinition
void Dictionary::rollback_redefinition() {
// TODO : (DCEVM)
auto all_doit = [&] (DictionaryEntry** value) {
if ((*value)->instance_klass()->is_redefining()) {
(*value)->set_instance_klass((InstanceKlass*) (*value)->instance_klass()->old_version());
}
return true;
};
_table->do_scan(Thread::current(), all_doit);
}
void Dictionary::print_size(outputStream* st) const {
st->print_cr("Java dictionary (table_size=%d, classes=%d)",
table_size(), _number_of_entries);


@@ -63,6 +63,10 @@ public:
InstanceKlass* find_class(Thread* current, Symbol* name);
void classes_do(void f(InstanceKlass*));
// (DCEVM)
void classes_do_safepoint(void f(InstanceKlass*));
void classes_do(KlassClosure* closure);
void all_entries_do(KlassClosure* closure);
void classes_do(MetaspaceClosure* it);
@@ -71,6 +75,11 @@ public:
void print_on(outputStream* st) const;
void print_size(outputStream* st) const;
void verify();
// (DCEVM) Enhanced class redefinition
bool update_klass(Thread* current, Symbol* class_name, InstanceKlass* k, InstanceKlass* old_klass);
void rollback_redefinition();
};
#endif // SHARE_CLASSFILE_DICTIONARY_HPP


@@ -2855,6 +2855,8 @@ void java_lang_Throwable::fill_in_stack_trace(Handle throwable, const methodHand
skip_throwableInit_check = true;
}
}
// (DCEVM): Line numbers from newest version must be used for EMCP-swapped methods
method = method->newest_version();
if (method->is_hidden() || method->is_continuation_enter_intrinsic()) {
if (skip_hidden) {
if (total_count == 0) {
@@ -4196,6 +4198,62 @@ void java_lang_invoke_DirectMethodHandle::serialize_offsets(SerializeClosure* f)
}
#endif
// Support for java_lang_invoke_DirectMethodHandle$StaticAccessor
int java_lang_invoke_DirectMethodHandle_StaticAccessor::_static_offset_offset;
long java_lang_invoke_DirectMethodHandle_StaticAccessor::static_offset(oop dmh) {
assert(_static_offset_offset != 0, "");
return dmh->long_field(_static_offset_offset);
}
void java_lang_invoke_DirectMethodHandle_StaticAccessor::set_static_offset(oop dmh, long static_offset) {
assert(_static_offset_offset != 0, "");
dmh->long_field_put(_static_offset_offset, static_offset);
}
#define DIRECTMETHODHANDLE_STATIC_ACCESSOR_FIELDS_DO(macro) \
macro(_static_offset_offset, k, vmSymbols::static_offset_name(), long_signature, false)
void java_lang_invoke_DirectMethodHandle_StaticAccessor::compute_offsets() {
InstanceKlass* k = vmClasses::DirectMethodHandle_StaticAccessor_klass();
DIRECTMETHODHANDLE_STATIC_ACCESSOR_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
#if INCLUDE_CDS
void java_lang_invoke_DirectMethodHandle_StaticAccessor::serialize_offsets(SerializeClosure* f) {
DIRECTMETHODHANDLE_STATIC_ACCESSOR_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
// Support for java_lang_invoke_DirectMethodHandle$Accessor
int java_lang_invoke_DirectMethodHandle_Accessor::_field_offset_offset;
int java_lang_invoke_DirectMethodHandle_Accessor::field_offset(oop dmh) {
assert(_field_offset_offset != 0, "");
return dmh->int_field(_field_offset_offset);
}
void java_lang_invoke_DirectMethodHandle_Accessor::set_field_offset(oop dmh, int field_offset) {
assert(_field_offset_offset != 0, "");
dmh->int_field_put(_field_offset_offset, field_offset);
}
#define DIRECTMETHODHANDLE_ACCESSOR_FIELDS_DO(macro) \
macro(_field_offset_offset, k, vmSymbols::field_offset_name(), int_signature, false)
void java_lang_invoke_DirectMethodHandle_Accessor::compute_offsets() {
InstanceKlass* k = vmClasses::DirectMethodHandle_Accessor_klass();
DIRECTMETHODHANDLE_ACCESSOR_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
#if INCLUDE_CDS
void java_lang_invoke_DirectMethodHandle_Accessor::serialize_offsets(SerializeClosure* f) {
DIRECTMETHODHANDLE_ACCESSOR_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
// Support for java_lang_invoke_MethodHandle
int java_lang_invoke_MethodHandle::_type_offset;
@@ -5398,6 +5456,8 @@ void java_lang_InternalError::serialize_offsets(SerializeClosure* f) {
f(java_lang_invoke_MethodType) \
f(java_lang_invoke_CallSite) \
f(java_lang_invoke_ConstantCallSite) \
f(java_lang_invoke_DirectMethodHandle_StaticAccessor) \
f(java_lang_invoke_DirectMethodHandle_Accessor) \
f(java_lang_reflect_AccessibleObject) \
f(java_lang_reflect_Method) \
f(java_lang_reflect_Constructor) \


@@ -267,7 +267,9 @@ class java_lang_Class : AllStatic {
static void set_init_lock(oop java_class, oop init_lock);
static void set_protection_domain(oop java_class, oop protection_domain);
static void set_class_loader(oop java_class, oop class_loader);
public: // DCEVM
static void set_component_mirror(oop java_class, oop comp_mirror);
private:
static void initialize_mirror_fields(Klass* k, Handle mirror, Handle protection_domain,
Handle classData, TRAPS);
static void set_mirror_module_field(JavaThread* current, Klass* K, Handle mirror, Handle module);
@@ -1118,6 +1120,55 @@ class java_lang_invoke_DirectMethodHandle: AllStatic {
static int member_offset() { CHECK_INIT(_member_offset); }
};
// Interface to java.lang.invoke.DirectMethodHandle$StaticAccessor objects
class java_lang_invoke_DirectMethodHandle_StaticAccessor: AllStatic {
friend class JavaClasses;
private:
static int _static_offset_offset; // offset to static field
static void compute_offsets();
public:
// Accessors
static long static_offset(oop dmh);
static void set_static_offset(oop dmh, long value);
// Testers
static bool is_subclass(Klass* klass) {
return klass->is_subclass_of(vmClasses::DirectMethodHandle_StaticAccessor_klass());
}
static bool is_instance(oop obj);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
// Interface to java.lang.invoke.DirectMethodHandle$Accessor objects
class java_lang_invoke_DirectMethodHandle_Accessor: AllStatic {
friend class JavaClasses;
private:
static int _field_offset_offset; // offset to field
static void compute_offsets();
public:
// Accessors
static int field_offset(oop dmh);
static void set_field_offset(oop dmh, int value);
// Testers
static bool is_subclass(Klass* klass) {
return klass->is_subclass_of(vmClasses::DirectMethodHandle_Accessor_klass());
}
static bool is_instance(oop obj);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
// Interface to java.lang.invoke.LambdaForm objects
// (These are a private interface for managing adapter code generation.)


@@ -320,6 +320,14 @@ inline bool java_lang_invoke_DirectMethodHandle::is_instance(oop obj) {
return obj != nullptr && is_subclass(obj->klass());
}
inline bool java_lang_invoke_DirectMethodHandle_StaticAccessor::is_instance(oop obj) {
return obj != NULL && is_subclass(obj->klass());
}
inline bool java_lang_invoke_DirectMethodHandle_Accessor::is_instance(oop obj) {
return obj != NULL && is_subclass(obj->klass());
}
inline bool java_lang_Module::is_instance(oop obj) {
return obj != nullptr && obj->klass() == vmClasses::Module_klass();
}


@@ -86,6 +86,7 @@ InstanceKlass* KlassFactory::check_shared_class_file_load_hook(
loader_data,
&cl_info,
ClassFileParser::BROADCAST, // publicity level
false,
CHECK_NULL);
const ClassInstanceInfo* cl_inst_info = cl_info.class_hidden_info_ptr();
InstanceKlass* new_ik = parser.create_instance_klass(true, // changed_by_loadhook
@@ -171,6 +172,7 @@ InstanceKlass* KlassFactory::create_from_stream(ClassFileStream* stream,
Symbol* name,
ClassLoaderData* loader_data,
const ClassLoadInfo& cl_info,
const bool pick_newest,
TRAPS) {
assert(stream != nullptr, "invariant");
assert(loader_data != nullptr, "invariant");
@@ -200,6 +202,7 @@ InstanceKlass* KlassFactory::create_from_stream(ClassFileStream* stream,
loader_data,
&cl_info,
ClassFileParser::BROADCAST, // publicity level
pick_newest,
CHECK_NULL);
const ClassInstanceInfo* cl_inst_info = cl_info.class_hidden_info_ptr();


@@ -64,6 +64,7 @@ class KlassFactory : AllStatic {
Symbol* name,
ClassLoaderData* loader_data,
const ClassLoadInfo& cl_info,
const bool pick_newest,
TRAPS);
static InstanceKlass* check_shared_class_file_load_hook(
InstanceKlass* ik,


@@ -218,6 +218,21 @@ void LoaderConstraintTable::add_loader_constraint(Symbol* name, InstanceKlass* k
}
}
// (DCEVM) update constraint entries to new classes, called from dcevm redefinition code only
void LoaderConstraintTable::update_after_redefinition() {
auto update_old = [&] (SymbolHandle& key, ConstraintSet& set) {
int len = set.num_constraints();
for (int i = 0; i < len; i++) {
LoaderConstraint* probe = set.constraint_at(i);
if (probe->klass() != NULL) {
// We swap the class with the newest version with an assumption that the hash will be the same
probe->set_klass((InstanceKlass*) probe->klass()->newest_version());
}
}
};
_loader_constraint_table.iterate_all(update_old);
}
class PurgeUnloadedConstraints : public StackObj {
public:
bool do_entry(SymbolHandle& name, ConstraintSet& set) {


@@ -44,6 +44,9 @@ private:
LoaderConstraint* pp2, InstanceKlass* klass);
public:
static void initialize();
// (DCEVM) update all klasses with newest version
static void update_after_redefinition();
// Check class loader constraints
static bool add_entry(Symbol* name, InstanceKlass* klass1, ClassLoaderData* loader1,
InstanceKlass* klass2, ClassLoaderData* loader2);


@@ -300,7 +300,7 @@ static void verify_dictionary_entry(Symbol* class_name, InstanceKlass* k) {
Dictionary* dictionary = loader_data->dictionary();
assert(class_name == k->name(), "Must be the same");
InstanceKlass* kk = dictionary->find_class(JavaThread::current(), class_name);
assert(kk == k, "should be present in dictionary");
assert((kk == k && !k->is_redefining()) || (k->is_redefining() && kk == k->old_version()), "should be present in dictionary");
}
#endif
@@ -337,6 +337,7 @@ Klass* SystemDictionary::resolve_or_fail(Symbol* class_name, Handle class_loader
if (HAS_PENDING_EXCEPTION || klass == nullptr) {
handle_resolution_exception(class_name, throw_error, CHECK_NULL);
}
assert(klass == NULL || klass->new_version() == NULL || klass->newest_version()->is_redefining(), "must be");
return klass;
}
@@ -785,11 +786,14 @@ InstanceKlass* SystemDictionary::resolve_hidden_class_from_stream(
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS) {
EventClassLoad class_load_start_event;
ClassLoaderData* loader_data;
bool is_redefining = (old_klass != NULL);
// - for hidden classes that are not strong: create a new CLD that has a class holder and
// whose loader is the Lookup class's loader.
// - for hidden class: add the class to the Lookup class's loader's CLD.
@@ -804,8 +808,13 @@ InstanceKlass* SystemDictionary::resolve_hidden_class_from_stream(
class_name,
loader_data,
cl_info,
is_redefining, // pick_newest
CHECK_NULL);
assert(k != nullptr, "no klass created");
assert(k != NULL, "no klass created");
if (is_redefining && k != nullptr) {
k->set_redefining(true);
k->set_old_version(old_klass);
}
// Hidden classes that are not strong must update ClassLoaderData holder
// so that they can be unloaded when the mirror is no longer referenced.
@@ -841,10 +850,12 @@ InstanceKlass* SystemDictionary::resolve_class_from_stream(
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS) {
HandleMark hm(THREAD);
bool is_redefining = (old_klass != NULL);
ClassLoaderData* loader_data = register_loader(class_loader);
// Classloaders that support parallelism, e.g. bootstrap classloader,
@@ -859,6 +870,7 @@ InstanceKlass* SystemDictionary::resolve_class_from_stream(
InstanceKlass* k = nullptr;
#if INCLUDE_CDS
// FIXME: (DCEVM) what to do during redefinition?
if (!CDSConfig::is_dumping_static_archive()) {
k = SystemDictionaryShared::lookup_from_stream(class_name,
class_loader,
@@ -869,7 +881,12 @@ InstanceKlass* SystemDictionary::resolve_class_from_stream(
#endif
if (k == nullptr) {
k = KlassFactory::create_from_stream(st, class_name, loader_data, cl_info, CHECK_NULL);
k = KlassFactory::create_from_stream(st, class_name, loader_data, cl_info, is_redefining, CHECK_NULL);
}
if (is_redefining && k != nullptr) {
k->set_redefining(true);
k->set_old_version(old_klass);
}
assert(k != nullptr, "no klass created");
@@ -880,10 +897,10 @@ InstanceKlass* SystemDictionary::resolve_class_from_stream(
// If a class loader supports parallel classloading, handle parallel define requests.
// find_or_define_instance_class may return a different InstanceKlass,
// in which case the old k would be deallocated
if (is_parallelCapable(class_loader)) {
if (is_parallelCapable(class_loader) && !is_redefining) {
k = find_or_define_instance_class(h_name, class_loader, k, CHECK_NULL);
} else {
define_instance_class(k, class_loader, THREAD);
define_instance_class(k, old_klass, class_loader, THREAD);
// If defining the class throws an exception register 'k' for cleanup.
if (HAS_PENDING_EXCEPTION) {
@@ -903,11 +920,12 @@ InstanceKlass* SystemDictionary::resolve_from_stream(ClassFileStream* st,
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS) {
if (cl_info.is_hidden()) {
return resolve_hidden_class_from_stream(st, class_name, class_loader, cl_info, CHECK_NULL);
return resolve_hidden_class_from_stream(st, class_name, class_loader, cl_info, old_klass, CHECK_NULL);
} else {
return resolve_class_from_stream(st, class_name, class_loader, cl_info, CHECK_NULL);
return resolve_class_from_stream(st, class_name, class_loader, cl_info, old_klass, CHECK_NULL);
}
}
@@ -1335,10 +1353,11 @@ static void post_class_define_event(InstanceKlass* k, const ClassLoaderData* def
}
}
void SystemDictionary::define_instance_class(InstanceKlass* k, Handle class_loader, TRAPS) {
void SystemDictionary::define_instance_class(InstanceKlass* k, InstanceKlass* old_klass, Handle class_loader, TRAPS) {
ClassLoaderData* loader_data = k->class_loader_data();
assert(loader_data->class_loader() == class_loader(), "they must be the same");
bool is_redefining = (old_klass != NULL);
// Bootstrap and other parallel classloaders don't acquire a lock,
// they use placeholder token.
@@ -1360,7 +1379,17 @@ void SystemDictionary::define_instance_class(InstanceKlass* k, Handle class_load
// classloader lock held
// Parallel classloaders will call find_or_define_instance_class
// which will require a token to perform the define class
check_constraints(k, loader_data, true, CHECK);
if (is_redefining) {
// Update all dictionaries containing old_klass to point to the new klass;
// the outcome must match that of standard redefinition, which does not create a new Klass.
ClassLoaderDataGraph_lock->lock();
Symbol* name_h = k->name();
bool ok = ClassLoaderDataGraph::dictionary_classes_do_update_klass(THREAD, name_h, k, old_klass);
ClassLoaderDataGraph_lock->unlock();
assert(ok, "must have found the old class and updated it");
}
check_constraints(k, loader_data, !is_redefining, CHECK);
// Register class just loaded with class loader (placed in ArrayList)
// Note we do this before updating the dictionary, as this can
@@ -1383,7 +1412,7 @@ void SystemDictionary::define_instance_class(InstanceKlass* k, Handle class_load
update_dictionary(THREAD, k, loader_data);
// notify jvmti
if (JvmtiExport::should_post_class_load()) {
if (!is_redefining && JvmtiExport::should_post_class_load()) {
JvmtiExport::post_class_load(THREAD, k);
}
post_class_define_event(k, loader_data);
@@ -1455,7 +1484,7 @@ InstanceKlass* SystemDictionary::find_or_define_helper(Symbol* class_name, Handl
}
}
define_instance_class(k, class_loader, THREAD);
define_instance_class(k, NULL, class_loader, THREAD);
// definer must notify any waiting threads
{
@@ -1493,6 +1522,14 @@ InstanceKlass* SystemDictionary::find_or_define_instance_class(Symbol* class_nam
}
// (DCEVM) - remove from klass hierarchy
void SystemDictionary::remove_from_hierarchy(InstanceKlass* k) {
assert(k != NULL, "just checking");
// remove receiver from sibling list
k->remove_from_sibling_list();
}
// ----------------------------------------------------------------------------
// GC support
@@ -1600,7 +1637,7 @@ void SystemDictionary::check_constraints(InstanceKlass* k,
// else - ok, class loaded by a different thread in parallel.
// We should only have found it if it was done loading and ok to use.
if ((defining == true) || (k != check)) {
if ((defining == true) || (k != check && (!AllowEnhancedClassRedefinition || k->old_version() != check))) {
throwException = true;
ss.print("loader %s", loader_data->loader_name_and_id());
ss.print(" attempted duplicate %s definition for %s. (%s)",


@@ -125,6 +125,7 @@ class SystemDictionary : AllStatic {
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS);
// Resolve a class from stream (called by jni_DefineClass and JVM_DefineClass)
@@ -133,6 +134,7 @@ class SystemDictionary : AllStatic {
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS);
static oop get_system_class_loader_impl(TRAPS);
@@ -144,6 +146,7 @@ class SystemDictionary : AllStatic {
Symbol* class_name,
Handle class_loader,
const ClassLoadInfo& cl_info,
InstanceKlass* old_klass,
TRAPS);
// Lookup an already loaded class. If not found null is returned.
@@ -203,6 +206,9 @@ class SystemDictionary : AllStatic {
// Initialization
static void initialize(TRAPS);
// (DCEVM) Enhanced class redefinition
static void remove_from_hierarchy(InstanceKlass* k);
public:
// Returns java system loader
static oop java_system_loader();
@@ -297,7 +303,7 @@ private:
static Klass* resolve_array_class_or_null(Symbol* class_name,
Handle class_loader,
TRAPS);
static void define_instance_class(InstanceKlass* k, Handle class_loader, TRAPS);
static void define_instance_class(InstanceKlass* k, InstanceKlass* old_klass, Handle class_loader, TRAPS);
static InstanceKlass* find_or_define_helper(Symbol* class_name,
Handle class_loader,
InstanceKlass* k, TRAPS);


@@ -373,6 +373,7 @@ class ClassVerifier : public StackObj {
VerificationType object_type() const;
InstanceKlass* _klass_to_verify;
InstanceKlass* _klass; // the class being verified
methodHandle _method; // current method being verified
VerificationType _this_type; // the verification type of the current class


@@ -110,6 +110,8 @@
\
/* support for dynamic typing */ \
do_klass(DirectMethodHandle_klass, java_lang_invoke_DirectMethodHandle ) \
do_klass(DirectMethodHandle_StaticAccessor_klass, java_lang_invoke_DirectMethodHandle_StaticAccessor ) \
do_klass(DirectMethodHandle_Accessor_klass, java_lang_invoke_DirectMethodHandle_Accessor ) \
do_klass(MethodHandle_klass, java_lang_invoke_MethodHandle ) \
do_klass(VarHandle_klass, java_lang_invoke_VarHandle ) \
do_klass(MemberName_klass, java_lang_invoke_MemberName ) \


@@ -259,3 +259,13 @@ BasicType vmClasses::box_klass_type(Klass* k) {
}
return T_OBJECT;
}
bool vmClasses::update_vm_klass(InstanceKlass* old_klass, InstanceKlass* new_klass) {
for (int id = static_cast<int>(vmClassID::FIRST); id < static_cast<int>(vmClassID::LIMIT); id++) {
if (_klasses[id] == old_klass) {
_klasses[id] = new_klass;
return true;
}
}
return false;
}


@@ -108,6 +108,10 @@ public:
static bool Cloneable_klass_loaded() { return is_loaded(VM_CLASS_AT(Cloneable_klass)); }
static bool Parameter_klass_loaded() { return is_loaded(VM_CLASS_AT(reflect_Parameter_klass)); }
static bool ClassLoader_klass_loaded() { return is_loaded(VM_CLASS_AT(ClassLoader_klass)); }
// (DCEVM) vmClasses may be modified during enhanced redefinition
static bool update_vm_klass(InstanceKlass* old_klass, InstanceKlass* new_klass);
};
#endif // SHARE_CLASSFILE_VMCLASSES_HPP


@@ -313,6 +313,8 @@ class SerializeClosure;
template(java_lang_invoke_CallSite, "java/lang/invoke/CallSite") \
template(java_lang_invoke_ConstantCallSite, "java/lang/invoke/ConstantCallSite") \
template(java_lang_invoke_DirectMethodHandle, "java/lang/invoke/DirectMethodHandle") \
template(java_lang_invoke_DirectMethodHandle_StaticAccessor, "java/lang/invoke/DirectMethodHandle$StaticAccessor") \
template(java_lang_invoke_DirectMethodHandle_Accessor, "java/lang/invoke/DirectMethodHandle$Accessor") \
template(java_lang_invoke_MutableCallSite, "java/lang/invoke/MutableCallSite") \
template(java_lang_invoke_VolatileCallSite, "java/lang/invoke/VolatileCallSite") \
template(java_lang_invoke_MethodHandle, "java/lang/invoke/MethodHandle") \
@@ -386,6 +388,8 @@ class SerializeClosure;
template(interrupt_method_name, "interrupt") \
template(exit_method_name, "exit") \
template(remove_method_name, "remove") \
template(registerNatives_method_name, "registerNatives") \
template(initIDs_method_name, "initIDs") \
template(parent_name, "parent") \
template(maxPriority_name, "maxPriority") \
template(shutdown_name, "shutdown") \
@@ -512,6 +516,9 @@ class SerializeClosure;
template(maxThawingSize_name, "maxThawingSize") \
template(lockStackSize_name, "lockStackSize") \
template(objectWaiter_name, "objectWaiter") \
template(static_offset_name, "staticOffset") \
template(static_base_name, "staticBase") \
template(field_offset_name, "fieldOffset") \
\
/* name symbols needed by intrinsics */ \
VM_INTRINSICS_DO(VM_INTRINSIC_IGNORE, VM_SYMBOL_IGNORE, template, VM_SYMBOL_IGNORE, VM_ALIAS_IGNORE) \


@@ -144,6 +144,8 @@ CompileLog** CompileBroker::_compiler2_logs = nullptr;
volatile jint CompileBroker::_compilation_id = 0;
volatile jint CompileBroker::_osr_compilation_id = 0;
volatile jint CompileBroker::_native_compilation_id = 0;
volatile bool CompileBroker::_compilation_stopped = false;
volatile jint CompileBroker::_active_compilations = 0;
// Performance counters
PerfCounter* CompileBroker::_perf_total_compilation = nullptr;
@@ -1968,6 +1970,17 @@ void CompileBroker::compiler_thread_loop() {
if (method()->number_of_breakpoints() == 0) {
// Compile the method.
if ((UseCompiler || AlwaysCompileLoopMethods) && CompileBroker::should_compile_new_jobs()) {
// TODO: review usage of CompileThread_lock (DCEVM)
if (ciObjectFactory::is_reinitialize_vm_klasses()) {
ASSERT_IN_VM;
MutexLocker only_one(CompileThread_lock);
if (ciObjectFactory::is_reinitialize_vm_klasses()) {
ciObjectFactory::reinitialize_vm_classes();
}
}
invoke_compiler_on_method(task);
thread->start_idle_timer();
} else {
@@ -2324,8 +2337,19 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
if (WhiteBoxAPI && WhiteBox::compilation_locked) {
whitebox_lock_compilation();
}
comp->compile_method(&ci_env, target, osr_bci, true, directive);
if (AllowEnhancedClassRedefinition) {
{
MonitorLocker locker(DcevmCompilation_lock, Mutex::_no_safepoint_check_flag);
while (_compilation_stopped) {
locker.wait();
}
Atomic::add(&_active_compilations, 1);
}
comp->compile_method(&ci_env, target, osr_bci, true, directive);
Atomic::sub(&_active_compilations, 1);
} else {
comp->compile_method(&ci_env, target, osr_bci, true, directive);
}
/* Repeat compilation without installing code for profiling purposes */
int repeat_compilation_count = directive->RepeatCompilationOption;
while (repeat_compilation_count > 0) {
@@ -2334,6 +2358,7 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
comp->compile_method(&ci_env, target, osr_bci, false, directive);
repeat_compilation_count--;
}
}
@@ -2911,3 +2936,30 @@ void CompileBroker::print_heapinfo(outputStream* out, const char* function, size
}
out->print_cr("\n__ CodeHeapStateAnalytics total duration %10.3f seconds _________\n", ts_total.seconds());
}
void CompileBroker::stopCompilationBeforeEnhancedRedefinition() {
// There are hard-to-fix C1/C2 race conditions with dcevm. The easiest solution
// is to stop compilation.
if (AllowEnhancedClassRedefinition) {
DcevmCompilation_lock->lock_without_safepoint_check();
_compilation_stopped = true;
while (_active_compilations > 0) {
DcevmCompilation_lock->wait_without_safepoint_check(10);
if (_active_compilations > 0) {
DcevmCompilation_lock->unlock(); // must unlock to run following VM op
VM_ForceSafepoint forceSafePoint; // force safepoint to avoid deadlock
VMThread::execute(&forceSafePoint);
DcevmCompilation_lock->lock_without_safepoint_check();
}
}
DcevmCompilation_lock->unlock();
}
}
void CompileBroker::releaseCompilationAfterEnhancedRedefinition() {
if (AllowEnhancedClassRedefinition) {
MonitorLocker locker(DcevmCompilation_lock, Mutex::_no_safepoint_check_flag);
_compilation_stopped = false;
locker.notify_all();
}
}


@@ -200,6 +200,9 @@ class CompileBroker: AllStatic {
static volatile jint _osr_compilation_id;
static volatile jint _native_compilation_id;
static volatile bool _compilation_stopped;
static volatile jint _active_compilations;
static CompileQueue* _c2_compile_queue;
static CompileQueue* _c1_compile_queue;
@@ -450,6 +453,9 @@ public:
// CodeHeap State Analytics.
static void print_info(outputStream *out);
static void print_heapinfo(outputStream *out, const char* function, size_t granularity);
static void stopCompilationBeforeEnhancedRedefinition();
static void releaseCompilationAfterEnhancedRedefinition();
};
// In order to achieve a maximally fast warmup we attempt to compile important methods as soon as all


@@ -1915,6 +1915,24 @@ public:
}
};
class G1IterateObjectClosureTask : public WorkerTask {
private:
ObjectClosure* _cl;
G1CollectedHeap* _g1h;
HeapRegionClaimer _hrclaimer;
public:
G1IterateObjectClosureTask(ObjectClosure* cl, G1CollectedHeap* g1h) : WorkerTask("IterateObject Closure"),
_cl(cl), _g1h(g1h), _hrclaimer(g1h->workers()->active_workers()) { }
virtual void work(uint worker_id) {
Thread *thread = Thread::current();
HandleMark hm(thread); // make sure any handles created are deleted
ResourceMark rm(thread);
IterateObjectClosureRegionClosure blk(_cl);
_g1h->heap_region_par_iterate_from_worker_offset(&blk, &_hrclaimer, worker_id);
}
};
void G1CollectedHeap::object_iterate(ObjectClosure* cl) {
IterateObjectClosureRegionClosure blk(cl);
heap_region_iterate(&blk);
@@ -1956,6 +1974,11 @@ void G1CollectedHeap::heap_region_iterate(G1HeapRegionIndexClosure* cl) const {
_hrm.iterate(cl);
}
void G1CollectedHeap::object_par_iterate(ObjectClosure* cl) {
G1IterateObjectClosureTask iocl_task(cl, this);
workers()->run_task(&iocl_task);
}
void G1CollectedHeap::heap_region_par_iterate_from_worker_offset(G1HeapRegionClosure* cl,
G1HeapRegionClaimer *hrclaimer,
uint worker_id) const {


@@ -160,6 +160,7 @@ class G1CollectedHeap : public CollectedHeap {
// Closures used in implementation.
friend class G1EvacuateRegionsTask;
friend class G1PLABAllocator;
friend class G1FullGCPrepareTask;
// Other related classes.
friend class G1HeapPrinterMark;
@@ -1064,6 +1065,7 @@ public:
// Iteration functions.
void object_iterate_parallel(ObjectClosure* cl, uint worker_id, G1HeapRegionClaimer* claimer);
void object_par_iterate(ObjectClosure* cl);
// Iterate over all objects, calling "cl.do_object" on each.
void object_iterate(ObjectClosure* cl) override;


@@ -355,14 +355,18 @@ void G1FullCollector::phase2_prepare_compaction() {
// Try to avoid OOM immediately after Full GC in case there are no free regions
// left after determining the result locations (i.e. this phase). Prepare to
// maximally compact the tail regions of the compaction queues serially.
if (scope()->do_maximal_compaction() || !has_free_compaction_targets) {
phase2c_prepare_serial_compaction();
if (!Universe::is_redefining_gc_run()) {
if (scope()->do_maximal_compaction() || !has_free_compaction_targets) {
phase2c_prepare_serial_compaction();
if (scope()->do_maximal_compaction() &&
has_humongous() &&
serial_compaction_point()->has_regions()) {
phase2d_prepare_humongous_compaction();
if (scope()->do_maximal_compaction() &&
has_humongous() &&
serial_compaction_point()->has_regions()) {
phase2d_prepare_humongous_compaction();
}
}
} else {
phase2c_prepare_serial_compaction_dcevm();
}
}
@@ -473,6 +477,27 @@ void G1FullCollector::phase2d_prepare_humongous_compaction() {
}
}
void G1FullCollector::phase2c_prepare_serial_compaction_dcevm() {
GCTraceTime(Debug, gc, phases) debug("Phase 2: Prepare Serial Compaction", scope()->timer());
for (uint i = 0; i < workers(); i++) {
G1FullGCCompactionPoint* cp = compaction_point(i);
// collect the remaining, not-yet-forwarded rescued oops using the serial compaction point
while (cp->last_rescued_oop() < cp->rescued_oops()->length()) {
HeapRegion* hr = G1CollectedHeap::heap()->new_region(HeapRegion::GrainBytes / HeapWordSize, HeapRegionType::Eden, true, G1NUMA::AnyNodeIndex);
if (hr == nullptr) {
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "G1 - not enough free regions after redefinition.");
}
set_compaction_top(hr, hr->bottom());
cp->add(hr);
cp->forward_rescued();
cp->update();
}
}
}
void G1FullCollector::phase3_adjust_pointers() {
// Adjust the pointers to reflect the new locations
GCTraceTime(Info, gc, phases) info("Phase 3: Adjust pointers", scope()->timer());
@@ -488,8 +513,12 @@ void G1FullCollector::phase4_do_compaction() {
run_task(&task);
// Serial compact to avoid OOM when very few free regions.
if (serial_compaction_point()->has_regions()) {
task.serial_compaction();
if (!Universe::is_redefining_gc_run()) {
if (serial_compaction_point()->has_regions()) {
task.serial_compaction();
}
} else {
task.serial_compaction_dcevm();
}
if (!_humongous_compaction_regions.is_empty()) {


@@ -160,6 +160,7 @@ private:
bool phase2b_forward_oops();
void phase2c_prepare_serial_compaction();
void phase2d_prepare_humongous_compaction();
void phase2c_prepare_serial_compaction_dcevm();
void phase3_adjust_pointers();
void phase4_do_compaction();


@@ -30,6 +30,7 @@
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/shared/fullGCForwarding.inline.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
#include "gc/shared/dcevmSharedGC.hpp"
#include "logging/log.hpp"
#include "oops/oop.inline.hpp"
#include "utilities/ticks.hpp"
@@ -88,10 +89,28 @@ void G1FullGCCompactTask::compact_region(G1HeapRegion* hr) {
void G1FullGCCompactTask::work(uint worker_id) {
Ticks start = Ticks::now();
GrowableArray<G1HeapRegion*>* compaction_queue = collector()->compaction_point(worker_id)->regions();
for (GrowableArrayIterator<G1HeapRegion*> it = compaction_queue->begin();
it != compaction_queue->end();
++it) {
compact_region(*it);
if (!Universe::is_redefining_gc_run()) {
GrowableArray<G1HeapRegion*>* compaction_queue = collector()->compaction_point(worker_id)->regions();
for (GrowableArrayIterator<G1HeapRegion*> it = compaction_queue->begin();
it != compaction_queue->end();
++it) {
compact_region(*it);
}
} else {
GrowableArrayIterator<HeapWord*> rescue_oops_it = collector()->compaction_point(worker_id)->rescued_oops()->begin();
GrowableArray<HeapWord*>* rescued_oops_values = collector()->compaction_point(worker_id)->rescued_oops_values();
for (GrowableArrayIterator<G1HeapRegion*> it = compaction_queue->begin();
it != compaction_queue->end();
++it) {
compact_region_dcevm(*it, rescued_oops_values, &rescue_oops_it);
}
assert(rescue_oops_it.at_end(), "Must be at end");
G1FullGCCompactionPoint* cp = collector()->compaction_point(worker_id);
if (cp->last_rescued_oop() > 0) {
DcevmSharedGC::copy_rescued_objects_back(rescued_oops_values, 0, cp->last_rescued_oop(), false);
}
}
}
@@ -149,4 +168,76 @@ void G1FullGCCompactTask::free_non_overlapping_regions(uint src_start_idx, uint
G1HeapRegion* hr = _g1h->region_at(i);
_g1h->free_humongous_region(hr, nullptr);
}
void G1FullGCCompactTask::compact_region_dcevm(G1HeapRegion* hr, GrowableArray<HeapWord*>* rescued_oops_values,
GrowableArrayIterator<HeapWord*>* rescue_oops_it) {
assert(!hr->is_humongous(), "Should be no humongous regions in compaction queue");
ResourceMark rm;
if (!collector()->is_free(hr->hrm_index())) {
// The compaction closure not only copies the object to the new
// location, but also clears the bitmap for it. This is needed
// for bitmap verification and to be able to use the bitmap
// for evacuation failures in the next young collection. Testing
// showed that it was better overall to clear bit by bit, compared
// to clearing the whole region at the end. This difference was
// clearly seen for regions with few marks.
G1CompactRegionClosureDcevm compact(collector()->mark_bitmap(), rescued_oops_values, rescue_oops_it);
hr->apply_to_marked_objects(collector()->mark_bitmap(), &compact);
}
hr->reset_compacted_after_full_gc(_collector->compaction_top(hr));
}
void G1FullGCCompactTask::serial_compaction_dcevm() {
GCTraceTime(Debug, gc, phases) tm("Phase 4: Serial Compaction", collector()->scope()->timer());
// Clear allocated resources at compact points now, since all rescued oops are copied to destination.
for (uint i = 0; i < collector()->workers(); i++) {
G1FullGCCompactionPoint* cp = collector()->compaction_point(i);
DcevmSharedGC::clear_rescued_objects_heap(cp->rescued_oops_values());
}
}
void G1FullGCCompactTask::G1CompactRegionClosureDcevm::clear_in_bitmap(oop obj) {
assert(_bitmap->is_marked(obj), "Should only compact marked objects");
_bitmap->clear(obj);
}
size_t G1FullGCCompactTask::G1CompactRegionClosureDcevm::apply(oop obj) {
size_t size = obj->size();
if (obj->is_forwarded()) {
HeapWord* destination = cast_from_oop<HeapWord*>(obj->forwardee());
// copy object and reinit its mark
HeapWord *obj_addr = cast_from_oop<HeapWord *>(obj);
if (!_rescue_oops_it->at_end() && **_rescue_oops_it == obj_addr) {
++(*_rescue_oops_it);
HeapWord *rescued_obj = NEW_C_HEAP_ARRAY(HeapWord, size, mtInternal);
Copy::aligned_disjoint_words(obj_addr, rescued_obj, size);
_rescued_oops_values->append(rescued_obj);
DEBUG_ONLY(Copy::fill_to_words(obj_addr, size, 0));
return size;
}
if (obj->klass()->new_version() != NULL) {
Klass *new_version = obj->klass()->new_version();
if (new_version->update_information() == NULL) {
Copy::aligned_conjoint_words(obj_addr, destination, size);
cast_to_oop(destination)->set_klass(new_version);
} else {
DcevmSharedGC::update_fields(obj, cast_to_oop(destination));
}
cast_to_oop(destination)->init_mark();
assert(cast_to_oop(destination)->klass() != NULL, "should have a class");
return size;
}
Copy::aligned_conjoint_words(obj_addr, destination, size);
cast_to_oop(destination)->init_mark();
assert(cast_to_oop(destination)->klass() != NULL, "should have a class");
}
clear_in_bitmap(obj);
return size;
}


@@ -45,6 +45,8 @@ class G1FullGCCompactTask : public G1FullGCTask {
void free_non_overlapping_regions(uint src_start_idx, uint dest_start_idx, uint num_regions);
static void copy_object_to_new_location(oop obj);
void compact_region_dcevm(G1HeapRegion* hr, GrowableArray<HeapWord*>* rescued_oops_values,
GrowableArrayIterator<HeapWord*>* rescue_oops_it);
public:
G1FullGCCompactTask(G1FullCollector* collector) :
@@ -56,6 +58,7 @@ public:
void work(uint worker_id);
void serial_compaction();
void humongous_compaction();
void serial_compaction_dcevm();
class G1CompactRegionClosure : public StackObj {
G1CMBitMap* _bitmap;
@@ -64,6 +67,22 @@ public:
G1CompactRegionClosure(G1CMBitMap* bitmap) : _bitmap(bitmap) { }
size_t apply(oop object);
};
class G1CompactRegionClosureDcevm : public StackObj {
G1CMBitMap* _bitmap;
GrowableArray<HeapWord*>* _rescued_oops_values;
GrowableArrayIterator<HeapWord*>* _rescue_oops_it;
void clear_in_bitmap(oop object);
public:
G1CompactRegionClosureDcevm(G1CMBitMap* bitmap,
GrowableArray<HeapWord*>* rescued_oops_values,
GrowableArrayIterator<HeapWord*>* rescue_oops_it) :
_bitmap(bitmap),
_rescued_oops_values(rescued_oops_values),
_rescue_oops_it(rescue_oops_it)
{ }
size_t apply(oop object);
};
};
#endif // SHARE_GC_G1_G1FULLGCCOMPACTTASK_HPP


@@ -34,13 +34,18 @@ G1FullGCCompactionPoint::G1FullGCCompactionPoint(G1FullCollector* collector, Pre
_collector(collector),
_current_region(nullptr),
_compaction_top(nullptr),
_preserved_stack(preserved_stack) {
_preserved_stack(preserved_stack),
_last_rescued_oop(0) {
_compaction_regions = new (mtGC) GrowableArray<G1HeapRegion*>(32, mtGC);
_compaction_region_iterator = _compaction_regions->begin();
_rescued_oops = new (mtGC) GrowableArray<HeapWord*>(128, mtGC);
_rescued_oops_values = new (mtGC) GrowableArray<HeapWord*>(128, mtGC);
}
G1FullGCCompactionPoint::~G1FullGCCompactionPoint() {
delete _compaction_regions;
delete _rescued_oops;
delete _rescued_oops_values;
}
void G1FullGCCompactionPoint::update() {
@@ -80,6 +85,14 @@ GrowableArray<G1HeapRegion*>* G1FullGCCompactionPoint::regions() {
return _compaction_regions;
}
GrowableArray<HeapWord*>* G1FullGCCompactionPoint::rescued_oops() {
return _rescued_oops;
}
GrowableArray<HeapWord*>* G1FullGCCompactionPoint::rescued_oops_values() {
return _rescued_oops_values;
}
bool G1FullGCCompactionPoint::object_will_fit(size_t size) {
size_t space_left = pointer_delta(_current_region->end(), _compaction_top);
return size <= space_left;
@@ -216,3 +229,59 @@ uint G1FullGCCompactionPoint::find_contiguous_before(G1HeapRegion* hr, uint num_
// Return the index of the first region in the range of contiguous regions.
return range_end - contiguous_region_count;
}
HeapWord* G1FullGCCompactionPoint::forward_compact_top(size_t size) {
assert(_current_region != NULL, "Must have been initialized");
// Ensure the object fits in the current region.
while (!object_will_fit(size)) {
if (!_compaction_region_iterator.has_next()) {
return NULL;
}
switch_region();
}
return _compaction_top;
}
void G1FullGCCompactionPoint::forward_dcevm(oop object, size_t size, bool force_forward) {
assert(_current_region != NULL, "Must have been initialized");
// Ensure the object fits in the current region.
while (!object_will_fit(size)) {
switch_region();
}
// Store a forwarding pointer if the object should be moved.
if (cast_from_oop<HeapWord*>(object) != _compaction_top || force_forward) {
object->forward_to(cast_to_oop(_compaction_top));
assert(object->is_forwarded(), "must be forwarded");
} else {
assert(!object->is_forwarded(), "must not be forwarded");
}
// Update compaction values.
_compaction_top += size;
_current_region->update_bot_for_block(_compaction_top - size, _compaction_top);
}
void G1FullGCCompactionPoint::forward_rescued() {
int i = _last_rescued_oop;
for (; i < rescued_oops()->length(); i++) {
HeapWord* q = rescued_oops()->at(i);
size_t size = cast_to_oop(q)->size();
// (DCEVM) There is a new version of the class of q => different size
if (cast_to_oop(q)->klass()->new_version() != NULL) {
// assert(size != new_size, "instances without changed size have to be updated prior to GC run");
size = cast_to_oop(q)->size_given_klass(cast_to_oop(q)->klass()->new_version());
}
if (forward_compact_top(size) == NULL) {
break;
}
forward_dcevm(cast_to_oop(q), size, true);
}
_last_rescued_oop = i;
}


@@ -41,6 +41,9 @@ class G1FullGCCompactionPoint : public CHeapObj<mtGC> {
PreservedMarks* _preserved_stack;
GrowableArray<G1HeapRegion*>* _compaction_regions;
GrowableArrayIterator<G1HeapRegion*> _compaction_region_iterator;
GrowableArray<HeapWord*>* _rescued_oops;
GrowableArray<HeapWord*>* _rescued_oops_values;
int _last_rescued_oop;
bool object_will_fit(size_t size);
void initialize_values();
@@ -60,6 +63,8 @@ public:
void forward_humongous(G1HeapRegion* hr);
void add(G1HeapRegion* hr);
void add_humongous(G1HeapRegion* hr);
HeapWord* forward_compact_top(size_t size);
void forward_dcevm(oop object, size_t size, bool force_forward);
void remove_at_or_above(uint bottom);
G1HeapRegion* current_region();
@@ -75,6 +80,12 @@ public:
assert(_preserved_stack == nullptr, "only initialize once");
_preserved_stack = preserved_stack;
}
GrowableArray<HeapWord*>* rescued_oops();
GrowableArray<HeapWord*>* rescued_oops_values();
void forward_rescued();
int last_rescued_oop() { return _last_rescued_oop; }
};
#endif // SHARE_GC_G1_G1FULLGCCOMPACTIONPOINT_HPP


@@ -45,6 +45,7 @@ G1DetermineCompactionQueueClosure::G1DetermineCompactionQueueClosure(G1FullColle
bool G1FullGCPrepareTask::G1CalculatePointersClosure::do_heap_region(G1HeapRegion* hr) {
uint region_idx = hr->hrm_index();
assert(_collector->is_compaction_target(region_idx), "must be");
hr->set_processing_order(_region_processing_order++);
assert(!hr->is_humongous(), "must be");
@@ -82,6 +83,9 @@ void G1FullGCPrepareTask::work(uint worker_id) {
++it) {
closure.do_heap_region(*it);
}
if (Universe::is_redefining_gc_run()) {
compaction_point->forward_rescued();
}
compaction_point->update();
// Determine if there are any unused compaction targets. This is only the case if
// there are
@@ -100,7 +104,8 @@ G1FullGCPrepareTask::G1CalculatePointersClosure::G1CalculatePointersClosure(G1Fu
_g1h(G1CollectedHeap::heap()),
_collector(collector),
_bitmap(collector->mark_bitmap()),
_cp(cp) { }
_cp(cp),
_region_processing_order(0) { }
G1FullGCPrepareTask::G1PrepareCompactLiveClosure::G1PrepareCompactLiveClosure(G1FullGCCompactionPoint* cp) :
@@ -114,7 +119,61 @@ size_t G1FullGCPrepareTask::G1PrepareCompactLiveClosure::apply(oop object) {
void G1FullGCPrepareTask::G1CalculatePointersClosure::prepare_for_compaction(G1HeapRegion* hr) {
if (!_collector->is_free(hr->hrm_index())) {
G1PrepareCompactLiveClosure prepare_compact(_cp);
hr->apply_to_marked_objects(_bitmap, &prepare_compact);
if (!Universe::is_redefining_gc_run()) {
G1PrepareCompactLiveClosure prepare_compact(_cp);
hr->apply_to_marked_objects(_bitmap, &prepare_compact);
} else {
G1PrepareCompactLiveClosureDcevm prepare_compact(_cp, hr->processing_order());
hr->apply_to_marked_objects(_bitmap, &prepare_compact);
}
}
}
G1FullGCPrepareTask::G1PrepareCompactLiveClosureDcevm::G1PrepareCompactLiveClosureDcevm(G1FullGCCompactionPoint* cp,
uint region_processing_order) :
_cp(cp),
_region_processing_order(region_processing_order) { }
size_t G1FullGCPrepareTask::G1PrepareCompactLiveClosureDcevm::apply(oop object) {
size_t size = object->size();
size_t forward_size = size;
// (DCEVM) There is a new version of the class of q => different size
if (object->klass()->new_version() != NULL) {
forward_size = object->size_given_klass(object->klass()->new_version());
}
HeapWord* compact_top = _cp->forward_compact_top(forward_size);
if (compact_top == NULL || must_rescue(object, cast_to_oop(compact_top))) {
_cp->rescued_oops()->append(cast_from_oop<HeapWord*>(object));
} else {
_cp->forward_dcevm(object, forward_size, (size != forward_size));
}
return size;
}
bool G1FullGCPrepareTask::G1PrepareCompactLiveClosureDcevm::must_rescue(oop old_obj, oop new_obj) {
// Only redefined objects may need to be rescued.
if (old_obj->klass()->new_version() == NULL) {
return false;
}
if (_region_processing_order > _cp->current_region()->processing_order()) {
return false;
}
if (_region_processing_order < _cp->current_region()->processing_order()) {
return true;
}
// old obj and new obj are within same region
size_t new_size = old_obj->size_given_klass(old_obj->klass()->new_version());
size_t original_size = old_obj->size();
// Rescue if the resized copy at the destination would extend past the end of the original object.
bool overlap = (cast_from_oop<HeapWord*>(old_obj) + original_size < cast_from_oop<HeapWord*>(new_obj) + new_size);
return overlap;
}

View File

@@ -79,6 +79,7 @@ private:
G1FullCollector* _collector;
G1CMBitMap* _bitmap;
G1FullGCCompactionPoint* _cp;
uint _region_processing_order;
void prepare_for_compaction(G1HeapRegion* hr);
@@ -96,6 +97,16 @@ private:
G1PrepareCompactLiveClosure(G1FullGCCompactionPoint* cp);
size_t apply(oop object);
};
class G1PrepareCompactLiveClosureDcevm : public StackObj {
G1FullGCCompactionPoint* _cp;
uint _region_processing_order;
bool must_rescue(oop old_obj, oop new_obj);
public:
G1PrepareCompactLiveClosureDcevm(G1FullGCCompactionPoint* cp, uint region_processing_order);
size_t apply(oop object);
};
};
// Closure to re-prepare objects in the serial compaction point queue regions for

View File

@@ -46,6 +46,9 @@ inline bool G1DetermineCompactionQueueClosure::should_compact(G1HeapRegion* hr)
if (hr->is_humongous() || hr->has_pinned_objects()) {
return false;
}
if (Universe::is_redefining_gc_run()) {
return true;
}
size_t live_words = _collector->live_words(hr->hrm_index());
size_t live_words_threshold = _collector->scope()->region_compaction_threshold();
// High live ratio region will not be compacted.

View File

@@ -199,6 +199,8 @@ private:
// The remembered set for this region.
G1HeapRegionRemSet* _rem_set;
uint _processing_order;
// Cached index of this region in the heap region sequence.
const uint _hrm_index;
@@ -433,6 +435,14 @@ public:
return _rem_set;
}
uint processing_order() {
return _processing_order;
}
void set_processing_order(uint processing_order) {
_processing_order = processing_order;
}
inline bool in_collection_set() const;
void prepare_remset_for_scan();

View File

@@ -0,0 +1,292 @@
/*
* Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "classfile/classLoaderDataGraph.hpp"
#include "classfile/javaClasses.hpp"
#include "classfile/stringTable.hpp"
#include "classfile/symbolTable.hpp"
#include "classfile/systemDictionary.hpp"
#include "classfile/vmSymbols.hpp"
#include "code/codeCache.hpp"
#include "code/icBuffer.hpp"
#include "compiler/oopMap.hpp"
#include "gc/serial/genMarkSweep.hpp"
#include "gc/serial/serialGcRefProcProxyTask.hpp"
#include "gc/shared/collectedHeap.inline.hpp"
#include "gc/shared/gcHeapSummary.hpp"
#include "gc/shared/gcTimer.hpp"
#include "gc/shared/gcTrace.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
#include "gc/shared/genCollectedHeap.hpp"
#include "gc/shared/generation.hpp"
#include "gc/shared/genOopClosures.inline.hpp"
#include "gc/shared/modRefBarrierSet.hpp"
#include "gc/shared/referencePolicy.hpp"
#include "gc/shared/referenceProcessorPhaseTimes.hpp"
#include "gc/shared/space.hpp"
#include "gc/shared/strongRootsScope.hpp"
#include "gc/shared/weakProcessor.hpp"
#include "gc/shared/dcevmSharedGC.hpp"
#include "memory/universe.hpp"
#include "oops/instanceRefKlass.hpp"
#include "oops/oop.inline.hpp"
#include "prims/jvmtiExport.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/javaThread.hpp"
#include "runtime/synchronizer.hpp"
#include "runtime/vmThread.hpp"
#include "utilities/copy.hpp"
#include "utilities/events.hpp"
#include "utilities/stack.inline.hpp"
#if INCLUDE_JVMCI
#include "jvmci/jvmci.hpp"
#endif
void GenMarkSweep::invoke_at_safepoint(ReferenceProcessor* rp, bool clear_all_softrefs) {
assert(SafepointSynchronize::is_at_safepoint(), "must be at a safepoint");
GenCollectedHeap* gch = GenCollectedHeap::heap();
#ifdef ASSERT
if (gch->soft_ref_policy()->should_clear_all_soft_refs()) {
assert(clear_all_softrefs, "Policy should have been checked earlier");
}
#endif
// hook up weak ref data so it can be used during Mark-Sweep
assert(ref_processor() == nullptr, "no stomping");
assert(rp != nullptr, "should be non-null");
set_ref_processor(rp);
gch->trace_heap_before_gc(_gc_tracer);
// Increment the invocation count
_total_invocations++;
// Capture used regions for each generation that will be
// subject to collection, so that card table adjustments can
// be made intelligently (see clear / invalidate further below).
gch->save_used_regions();
allocate_stacks();
mark_sweep_phase1(clear_all_softrefs);
mark_sweep_phase2();
// Don't add any more derived pointers during phase3
#if COMPILER2_OR_JVMCI
assert(DerivedPointerTable::is_active(), "Sanity");
DerivedPointerTable::set_active(false);
#endif
mark_sweep_phase3();
mark_sweep_phase4();
restore_marks();
// Set saved marks for allocation profiler (and other things? -- dld)
// (Should this be in general part?)
gch->save_marks();
deallocate_stacks();
MarkSweep::_string_dedup_requests->flush();
// If compaction completely evacuated the young generation then we
// can clear the card table. Otherwise, we must invalidate
// it (consider all cards dirty). In the future, we might consider doing
// compaction within generations only, and doing card-table sliding.
CardTableRS* rs = gch->rem_set();
Generation* old_gen = gch->old_gen();
// Clear/invalidate below make use of the "prev_used_regions" saved earlier.
if (gch->young_gen()->used() == 0) {
// We've evacuated the young generation.
rs->clear_into_younger(old_gen);
} else {
// Invalidate the cards corresponding to the currently used
// region and clear those corresponding to the evacuated region.
rs->invalidate_or_clear(old_gen);
}
gch->prune_scavengable_nmethods();
// refs processing: clean slate
set_ref_processor(nullptr);
// Update heap occupancy information which is used as
// input to soft ref clearing policy at the next gc.
Universe::heap()->update_capacity_and_used_at_gc();
// Signal that we have completed a visit to all live objects.
Universe::heap()->record_whole_heap_examined_timestamp();
gch->trace_heap_after_gc(_gc_tracer);
}
void GenMarkSweep::allocate_stacks() {
GenCollectedHeap* gch = GenCollectedHeap::heap();
// Scratch request on behalf of old generation; will do no allocation.
ScratchBlock* scratch = gch->gather_scratch(gch->old_gen(), 0);
// $$$ To cut a corner, we'll only use the first scratch block, and then
// revert to malloc.
if (scratch != nullptr) {
_preserved_count_max =
scratch->num_words * HeapWordSize / sizeof(PreservedMark);
} else {
_preserved_count_max = 0;
}
_preserved_marks = (PreservedMark*)scratch;
_preserved_count = 0;
}
void GenMarkSweep::deallocate_stacks() {
GenCollectedHeap* gch = GenCollectedHeap::heap();
gch->release_scratch();
_preserved_overflow_stack.clear(true);
_marking_stack.clear();
_objarray_stack.clear(true);
}
void GenMarkSweep::mark_sweep_phase1(bool clear_all_softrefs) {
// Recursively traverse all live objects and mark them
GCTraceTime(Info, gc, phases) tm("Phase 1: Mark live objects", _gc_timer);
GenCollectedHeap* gch = GenCollectedHeap::heap();
ClassLoaderDataGraph::verify_claimed_marks_cleared(ClassLoaderData::_claim_stw_fullgc_mark);
{
StrongRootsScope srs(0);
CLDClosure* weak_cld_closure = ClassUnloading ? nullptr : &follow_cld_closure;
MarkingCodeBlobClosure mark_code_closure(&follow_root_closure, !CodeBlobToOopClosure::FixRelocations, true);
gch->process_roots(GenCollectedHeap::SO_None,
&follow_root_closure,
&follow_cld_closure,
weak_cld_closure,
&mark_code_closure);
}
// Process reference objects found during marking
{
GCTraceTime(Debug, gc, phases) tm_m("Reference Processing", gc_timer());
ReferenceProcessorPhaseTimes pt(_gc_timer, ref_processor()->max_num_queues());
SerialGCRefProcProxyTask task(is_alive, keep_alive, follow_stack_closure);
const ReferenceProcessorStats& stats = ref_processor()->process_discovered_references(task, pt);
pt.print_all_references();
gc_tracer()->report_gc_reference_stats(stats);
}
// This is the point where the entire marking should have completed.
assert(_marking_stack.is_empty(), "Marking should have completed");
{
GCTraceTime(Debug, gc, phases) tm_m("Weak Processing", gc_timer());
WeakProcessor::weak_oops_do(&is_alive, &do_nothing_cl);
}
{
GCTraceTime(Debug, gc, phases) tm_m("Class Unloading", gc_timer());
CodeCache::UnloadingScope scope(&is_alive);
// Unload classes and purge the SystemDictionary.
bool purged_class = SystemDictionary::do_unloading(gc_timer());
// Unload nmethods.
CodeCache::do_unloading(purged_class);
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(purged_class);
// Clean JVMCI metadata handles.
JVMCI_ONLY(JVMCI::do_unloading(purged_class));
}
gc_tracer()->report_object_count_after_gc(&is_alive);
}
void GenMarkSweep::mark_sweep_phase2() {
// Now all live objects are marked, compute the new object addresses.
GCTraceTime(Info, gc, phases) tm("Phase 2: Compute new object addresses", _gc_timer);
GenCollectedHeap::heap()->prepare_for_compaction();
}
class GenAdjustPointersClosure: public GenCollectedHeap::GenClosure {
public:
void do_generation(Generation* gen) {
gen->adjust_pointers();
}
};
void GenMarkSweep::mark_sweep_phase3() {
GenCollectedHeap* gch = GenCollectedHeap::heap();
// Adjust the pointers to reflect the new locations
GCTraceTime(Info, gc, phases) tm("Phase 3: Adjust pointers", gc_timer());
ClassLoaderDataGraph::verify_claimed_marks_cleared(ClassLoaderData::_claim_stw_fullgc_adjust);
CodeBlobToOopClosure code_closure(&adjust_pointer_closure, CodeBlobToOopClosure::FixRelocations);
gch->process_roots(GenCollectedHeap::SO_AllCodeCache,
&adjust_pointer_closure,
&adjust_cld_closure,
&adjust_cld_closure,
&code_closure);
gch->gen_process_weak_roots(&adjust_pointer_closure);
adjust_marks();
GenAdjustPointersClosure blk;
gch->generation_iterate(&blk, true);
}
class GenCompactClosure: public GenCollectedHeap::GenClosure {
public:
void do_generation(Generation* gen) {
gen->compact();
}
};
void GenMarkSweep::mark_sweep_phase4() {
// All pointers are now adjusted, move objects accordingly
GCTraceTime(Info, gc, phases) tm("Phase 4: Move objects", _gc_timer);
GenCompactClosure blk;
GenCollectedHeap::heap()->generation_iterate(&blk, true);
if (AllowEnhancedClassRedefinition) {
DcevmSharedGC::copy_rescued_objects_back(MarkSweep::_rescued_oops, true);
DcevmSharedGC::clear_rescued_objects_resource(MarkSweep::_rescued_oops);
MarkSweep::_rescued_oops = NULL;
}
}

View File

@@ -0,0 +1,256 @@
/*
* Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "compiler/compileBroker.hpp"
#include "gc/serial/markSweep.inline.hpp"
#include "gc/shared/collectedHeap.inline.hpp"
#include "gc/shared/gcTimer.hpp"
#include "gc/shared/gcTrace.hpp"
#include "gc/shared/gc_globals.hpp"
#include "memory/iterator.inline.hpp"
#include "memory/universe.hpp"
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/instanceClassLoaderKlass.inline.hpp"
#include "oops/instanceKlass.inline.hpp"
#include "oops/instanceMirrorKlass.inline.hpp"
#include "oops/instanceRefKlass.inline.hpp"
#include "oops/methodData.hpp"
#include "oops/objArrayKlass.inline.hpp"
#include "oops/oop.inline.hpp"
#include "oops/typeArrayOop.inline.hpp"
#include "utilities/macros.hpp"
#include "utilities/stack.inline.hpp"
uint MarkSweep::_total_invocations = 0;
Stack<oop, mtGC> MarkSweep::_marking_stack;
Stack<ObjArrayTask, mtGC> MarkSweep::_objarray_stack;
Stack<PreservedMark, mtGC> MarkSweep::_preserved_overflow_stack;
size_t MarkSweep::_preserved_count = 0;
size_t MarkSweep::_preserved_count_max = 0;
PreservedMark* MarkSweep::_preserved_marks = nullptr;
ReferenceProcessor* MarkSweep::_ref_processor = nullptr;
STWGCTimer* MarkSweep::_gc_timer = nullptr;
SerialOldTracer* MarkSweep::_gc_tracer = nullptr;
StringDedup::Requests* MarkSweep::_string_dedup_requests = nullptr;
GrowableArray<HeapWord*>* MarkSweep::_rescued_oops = nullptr;
MarkSweep::FollowRootClosure MarkSweep::follow_root_closure;
MarkAndPushClosure MarkSweep::mark_and_push_closure(ClassLoaderData::_claim_stw_fullgc_mark);
CLDToOopClosure MarkSweep::follow_cld_closure(&mark_and_push_closure, ClassLoaderData::_claim_stw_fullgc_mark);
CLDToOopClosure MarkSweep::adjust_cld_closure(&adjust_pointer_closure, ClassLoaderData::_claim_stw_fullgc_adjust);
template <class T> void MarkSweep::KeepAliveClosure::do_oop_work(T* p) {
mark_and_push(p);
}
void MarkSweep::push_objarray(oop obj, size_t index) {
ObjArrayTask task(obj, index);
assert(task.is_valid(), "bad ObjArrayTask");
_objarray_stack.push(task);
}
void MarkSweep::follow_array(objArrayOop array) {
mark_and_push_closure.do_klass(array->klass());
// Don't push empty arrays to avoid unnecessary work.
if (array->length() > 0) {
MarkSweep::push_objarray(array, 0);
}
}
void MarkSweep::follow_object(oop obj) {
assert(obj->is_gc_marked(), "should be marked");
if (obj->is_objArray()) {
// Handle object arrays explicitly to allow them to
// be split into chunks if needed.
MarkSweep::follow_array((objArrayOop)obj);
} else {
obj->oop_iterate(&mark_and_push_closure);
}
}
void MarkSweep::follow_array_chunk(objArrayOop array, int index) {
const int len = array->length();
const int beg_index = index;
assert(beg_index < len || len == 0, "index too large");
const int stride = MIN2(len - beg_index, (int) ObjArrayMarkingStride);
const int end_index = beg_index + stride;
array->oop_iterate_range(&mark_and_push_closure, beg_index, end_index);
if (end_index < len) {
MarkSweep::push_objarray(array, end_index); // Push the continuation.
}
}
void MarkSweep::follow_stack() {
do {
while (!_marking_stack.is_empty()) {
oop obj = _marking_stack.pop();
assert (obj->is_gc_marked(), "p must be marked");
follow_object(obj);
}
// Process ObjArrays one at a time to avoid marking stack bloat.
if (!_objarray_stack.is_empty()) {
ObjArrayTask task = _objarray_stack.pop();
follow_array_chunk(objArrayOop(task.obj()), task.index());
}
} while (!_marking_stack.is_empty() || !_objarray_stack.is_empty());
}
MarkSweep::FollowStackClosure MarkSweep::follow_stack_closure;
void MarkSweep::FollowStackClosure::do_void() { follow_stack(); }
template <class T> void MarkSweep::follow_root(T* p) {
assert(!Universe::heap()->is_in(p),
"roots shouldn't be things within the heap");
T heap_oop = RawAccess<>::oop_load(p);
if (!CompressedOops::is_null(heap_oop)) {
oop obj = CompressedOops::decode_not_null(heap_oop);
if (!obj->mark().is_marked()) {
mark_object(obj);
follow_object(obj);
}
}
follow_stack();
}
void MarkSweep::FollowRootClosure::do_oop(oop* p) { follow_root(p); }
void MarkSweep::FollowRootClosure::do_oop(narrowOop* p) { follow_root(p); }
void PreservedMark::adjust_pointer() {
MarkSweep::adjust_pointer(&_obj);
}
void PreservedMark::restore() {
_obj->set_mark(_mark);
}
// We preserve the mark which should be replaced at the end and the location
// that it will go. Note that the object that this markWord belongs to isn't
// currently at that address but it will be after phase4
void MarkSweep::preserve_mark(oop obj, markWord mark) {
// We try to store preserved marks in the to space of the new generation since
// this is storage which should be available. Most of the time this should be
// sufficient space for the marks we need to preserve but if it isn't we fall
// back to using Stacks to keep track of the overflow.
if (_preserved_count < _preserved_count_max) {
_preserved_marks[_preserved_count++] = PreservedMark(obj, mark);
} else {
_preserved_overflow_stack.push(PreservedMark(obj, mark));
}
}
void MarkSweep::set_ref_processor(ReferenceProcessor* rp) {
_ref_processor = rp;
mark_and_push_closure.set_ref_discoverer(_ref_processor);
}
void MarkSweep::mark_object(oop obj) {
if (StringDedup::is_enabled() &&
java_lang_String::is_instance(obj) &&
SerialStringDedup::is_candidate_from_mark(obj)) {
_string_dedup_requests->add(obj);
}
// some marks may contain information we need to preserve so we store them away
// and overwrite the mark. We'll restore it at the end of markSweep.
markWord mark = obj->mark();
obj->set_mark(markWord::prototype().set_marked());
ContinuationGCSupport::transform_stack_chunk(obj);
if (obj->mark_must_be_preserved(mark)) {
preserve_mark(obj, mark);
}
}
template <class T> void MarkSweep::mark_and_push(T* p) {
T heap_oop = RawAccess<>::oop_load(p);
if (!CompressedOops::is_null(heap_oop)) {
oop obj = CompressedOops::decode_not_null(heap_oop);
if (!obj->mark().is_marked()) {
mark_object(obj);
_marking_stack.push(obj);
}
}
}
template <typename T>
void MarkAndPushClosure::do_oop_work(T* p) { MarkSweep::mark_and_push(p); }
void MarkAndPushClosure::do_oop( oop* p) { do_oop_work(p); }
void MarkAndPushClosure::do_oop(narrowOop* p) { do_oop_work(p); }
AdjustPointerClosure MarkSweep::adjust_pointer_closure;
void MarkSweep::adjust_marks() {
// adjust the oops we saved earlier
for (size_t i = 0; i < _preserved_count; i++) {
_preserved_marks[i].adjust_pointer();
}
// deal with the overflow stack
StackIterator<PreservedMark, mtGC> iter(_preserved_overflow_stack);
while (!iter.is_empty()) {
PreservedMark* p = iter.next_addr();
p->adjust_pointer();
}
}
void MarkSweep::restore_marks() {
log_trace(gc)("Restoring " SIZE_FORMAT " marks", _preserved_count + _preserved_overflow_stack.size());
// restore the marks we saved earlier
for (size_t i = 0; i < _preserved_count; i++) {
_preserved_marks[i].restore();
}
// deal with the overflow
while (!_preserved_overflow_stack.is_empty()) {
PreservedMark p = _preserved_overflow_stack.pop();
p.restore();
}
}
MarkSweep::IsAliveClosure MarkSweep::is_alive;
bool MarkSweep::IsAliveClosure::do_object_b(oop p) { return p->is_gc_marked(); }
MarkSweep::KeepAliveClosure MarkSweep::keep_alive;
void MarkSweep::KeepAliveClosure::do_oop(oop* p) { MarkSweep::KeepAliveClosure::do_oop_work(p); }
void MarkSweep::KeepAliveClosure::do_oop(narrowOop* p) { MarkSweep::KeepAliveClosure::do_oop_work(p); }
void MarkSweep::initialize() {
MarkSweep::_gc_timer = new STWGCTimer();
MarkSweep::_gc_tracer = new SerialOldTracer();
MarkSweep::_string_dedup_requests = new StringDedup::Requests();
}

View File

@@ -0,0 +1,159 @@
/*
* Copyright (c) 1997, 2018, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "gc/shared/dcevmSharedGC.hpp"
#include "memory/iterator.inline.hpp"
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/instanceClassLoaderKlass.inline.hpp"
#include "oops/instanceKlass.inline.hpp"
#include "oops/instanceMirrorKlass.inline.hpp"
#include "oops/instanceRefKlass.inline.hpp"
#include "oops/oop.inline.hpp"
#include "utilities/macros.hpp"
void DcevmSharedGC::copy_rescued_objects_back(GrowableArray<HeapWord*>* rescued_oops, bool must_be_new) {
if (rescued_oops != NULL) {
copy_rescued_objects_back(rescued_oops, 0, rescued_oops->length(), must_be_new);
}
}
// (DCEVM) Copy the rescued objects to their destination address after compaction.
void DcevmSharedGC::copy_rescued_objects_back(GrowableArray<HeapWord*>* rescued_oops, int from, int to, bool must_be_new) {
if (rescued_oops != NULL) {
for (int i=from; i < to; i++) {
HeapWord* rescued_ptr = rescued_oops->at(i);
oop rescued_obj = cast_to_oop(rescued_ptr);
size_t size = rescued_obj->size();
oop new_obj = rescued_obj->forwardee();
assert(!must_be_new || rescued_obj->klass()->new_version() != NULL, "Just checking");
Klass* new_klass = rescued_obj->klass()->new_version();
if (new_klass != NULL) {
if (new_klass->update_information() != NULL) {
DcevmSharedGC::update_fields(rescued_obj, new_obj);
} else {
rescued_obj->set_klass(new_klass);
Copy::aligned_disjoint_words(cast_from_oop<HeapWord*>(rescued_obj), cast_from_oop<HeapWord*>(new_obj), size);
}
} else {
Copy::aligned_disjoint_words(cast_from_oop<HeapWord*>(rescued_obj), cast_from_oop<HeapWord*>(new_obj), size);
}
new_obj->init_mark();
assert(oopDesc::is_oop(new_obj), "must be a valid oop");
}
}
}
void DcevmSharedGC::clear_rescued_objects_resource(GrowableArray<HeapWord*>* rescued_oops) {
if (rescued_oops != NULL) {
for (int i=0; i < rescued_oops->length(); i++) {
HeapWord* rescued_ptr = rescued_oops->at(i);
size_t size = cast_to_oop(rescued_ptr)->size();
FREE_RESOURCE_ARRAY(HeapWord, rescued_ptr, size);
}
rescued_oops->clear();
}
}
void DcevmSharedGC::clear_rescued_objects_heap(GrowableArray<HeapWord*>* rescued_oops) {
if (rescued_oops != NULL) {
for (int i=0; i < rescued_oops->length(); i++) {
HeapWord* rescued_ptr = rescued_oops->at(i);
FREE_C_HEAP_ARRAY(HeapWord, rescued_ptr);
}
rescued_oops->clear();
}
}
// (DCEVM) Update instances of a class whose fields changed.
void DcevmSharedGC::update_fields(oop q, oop new_location) {
assert(q->klass()->new_version() != NULL, "class of old object must have new version");
Klass* old_klass_oop = q->klass();
Klass* new_klass_oop = q->klass()->new_version();
InstanceKlass *old_klass = InstanceKlass::cast(old_klass_oop);
InstanceKlass *new_klass = InstanceKlass::cast(new_klass_oop);
size_t size = q->size_given_klass(old_klass);
size_t new_size = q->size_given_klass(new_klass);
HeapWord* tmp = NULL;
oop tmp_obj = q;
// Save object somewhere, there is an overlap in fields
if (new_klass_oop->is_copying_backwards()) {
if ((cast_from_oop<HeapWord*>(q) >= cast_from_oop<HeapWord*>(new_location) && cast_from_oop<HeapWord*>(q) < cast_from_oop<HeapWord*>(new_location) + new_size) ||
(cast_from_oop<HeapWord*>(new_location) >= cast_from_oop<HeapWord*>(q) && cast_from_oop<HeapWord*>(new_location) < cast_from_oop<HeapWord*>(q) + size)) {
tmp = NEW_RESOURCE_ARRAY(HeapWord, size);
q = cast_to_oop(tmp);
Copy::aligned_disjoint_words(cast_from_oop<HeapWord*>(tmp_obj), cast_from_oop<HeapWord*>(q), size);
}
}
q->set_klass(new_klass_oop);
int *cur = new_klass_oop->update_information();
assert(cur != NULL, "just checking");
DcevmSharedGC::update_fields(new_location, q, cur);
if (tmp != NULL) {
FREE_RESOURCE_ARRAY(HeapWord, tmp, size);
}
}
void DcevmSharedGC::update_fields(oop new_location, oop tmp_obj, int *cur) {
assert(cur != NULL, "just checking");
char* to = (char*)cast_from_oop<HeapWord*>(new_location);
while (*cur != 0) {
int size = *cur;
if (size > 0) {
cur++;
int offset = *cur;
HeapWord* from = (HeapWord*)(((char *)cast_from_oop<HeapWord*>(tmp_obj)) + offset);
if (size == HeapWordSize) {
*((HeapWord*)to) = *from;
} else if (size == HeapWordSize * 2) {
*((HeapWord*)to) = *from;
*(((HeapWord*)to) + 1) = *(from + 1);
} else {
Copy::conjoint_jbytes(from, to, size);
}
to += size;
cur++;
} else {
assert(size < 0, "just checking");
int skip = -*cur;
Copy::fill_to_bytes(to, skip, 0);
to += skip;
cur++;
}
}
}

View File

@@ -0,0 +1,49 @@
/*
* Copyright (c) 1997, 2018, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_GC_DCEVM_SHARED_GC_HPP
#define SHARE_GC_DCEVM_SHARED_GC_HPP
#include "gc/shared/collectedHeap.hpp"
#include "gc/shared/genOopClosures.hpp"
#include "gc/shared/taskqueue.hpp"
#include "memory/iterator.hpp"
#include "oops/markWord.hpp"
#include "oops/oop.hpp"
#include "runtime/timer.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/stack.hpp"
// Shared GC code used by the different GCs (Serial, CMS, G1) during enhanced class redefinition
class DcevmSharedGC : AllStatic {
public:
static void copy_rescued_objects_back(GrowableArray<HeapWord*>* rescued_oops, bool must_be_new);
static void copy_rescued_objects_back(GrowableArray<HeapWord*>* rescued_oops, int from, int to, bool must_be_new);
static void clear_rescued_objects_resource(GrowableArray<HeapWord*>* rescued_oops);
static void clear_rescued_objects_heap(GrowableArray<HeapWord*>* rescued_oops);
static void update_fields(oop q, oop new_location);
static void update_fields(oop new_location, oop tmp_obj, int *cur);
};
#endif // SHARE_GC_DCEVM_SHARED_GC_HPP

View File

@@ -95,7 +95,10 @@ void GCConfig::fail_if_non_included_gc_is_selected() {
}
void GCConfig::select_gc_ergonomically() {
if (AllowEnhancedClassRedefinition && !UseSerialGC) {
// (DCEVM) use G1 as the default GC when enhanced class redefinition is enabled
FLAG_SET_ERGO(UseG1GC, true);
} else if (os::is_server_class_machine()) {
#if INCLUDE_G1GC
FLAG_SET_ERGO_IF_DEFAULT(UseG1GC, true);
#elif INCLUDE_PARALLELGC

View File

@@ -369,7 +369,7 @@ Bytecodes::Code Bytecodes::code_at(Method* method, int bci) {
Bytecodes::Code Bytecodes::non_breakpoint_code_at(const Method* method, address bcp) {
assert(method != nullptr, "must have the method for breakpoint conversion");
assert(method->contains(bcp), "must be valid bcp in method");
return method->orig_bytecode_at(method->bci_from(bcp), false);
}
int Bytecodes::special_length_at(Bytecodes::Code code, address bcp, address end) {

View File

@@ -779,7 +779,7 @@ JRT_END
// Invokes
JRT_ENTRY(Bytecodes::Code, InterpreterRuntime::get_original_bytecode_at(JavaThread* current, Method* method, address bcp))
return method->orig_bytecode_at(method->bci_from(bcp), false);
JRT_END
JRT_ENTRY(void, InterpreterRuntime::set_original_bytecode_at(JavaThread* current, Method* method, address bcp, Bytecodes::Code new_code))

View File

@@ -158,14 +158,14 @@ void CallInfo::set_common(Klass* resolved_klass,
}
// utility query for unreflecting a method
CallInfo::CallInfo(Method* resolved_method, Klass* resolved_klass, Thread* thread) {
Klass* resolved_method_holder = resolved_method->method_holder();
if (resolved_klass == nullptr) { // 2nd argument defaults to holder of 1st
resolved_klass = resolved_method_holder;
}
_resolved_klass = resolved_klass;
_resolved_method = methodHandle(thread, resolved_method);
_selected_method = methodHandle(thread, resolved_method);
// classify:
CallKind kind = CallInfo::unknown_kind;
int index = resolved_method->vtable_index();
@@ -206,8 +206,9 @@ CallInfo::CallInfo(Method* resolved_method, Klass* resolved_klass, TRAPS) {
_call_index = index;
_resolved_appendix = Handle();
// Find or create a ResolvedMethod instance for this Method*
if (thread->is_Java_thread()) { // exclude DCEVM VM thread
set_resolved_method_name(JavaThread::cast(thread));
}
DEBUG_ONLY(verify());
}
@@ -217,6 +218,10 @@ void CallInfo::set_resolved_method_name(TRAPS) {
_resolved_method_name = Handle(THREAD, rmethod_name);
}
void CallInfo::set_resolved_method_name_dcevm(oop rmethod_name, Thread* thread) {
_resolved_method_name = Handle(thread, rmethod_name);
}
#ifdef ASSERT
void CallInfo::verify() {
switch (call_kind()) { // the meaning and allowed value of index depends on kind
@@ -314,9 +319,14 @@ void LinkResolver::check_klass_accessibility(Klass* ref_klass, Klass* sel_klass,
if (!base_klass->is_instance_klass()) {
return; // no relevant check to do
}
Klass* refKlassNewest = ref_klass;
Klass* baseKlassNewest = base_klass;
if (AllowEnhancedClassRedefinition) {
refKlassNewest = ref_klass->newest_version();
baseKlassNewest = base_klass->newest_version();
}
Reflection::VerifyClassAccessResults vca_result =
Reflection::verify_class_access(refKlassNewest, InstanceKlass::cast(baseKlassNewest), true);
if (vca_result != Reflection::ACCESS_OK) {
ResourceMark rm(THREAD);
char* msg = Reflection::verify_class_access_msg(ref_klass,
@@ -578,7 +588,8 @@ void LinkResolver::check_method_accessability(Klass* ref_klass,
// We'll check for the method name first, as that's most likely
// to be false (so we'll short-circuit out of these tests).
if (sel_method->name() == vmSymbols::clone_name() &&
( (!AllowEnhancedClassRedefinition && sel_klass == vmClasses::Object_klass()) ||
(AllowEnhancedClassRedefinition && sel_klass->newest_version() == vmClasses::Object_klass()->newest_version()) ) &&
resolved_klass->is_array_klass()) {
// We need to change "protected" to "public".
assert(flags.is_protected(), "clone not protected?");
@@ -1040,7 +1051,7 @@ void LinkResolver::resolve_field(fieldDescriptor& fd,
// or by the <init> method (in case of an instance field).
if (is_put && fd.access_flags().is_final()) {
if (sel_klass != current_klass && (!AllowEnhancedClassRedefinition || sel_klass != current_klass->active_version())) {
ResourceMark rm(THREAD);
stringStream ss;
ss.print("Update to %s final field %s.%s attempted from a different class (%s) than the field's declaring class",

View File

@@ -89,7 +89,7 @@ class CallInfo : public StackObj {
// utility to extract an effective CallInfo from a method and an optional receiver limit
// does not queue the method for compilation. This also creates a ResolvedMethodName
// object for the resolved_method.
CallInfo(Method* resolved_method, Klass* resolved_klass, TRAPS);
CallInfo(Method* resolved_method, Klass* resolved_klass, Thread* thread);
Klass* resolved_klass() const { return _resolved_klass; }
Method* resolved_method() const;
@@ -98,6 +98,7 @@ class CallInfo : public StackObj {
Handle resolved_method_name() const { return _resolved_method_name; }
// Materialize a java.lang.invoke.ResolvedMethodName for this resolved_method
void set_resolved_method_name(TRAPS);
void set_resolved_method_name_dcevm(oop rmethod_name, Thread* thread);
CallKind call_kind() const { return _call_kind; }
int vtable_index() const {

View File

@@ -71,6 +71,7 @@ static bool enable() {
}
_enabled = FlightRecorder;
assert(_enabled, "invariant");
AllowEnhancedClassRedefinition = false;
return _enabled;
}

View File

@@ -180,6 +180,7 @@ int Universe::_base_vtable_size = 0;
bool Universe::_bootstrapping = false;
bool Universe::_module_initialized = false;
bool Universe::_fully_initialized = false;
bool Universe::_is_redefining_gc_run = false; // FIXME: review
OopStorage* Universe::_vm_weak = nullptr;
OopStorage* Universe::_vm_global = nullptr;
@@ -1074,6 +1075,14 @@ void Universe::initialize_known_methods(JavaThread* current) {
vmSymbols::doStackWalk_signature(), false);
}
void Universe::reinitialize_loader_addClass_method(TRAPS) {
// Set up method for registering loaded classes in class loader vector
initialize_known_method(_loader_addClass_cache,
vmClasses::ClassLoader_klass(),
"addClass",
vmSymbols::class_void_signature(), false, CHECK);
}
void universe2_init() {
EXCEPTION_MARK;
Universe::genesis(CATCH);

View File

@@ -160,6 +160,7 @@ class Universe: AllStatic {
static uintptr_t _verify_oop_mask;
static uintptr_t _verify_oop_bits;
static bool _is_redefining_gc_run;
// Table of primitive type mirrors, excluding T_OBJECT and T_ARRAY
// but including T_VOID, hence the index including T_VOID
@@ -175,6 +176,10 @@ class Universe: AllStatic {
static void calculate_verify_data(HeapWord* low_boundary, HeapWord* high_boundary) PRODUCT_RETURN;
static void set_verify_data(uintptr_t mask, uintptr_t bits) PRODUCT_RETURN;
// Advanced class redefinition. FIXME: review?
static bool is_redefining_gc_run() { return _is_redefining_gc_run; }
static void set_redefining_gc_run(bool b) { _is_redefining_gc_run = b; }
// Known classes in the VM
static TypeArrayKlass* boolArrayKlass() { return typeArrayKlass(T_BOOLEAN); }
static TypeArrayKlass* byteArrayKlass() { return typeArrayKlass(T_BYTE); }
@@ -249,6 +254,8 @@ class Universe: AllStatic {
// Function to initialize these
static void initialize_known_methods(JavaThread* current);
static void reinitialize_loader_addClass_method(TRAPS);
static void create_preallocated_out_of_memory_errors(TRAPS);
// Reference pending list manipulation. Access is protected by

View File

@@ -431,11 +431,21 @@ bool InstanceKlass::has_nestmate_access_to(InstanceKlass* k, TRAPS) {
return false;
}
// (DCEVM) cur_host can be old, decide accessibility based on active version
if (AllowEnhancedClassRedefinition) {
cur_host = InstanceKlass::cast(cur_host->active_version());
}
Klass* k_nest_host = k->nest_host(CHECK_false);
if (k_nest_host == nullptr) {
return false;
}
// (DCEVM) k_nest_host can be old, decide accessibility based on active version
if (AllowEnhancedClassRedefinition) {
k_nest_host = InstanceKlass::cast(k_nest_host->active_version());
}
bool access = (cur_host == k_nest_host);
ResourceMark rm(THREAD);
@@ -998,7 +1008,10 @@ bool InstanceKlass::link_class_impl(TRAPS) {
if (is_shared()) {
assert(!verified_at_dump_time(), "must be");
}
{
// (DCEVM): If class A is being redefined, class B extends A, and B is the host class of anonymous class C,
// then a second redefinition fails with a "cannot cast klass" exception. So we currently turn off bytecode
// verification on redefinition.
if (!AllowEnhancedClassRedefinition || !newest_version()->is_redefining()) {
bool verify_ok = verify_code(THREAD);
if (!verify_ok) {
return false;
@@ -1060,7 +1073,7 @@ bool InstanceKlass::link_class_impl(TRAPS) {
} else {
set_init_state(linked);
}
if (JvmtiExport::should_post_class_prepare()) {
if (JvmtiExport::should_post_class_prepare() && (!AllowEnhancedClassRedefinition || old_version() == NULL /* JVMTI deadlock otherwise */)) {
JvmtiExport::post_class_prepare(THREAD, this);
}
}
@@ -1206,7 +1219,8 @@ void InstanceKlass::initialize_impl(TRAPS) {
// If we were to use wait() instead of waitInterruptibly() then
// we might end up throwing IE from link/symbol resolution sites
// that aren't expected to throw. This would wreak havoc. See 6320309.
while (is_being_initialized() && !is_reentrant_initialization(jt)) {
while ((is_being_initialized() && !is_reentrant_initialization(jt))
|| (AllowEnhancedClassRedefinition && old_version() != NULL && InstanceKlass::cast(old_version())->is_being_initialized())) {
if (debug_logging_enabled) {
ResourceMark rm(jt);
log_debug(class, init)("Thread \"%s\" waiting for initialization of %s by thread \"%s\"",
@@ -1494,6 +1508,15 @@ void InstanceKlass::init_implementor() {
}
}
// (DCEVM) - init_implementor() for dcevm
void InstanceKlass::init_implementor_from_redefine() {
assert(is_interface(), "not interface");
InstanceKlass* volatile* addr = adr_implementor();
assert(addr != NULL, "null addr");
if (addr != NULL) {
*addr = NULL;
}
}
void InstanceKlass::process_interfaces() {
// link this class into the implementors list of every interface it implements
@@ -1550,6 +1573,20 @@ bool InstanceKlass::implements_interface(Klass* k) const {
return false;
}
// (DCEVM)
bool InstanceKlass::implements_interface_any_version(Klass* k) const {
k = k->newest_version();
if (this->newest_version() == k) return true;
assert(k->is_interface(), "should be an interface class");
for (int i = 0; i < transitive_interfaces()->length(); i++) {
if (transitive_interfaces()->at(i)->newest_version() == k) {
return true;
}
}
return false;
}
bool InstanceKlass::is_same_or_direct_interface(Klass *k) const {
// Verify direct super interface
if (this == k) return true;
@@ -1899,6 +1936,21 @@ void InstanceKlass::methods_do(void f(Method* method)) {
}
}
// (DCEVM) Update information contains mapping of fields from old class to the new class.
// Info is stored on the C heap; call clear_update_information to free the space.
void InstanceKlass::store_update_information(GrowableArray<int> &values) {
int *arr = NEW_C_HEAP_ARRAY(int, values.length(), mtClass);
for (int i = 0; i < values.length(); i++) {
arr[i] = values.at(i);
}
set_update_information(arr);
}
void InstanceKlass::clear_update_information() {
FREE_C_HEAP_ARRAY(int, update_information());
set_update_information(NULL);
}
void InstanceKlass::do_local_static_fields(FieldClosure* cl) {
for (JavaFieldStream fs(this); !fs.done(); fs.next()) {
@@ -1936,6 +1988,36 @@ static int compare_fields_by_offset(FieldInfo* a, FieldInfo* b) {
return a->offset() - b->offset();
}
void InstanceKlass::do_nonstatic_fields_sorted(FieldClosure* cl) {
InstanceKlass* super = superklass();
if (super != NULL) {
super->do_nonstatic_fields_sorted(cl);
}
fieldDescriptor fd;
// In DebugInfo nonstatic fields are sorted by offset.
GrowableArray<Pair<int,int> > fields_sorted;
int i = 0;
for (AllFieldStream fs(this); !fs.done(); fs.next()) {
if (!fs.access_flags().is_static()) {
fd = fs.field_descriptor();
Pair<int,int> f(fs.offset(), fs.index());
fields_sorted.push(f);
i++;
}
}
if (i > 0) {
int length = i;
assert(length == fields_sorted.length(), "duh");
// _sort_Fn is defined in growableArray.hpp.
fields_sorted.sort(compare_fields_by_offset);
for (int i = 0; i < length; i++) {
fd.reinitialize(this, fields_sorted.at(i).second);
assert(!fd.is_static() && fd.offset() == fields_sorted.at(i).first, "only nonstatic fields");
cl->do_field(&fd);
}
}
}
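The new do_nonstatic_fields_sorted above collects (offset, field-index) pairs and sorts them by offset before visiting each field. A minimal standalone sketch of the same idea, using an illustrative Field struct rather than HotSpot's field streams:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Illustrative stand-in for HotSpot's field iteration: collect nonstatic
// fields as (offset, original index) pairs, sort by offset, and return the
// visit order. Mirrors compare_fields_by_offset (ascending offset).
struct Field {
    int offset;      // byte offset inside the instance
    bool is_static;  // static fields are skipped
};

std::vector<int> visit_nonstatic_sorted(const std::vector<Field>& fields) {
    std::vector<std::pair<int, int>> sorted;  // (offset, original index)
    for (int i = 0; i < (int)fields.size(); i++) {
        if (!fields[i].is_static) {
            sorted.emplace_back(fields[i].offset, i);
        }
    }
    std::sort(sorted.begin(), sorted.end());  // ascending by offset
    std::vector<int> order;
    for (auto& p : sorted) order.push_back(p.second);
    return order;
}
```

The pair trick matches the patch: sort a lightweight (offset, index) array, then reinitialize the field descriptor per index, rather than sorting the fields themselves.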
void InstanceKlass::print_nonstatic_fields(FieldClosure* cl) {
InstanceKlass* super = superklass();
if (super != nullptr) {
@@ -2519,6 +2601,20 @@ void InstanceKlass::clean_dependency_context() {
dependencies().clean_unloading_dependents();
}
// DCEVM - update jmethod ids
bool InstanceKlass::update_jmethod_id(Method* method, jmethodID newMethodID) {
size_t idnum = (size_t)method->method_idnum();
jmethodID* jmeths = methods_jmethod_ids_acquire();
size_t length; // length assigned as debugging crumb
jmethodID id = NULL;
if (jmeths != NULL && // If there is a cache
(length = (size_t)jmeths[0]) > idnum) { // and if it is long enough,
jmeths[idnum+1] = newMethodID; // Set method id (may be NULL)
return true;
}
return false;
}
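update_jmethod_id above writes into the per-class jmethodID cache, whose layout puts the cache length in slot 0 and the id for method idnum N at index N + 1. A self-contained model of that layout, with a plain integer Slot standing in for jmethodID:

```cpp
#include <cstddef>
#include <vector>

// Model of the per-class jmethodID cache touched by update_jmethod_id:
// cache[0] stores the cache length; the id for method idnum N sits at N + 1.
// Slot is an illustrative stand-in for jmethodID.
using Slot = std::size_t;

bool update_id(std::vector<Slot>& cache, std::size_t idnum, Slot new_id) {
    if (!cache.empty() &&           // a cache has been allocated
        cache[0] > idnum) {         // and it is long enough for this idnum
        cache[idnum + 1] = new_id;  // overwrite in place (may be 0/null)
        return true;
    }
    return false;                   // no cache, or too short: nothing updated
}
```

As in the patch, a miss (no cache, or idnum past the recorded length) returns false and leaves allocation/growth to the caller.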
#ifndef PRODUCT
void InstanceKlass::print_dependent_nmethods(bool verbose) {
dependencies().print_dependent_nmethods(verbose);
@@ -4109,7 +4205,8 @@ void InstanceKlass::verify_on(outputStream* st) {
}
guarantee(sib->is_klass(), "should be klass");
guarantee(sib->super() == super, "siblings should have same superklass");
// TODO: (DCEVM) explain
guarantee(sib->super() == super || AllowEnhancedClassRedefinition && super->newest_version() == vmClasses::Object_klass(), "siblings should have same superklass");
}
// Verify local interfaces

View File

@@ -136,6 +136,7 @@ class InstanceKlass: public Klass {
friend class JVMCIVMStructs;
friend class ClassFileParser;
friend class CompileReplay;
friend class VM_EnhancedRedefineClasses;
public:
static const KlassKind Kind = InstanceKlassKind;
@@ -785,6 +786,7 @@ public:
void ensure_space_for_methodids(int start_offset = 0);
jmethodID jmethod_id_or_null(Method* method);
void update_methods_jmethod_cache();
bool update_jmethod_id(Method* method, jmethodID newMethodID);
// annotations support
Annotations* annotations() const { return _annotations; }
@@ -861,6 +863,7 @@ public:
// subclass/subinterface checks
bool implements_interface(Klass* k) const;
bool implements_interface_any_version(Klass* k) const;
bool is_same_or_direct_interface(Klass* k) const;
#ifdef ASSERT
@@ -874,6 +877,7 @@ public:
int nof_implementors() const;
void add_implementor(InstanceKlass* ik); // ik is a new class that implements this interface
void init_implementor(); // initialize
void init_implementor_from_redefine(); // initialize
private:
// link this class into the implementors list of every interface it implements
@@ -892,9 +896,15 @@ public:
// Iterators
void do_local_static_fields(FieldClosure* cl);
void do_nonstatic_fields(FieldClosure* cl); // including inherited fields
// (DCEVM)
void do_nonstatic_fields_sorted(FieldClosure* cl);
void do_local_static_fields(void f(fieldDescriptor*, Handle, TRAPS), Handle, TRAPS);
void print_nonstatic_fields(FieldClosure* cl); // including inherited and injected fields
// Advanced class redefinition: FIXME: why here?
void store_update_information(GrowableArray<int> &values);
void clear_update_information();
void methods_do(void f(Method* method));
static InstanceKlass* cast(Klass* k) {

View File

@@ -301,7 +301,13 @@ Klass::Klass() : _kind(UnknownKlassKind) {
// which doesn't zero out the memory before calling the constructor.
Klass::Klass(KlassKind kind) : _kind(kind),
_prototype_header(make_prototype(this)),
_shared_class_path_index(-1) {
_shared_class_path_index(-1),
_old_version(nullptr),
_new_version(nullptr),
_redefinition_flags(Klass::NoRedefinition),
_is_redefining(false),
_update_information(nullptr),
_is_copying_backwards(false) {
CDS_ONLY(_shared_class_flags = 0;)
CDS_JAVA_HEAP_ONLY(_archived_mirror_index = -1;)
_primary_supers[0] = this;
@@ -718,6 +724,27 @@ void Klass::clean_subklass() {
}
}
void Klass::remove_from_sibling_list() {
DEBUG_ONLY(verify();)
// remove ourselves from superklass' subklass list
InstanceKlass* super = superklass();
if (super == NULL) return; // special case: class Object
if (super->subklass() == this) {
// this klass is the first subklass
super->set_subklass(next_sibling());
} else {
Klass* sib = super->subklass();
assert(sib != NULL, "cannot find this class in sibling list!");
while (sib->next_sibling() != this) {
sib = sib->next_sibling();
assert(sib != NULL, "cannot find this class in sibling list!");
}
sib->set_next_sibling(next_sibling());
}
DEBUG_ONLY(verify();)
}
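Klass::remove_from_sibling_list above unlinks a klass from its superclass's singly linked subklass chain: either it is the head and the head pointer advances, or a predecessor is found by walking next_sibling. The same unlink logic as a generic sketch (Node is illustrative, not a HotSpot type):

```cpp
// Minimal singly linked sibling chain, mirroring the unlink logic in
// Klass::remove_from_sibling_list.
struct Node {
    Node* next_sibling = nullptr;
};

// head: first subklass of the superclass; returns the new head after
// unlinking target (target is assumed to be present, as in the original).
Node* remove_sibling(Node* head, Node* target) {
    if (head == target) {
        return target->next_sibling;   // target was the first subklass
    }
    Node* sib = head;
    while (sib->next_sibling != target) {
        sib = sib->next_sibling;       // walk until the predecessor is found
    }
    sib->next_sibling = target->next_sibling;
    return head;
}
```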
void Klass::clean_weak_klass_links(bool unloading_occurred, bool clean_alive_klasses) {
if (!ClassUnloading || !unloading_occurred) {
return;

View File

@@ -165,6 +165,18 @@ class Klass : public Metadata {
uintx _secondary_supers_bitmap;
uint8_t _hash_slot;
// Advanced class redefinition
// Old version (used in advanced class redefinition)
Klass* _old_version;
// New version (used in advanced class redefinition)
Klass* _new_version;
int _redefinition_flags; // Level of class redefinition
bool _is_redefining;
int* _update_information;
bool _is_copying_backwards; // Does the class need to copy fields backwards? => possibly overwrite itself?
private:
// This is an index into AOTClassLocationConfig::class_locations(), to
// indicate the AOTClassLocation where this class is loaded from during
@@ -302,6 +314,7 @@ protected:
InstanceKlass* superklass() const;
void append_to_sibling_list(); // add newly created receiver to superklass' subklass list
void remove_from_sibling_list(); // enhanced class redefinition
void set_next_link(Klass* k) { _next_link = k; }
Klass* next_link() const { return _next_link; } // The next klass defined by the class loader.
@@ -402,6 +415,31 @@ protected:
virtual ModuleEntry* module() const = 0;
virtual PackageEntry* package() const = 0;
// Advanced class redefinition
Klass* old_version() const { return _old_version; }
void set_old_version(Klass* klass) { assert(_old_version == NULL || klass == NULL, "Old version can only be set once!"); _old_version = klass; }
Klass* new_version() const { return _new_version; }
void set_new_version(Klass* klass) { assert(_new_version == NULL || klass == NULL, "New version can only be set once!"); _new_version = klass; }
bool is_redefining() const { return _is_redefining; }
void set_redefining(bool b) { _is_redefining = b; }
int redefinition_flags() const { return _redefinition_flags; }
bool check_redefinition_flag(int flags) const { return (_redefinition_flags & flags) != 0; }
void clear_redefinition_flag(int flag) { _redefinition_flags &= ~flag; }
void set_redefinition_flag(int flag) { _redefinition_flags |= flag; }
void set_redefinition_flags(int flags) { _redefinition_flags = flags; }
const Klass* newest_version() const { return _new_version == NULL ? this : _new_version->newest_version(); }
Klass* newest_version() { return _new_version == NULL ? this : _new_version->newest_version(); }
const Klass* active_version() const { return _new_version == NULL || _new_version->is_redefining() ? this : _new_version->active_version(); }
Klass* active_version() { return _new_version == NULL || _new_version->is_redefining() ? this : _new_version->active_version(); }
// update information
int *update_information() const { return _update_information; }
void set_update_information(int *info) { _update_information = info; }
bool is_copying_backwards() const { return _is_copying_backwards; }
void set_copying_backwards(bool b) { _is_copying_backwards = b; }
protected: // internal accessors
void set_subklass(Klass* s);
void set_next_sibling(Klass* s);
@@ -432,6 +470,15 @@ protected:
static constexpr uintx SECONDARY_SUPERS_BITMAP_EMPTY = 0;
static constexpr uintx SECONDARY_SUPERS_BITMAP_FULL = ~(uintx)0;
enum RedefinitionFlags {
NoRedefinition, // This class is not redefined at all!
ModifyClass = 1, // There are changes to the class meta data.
ModifyClassSize = ModifyClass << 1, // The size of the class meta data changes.
ModifyInstances = ModifyClassSize << 1, // There are change to the instance format.
ModifyInstanceSize = ModifyInstances << 1, // The size of instances changes.
RemoveSuperType = ModifyInstanceSize << 1, // A super type of this class is removed.
MarkedAsAffected = RemoveSuperType << 1 // This class has been marked as an affected class.
};
// Compiler support
static ByteSize super_offset() { return byte_offset_of(Klass, _super); }
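The version-chain accessors and RedefinitionFlags bits added above can be illustrated with a toy model: newest_version follows _new_version to the end of the chain, active_version stops before a version that is still mid-redefinition, and the flags are simple bit tests. Names mirror the HotSpot accessors but the struct is illustrative only:

```cpp
// Toy model of the Klass version chain and redefinition flags from the patch.
enum {
    ModifyClass     = 1,  // subset of Klass::RedefinitionFlags
    ModifyClassSize = 2,
    ModifyInstances = 4
};

struct K {
    K* new_version = nullptr;     // next (newer) version, if redefined
    bool is_redefining = false;   // true while this version is being installed
    int redefinition_flags = 0;

    K* newest() { return new_version == nullptr ? this : new_version->newest(); }
    // active(): newest version that is not still mid-redefinition.
    K* active() {
        return (new_version == nullptr || new_version->is_redefining)
                   ? this : new_version->active();
    }
    bool check_flag(int f) const { return (redefinition_flags & f) != 0; }
};
```

This matches the header's semantics: while a redefinition is in flight, active_version still resolves to the previous version, whereas newest_version already sees the in-progress one.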

View File

@@ -103,7 +103,8 @@ Method* Method::allocate(ClassLoaderData* loader_data,
return new (loader_data, size, MetaspaceObj::MethodType, THREAD) Method(cm, access_flags, name);
}
Method::Method(ConstMethod* xconst, AccessFlags access_flags, Symbol* name) {
Method::Method(ConstMethod* xconst, AccessFlags access_flags, Symbol* name) : _new_version(NULL),
_old_version(NULL) {
NoSafepointVerifier no_safepoint;
set_constMethod(xconst);
set_access_flags(access_flags);
@@ -1914,14 +1915,14 @@ bool CompressedLineNumberReadStream::read_pair() {
#if INCLUDE_JVMTI
Bytecodes::Code Method::orig_bytecode_at(int bci) const {
Bytecodes::Code Method::orig_bytecode_at(int bci, bool no_fatal) const {
BreakpointInfo* bp = method_holder()->breakpoints();
for (; bp != nullptr; bp = bp->next()) {
if (bp->match(this, bci)) {
return bp->orig_bytecode();
}
}
{
if (!no_fatal) {
ResourceMark rm;
fatal("no original bytecode found in %s at bci %d", name_and_sig_as_C_string(), bci);
}
@@ -2026,7 +2027,7 @@ BreakpointInfo::BreakpointInfo(Method* m, int bci) {
_signature_index = m->signature_index();
_orig_bytecode = (Bytecodes::Code) *m->bcp_from(_bci);
if (_orig_bytecode == Bytecodes::_breakpoint)
_orig_bytecode = m->orig_bytecode_at(_bci);
_orig_bytecode = m->orig_bytecode_at(_bci, false);
_next = nullptr;
}
@@ -2035,7 +2036,7 @@ void BreakpointInfo::set(Method* method) {
{
Bytecodes::Code code = (Bytecodes::Code) *method->bcp_from(_bci);
if (code == Bytecodes::_breakpoint)
code = method->orig_bytecode_at(_bci);
code = method->orig_bytecode_at(_bci, false);
assert(orig_bytecode() == code, "original bytecode must be the same");
}
#endif
@@ -2211,7 +2212,10 @@ jmethodID Method::make_jmethod_id(ClassLoaderData* cld, Method* m) {
// Have to add jmethod_ids() to class loader data thread-safely.
// Also have to add the method to the list safely, which the lock
// protects as well.
assert(JmethodIdCreation_lock->owned_by_self(), "sanity check");
assert(AllowEnhancedClassRedefinition || JmethodIdCreation_lock->owned_by_self(), "sanity check");
if (AllowEnhancedClassRedefinition && m != m->newest_version()) {
m = m->newest_version();
}
ResourceMark rm;
log_debug(jmethod)("Creating jmethodID for Method %s", m->external_name());
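The new no_fatal parameter on Method::orig_bytecode_at above turns a lookup miss into a recoverable result instead of a fatal() call. A sketch of the breakpoint-list walk, with BreakpointEntry as an illustrative stand-in for HotSpot's BreakpointInfo:

```cpp
#include <cassert>

// Sketch of the breakpoint-table walk in Method::orig_bytecode_at.
struct BreakpointEntry {
    int bci;             // bytecode index where _breakpoint was planted
    int orig_bytecode;   // opcode saved when the breakpoint was set
    BreakpointEntry* next;
};

const int kNotFound = -1;  // stand-in for the "no original bytecode" result

// With no_fatal == true a miss returns kNotFound; the original code would
// call fatal() instead (modeled here by the assert).
int orig_bytecode_at(BreakpointEntry* bp, int bci, bool no_fatal) {
    for (; bp != nullptr; bp = bp->next) {
        if (bp->bci == bci) return bp->orig_bytecode;
    }
    assert(no_fatal && "no original bytecode found");
    return kNotFound;
}
```

The existing callers (BreakpointInfo's constructor and set()) pass false, preserving the old fatal behavior; only the new enhanced-redefinition paths tolerate a miss.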

View File

@@ -79,6 +79,9 @@ class Method : public Metadata {
int _vtable_index; // vtable index of this method (see VtableIndexFlag)
AccessFlags _access_flags; // Access flags
MethodFlags _flags;
// (DCEVM) Newer version of method available?
Method* _new_version;
Method* _old_version;
u2 _intrinsic_id; // vmSymbols::intrinsic_id (0 == _none)
@@ -144,6 +147,23 @@ class Method : public Metadata {
u2 name_index() const { return constMethod()->name_index(); }
void set_name_index(int index) { constMethod()->set_name_index(index); }
Method* new_version() const { return _new_version; }
void set_new_version(Method* m) { _new_version = m; }
Method* newest_version() { return (_new_version == NULL) ? this : _new_version->newest_version(); }
Method* old_version() const { return _old_version; }
void set_old_version(Method* m) {
/*if (m == NULL) {
_old_version = NULL;
return;
}*/
assert(_old_version == NULL, "may only be set once");
assert(this->code_size() == m->code_size(), "must have same code length");
_old_version = m;
}
const Method* oldest_version() const { return (_old_version == NULL) ? this : _old_version->oldest_version(); }
// signature
Symbol* signature() const { return constants()->symbol_at(signature_index()); }
u2 signature_index() const { return constMethod()->signature_index(); }
@@ -199,7 +219,7 @@ class Method : public Metadata {
// JVMTI breakpoints
#if !INCLUDE_JVMTI
Bytecodes::Code orig_bytecode_at(int bci) const {
Bytecodes::Code orig_bytecode_at(int bci, bool no_fatal) const {
ShouldNotReachHere();
return Bytecodes::_shouldnotreachhere;
}
@@ -208,7 +228,7 @@ class Method : public Metadata {
};
u2 number_of_breakpoints() const {return 0;}
#else // !INCLUDE_JVMTI
Bytecodes::Code orig_bytecode_at(int bci) const;
Bytecodes::Code orig_bytecode_at(int bci, bool no_fatal) const;
void set_orig_bytecode_at(int bci, Bytecodes::Code code);
void set_breakpoint(int bci);
void clear_breakpoint(int bci);

View File

@@ -293,6 +293,7 @@ JNI_ENTRY(jclass, jni_DefineClass(JNIEnv *env, const char *name, jobject loaderR
Klass* k = SystemDictionary::resolve_from_stream(&st, class_name,
class_loader,
cl_info,
NULL,
CHECK_NULL);
if (log_is_enabled(Debug, class, resolve)) {

View File

@@ -886,6 +886,7 @@ static jclass jvm_define_class_common(const char *name,
Klass* k = SystemDictionary::resolve_from_stream(&st, class_name,
class_loader,
cl_info,
NULL,
CHECK_NULL);
if (log_is_enabled(Debug, class, resolve)) {
@@ -973,6 +974,7 @@ static jclass jvm_lookup_define_class(jclass lookup, const char *name,
ik = SystemDictionary::resolve_from_stream(&st, class_name,
class_loader,
cl_info,
NULL,
CHECK_NULL);
if (log_is_enabled(Debug, class, resolve)) {
@@ -989,6 +991,7 @@ static jclass jvm_lookup_define_class(jclass lookup, const char *name,
ik = SystemDictionary::resolve_from_stream(&st, class_name,
class_loader,
cl_info,
NULL,
CHECK_NULL);
// The hidden class loader data has been artificially been kept alive to

File diff suppressed because it is too large.

View File

@@ -0,0 +1,199 @@
/*
* Copyright (c) 2003, 2016, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
*
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_VM_PRIMS_JVMTIREDEFINECLASSES2_HPP
#define SHARE_VM_PRIMS_JVMTIREDEFINECLASSES2_HPP
#include "jvmtifiles/jvmtiEnv.hpp"
#include "memory/oopFactory.hpp"
#include "memory/resourceArea.hpp"
#include "oops/objArrayKlass.hpp"
#include "oops/objArrayOop.hpp"
#include "gc/shared/gcVMOperations.hpp"
#include "../../../java.base/unix/native/include/jni_md.h"
//
// Enhanced class redefiner.
//
// This class implements VM_GC_Operation - the usual usage should be:
// VM_EnhancedRedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_redefine);
// VMThread::execute(&op);
// Which in turn runs:
// - doit_prologue() - calculate all affected classes (add subclasses etc) and load new class versions
// - doit() - main redefinition, adjust existing objects on the heap, clear caches
// - doit_epilogue() - cleanup
class VM_EnhancedRedefineClasses: public VM_GC_Operation {
private:
// These static fields are needed by ClassLoaderDataGraph::classes_do()
// facility and the AdjustCpoolCacheAndVtable helper:
static Array<Method*>* _old_methods;
static Array<Method*>* _new_methods;
static Method** _matching_old_methods;
static Method** _matching_new_methods;
static Method** _deleted_methods;
static Method** _added_methods;
static int _matching_methods_length;
static int _deleted_methods_length;
static int _added_methods_length;
static Klass* _the_class_oop;
static u8 _id_counter;
// The instance fields are used to pass information from
// doit_prologue() to doit() and doit_epilogue().
jint _class_count;
const jvmtiClassDefinition *_class_defs; // ptr to _class_count defs
// This operation is used by both RedefineClasses and
// RetransformClasses. Indicate which.
JvmtiClassLoadKind _class_load_kind;
GrowableArray<InstanceKlass*>* _new_classes;
jvmtiError _res;
// Set if any of the InstanceKlasses have entries in the ResolvedMethodTable
// to avoid walking after redefinition if the redefined classes do not
// have any entries.
bool _any_class_has_resolved_methods;
// (DCEVM) Enhanced class redefinition; affected klasses contain all classes that should be redefined,
// either because they were redefined directly or because of a class hierarchy or interface change
GrowableArray<Klass*>* _affected_klasses;
int _max_redefinition_flags;
// Performance measurement support. These timers do not cover all
// the work done for JVM/TI RedefineClasses() but they do cover
// the heavy lifting.
elapsedTimer _timer_vm_op_doit;
elapsedTimer _timer_vm_op_prologue;
elapsedTimer _timer_heap_iterate;
elapsedTimer _timer_heap_full_gc;
// Redefinition id used by JFR
u8 _id;
// These routines are roughly in call order unless otherwise noted.
// Load and link new classes (either redefined or affected by redefinition - subclass, ...)
//
// - find sorted affected classes
// - resolve new class
// - calculate redefine flags (field change, method change, supertype change, ...)
// - calculate modified fields and mapping to old fields
// - link new classes
//
// The result is stored in the _affected_klasses (old definitions) and _new_classes (new definitions) arrays.
jvmtiError load_new_class_versions(TRAPS);
// Searches for all affected classes and performs a sorting such that
// a supertype is always before a subtype.
jvmtiError find_sorted_affected_classes(TRAPS);
jvmtiError do_topological_class_sorting(TRAPS);
jvmtiError find_class_bytes(InstanceKlass* the_class, const unsigned char **class_bytes, jint *class_byte_count, jboolean *not_changed);
int calculate_redefinition_flags(InstanceKlass* new_class);
void calculate_instance_update_information(Klass* new_version);
void rollback();
static void mark_as_scavengable(nmethod* nm);
static void unregister_nmethod_g1(nmethod* nm);
static void register_nmethod_g1(nmethod* nm);
static void unpatch_bytecode(Method* method);
void root_oops_do(OopClosure *oopClosure);
// Figure out which new methods match old methods in name and signature,
// which methods have been added, and which are no longer present
void compute_added_deleted_matching_methods();
// Change jmethodIDs to point to the new methods
void update_jmethod_ids(Thread* current);
// marking methods as old and/or obsolete
void check_methods_and_mark_as_obsolete();
void transfer_old_native_function_registrations(InstanceKlass* new_class);
void redefine_single_class(Thread* current, InstanceKlass* new_class_oop);
// Increment the classRedefinedCount field in the specific InstanceKlass
// and in all direct and indirect subclasses.
void increment_class_counter(Thread* current, InstanceKlass *ik);
void flush_dependent_code();
u8 next_id();
void reinitializeJDKClasses();
static void check_class(InstanceKlass* k_oop);
static void dump_methods();
// Check that there are no old or obsolete methods
class CheckClass : public KlassClosure {
Thread* _thread;
public:
CheckClass(Thread* t) : _thread(t) {}
void do_klass(Klass* k);
};
// Unevolving classes may point to methods of the_class directly
// from their constant pool caches, itables, and/or vtables. We
// use the ClassLoaderDataGraph::classes_do() facility and this helper
// to fix up these pointers.
class ClearCpoolCacheAndUnpatch : public KlassClosure {
Thread* _thread;
public:
ClearCpoolCacheAndUnpatch(Thread* t) : _thread(t) {}
void do_klass(Klass* k);
};
// Clean MethodData out
class MethodDataCleaner : public KlassClosure {
public:
MethodDataCleaner() {}
void do_klass(Klass* k);
};
public:
VM_EnhancedRedefineClasses(jint class_count,
const jvmtiClassDefinition *class_defs,
JvmtiClassLoadKind class_load_kind);
VMOp_Type type() const { return VMOp_RedefineClasses; }
bool doit_prologue();
void doit();
void doit_epilogue();
bool allow_nested_vm_operations() const { return true; }
jvmtiError check_error() { return _res; }
u8 id() { return _id; }
// Modifiable test must be shared between IsModifiableClass query
// and redefine implementation
static bool is_modifiable_class(oop klass_mirror);
};
#endif // SHARE_VM_PRIMS_JVMTIREDEFINECLASSES2_HPP
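The header comment above describes the three-phase lifecycle driven through VMThread::execute: doit_prologue() in the requesting thread, then doit() at a safepoint, then doit_epilogue(). A hypothetical skeleton of that calling contract (class and function names are illustrative, not HotSpot's):

```cpp
#include <string>

// Hypothetical sketch of the three-phase VM-operation protocol the header
// comment describes; MiniVMOp and execute() are illustrative stand-ins.
struct MiniVMOp {
    std::string trace;
    bool doit_prologue() { trace += "prologue;"; return true; } // load/verify new versions
    void doit()          { trace += "doit;"; }                  // redefine at safepoint
    void doit_epilogue() { trace += "epilogue;"; }              // cleanup
};

// Mimics the VMThread::execute contract: doit runs only if the prologue
// (executed in the calling thread) succeeds; the epilogue follows doit.
void execute(MiniVMOp& op) {
    if (op.doit_prologue()) {
        op.doit();
        op.doit_epilogue();
    }
}
```

In the real operation, doit_prologue computes the affected classes and loads the new versions, so a failing prologue (bad class bytes, unsupported change) aborts before any safepoint work happens.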

View File

@@ -30,6 +30,7 @@
#include "classfile/vmClasses.hpp"
#include "classfile/vmSymbols.hpp"
#include "gc/shared/collectedHeap.hpp"
#include "compiler/compileBroker.hpp"
#include "interpreter/bytecodeStream.hpp"
#include "interpreter/interpreter.hpp"
#include "jfr/jfrEvents.hpp"
@@ -54,6 +55,7 @@
#include "prims/jvmtiManageCapabilities.hpp"
#include "prims/jvmtiRawMonitor.hpp"
#include "prims/jvmtiRedefineClasses.hpp"
#include "prims/jvmtiEnhancedRedefineClasses.hpp"
#include "prims/jvmtiTagMap.hpp"
#include "prims/jvmtiThreadState.inline.hpp"
#include "prims/jvmtiUtil.hpp"
@@ -407,8 +409,13 @@ JvmtiEnv::GetClassLoaderClasses(jobject initiating_loader, jint* class_count_ptr
// is_modifiable_class_ptr - pre-checked for null
jvmtiError
JvmtiEnv::IsModifiableClass(oop k_mirror, jboolean* is_modifiable_class_ptr) {
*is_modifiable_class_ptr = VM_RedefineClasses::is_modifiable_class(k_mirror)?
JNI_TRUE : JNI_FALSE;
if (AllowEnhancedClassRedefinition) {
*is_modifiable_class_ptr = VM_EnhancedRedefineClasses::is_modifiable_class(k_mirror)?
JNI_TRUE : JNI_FALSE;
} else {
*is_modifiable_class_ptr = VM_RedefineClasses::is_modifiable_class(k_mirror)?
JNI_TRUE : JNI_FALSE;
}
return JVMTI_ERROR_NONE;
} /* end IsModifiableClass */
@@ -438,7 +445,8 @@ JvmtiEnv::RetransformClasses(jint class_count, const jclass* classes) {
return JVMTI_ERROR_INVALID_CLASS;
}
if (!VM_RedefineClasses::is_modifiable_class(k_mirror)) {
if ((!AllowEnhancedClassRedefinition && !VM_RedefineClasses::is_modifiable_class(k_mirror)) ||
(AllowEnhancedClassRedefinition && !VM_EnhancedRedefineClasses::is_modifiable_class(k_mirror))) {
return JVMTI_ERROR_UNMODIFIABLE_CLASS;
}
@@ -470,13 +478,29 @@ JvmtiEnv::RetransformClasses(jint class_count, const jclass* classes) {
}
class_definitions[index].klass = jcls;
}
EventRetransformClasses event;
VM_RedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_retransform);
VMThread::execute(&op);
jvmtiError error = op.check_error();
jvmtiError error;
u8 op_id;
if (AllowEnhancedClassRedefinition) {
// MutexLocker sd_mutex(EnhancedRedefineClasses_lock, Monitor::_no_safepoint_check_flag);
// Stop compilation to avoid compiler race conditions (crashes) with advanced redefinition
CompileBroker::stopCompilationBeforeEnhancedRedefinition();
VM_EnhancedRedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_retransform);
VMThread::execute(&op);
CompileBroker::releaseCompilationAfterEnhancedRedefinition();
op_id = op.id();
error = (op.check_error());
} else {
VM_RedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_retransform);
VMThread::execute(&op);
op_id = op.id();
error = op.check_error();
}
if (error == JVMTI_ERROR_NONE) {
event.set_classCount(class_count);
event.set_redefinitionId(op.id());
event.set_redefinitionId(op_id);
event.commit();
}
return error;
@@ -489,12 +513,28 @@ jvmtiError
JvmtiEnv::RedefineClasses(jint class_count, const jvmtiClassDefinition* class_definitions) {
//TODO: add locking
EventRedefineClasses event;
VM_RedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_redefine);
VMThread::execute(&op);
jvmtiError error = op.check_error();
jvmtiError error;
u8 op_id;
if (AllowEnhancedClassRedefinition) {
// MutexLocker sd_mutex(EnhancedRedefineClasses_lock, Monitor::_no_safepoint_check_flag);
// Stop compilation to avoid compiler race conditions (crashes) with advanced redefinition
CompileBroker::stopCompilationBeforeEnhancedRedefinition();
VM_EnhancedRedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_redefine);
VMThread::execute(&op);
CompileBroker::releaseCompilationAfterEnhancedRedefinition();
op_id = op.id();
error = op.check_error();
} else {
VM_RedefineClasses op(class_count, class_definitions, jvmti_class_load_kind_redefine);
VMThread::execute(&op);
op_id = op.id();
error = op.check_error();
}
if (error == JVMTI_ERROR_NONE) {
event.set_classCount(class_count);
event.set_redefinitionId(op.id());
event.set_redefinitionId(op_id);
event.commit();
}
return error;


@@ -3083,7 +3083,7 @@ JvmtiDynamicCodeEventCollector::JvmtiDynamicCodeEventCollector() : _code_blobs(n
// iterate over any code blob descriptors collected and post a
// DYNAMIC_CODE_GENERATED event to the profiler.
JvmtiDynamicCodeEventCollector::~JvmtiDynamicCodeEventCollector() {
assert(!JavaThread::current()->owns_locks(), "all locks must be released to post deferred events");
assert(AllowEnhancedClassRedefinition || !JavaThread::current()->owns_locks(), "all locks must be released to post deferred events");
// iterate over any code blob descriptors that we collected
if (_code_blobs != nullptr) {
for (int i=0; i<_code_blobs->length(); i++) {


@@ -199,6 +199,7 @@ class JvmtiExport : public AllStatic {
// systems as needed to relax invariant checks.
static uint64_t _redefinition_count;
friend class VM_RedefineClasses;
friend class VM_EnhancedRedefineClasses;
inline static void increment_redefinition_count() {
JVMTI_ONLY(_redefinition_count++;)
}


@@ -166,6 +166,21 @@ static jvmtiError JNICALL GetCarrierThread(const jvmtiEnv* env, ...) {
return JVMTI_ERROR_NONE;
}
// extension function
static jvmtiError JNICALL IsEnhancedClassRedefinitionEnabled(const jvmtiEnv* env, ...) {
jboolean* enabled = NULL;
va_list ap;
va_start(ap, env);
enabled = va_arg(ap, jboolean *);
va_end(ap);
if (enabled == NULL) {
return JVMTI_ERROR_NULL_POINTER;
}
*enabled = (jboolean)AllowEnhancedClassRedefinition;
return JVMTI_ERROR_NONE;
}
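The extension function above follows the usual JVMTI variadic shape: pull a single OUT pointer from the `va_list`, null-check it, and write the flag value. A stripped-down sketch of that calling convention (types and error codes here are simplified stand-ins, not the real JVMTI ones):

```cpp
#include <cassert>
#include <cstdarg>

static bool g_flag_enabled = true;  // stand-in for AllowEnhancedClassRedefinition

// Variadic, like a jvmtiExtensionFunction: one OUT pointer is pulled
// from the va_list, null-checked, then written.
static int IsFlagEnabled(void* env, ...) {
  bool* enabled = nullptr;
  va_list ap;
  va_start(ap, env);
  enabled = va_arg(ap, bool*);
  va_end(ap);
  if (enabled == nullptr) return 100;  // ~ JVMTI_ERROR_NULL_POINTER
  *enabled = g_flag_enabled;
  return 0;                            // ~ JVMTI_ERROR_NONE
}
```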
// register extension functions and events. In this implementation we
// have a single extension function (to prove the API) that tests if class
// unloading is enabled or disabled. We also have a single extension event
@@ -246,6 +261,7 @@ void JvmtiExtensions::register_extensions() {
sizeof(class_unload_event_params)/sizeof(class_unload_event_params[0]),
class_unload_event_params
};
static jvmtiExtensionEventInfo virtual_thread_mount_ext_event = {
EXT_EVENT_VIRTUAL_THREAD_MOUNT,
(char*)"com.sun.hotspot.events.VirtualThreadMount",
@@ -264,6 +280,21 @@ void JvmtiExtensions::register_extensions() {
_ext_events->append(&class_unload_ext_event);
_ext_events->append(&virtual_thread_mount_ext_event);
_ext_events->append(&virtual_thread_unmount_ext_event);
static jvmtiParamInfo func_params_enh_redef[] = {
{ (char*)"IsEnhancedClassRedefinitionEnabled", JVMTI_KIND_OUT, JVMTI_TYPE_JBOOLEAN, JNI_FALSE }
};
static jvmtiExtensionFunctionInfo ext_func_enh_redef = {
(jvmtiExtensionFunction)IsEnhancedClassRedefinitionEnabled,
(char*)"com.sun.hotspot.functions.IsEnhancedClassRedefinitionEnabled",
(char*)"Tell if enhanced class redefinition is enabled",
sizeof(func_params_enh_redef)/sizeof(func_params_enh_redef[0]),
func_params_enh_redef,
0, // no non-universal errors
NULL
};
_ext_functions->append(&ext_func_enh_redef);
}


@@ -70,12 +70,19 @@ public:
void do_klass(Klass* k) {
// Collect all jclasses
_classStack.push((jclass) _env->jni_reference(Handle(_cur_thread, k->java_mirror())));
if (_dictionary_walk) {
// Collect array classes this way when walking the dictionary (because array classes are
// not in the dictionary).
for (Klass* l = k->array_klass_or_null(); l != nullptr; l = l->array_klass_or_null()) {
_classStack.push((jclass) _env->jni_reference(Handle(_cur_thread, l->java_mirror())));
// Collect all jclasses
// DCEVM: LoadedClassesClosure in dcevm7 iterated over classes from the SystemDictionary, so class "k" was
// always the new version (the SystemDictionary stores only new versions). LoadedClassesClosure's behavior
// changed in Java 8, where jvmtiLoadedClasses collects classes from all class loaders, so here we
// must filter to new versions only.
if (!AllowEnhancedClassRedefinition || k->new_version()==NULL) {
_classStack.push((jclass) _env->jni_reference(Handle(_cur_thread, k->java_mirror())));
if (_dictionary_walk) {
// Collect array classes this way when walking the dictionary (because array classes are
// not in the dictionary).
for (Klass* l = k->array_klass_or_null(); l != nullptr; l = l->array_klass_or_null()) {
_classStack.push((jclass) _env->jni_reference(Handle(_cur_thread, l->java_mirror())));
}
}
}
}


@@ -128,6 +128,13 @@ void JvmtiBreakpoint::each_method_version_do(method_action meth_act) {
Symbol* m_name = _method->name();
Symbol* m_signature = _method->signature();
// (DCEVM) Go through old versions of method
if (AllowEnhancedClassRedefinition) {
for (Method* m = _method->old_version(); m != NULL; m = m->old_version()) {
(m->*meth_act)(_bci);
}
}
// search previous versions if they exist
for (InstanceKlass* pv_node = ik->previous_versions();
pv_node != nullptr;


@@ -1380,6 +1380,7 @@ jvmtiError VM_RedefineClasses::load_new_class_versions() {
the_class->name(),
the_class->class_loader_data(),
cl_info,
false,
THREAD);
// Clear class_being_redefined just to be sure.


@@ -179,6 +179,9 @@ public:
assert(ref_kind_is_valid(ref_kind), "");
return (ref_kind & 1) != 0;
}
static bool ref_kind_is_static(int ref_kind) {
return !ref_kind_has_receiver(ref_kind) && (ref_kind != JVM_REF_newInvokeSpecial);
}
static int ref_kind_to_flags(int ref_kind);


@@ -33,6 +33,7 @@
#include "oops/access.inline.hpp"
#include "oops/method.hpp"
#include "oops/oop.inline.hpp"
#include "oops/klass.inline.hpp"
#include "oops/weakHandle.inline.hpp"
#include "prims/resolvedMethodTable.hpp"
#include "runtime/atomic.hpp"
@@ -377,6 +378,31 @@ public:
}
};
class AdjustMethodEntriesDcevm : public StackObj {
GrowableArray<oop>* _oops_to_update;
GrowableArray<Method*>* _old_methods;
public:
AdjustMethodEntriesDcevm(GrowableArray<oop>* oops_to_update, GrowableArray<Method*>* old_methods) :
_oops_to_update(oops_to_update), _old_methods(old_methods) {};
bool operator()(WeakHandle* entry) {
oop mem_name = entry->peek();
if (mem_name == NULL) {
// Removed
return true;
}
Method* old_method = (Method*)java_lang_invoke_ResolvedMethodName::vmtarget(mem_name);
if (old_method->is_old()) {
_oops_to_update->append(mem_name);
_old_methods->append(old_method);
}
return true;
}
};
// It is called at safepoint only for RedefineClasses
void ResolvedMethodTable::adjust_method_entries(bool * trace_name_printed) {
assert(SafepointSynchronize::is_at_safepoint(), "only called at safepoint");
@@ -384,6 +410,73 @@ void ResolvedMethodTable::adjust_method_entries(bool * trace_name_printed) {
AdjustMethodEntries adjust(trace_name_printed);
_local_table->do_safepoint_scan(adjust);
}
// It is called at safepoint only for EnhancedRedefineClasses
void ResolvedMethodTable::adjust_method_entries_dcevm(bool * trace_name_printed) {
assert(SafepointSynchronize::is_at_safepoint(), "only called at safepoint");
GrowableArray<oop> oops_to_update(0);
GrowableArray<Method*> old_methods(0);
AdjustMethodEntriesDcevm adjust(&oops_to_update, &old_methods);
_local_table->do_safepoint_scan(adjust);
Thread* thread = Thread::current();
for (int i = 0; i < oops_to_update.length(); i++) {
oop mem_name = oops_to_update.at(i);
Method *old_method = old_methods.at(i);
// 1. Remove old method, since we are going to update class that could be used for hash evaluation in parallel running ServiceThread
ResolvedMethodTableLookup lookup(thread, method_hash(old_method), old_method);
_local_table->remove(thread, lookup);
InstanceKlass* newer_klass = InstanceKlass::cast(old_method->method_holder()->new_version());
Method* newer_method;
// Method* new_method;
if (old_method->is_deleted()) {
newer_method = Universe::throw_no_such_method_error();
} else {
newer_method = newer_klass->method_with_idnum(old_method->orig_method_idnum());
assert(newer_method != NULL, "method_with_idnum() should not be NULL");
assert(newer_klass == newer_method->method_holder(), "call after swapping redefined guts");
assert(old_method != newer_method, "sanity check");
ResolvedMethodTableLookup lookup(thread, method_hash(newer_method), newer_method);
ResolvedMethodGet rmg(thread, newer_method);
if (_local_table->get(thread, lookup, rmg)) {
// old method was already adjusted if new method exists in _the_table
continue;
}
}
log_debug(redefine, class, update)("Adjusting method: '%s' of new class %s", newer_method->name_and_sig_as_C_string(), newer_klass->name()->as_C_string());
// 2. Update method
java_lang_invoke_ResolvedMethodName::set_vmtarget(mem_name, newer_method);
java_lang_invoke_ResolvedMethodName::set_vmholder(mem_name, newer_method->method_holder()->java_mirror());
newer_klass->set_has_resolved_methods();
ResourceMark rm;
if (!(*trace_name_printed)) {
log_debug(redefine, class, update)("adjust: name=%s", old_method->method_holder()->external_name());
*trace_name_printed = true;
}
log_debug(redefine, class, update, constantpool)
("ResolvedMethod method update: %s(%s)",
newer_method->name()->as_C_string(), newer_method->signature()->as_C_string());
// 3. add updated method to table again
add_method(newer_method, Handle(thread, mem_name));
}
}
#endif // INCLUDE_JVMTI
// Verification
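`adjust_method_entries_dcevm` deliberately splits the work into a collect phase (during the safepoint scan) and a mutate phase, because removing and re-inserting entries while the table is being scanned, and while the hash input is changing, would corrupt lookups. A toy sketch of the same two-phase pattern using standard containers (hypothetical names; the real code uses `GrowableArray` and the resolved-method table):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Phase 1: scan and only *collect* stale entries (here: negative values).
// Phase 2: remove, rewrite, and re-insert them -- never mutate mid-scan.
void adjust_entries(std::unordered_map<std::string, int>& table) {
  std::vector<std::string> stale;            // ~ GrowableArray of stale entries
  for (auto& kv : table)
    if (kv.second < 0) stale.push_back(kv.first);
  for (auto& key : stale) {
    int old_val = table[key];
    table.erase(key);                        // 1. remove the old entry
    table["new_" + key] = -old_val;          // 2./3. update and re-add
  }
}
```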


@@ -69,6 +69,7 @@ public:
// JVMTI Support - It is called at safepoint only for RedefineClasses
JVMTI_ONLY(static void adjust_method_entries(bool * trace_name_printed);)
JVMTI_ONLY(static void adjust_method_entries_dcevm(bool * trace_name_printed);)
// Debugging
static size_t items_count();


@@ -1814,6 +1814,38 @@ static unsigned int patch_mod_count = 0;
static unsigned int enable_native_access_count = 0;
static bool patch_mod_javabase = false;
// Check consistency of GC selection
bool Arguments::check_gc_consistency() {
// Ensure that the user has not selected conflicting sets
// of collectors.
uint i = 0;
if (UseSerialGC) i++;
if (UseParallelGC) i++;
if (UseG1GC) i++;
if (UseEpsilonGC) i++;
if (UseZGC) i++;
if (UseShenandoahGC) i++;
if (AllowEnhancedClassRedefinition) {
// Must use the Serial or G1 GC. This limitation applies because the instance-size-changing
// modifications are only built into those collectors' mark-and-compact code.
if (!UseSerialGC && !UseG1GC && i >= 1) {
jio_fprintf(defaultStream::error_stream(),
"Must use the Serial or G1 GC with enhanced class redefinition.\n");
return false;
}
}
if (i > 1) {
jio_fprintf(defaultStream::error_stream(),
"Conflicting collector combinations in option list; "
"please refer to the release notes for the combinations "
"allowed\n");
return false;
}
return true;
}
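The consistency check above counts the selected collectors and layers the enhanced-redefinition restriction on top. A compact standalone restatement of that decision logic (Epsilon/Z/Shenandoah omitted for brevity):

```cpp
#include <cassert>

// Mirror of check_gc_consistency(): count selected collectors, then
// apply the extra constraint that enhanced class redefinition needs
// the Serial or G1 collector.
bool gc_selection_ok(bool serial, bool parallel, bool g1,
                     bool enhanced_redef) {
  int i = 0;
  if (serial)   i++;
  if (parallel) i++;
  if (g1)       i++;
  if (enhanced_redef && !serial && !g1 && i >= 1)
    return false;  // "Must use the Serial or G1 GC ..."
  return i <= 1;   // at most one collector may be selected
}
```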
// Check the consistency of vm_init_args
bool Arguments::check_vm_args_consistency() {
// This may modify compiler flags. Must be called before CompilerConfig::check_args_consistency()
@@ -1835,6 +1867,8 @@ bool Arguments::check_vm_args_consistency() {
status = false;
}
status = status && check_gc_consistency();
status = CompilerConfig::check_args_consistency(status);
#if INCLUDE_JVMCI
if (status && EnableJVMCI) {
@@ -3719,6 +3753,21 @@ jint Arguments::parse(const JavaVMInitArgs* initial_cmd_args) {
// Set object alignment values.
set_object_alignment();
#if INCLUDE_JFR
if (FlightRecorder) {
if (AllowEnhancedClassRedefinition || StartFlightRecording != NULL) {
warning("Enhanced class redefinition has been disabled; it is not supported with Flight Recorder.");
AllowEnhancedClassRedefinition = false;
}
}
#endif
setup_hotswap_agent();
if (AllowEnhancedClassRedefinition) {
UseEmptySlotsInSupers = false;
}
#if !INCLUDE_CDS
if (CDSConfig::is_dumping_static_archive() || RequireSharedSpaces) {
jio_fprintf(defaultStream::error_stream(),
@@ -4100,3 +4149,77 @@ void Arguments::add_virtualization_information_property()
JFR_ONLY(virt_name = JfrOSInterface::virtualization_name();)
PropertyList_add(&_system_properties, new SystemProperty("jbr.virtualization.information", virt_name, false));
}
void Arguments::setup_hotswap_agent() {
if (DumpSharedSpaces)
return;
if (HotswapAgent == NULL || strcmp(HotswapAgent, "disabled") == 0)
return;
// Force AllowEnhancedClassRedefinition if HA is enabled
AllowEnhancedClassRedefinition = true;
bool ha_fatjar = strcmp(HotswapAgent, "fatjar") == 0;
bool ha_core = strcmp(HotswapAgent, "core") == 0;
// Set HotswapAgent
if (ha_fatjar || ha_core) {
char ext_path_str[JVM_MAXPATHLEN];
os::jvm_path(ext_path_str, sizeof(ext_path_str));
for (int i = 0; i < 3; i++) {
char *end = strrchr(ext_path_str, *os::file_separator());
if (end != NULL) *end = '\0';
}
size_t ext_path_length = strlen(ext_path_str);
if (ext_path_length >= 3) {
if (strcmp(ext_path_str + ext_path_length - 3, "lib") != 0) {
if (ext_path_length < JVM_MAXPATHLEN - 4) {
jio_snprintf(ext_path_str + ext_path_length, sizeof(ext_path_str) - ext_path_length, "%slib", os::file_separator());
ext_path_length += 4;
}
}
}
if (ext_path_length < JVM_MAXPATHLEN - 10) {
if (ha_fatjar) {
jio_snprintf(ext_path_str + ext_path_length, sizeof(ext_path_str) - ext_path_length,
"%shotswap%shotswap-agent.jar", os::file_separator(), os::file_separator());
} else {
jio_snprintf(ext_path_str + ext_path_length, sizeof(ext_path_str) - ext_path_length,
"%shotswap%shotswap-agent-core.jar", os::file_separator(), os::file_separator());
}
int fd = ::open(ext_path_str, O_RDONLY);
if (fd >= 0) {
::close(fd);
size_t length = strlen(ext_path_str) + 1;
char *options = NEW_C_HEAP_ARRAY(char, length, mtArguments);
jio_snprintf(options, length, "%s", ext_path_str);
add_init_agent("instrument", options, false); // pass the heap copy, not the stack buffer
jio_fprintf(defaultStream::output_stream(), "Starting HotswapAgent '%s'\n", ext_path_str);
}
else
{
jio_fprintf(defaultStream::error_stream(), "HotswapAgent not found on path:'%s'!\n", ext_path_str);
}
}
}
// TODO: open it only for org.hotswap.agent module
// Use to access java.lang.reflect.Proxy/proxyCache
create_numbered_module_property("jdk.module.addopens", "java.base/java.lang=ALL-UNNAMED", addopens_count++);
// Class of field java.lang.reflect.Proxy/proxyCache
create_numbered_module_property("jdk.module.addopens", "java.base/jdk.internal.loader=ALL-UNNAMED", addopens_count++);
// Use to access java.io.Reader, java.io.InputStream, java.io.FileInputStream
create_numbered_module_property("jdk.module.addopens", "java.base/java.io=ALL-UNNAMED", addopens_count++);
// java.beans.Introspector access
create_numbered_module_property("jdk.module.addopens", "java.desktop/java.beans=ALL-UNNAMED", addopens_count++);
// java.beans.Introspector access
create_numbered_module_property("jdk.module.addopens", "java.desktop/com.sun.beans=ALL-UNNAMED", addopens_count++);
// com.sun.beans.introspect.ClassInfo access
create_numbered_module_property("jdk.module.addopens", "java.desktop/com.sun.beans.introspect=ALL-UNNAMED", addopens_count++);
// com.sun.beans.introspect.util.Cache access
create_numbered_module_property("jdk.module.addopens", "java.desktop/com.sun.beans.util=ALL-UNNAMED", addopens_count++);
}
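`setup_hotswap_agent` derives the agent jar location from the `libjvm` path by stripping the last three path components with `strrchr`. The trimming loop in isolation, assuming `/` as the separator (the real code uses `os::file_separator()`):

```cpp
#include <cassert>
#include <cstring>

// Strip the last n path components in place, e.g. with n == 3:
// ".../lib/server/libjvm.so" -> "...". Same strrchr loop as
// setup_hotswap_agent(); '/' is an assumed separator here.
void strip_components(char* path, int n) {
  for (int i = 0; i < n; i++) {
    char* end = strrchr(path, '/');
    if (end != nullptr) *end = '\0';
  }
}
```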


@@ -389,12 +389,19 @@ class Arguments : AllStatic {
// Adjusts the arguments after the OS have adjusted the arguments
static jint adjust_after_os();
// Check for consistency in the selection of the garbage collector.
static bool check_gc_consistency(); // Check user-selected gc
// Check consistency or otherwise of VM argument settings
static bool check_vm_args_consistency();
// Used by os_solaris
static bool process_settings_file(const char* file_name, bool should_exist, jboolean ignore_unrecognized);
static size_t conservative_max_heap_alignment() { return _conservative_max_heap_alignment; }
// Initialize HotswapAgent
static void setup_hotswap_agent();
// Return the maximum size a heap with compressed oops can take
static size_t max_heap_for_compressed_oops();


@@ -136,6 +136,15 @@ JVMFlag::Error NUMAInterleaveGranularityConstraintFunc(size_t value, bool verbos
" ... %zu ]\n", value, min, max);
return JVMFlag::VIOLATES_CONSTRAINT;
}
return JVMFlag::SUCCESS;
}
JVMFlag::Error HotswapAgentConstraintFunc(ccstr value, bool verbose) {
if (value != NULL) {
if (strcmp("disabled", value) != 0 && strcmp("fatjar", value) != 0 && strcmp("core", value) != 0 && strcmp("external", value) != 0) {
JVMFlag::printError(verbose, "HotswapAgent(%s) must be one of disabled,fatjar,core or external.\n", value);
return JVMFlag::VIOLATES_CONSTRAINT;
}
}
return JVMFlag::SUCCESS;
}
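The constraint function simply whitelists the four accepted flag values. The same check as a standalone predicate (hypothetical helper name):

```cpp
#include <cassert>
#include <cstring>

// Same whitelist as HotswapAgentConstraintFunc: only four string values
// are accepted; anything else violates the constraint.
bool hotswap_agent_value_ok(const char* value) {
  if (value == nullptr) return true;  // an unset flag passes
  return strcmp(value, "disabled") == 0 || strcmp(value, "fatjar") == 0 ||
         strcmp(value, "core") == 0 || strcmp(value, "external") == 0;
}
```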


@@ -41,7 +41,8 @@
f(int, ContendedPaddingWidthConstraintFunc) \
f(int, PerfDataSamplingIntervalFunc) \
f(uintx, VMPageSizeConstraintFunc) \
f(size_t, NUMAInterleaveGranularityConstraintFunc)
f(size_t, NUMAInterleaveGranularityConstraintFunc) \
f(ccstr, HotswapAgentConstraintFunc)
RUNTIME_CONSTRAINTS(DECLARE_CONSTRAINT)


@@ -2017,6 +2017,20 @@ const int ObjectAlignmentInBytes = 8;
develop(uint, BinarySearchThreshold, 16, \
"Minimal number of elements in a sorted collection to prefer " \
"binary search over simple linear search." ) \
\
product(bool, AllowEnhancedClassRedefinition, false, \
"Allow enhanced class redefinition beyond swapping method " \
"bodies") \
\
product(ccstr, HotswapAgent, "disabled", \
"Specify the HotswapAgent image to be used. " \
"disabled: HotswapAgent is disabled (default); " \
"fatjar: full HA, use the integrated hotswap-agent.jar; " \
"core: core HA, use the integrated hotswap-agent-core.jar; " \
"external: external HA, use an external HA and open the required JDK " \
"modules.") \
constraint(HotswapAgentConstraintFunc, AfterErgo)
// end of RUNTIME_FLAGS


@@ -184,7 +184,7 @@ class ThreadInVMfromNative : public ThreadStateTransition {
class ThreadToNativeFromVM : public ThreadStateTransition {
public:
ThreadToNativeFromVM(JavaThread *thread) : ThreadStateTransition(thread) {
assert(!thread->owns_locks(), "must release all locks when leaving VM");
assert(AllowEnhancedClassRedefinition || !thread->owns_locks(), "must release all locks when leaving VM");
transition_from_vm(thread, _thread_in_native);
}
~ThreadToNativeFromVM() {


@@ -56,7 +56,8 @@ JavaCallWrapper::JavaCallWrapper(const methodHandle& callee_method, Handle recei
JavaThread* thread = THREAD;
guarantee(thread->is_Java_thread(), "crucial check - the VM thread cannot and must not escape to Java code");
assert(!thread->owns_locks(), "must release all locks when leaving VM");
// DCEVM allow locks on leaving JVM
assert(AllowEnhancedClassRedefinition || !thread->owns_locks(), "must release all locks when leaving VM");
guarantee(thread->can_call_java(), "cannot make java calls from the native compiler");
_result = result;


@@ -83,6 +83,7 @@ Monitor* CompileTaskWait_lock = nullptr;
Monitor* MethodCompileQueue_lock = nullptr;
Monitor* CompileThread_lock = nullptr;
Monitor* Compilation_lock = nullptr;
Monitor* DcevmCompilation_lock = nullptr;
Monitor* CompileTaskAlloc_lock = nullptr;
Mutex* CompileStatistics_lock = nullptr;
Mutex* DirectivesStack_lock = nullptr;
@@ -108,6 +109,7 @@ Mutex* OldSets_lock = nullptr;
Mutex* Uncommit_lock = nullptr;
Monitor* RootRegionScan_lock = nullptr;
Mutex* EnhancedRedefineClasses_lock = nullptr;
Mutex* Management_lock = nullptr;
Monitor* MonitorDeflation_lock = nullptr;
Monitor* Service_lock = nullptr;
@@ -256,6 +258,7 @@ void mutex_init() {
MUTEX_DEFN(Terminator_lock , PaddedMonitor, safepoint, true);
MUTEX_DEFN(InitCompleted_lock , PaddedMonitor, nosafepoint);
MUTEX_DEFN(Notify_lock , PaddedMonitor, safepoint, true);
MUTEX_DEFN(EnhancedRedefineClasses_lock , PaddedMutex , safepoint); // for ensuring that class redefinition is not done in parallel
MUTEX_DEFN(JfieldIdCreation_lock , PaddedMutex , safepoint);
@@ -284,6 +287,8 @@ void mutex_init() {
MUTEX_DEFN(Compilation_lock , PaddedMonitor, nosafepoint);
}
def(DcevmCompilation_lock , PaddedMonitor, nosafepoint);
#if INCLUDE_JFR
MUTEX_DEFN(JfrBuffer_lock , PaddedMutex , event);
MUTEX_DEFN(JfrMsg_lock , PaddedMonitor, event);


@@ -85,6 +85,7 @@ extern Monitor* CompileThread_lock; // a lock held by compile threa
extern Monitor* Compilation_lock; // a lock used to pause compilation
extern Mutex* TrainingData_lock; // a lock used when accessing training records
extern Monitor* TrainingReplayQueue_lock; // a lock held when class are added/removed to the training replay queue
extern Monitor* DcevmCompilation_lock; // a lock used to pause compilation from dcevm
extern Monitor* CompileTaskAlloc_lock; // a lock held when CompileTasks are allocated
extern Monitor* CompileTaskWait_lock; // a lock held when CompileTasks are waited/notified
extern Mutex* CompileStatistics_lock; // a lock held when updating compilation statistics
@@ -103,6 +104,8 @@ extern Mutex* RawMonitor_lock;
extern Mutex* PerfDataMemAlloc_lock; // a lock on the allocator for PerfData memory for performance data
extern Mutex* PerfDataManager_lock; // a long on access to PerfDataManager resources
extern Mutex* EnhancedRedefineClasses_lock; // locks classes from parallel enhanced redefinition
extern Mutex* FreeList_lock; // protects the free region list during safepoints
extern Mutex* OldSets_lock; // protects the old region sets
extern Mutex* Uncommit_lock; // protects the uncommit list when not at safepoints


@@ -593,6 +593,12 @@ bool Reflection::verify_member_access(const Klass* current_class,
bool classloader_only,
bool protected_restriction,
TRAPS) {
// (DCEVM) Decide accessibility based on active version
if (AllowEnhancedClassRedefinition && current_class != NULL) {
current_class = current_class->active_version();
}
// Verify that current_class can access a member of member_class, where that
// field's access bits are "access". We assume that we've already verified
// that current_class can access member_class.


@@ -108,6 +108,7 @@
template(RehashSymbolTable) \
template(PrintCompileQueue) \
template(PrintClassHierarchy) \
template(ThreadsSuspendJVMTI) \
template(PrintClasses) \
template(PrintMetadata) \
template(GTestExecuteAtSafepoint) \


@@ -892,6 +892,10 @@ class GrowableArrayIterator : public StackObj {
assert(_array == rhs._array, "iterator belongs to different array");
return _position != rhs._position;
}
bool at_end() { return _position >= _array->length(); }
bool has_next() { return _position < _array->length() - 1; }
};
// Arrays for basic types


@@ -477,6 +477,19 @@ redefineClasses(PacketInputStream *in, PacketOutputStream *out)
if (ok == JNI_TRUE) {
jvmtiError error;
jlong* classIds = NULL;
if (gdata->isEnhancedClassRedefinitionEnabled) {
classIds = jvmtiAllocate(classCount*(int)sizeof(jlong));
if (classIds == NULL) {
outStream_setError(out, JDWP_ERROR(OUT_OF_MEMORY));
return JNI_TRUE;
}
for (i = 0; i < classCount; i++) {
classIds[i] = commonRef_refToID(env, classDefs[i].klass);
}
}
error = JVMTI_FUNC_PTR(gdata->jvmti,RedefineClasses)
(gdata->jvmti, classCount, classDefs);
if (error != JVMTI_ERROR_NONE) {
@@ -486,6 +499,19 @@ redefineClasses(PacketInputStream *in, PacketOutputStream *out)
for ( i = 0 ; i < classCount; i++ ) {
eventHandler_freeClassBreakpoints(classDefs[i].klass);
}
if (gdata->isEnhancedClassRedefinitionEnabled && classIds != NULL) {
/* Update tags in jvmti to use new classes */
for ( i = 0 ; i < classCount; i++ ) {
/* pointer in classIds[i] is updated by advanced redefinition to a new class */
error = commonRef_updateTags(env, classIds[i]);
if (error != JVMTI_ERROR_NONE) {
break;
}
}
jvmtiDeallocate((void*) classIds);
}
}
}


@@ -729,3 +729,32 @@ commonRef_unlock(void)
{
debugMonitorExit(gdata->refLock);
}
/*
* Update JVMTI tags, used from enhanced redefinition
*/
jvmtiError
commonRef_updateTags(JNIEnv *env, jlong id)
{
jvmtiError error;
error = JVMTI_ERROR_NONE;
if (id == NULL_OBJECT_ID) {
return error;
}
debugMonitorEnter(gdata->refLock); {
RefNode *node;
node = findNodeByID(env, id);
if (node != NULL) {
error = JVMTI_FUNC_PTR(gdata->jvmti, SetTag)
(gdata->jvmti, node->ref, ptr_to_jlong(node));
} else {
printf("Node not found\n");
}
} debugMonitorExit(gdata->refLock);
return error;
}


@@ -43,4 +43,6 @@ void commonRef_compact(void);
void commonRef_lock(void);
void commonRef_unlock(void);
jvmtiError commonRef_updateTags(JNIEnv *env, jlong id);
#endif


@@ -43,6 +43,7 @@ BackendGlobalData *gdata = NULL;
static jboolean isInterface(jclass clazz);
static jboolean isArrayClass(jclass clazz);
static char * getPropertyUTF8(JNIEnv *env, char *propertyName);
static jboolean isEnhancedClassRedefinitionEnabled(JNIEnv *env);
/* Save an object reference for use later (create a NewGlobalRef) */
void
@@ -273,6 +274,8 @@ util_initialize(JNIEnv *env)
saveGlobalRef(env, localAgentProperties, &(gdata->agent_properties));
}
gdata->isEnhancedClassRedefinitionEnabled = isEnhancedClassRedefinitionEnabled(env);
} END_WITH_LOCAL_REFS(env);
}
@@ -1858,6 +1861,36 @@ getPropertyUTF8(JNIEnv *env, char *propertyName)
return value;
}
static jboolean
isEnhancedClassRedefinitionEnabled(JNIEnv *env)
{
jvmtiError error;
jint count, i;
jvmtiExtensionFunctionInfo* ext_funcs;
error = JVMTI_FUNC_PTR(gdata->jvmti,GetExtensionFunctions)
(gdata->jvmti, &count, &ext_funcs);
if (error != JVMTI_ERROR_NONE) {
return JNI_FALSE;
}
for (i=0; i<count; i++) {
if (strcmp(ext_funcs[i].id, (char*)"com.sun.hotspot.functions.IsEnhancedClassRedefinitionEnabled") == 0) {
jboolean enabled;
error = (*ext_funcs[i].func)(gdata->jvmti, &enabled);
if (error != JVMTI_ERROR_NONE) {
return JNI_FALSE;
} else {
return enabled;
}
}
}
return JNI_FALSE;
}
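On the debug-agent side, the capability is discovered by scanning the JVMTI extension-function list for the id string registered above. A simplified sketch of that probe (the `ExtFunc` struct is a stand-in for `jvmtiExtensionFunctionInfo`, and the function pointer here takes no arguments for brevity):

```cpp
#include <cassert>
#include <cstring>

// Stand-in for jvmtiExtensionFunctionInfo: an id string plus a callable.
struct ExtFunc { const char* id; bool (*func)(); };

// Walk the extension list and invoke the matching function, as
// isEnhancedClassRedefinitionEnabled() does via GetExtensionFunctions.
bool probe_enhanced_redef(const ExtFunc* funcs, int count) {
  for (int i = 0; i < count; i++) {
    if (strcmp(funcs[i].id,
               "com.sun.hotspot.functions.IsEnhancedClassRedefinitionEnabled") == 0) {
      return funcs[i].func();
    }
  }
  return false;  // extension absent -> treated as disabled
}
```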
jboolean
isMethodObsolete(jmethodID method)
{


@@ -143,6 +143,9 @@ typedef struct {
/* Indication that VM_DEATH has been received and the JVMTI callbacks have been cleared. */
volatile jboolean jvmtiCallBacksCleared;
/* true if enhanced class redefinition is enabled */
jboolean isEnhancedClassRedefinitionEnabled;
} BackendGlobalData;
extern BackendGlobalData * gdata;