Re: Java 17 Compatibility

Andy
 
Edited


Java 17 Compatibility

Shivaraj Sivasankaran
 

We use the components below from DataNucleus. Since we plan to upgrade to Java 17, we need to know whether these components are compatible with a Java 17 runtime environment.

Vendor        Software Name             Version
Data Nucleus  datanucleus-api-jdo       5.2.7
Data Nucleus  datanucleus-core          5.2.9
Data Nucleus  datanucleus-joda-time     5.2.0-release
Data Nucleus  datanucleus-hbase         5.2.2
Data Nucleus  datanucleus-jdo-query     5.0.9
Data Nucleus  datanucleus-maven-plugin  5.2.1


Re: Spanner Adapter type conversion/matching problem

Andy
 
Edited

Hi,

if you have a class with a field of type "int" then, to find its JavaTypeMapping, DataNucleus will look at the column mappings that the adapter has registered. In your adapter's case it has Integer, so it will find the JDBC type INTEGER as the DEFAULT for that Java type, and the IntegerColumnMapping for the column. It will also make use of either what the JDBC driver provides for JDBC INTEGER or what your adapter provides for INTEGER. To persist data it will probably call IntegerColumnMapping.setInt, and to read data from the database it will probably call IntegerColumnMapping.getInt.

You should look in the log and find entries like this:
Field [mydomain.model.A.id] -> Column(s) [A.ID] using mapping of type "org.datanucleus.store.rdbms.mapping.java.LongMapping" (org.datanucleus.store.rdbms.mapping.column.BigIntColumnMapping)
This tells you what mappings are being used, and hence what behaviour you should expect for read/write operations.
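The lookup described above can be sketched in isolation. This is a simplified model, not DataNucleus code — the map contents mirror the names mentioned above (INTEGER, IntegerColumnMapping, etc.), but the real resolution goes through the adapter's registered column mappings:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model (not DataNucleus code) of how an adapter's registered
// column mappings resolve a Java type to a JDBC type and a column mapping.
public class TypeMappingSketch
{
    // Java type -> default JDBC type, as registered by the adapter
    static final Map<String, String> DEFAULT_JDBC_TYPE = new HashMap<>();
    // JDBC type -> column mapping class handling read/write for that column
    static final Map<String, String> COLUMN_MAPPING = new HashMap<>();

    static
    {
        // The adapter registers "int" with INTEGER as its default JDBC type
        DEFAULT_JDBC_TYPE.put("int", "INTEGER");
        DEFAULT_JDBC_TYPE.put("long", "BIGINT");
        COLUMN_MAPPING.put("INTEGER", "IntegerColumnMapping");
        COLUMN_MAPPING.put("BIGINT", "BigIntColumnMapping");
    }

    // Resolve the column mapping used for a field of the given Java type
    static String columnMappingFor(String javaType)
    {
        String jdbcType = DEFAULT_JDBC_TYPE.get(javaType);
        return COLUMN_MAPPING.get(jdbcType);
    }

    public static void main(String[] args)
    {
        // An "int" field resolves to JDBC INTEGER, handled by IntegerColumnMapping,
        // whose setInt/getInt are then used to persist/read the value
        System.out.println(columnMappingFor("int"));
    }
}
```

If Spanner's metadata reports its integer type as BIGINT, a field resolved through the "long" route ends up with the BigInt mapping, which matches the java.lang.Long symptom in the question below.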

The CloudSpannerTypeInfo entries are added when the JDBC driver doesn't provide them. Does it provide them itself? Run DataNucleus SchemaTool "dbinfo" mode, which tells you what is either provided by the JDBC driver or added by your adapter. You can look at the info under here for other datastores for reference. Maybe contribute this output to GitHub also?


Spanner Adapter type conversion/matching problem

yunus@...
 

Hi everyone,

Recently I created a pull request for the Spanner database adapter. While running the DataNucleus tests I encountered an issue that made me realize I have not fully understood the type conversions.
In the DataNucleus tests there is a Person class with an int age attribute. This field is correctly created with the INT64 type in Spanner, but during a read I get an error saying it was impossible to set the field "AGE" type "java.lang.Long".
It looks like the Spanner adapter treats this field as the Java type Long while reading. As a side note, Spanner has a single integer type, INT64, which is mapped to BIGINT in the database metadata.

So here is my question. I have added sqlTypesforJdbcTypes, which performs a mapping. But I also have registerColumnMapping, which maps a Java type to a JDBC and SQL type.
Could you please explain how the Java type, SQL type, and JDBC type are related to each other? How does DataNucleus make a decision on type mapping?

Best regards
yunus


Re: Datanucleus with Informix

gustavo.echenique@...
 

Thanks for your information, Andy!

It is very important for me.


Re: Datanucleus with Informix

Andy
 
Edited

Hello,
DataNucleus has a datastore adapter for Informix, shown here. This was developed quite a few years ago by someone who also provided these docs specific to the Informix adapter.
I don't have Informix so I can't comment on how up to date it is. But then you haven't presented any meaningful information about what the "problem" is. A problem has an error message, for example. DataNucleus (RDBMS) requires a valid JDBC driver to connect to the database.

What you do in "Spring Boot" is something you'd have to ask the people who develop that. DataNucleus is simply a standards-compliant JDO / JPA provider providing support for those APIs.


Datanucleus with Informix

gustavo.echenique@...
 

Dear colleagues:

I am developing a project with Spring Boot and Informix IDS 7.31TD9, but I find that I cannot connect to the database.
I tried loading the dependencies of all the versions of Informix that are in the Maven repository and it did not work, nor did the strategy of loading the Informix JDBC 3.00JC3 jars in the project classpath, which has allowed me to connect from Java EE 7 projects.

I need to know whether DataNucleus supports this version of Informix and, if so, whether there is documentation on replacing Hibernate in Spring Boot.
 
Thank you in advance for your attention.


Re: Programmatic retrieval of a persistent class' discriminator

Andy
 
Edited

The discriminator is clearly in the retrieval SQL when you query for objects, but it is only used to decide which type of class to instantiate.
It would be part of the metadata for the class, so you could use the JDO MetaData API to extract the discriminator value for a specific class:

pmf.getMetadata(MyClass.class.getName()).getInheritanceMetadata().getDiscriminatorMetadata().getValue();


Programmatic retrieval of a persistent class' discriminator

Page bloom
 
Edited

Clearly we can associate a unique (usually integer in our case) class ID / discriminator to each persistent class.

Given that this class ID is always unique across all classes within our system, I have found a use case where that concise unique integer would be beneficial/optimal outside of persistence (e.g. concisely identifying classes across microservice-linked components without having to supply a FQCN). The trouble is, it's embedded in the metadata (.jdo) file at build time and I haven't been able to work out a convenient way of retrieving it at runtime.

Given a persistent class, i.e. a Class<?> (not an actual persistent object instance), does the JDO API or any DN extension support retrieving the discriminator (class ID) associated with that class?

I realize that the discriminator is not only optional but it also supports a variety of types, not just integer but perhaps there is an internal discriminator "object" that holds this info that we could extract from.


Re: How to identify fields that are made dirty during an InstanceCallback from within a listener?

Andy
 
Edited

The JDO spec (12.15) explicitly insists that InstanceLifecycleListener methods are called BEFORE InstanceCallbacks, so we can't simply change the code to call them in the opposite order. Hence, using LifecycleListeners for this will always miss anything the user does subsequently (in jdoPreStore).

You could look at adding an extra (DN-specific) listener giving you the explicit info you require, for classes that you want to do auditing on, e.g. AuditPersistListener, and then update JDOCallbackHandler to call all AuditPersistListeners when all other listeners/callbacks have been called, i.e. https://github.com/datanucleus/datanucleus-api-jdo/blob/master/src/main/java/org/datanucleus/api/jdo/JDOCallbackHandler.java#L159

This means a vendor extension, so it is dependent on DN code, but then your solution will be either way.
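The ordering being suggested might be sketched like this. This is plain Java, not DN code; AuditPersistListener is the name proposed above, but its method name and the call sites are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (not actual DN code) of the suggested ordering: standard listeners
// fire first, then the jdoPreStore() callback, and only then the new audit
// listener, which therefore sees fields dirtied by the callback.
public class AuditOrderingSketch
{
    // Hypothetical vendor-extension listener, invoked after everything else
    interface AuditPersistListener
    {
        void auditPreStore(Object pc);
    }

    static final List<String> calls = new ArrayList<>();

    static void preStore(Object pc, AuditPersistListener audit)
    {
        calls.add("StoreLifecycleListener.preStore");  // JDO spec 12.15: listeners first
        calls.add("StoreCallback.jdoPreStore");        // then instance callbacks
        audit.auditPreStore(pc);                       // vendor extension: called last
    }

    public static void main(String[] args)
    {
        preStore(new Object(), pc -> calls.add("AuditPersistListener.auditPreStore"));
        System.out.println(calls);
    }
}
```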


How to identify fields that are made dirty during an InstanceCallback from within a listener?

ebenzacar@...
 

Hi Andy,

I'm running into a bit of a catch-22 situation here and would love some ideas if you have any.

I'm trying to identify all dirty fields of an object during a StoreLifecycleListener.  I have both the preStore() and postStore() methods implemented.
The catch-22 situation I'm running into is that only the preStore() has the dirty flags identified.  Once DN persists the data, it clears all the dirty flags and then calls the postStore() listeners.  So by the time it gets to the postStore() listener, all fields are "clean".


However, the preStore() LifecycleListener is fired _before_ StoreCallback.jdoPreStore(). This means that if a field is changed/modified in jdoPreStore(), I won't have access to the updated data during the StoreLifecycleListener.preStore() event.

If I wait until the postStore() event to retrieve the value (since it is changed AFTER preStore()), then I have no way of identifying which field(s) were changed since the dirty flags are cleared by that point.

I'm a bit in a pickle; I want to identify exactly which field(s) and value(s) were updated/sent to the DB, but cannot find the right approach.

Any ideas/thoughts?

Thanks,

Eric



Re: Where/how to report updates needed to documentation?

ebenzacar@...
 


Re: Where/how to report updates needed to documentation?

Andy
 
Edited

Everything is in GitHub, of course (link). The extensions docs are in AsciiDoc format, under https://github.com/datanucleus/docs-accessplatform/tree/master/src/main/asciidoc/extensions


Where/how to report updates needed to documentation?

ebenzacar@...
 
Edited

Hi,

While reading through the Plugin documentation on the website, I noticed an oversight in the docs. How/where can I report this? I tried to find a GitHub repo for the website, but couldn't.

The Java Types section is missing the parameter/documentation for:
  - container-handler

I'm not 100% sure how to document it, other than to copy/paste the definition from the `org.datanucleus.store.types.ContainerHandler` class and note that it is mandatory for a container object and must implement the container-handler.

Thanks,

Eric


Re: Force index usage in mySql

Andy
 

Non-standard SQL of that nature is specific to one RDBMS (i.e. MySQL), and is not supported. All SQL is in the RDBMS plugin if you want to add (some level of) support for it. If doing so, try to make the support general enough, something like "optional SQL to put after a FROM table" ("FORCE INDEX" is deprecated after MySQL v8).


Force index usage in mySql

Christophe
 

Hi,

I am using DataNucleus with MySQL and I am looking for a way to force index usage in a query (JDOQL). The generated SQL should look like "select x from T FORCE INDEX (z) where ...". I have inspected the documentation and I do not think this is something that can be done right out of the box. Am I right? If so, I would appreciate some hints on how to achieve this. Using a plugin?

Thank you for your help,
Christophe


Re: StateManager savedImage retains references - not the original values

Andy
 
Edited

It knows what to update based on what methods the user calls on the collection/map. Take the example of a field of type Collection, using
https://github.com/datanucleus/datanucleus-core/blob/master/src/main/java/org/datanucleus/store/types/wrappers/backed/Collection.java

The user does an "add" of an element, so calls https://github.com/datanucleus/datanucleus-core/blob/master/src/main/java/org/datanucleus/store/types/wrappers/backed/Collection.java#L645

If using optimistic txns (i.e. delaying sending updates to the DB until required) it is a "queued update", so it calls
ownerSM.getExecutionContext().addOperationToQueue(new CollectionAddOperation(ownerSM, backingStore, element));
and otherwise (sending updates to the DB immediately) it calls backingStore.add(ownerSM, element, useCache ? delegate.size() : -1);

But then that is the whole point of using a proxy: it intercepts calls and takes action as required. The current use-cases here are for persistence, and since it has all the info it needs for that, no "original values" are explicitly stored there. Your use-case is different from what it is designed to cater for.
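The interception idea can be sketched in miniature. This is plain Java, not the actual backed wrapper; TrackedList and its operation-queue strings are illustrative stand-ins for the CollectionAddOperation / backing-store machinery described above:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (not DN code) of the proxy idea: a wrapper intercepts add(),
// queues an operation describing the change, and delegates to the real
// collection, so no "original values" need to be retained.
public class ProxySketch
{
    static class TrackedList<E>
    {
        final List<E> delegate = new ArrayList<>();            // the real collection
        final List<String> operationQueue = new ArrayList<>(); // queued updates for the store

        boolean add(E element)
        {
            // Analogous to queueing a CollectionAddOperation in optimistic txns
            operationQueue.add("ADD " + element);
            return delegate.add(element);
        }
    }

    public static void main(String[] args)
    {
        TrackedList<String> list = new TrackedList<>();
        list.add("a");
        list.add("b");
        // Only the two ADD operations need flushing, not the whole collection
        System.out.println(list.operationQueue);
    }
}
```

This is also why a single changed entry in a 1000-item map does not force rewriting the whole map: only the intercepted operation is replayed against the datastore.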


Re: StateManager savedImage retains references - not the original values

ebenzacar@...
 

I found the set of Proxy collections in the `org.datanucleus.store.types.wrappers` package. I don't see any state trackers in the SCO proxies that would retain the original values before they are modified. Does anything like this exist in some other form?

I see that DN follows a very similar pattern/design to OpenJPA (and likely Hibernate, etc.) with its Proxy objects, but OpenJPA uses a ChangeTracker on its Proxy objects to identify which objects in the SCO have been changed or modified.

How does DN identify which values in an SCO map need to be updated if there is no equivalent ChangeTracker? I cannot see where/how within the DN Proxy objects DN is able to know which values need to be changed.

For instance, given a map of 1000 items, if I modify the value of one key, how does DN identify which key/value pair needs updating (as opposed to persisting the entire map again)?

Thanks,

Eric


Re: StateManager savedImage retains references - not the original values

Andy
 
Edited

1. Talk to the developers of TJDO; they wrote the basic StateManager handling before DataNucleus existed.
2. If it kept a deep copy then you'd get an object graph stored alongside each object. A StateManager needs to be minimal in storage, or you slow things down. You have seen the code, so you can see there is no way to "override" it with the current codebase, without contributions.
3. No idea.

When an SCO collection/map (not an array, since an array is not such a type) field is accessed, the user is returned the wrapper / proxy, which will be one of these for RDBMS, and one of these for non-RDBMS. It is enabled automatically, otherwise DataNucleus would never catch updates to the collection/map.


StateManager savedImage retains references - not the original values

ebenzacar@...
 
Edited

Hi,

I've been trying to work with the StateManager and its savedImage object, which I had expected to be a representation of the current object at the time that "saveFields()" is called. Unfortunately, I just noticed that this is not the case for any Array / Collection elements. That is to say, the array object is copied from the source Persistable to the savedImage, but as a shallow copy only; the savedImage retains the same Collection/Array object as the source. This means that any changes to the source Collection will be reflected in the savedImage's collection as well.

To be more clear, here are some sample objects I'm working with:

Persistable:
public class Address implements Detachable, Persistable
{
    @Join(column="ID")
    @Element(column="ELEMENT")
    Street[] street;
    ....

    protected final void dnCopyField(Address obj, int index)
    {
        switch (index)
        {
            case 0:
                this.street = obj.street;
                break;
            default:
                throw new IllegalArgumentException("out of field index :" + index);
        }
    }

    ...
}


StateManagerImpl:

public void saveFields()
{
    savedImage = myPC.dnNewInstance(this);
    savedImage.dnCopyFields(myPC, cmd.getAllMemberPositions());
    savedPersistenceFlags = persistenceFlags;
    savedLoadedFields = loadedFields.clone();
}
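Assuming dnCopyFields copies array fields by reference as shown above, the consequence can be demonstrated with plain Java (not DN code; the helper names are illustrative only):

```java
import java.util.Arrays;

// Plain-Java demonstration of the shallow-copy issue described above: copying
// an array reference means the "saved image" sees later mutations.
public class ShallowCopySketch
{
    // What dnCopyField effectively does for an array field: copy the reference
    static String[] shallowCopy(String[] src)
    {
        return src;
    }

    // What a deep copy of the array would look like instead
    static String[] deepCopy(String[] src)
    {
        return Arrays.copyOf(src, src.length);
    }

    public static void main(String[] args)
    {
        String[] street = {"Main St"};
        String[] savedImage = shallowCopy(street); // same array object as street
        street[0] = "Changed St";
        System.out.println(savedImage[0]);         // prints "Changed St": the saved image mutated too

        String[] deep = deepCopy(street);
        street[0] = "Changed Again";
        System.out.println(deep[0]);               // prints "Changed St": unaffected by the later change
    }
}
```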


A few questions:
1) Is this intentional?
2) Is there any way to override this behaviour and keep a deep copy of the value instead of only a reference?
3) Is there any other way I can capture the original value of a Collection/Array/Map/etc. before it is updated/modified? Are there any methods in the StateManager (or elsewhere) that can identify when the value of a Collection/etc. is being modified?
I read in the DN docs regarding SCOs:
> proxy : whether the field is represented by a "proxy" that intercepts any operations to detect whether it has changed internally.

Where can I find the implementation of this "proxy"?  Is it enabled automatically/by default?  Do I need to enable a flag in the dn persistence manager to enable it?

Thanks,

Eric