Re: Evict parent class from Query Results Cache at commit


The docs describe how you can gain access to internal objects, and hence evict particular entries, should you need to.
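As a sketch of that approach (assuming the DataNucleus JDO API classes `JDOPersistenceManagerFactory` and `JDOQueryCache`; verify the method names against the docs for your DataNucleus version), evicting the cached query results after your commit could look like:

```java
import javax.jdo.PersistenceManagerFactory;

import org.datanucleus.api.jdo.JDOPersistenceManagerFactory;
import org.datanucleus.api.jdo.JDOQueryCache;

public class QueryCacheHelper
{
    /** Evict all cached query results, e.g. after committing an insert of a subclass. */
    public static void evictAllQueryResults(PersistenceManagerFactory pmf)
    {
        JDOQueryCache queryCache = ((JDOPersistenceManagerFactory) pmf).getQueryCache();
        queryCache.evictAll(); // drops all cached result sets, including those keyed on the parent class query
    }
}
```

Calling something like this from PlayerService right after the transaction commits would also invalidate the cached BasePlayer query results.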

Evict parent class from Query Results Cache at commit



I hope you can help with this issue. I have a simple inheritance hierarchy of these two classes.

 I've got them mapped like this

with the subclass-table inheritance strategy. I also have the query results cache enabled in my

Then in my code I have two service classes: BasePlayerService for the BasePlayer class, where I run a select query to retrieve all the players in the database, and PlayerService for the Player class, where I insert a new Player.

The first list shows all 6 records saved in my database, but after I insert a new Player, the second list still shows the same 6 records, even though another Player was saved in the database; the second list should show a total of 7 players.
This is the code from my BasePlayerService

And this is the code of my PlayerService class for saving a new Player

I know that in BasePlayerService I'm using the BasePlayer class and in PlayerService I'm using the Player class, and I also know that any query related to the Player class is evicted from the Query Results Cache once the commit is executed.
My question is whether there is a way to tell DataNucleus to evict the results related to the parent class BasePlayer too, because the first list query is being cached and is not cleared from the Query Results Cache.

I hope you can help me.
If there is any other information you need to understand the issue better, please write me!

Thanks a lot!

Re: Cache Results not working with "count" query


IgnoreCache is for the L1 cache, as per the JDO spec.

You simply don't use result caching on that query; i.e. either don't enable it in the first place, or set datanucleus.query.resultCacheType to none.
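For reference, disabling it globally in the persistence properties might look like this (property name as per the DataNucleus query caching docs; check it against your version):

```properties
# Disable caching of query results for the whole PMF
datanucleus.query.resultCacheType=none
```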

Re: Cache Results not working with "count" query


Is there a method I can call, when executing the count query, to avoid getting the result from the cache?
I've tried the setIgnoreCache method.

But I'm still getting the same error.
Please help me if you can!

Re: Cache Results not working with "count" query


As replied on Gitter:
Clearly, caching results (persistable object ids) when you are not returning persistable objects makes no sense. Just turn off results caching when not returning persistable results.

Result caching is for candidate results and nothing else currently, as shown by the implementation. A "count" query does not return candidate results.
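A minimal sketch of turning it off for just the count query, assuming the per-query extension uses the same key as the PMF property (verify the extension name in the DataNucleus docs for your version):

```java
Query q = pm.newQuery("SELECT count(this) FROM mydomain.Player");
// Hypothetical per-query override of the result cache setting
q.addExtension("datanucleus.query.resultCacheType", "none");
Long count = (Long) q.execute();
```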

Cache Results not working with "count" query


I've got a project using datanucleus v6.0.0-release, and my properties file is configured with the following values:

As you can see, I have datanucleus.query.results.cached enabled.
In my project I have the following code:

I'm calling the count procedure twice, and the second call throws an error.

And the line where the exception is thrown is this

As far as I could see in the sources, the error happens in the class at line 186.

In this line the value of nextElement is the result of the count query, which is a Long value.

In the class, line 294 checks whether the result is Persistable in order to return an ObjectId, but in this case it returns a null value.
The collection resultIds in the class is then added to the cache the first time, and the second time it throws an error.
Is there any way to avoid this without disabling the property datanucleus.query.results.cached?

Re: Persisting attribute using "new" Java time Instant in MySQL

Page bloom

That was enlightening: I hadn't appreciated that there are two levels of types, JDBC types and MySQL types, and that the mappings between them are not a simple one-to-one. Given that the type names are identical, I assumed that TIMESTAMP (JDBC) would be mapped to TIMESTAMP (in MySQL) - silly me ;)

I got specific with the sql-type and now it's working:

        <field name="created">
                <column jdbc-type="TIMESTAMP" sql-type="TIMESTAMP"/>
        </field>

Re: Persisting attribute using "new" Java time Instant in MySQL


If you look at what the MySQL JDBC driver provides to DataNucleus, you see that it supports DATETIME and TIMESTAMP as the SQL TYPE when the JDBC TYPE is TIMESTAMP.

So setting the JDBC type to TIMESTAMP will simply result in the default SQL type for that JDBC type ... which is ... DATETIME. Run SchemaTool in "dbinfo" mode and you can see all you need to know.
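As a sketch of that (the exact invocation depends on how you run SchemaTool; the Maven goal and main class below are taken from the DataNucleus tooling docs, so verify them for your version):

```shell
# Via the DataNucleus Maven plugin
mvn datanucleus:schema-dbinfo

# Or by running SchemaTool directly, with your JDBC driver on the classpath
java -cp <classpath> org.datanucleus.store.schema.SchemaTool -pu MyPersistenceUnit -dbinfo
```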

Re: Persisting attribute using "new" Java time Instant in MySQL

Page bloom

I tried adding an explicit JDBC type via:

        <field name="created">
                <column jdbc-type="TIMESTAMP" />
        </field>

but DN still creates a DATETIME column instead of a TIMESTAMP column.

I don't know why ...

Persisting attribute using "new" Java time Instant in MySQL

Page bloom

Traditionally we have used the java.util.Date class for storing time instants in UTC time - in all existing tables DN has created a 'TIMESTAMP' type field in MySQL - which works fine.

Recently we added our first attribute of type java.time.Instant (getting all jiggy with the "new" Java time library - better late than never!) and the DN docs say that the default mapping is a TIMESTAMP (via the convention that the bold type is the default):

Java Type:  java.time.Instant
Comments:     Persisted as TIMESTAMP, String, Long, or DATETIME.

However, the Instant attribute has been mapped to a DATETIME instead of TIMESTAMP.

We have not added any specific customizations to the XML metadata so would assume that it would have mapped to the default TIMESTAMP.

Any idea what is going on? Have I misunderstood the docs? Do we need to explicitly set the MySQL type to TIMESTAMP somehow?

Re: JDOQL ordering using primary key field

Page bloom

Thanks Andy, that worked well.

I used it within my setOrdering call:

setOrdering("score ascending, JDOHelper.getObjectId(this) ascending");

Re: JDOQL ordering using primary key field


JDO has

"SELECT FROM mydomain.MyClass ORDER BY JDOHelper.getObjectId(this)"

JDOQL ordering using primary key field

Page bloom

If a class does not have an attribute defined to be the primary key field but, rather, relies on DN creating the primary key column in the RDBMS table implicitly, is there any way to specify the default primary key column in the ordering clause:


Query q = ....
q.setOrdering("score ascending, PRIMARYKEYCOLUMN ascending");

I know the column exists in the RDBMS table and could write the native SQL to set up ordering that included it but at the Java level there is no Java attribute to refer to it in JDOQL that I am aware of. 

Perhaps DN has some keyword that can be used to represent the underlying, implicitly created primary key field.


Re: Use multiple columns for Discriminator in Inheritance Tree with single-table


Yes, JDO, JPA (and Jakarta Persistence) all allow only one discriminator (value). JDO metadata strictly speaking permits multiple columns for some reason, but there is only space for one value, and that's what DataNucleus has always stuck to. Multiple discriminator values make little sense to me, FWIW.

I don't see any way of having multiple discriminator columns by using extensions with the current DataNucleus code. Your options are:

  • Update DN code to cater for multiple columns (and values) for each class, and then provide a mechanism for getting class name from the column values. This is potentially a lot of work, and would have to retain compatibility with standard JDO / JPA handling.
  • Update your DB to only have a single (surrogate) column and use standard JDO discriminator handling.
  • Update your class(es) to map on to a single one of the (surrogate) columns and optionally read in the other column (if it is still important) into a field of the class(es).
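A hedged sketch of the second option in standard JDO metadata, mapping a single surrogate discriminator column (the class, table, and value names here are placeholders, not taken from your schema):

```xml
<class name="BaseNode" table="NODE">
    <inheritance strategy="new-table">
        <discriminator strategy="value-map" column="NODETYPE" value="base"/>
    </inheritance>
</class>
<class name="SubNode">
    <inheritance strategy="superclass-table">
        <discriminator value="sub"/>
    </inheritance>
</class>
```

The remaining subnodetype column could then be read into an ordinary field of the subclass, as per the third option.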

Use multiple columns for Discriminator in Inheritance Tree with single-table

Kraen David Christensen

We have a table like: node(nodetype, subnodetype, ...)
From this one table we want to have several persistent Java classes.
Both node.nodetype and node.subnodetype are needed to discriminate the Java class we need (sometimes nodetype alone is enough).
As far as I can see, JDO/JPA only allows ONE discriminator column.
Is there any way I can customize and build my own discriminator type, e.g. by specifying something like
@Discriminator(customStrategy = "MyStrategy")
on the node table?
I've tried with no luck - but perhaps this is not possible.
Might it be that I should write my own StoreManager extension for this?
And would that be recommended/easy?

Best regards, Kræn David Christensen

Clone of test-jdo is not working


I am trying to build a test case for JDO.

When I clone it and then run mvn clean compile test I get a failure; I have also created

Re: Java 17 Compatibility


Java 17 Compatibility

Shivaraj Sivasankaran

We use the below components from DataNucleus. Since we have a plan to upgrade to Java 17, we need to know whether the below components are compatible with a Java 17 runtime environment.


Software Name: DataNucleus

Re: Spanner Adapter type conversion/matching problem



If you have a class with a field of type "int" then, to find its JavaTypeMapping, DataNucleus will look at the column mappings that the adapter has registered. In your adapter's case, it has Integer, so it will find the JDBC type INTEGER as the DEFAULT for that Java type, and IntegerColumnMapping for the column. It will also make use of either what the JDBC driver provides for JDBC INTEGER or what your adapter provides for INTEGER. To persist data it will probably call IntegerColumnMapping.setInt, and to read data from the database it will probably call IntegerColumnMapping.getInt.

You should look in the log and find entries like this:

Field [] -> Column(s) [A.ID] using mapping of type "" (

This tells you which mappings are being used, and hence what behaviour you should expect for read/write operations.

The CloudSpannerTypeInfo entries are added when the JDBC driver doesn't provide them. Does it provide them itself? Run DataNucleus SchemaTool in "dbinfo" mode, which tells you what is either provided by the JDBC driver or added by your adapter. You can look at the info under here for other datastores for reference. Maybe contribute this output to GitHub also?

Spanner Adapter type conversion/matching problem


Hi everyone,

Recently I created a pull request for a Spanner database adapter. While running the DataNucleus tests I encountered an issue which made me realize that I have not fully understood the type conversions.
In the DataNucleus tests there is a Person class with an int age attribute. This field is correctly created as the INT64 type in Spanner, but during a read I get an error saying it was impossible to set the field "AGE" of type "java.lang.Long".
It looks like the Spanner adapter considers this field to be the LONG Java type while reading. As a side note, Spanner has a single integer type, INT64; in the database metadata it is mapped to BIGINT.

So here is my question. I have added sqlTypesforJdbcTypes, which performs a mapping. But I also have registerColumnMapping, which maps a Java type to a JDBC and SQL type.
Could you please explain how the Java type, SQL type and JDBC type are related to each other? How does DataNucleus make a decision on type mapping?

Best regards
