Is the multi-morph (polymorphic) feature of JPQL not supported when using the JDO API?


We have:

@Inheritance(strategy=InheritanceStrategy.SUBCLASS_TABLE)
@DatastoreIdentity(strategy=IdGeneratorStrategy.SEQUENCE, sequence="jdoid_seq", column="ID")
@Version(strategy=VersionStrategy.VERSION_NUMBER, column="VERSN")
public abstract class SMSMessageAbstract {
    // ...
}

public class SMSMessage extends SMSMessageAbstract {
    // ...
}

@Inheritance(strategy=InheritanceStrategy.NEW_TABLE)
@Discriminator(strategy=DiscriminatorStrategy.CLASS_NAME, column="TYP", indexed="true")
@DatastoreIdentity(strategy=IdGeneratorStrategy.SEQUENCE, sequence="jdoid_seq", column="ID")
@Version(strategy=VersionStrategy.VERSION_NUMBER, column="VERSN")
public class SMSMessageProcessed extends SMSMessageAbstract {
    // ...
}

When we execute:

select t from izone.adams.model.messaging.SMSMessageProcessed t where t.creationDate >= :p_startDate

we get the expected result. But when we execute:

select t from izone.adams.model.messaging.SMSMessageAbstract t where t.creationDate >= :p_startDate

we get this exception:

org.datanucleus.exceptions.NucleusUserException: Unable to find table for primary t.creationDate since the class izone.adams.model.messaging.SMSMessageAbstract is managed in multiple tables.

We execute the JPQL through the JDO API, like this:

Query query = pm.newQuery("JPQL", sql);
List results = query.executeResultList(SMSMessageAbstract.class); // or SMSMessageProcessed.class, depending on the JPQL

Our question is: based on the code above, it seems that JPQL does not support polymorphic (multi-morph) queries when used through the JDO API. Is this a known limitation, or is it an unexpected bug?

Thanks a lot for any answers.
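For contrast, here is a plain-Java sketch of what a polymorphic query over a SUBCLASS_TABLE hierarchy has to do. The names and structure are purely illustrative (this is not DataNucleus code): since the abstract class has no table of its own, the store must run the filter against each concrete subclass's table and union the results.

```java
import java.util.ArrayList;
import java.util.List;

public class PolymorphicQuerySketch {
    // One row "table" per concrete subclass of the abstract candidate class.
    static class Row {
        final String type;
        final int creationDate; // simplified to an int "timestamp"
        Row(String type, int creationDate) {
            this.type = type;
            this.creationDate = creationDate;
        }
    }

    // A polymorphic query on the abstract class = filter each subclass
    // table separately, then union the results.
    static List<Row> queryAbstract(List<List<Row>> subclassTables, int startDate) {
        List<Row> results = new ArrayList<>();
        for (List<Row> table : subclassTables) {
            for (Row r : table) {
                if (r.creationDate >= startDate) {
                    results.add(r);
                }
            }
        }
        return results;
    }
}
```

The exception above suggests the JPQL-via-JDO path resolves the candidate to a single table rather than performing this per-subclass union.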

Re: Custom Value Generator is not working in 5.2


Hi Andy,

Thanks for the reply.
Following is the plugin.xml that we are using; it has unique="true".
<?xml version="1.0"?>
<plugin id="xyz" name="DataNucleus plug-ins" provider-name="XYZ">
<extension point="org.datanucleus.store_valuegenerator">
<valuegenerator name="XyzValueGenerator" class-name="" unique="true"/>
As I said earlier, the same plugin.xml works fine in 5.0.

After debugging, the code at the link you shared in the previous thread is where the bug is.
In the constructor of RDBMSStoreManager, the AbstractStoreManager constructor is called:

public RDBMSStoreManager(ClassLoaderResolver clr, PersistenceNucleusContext ctx, Map<String, Object> props)
{
    super("rdbms", clr, ctx, props);
    ...
}

The AbstractStoreManager field initializer instantiates the ValueGenerationManager before nucleusContext is set in the constructor body:

protected ValueGenerationManager valueGenerationMgr = new ValueGenerationManagerImpl(this);

protected AbstractStoreManager(String key, ClassLoaderResolver clr, PersistenceNucleusContext nucleusContext, Map<String, Object> props)
{
    this.storeManagerKey = key;
    this.nucleusContext = nucleusContext;
    ...
}

So in the constructor of ValueGenerationManagerImpl, while loading the plugin, there is always a NullPointerException (storeMgr.getNucleusContext() == null). Because of this, the custom value generator is never added to the uniqueGeneratorsByName map.

ConfigurationElement[] elems = storeMgr.getNucleusContext().getPluginManager()
    .getConfigurationElementsForExtension("org.datanucleus.store_valuegenerator", "unique", "true");
if (elems != null)
{
    for (ConfigurationElement elem : elems)
    {
        try
        {
            // Assumed to not take any properties
            generator = (ValueGenerator)storeMgr.getNucleusContext().getPluginManager().createExecutableExtension(
                "org.datanucleus.store_valuegenerator",
                new String[] {"name", "unique"}, new String[] {elem.getName(), "true"},
                "class-name", new Class[] {StoreManager.class, String.class}, new Object[] {this, elem.getName()});
            uniqueGeneratorsByName.put(elem.getName(), generator);
        }
        catch (Exception e)
        {
        }
    }
}
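The initialization-order problem described above can be reproduced in plain Java (the class names here are illustrative, not the real DataNucleus classes): a field initializer in a superclass runs before the superclass constructor body, so anything it constructs sees the not-yet-assigned field as null.

```java
public class InitOrderDemo {
    static class NucleusContext { }

    static abstract class AbstractStore {
        // This field initializer runs BEFORE the constructor body below,
        // while ctx is still null. It mirrors
        // valueGenerationMgr = new ValueGenerationManagerImpl(this);
        final String contextSeenByHelper = describeContext();
        NucleusContext ctx;

        AbstractStore(NucleusContext ctx) {
            this.ctx = ctx; // too late for the field initializer above
        }

        String describeContext() {
            return (ctx == null) ? "null" : "set";
        }
    }

    static class Store extends AbstractStore {
        Store(NucleusContext ctx) { super(ctx); }
    }

    public static void main(String[] args) {
        // prints "null" -- the helper was built before ctx was assigned
        System.out.println(new Store(new NucleusContext()).contextSeenByHelper);
    }
}
```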

Re: Custom Value Generator is not working in 5.2


Sure, some classes have been refactored over the years; but since people can't be bothered to "get involved" with open source, the docs sometimes don't get updated.

As can be seen here, any plugin-registered generators are loaded when "unique" is true. All built-in generators use exactly the same API, so I suggest you use those for reference.

Custom Value Generator is not working in 5.2


The DataNucleus Guide for Value Generators is old and does not correspond to the 5.2 API.

There is no such Class
The class that is available now is
The constructor of the class has also changed to:
public AbstractGenerator(StoreManager storeMgr, String name)

After changing our generator to match the current API, when the application is started the map ValueGenerationManager.uniqueGeneratorsByName of RDBMSStoreManager does not contain the custom value generator.

In DataNucleus 5.0 this works.

Is it a bug, or has the way of providing a custom value generator changed?

Can you please help?

Re: javax.jdo (JDO API) 3.2.0-release


That is Apache JDO, not the DN version of the API jar (which is what this is talking about). They will only release their version of the API jar at some point in the future, when their spec is releasable.
Report their problems to their mailing list.

Re: javax.jdo (JDO API) 3.2.0-release


A few things I noticed when browsing about JDO 3.2:

1) Not available yet on

2) All "Relation" links are broken (they point to the DN website)

javax.jdo (JDO API) 3.2.0-release


With the JDO API v3.2 finally pretty much feature complete (and DataNucleus v5.2 and 6.0 passing the JDO TCK, as they have at all points in its development), we have now released v3.2.0-release of our version of the JDO API.

Re: JDOQL Subquery failing


The JDO Query programmatic "API", with separate items for "from", "filter", "result" etc., never had subqueries in JDO1. When subqueries were added in JDO2, they were specifiable through that programmatic "API" using addSubquery() and variables.
Single-string JDOQL also came in with JDO2, with subqueries embedded directly in the single string.

Clearly you could contribute handling for this. That would involve going to the Query class where setFrom is called, parsing it for the user (including embedded subqueries), and storing those into the "subqueries" field; the primary query and each subquery would then each be compiled using their own individual JDOQLCompiler, like now. No interest here. The other option in setFrom would be to search for the start of a subquery (e.g. "SELECT ") and throw an exception informing the user of the error of their ways.
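The second option, failing fast when embedded subquery text is detected, might be sketched like this. This is hypothetical illustration code, not actual DataNucleus source:

```java
// Hypothetical sketch: reject embedded subquery text in a clause with a
// clear message, instead of letting the parser fail obscurely later.
public class SubqueryGuard {
    static void checkNoEmbeddedSubquery(String clause) {
        // Crude detection: an opening paren immediately followed by SELECT.
        if (clause.toUpperCase().contains("(SELECT ")) {
            throw new IllegalArgumentException(
                "Embedded subquery text is not supported here; "
                + "use addSubquery() with a variable, or a single-string query");
        }
    }
}
```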

Re: JDOQL Subquery failing


Thanks for the link.  I'll take a closer look at them.

I did notice that none of the examples I had found added a subquery filter to a parent query.  I wasn't sure if that was for clarity or by design.

It would seem that appending subquery text into the filter clause of a parent query is not supported. Would that be accurate? Is that a limitation, or by design?

Like I said, I tried to step through the parser to identify where it was failing so I could provide a patch for it, but I was having trouble grasping the sequence of the Lexer/parser well enough.



Re: JDOQL Subquery failing


Ample examples of JDO subqueries in these tests. Note that none of the DN docs or these examples code subquery text into the "filter" clause of the parent query; they either single-string the whole query, or they use a variable for the subquery. They also always package-qualify all class names.

JDOQL Subquery failing


I'm trying to get a complex query with a subquery in the filter working, but it fails in the JDOQL parser. I have distilled it down to a very simple use case with a subselect, which fails as well. Please see the test case on GitHub.
I am fairly sure there is something that does not match the expected syntax; I am trying to follow the examples in the DN docs, but the parser still fails.

My query is:

Query q = pm.newQuery(Person.class);
q.setFilter("id == (select count(id) from Person p)"); // fails
List<Person> results = q.executeList();

and the exception I get is the following:

2021-07-23 15:31:44 INFO  JDO:92 - Exception thrown
expected ')' at character 15 in "id == (select id from Person p)"
at org.datanucleus.query.compiler.JDOQLParser.processPrimary(
at org.datanucleus.query.compiler.JDOQLParser.processUnaryExpression(
at org.datanucleus.query.compiler.JDOQLParser.processMultiplicativeExpression(
at org.datanucleus.query.compiler.JDOQLParser.processAdditiveExpression(
at org.datanucleus.query.compiler.JDOQLParser.processRelationalExpression(
at org.datanucleus.query.compiler.JDOQLParser.processAndExpression(
at org.datanucleus.query.compiler.JDOQLParser.processExclusiveOrExpression(
at org.datanucleus.query.compiler.JDOQLParser.processInclusiveOrExpression(
at org.datanucleus.query.compiler.JDOQLParser.processConditionalAndExpression(
at org.datanucleus.query.compiler.JDOQLParser.processConditionalOrExpression(
at org.datanucleus.query.compiler.JDOQLParser.processExpression(
at org.datanucleus.query.compiler.JDOQLParser.parse(
at org.datanucleus.query.compiler.JavaQueryCompiler.compileFilter(
at org.datanucleus.query.compiler.JDOQLCompiler.compile(
at org.datanucleus.api.jdo.JDOQuery.executeInternal(
at org.datanucleus.api.jdo.JDOQuery.executeList(

Does anyone have any ideas about what I have done wrong with my subselect? I've tried tracing through the `JDOQLParser` but without much success.



Anyone using Redis or Hazelcast as a distributed L2 caching solution?


I need a distributed L2 caching solution and I've narrowed my options down to the big 3:
- Redis
- Hazelcast
- Ehcache/Terracotta

Although I used Terracotta several years ago, I've somewhat eliminated Ehcache/Terracotta due to the development and operations team's lack of knowledge of the platform, and some of the additional complexities associated with it. The biggest benefits it provides are:
- per class caching regions
- pinning/unpinning of objects

both of which I'm not sure are critical. I would definitely like the per-class caching regions; however, I fear the cost of supporting the Terracotta cluster might be too big.

From what I've read, neither Redis nor Hazelcast supports either of those in DN.

That being said, I was wondering if anyone had any experience using either Redis or Hazelcast as an L2 cache with DN (JDO) and had recommendations or advice they could share.




Re: Lazy loaded objects fetching from cache even cache is off


The log entries are incomplete.
You still haven't defined what you are invoking. Only by you DEFINING simple classes and the operation, and by seeing persistence code, can anyone have a chance of commenting.
The JDO spec tells you what "ignoreCache" can be relied on to do, and it doesn't say that the L1 cache will never be touched. It can be or not, depending on the implementation's choice.

Get the code (datanucleus-rdbms, datanucleus-core) and work through it. Likely PersistentClassROF, ResultSetGetter and ExecutionContextImpl are the classes to look at.
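To illustrate the spec point above, here is a plain-Java sketch (hypothetical names, not DataNucleus code) of behaviour the spec permits: a query honouring ignoreCache re-reads the datastore, while lazy relation navigation may still be satisfied from the L1 identity map.

```java
import java.util.HashMap;
import java.util.Map;

public class L1CacheSketch {
    static class Location {
        String city;
        Location(String city) { this.city = city; }
    }

    // The "database", shared by everything.
    static final Map<Integer, Location> DATASTORE = new HashMap<>();

    // L1 cache: one identity map per "PersistenceManager".
    final Map<Integer, Location> l1 = new HashMap<>();
    boolean ignoreCache;

    // A query: with ignoreCache=true it always re-reads the datastore.
    Location queryLocation(int id) {
        if (!ignoreCache && l1.containsKey(id)) {
            return l1.get(id);
        }
        Location fresh = new Location(DATASTORE.get(id).city); // "SQL" read
        l1.put(id, fresh);
        return fresh;
    }

    // Lazy navigation: the implementation is free to consult L1 first,
    // regardless of the ignoreCache setting.
    Location navigateToLocation(int id) {
        return l1.computeIfAbsent(id, k -> new Location(DATASTORE.get(k).city));
    }
}
```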

Re: Lazy loaded objects fetching from cache even cache is off


Hi Andy,

Below are the logs

Using a query to fetch the user, with cache set to false:


Query "SELECT FROM  User WHERE userId == USERID PARAMETERS Integer USERID FetchPlan [default]" of language "JDOQL" has been run before so reusing existing generic compilation


processor Query "SELECT FROM User WHERE userId == USERID PARAMETERS Integer USERID FetchPlan [default]" of language "JDOQL" for datastore "rdbms-mysql" has been run before so reusing existing datastore compilation



ManagedConnection OPENED : "$ManagedConnectionImpl@3563658 [conn=com.zaxxer.hikari.pool.HikariProxyConnection@12c8578f, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false


processor Using PreparedStatement "HikariProxyPreparedStatement@1513474198 wrapping com.mysql.cj.jdbc.ClientPreparedStatement: SELECT 

SQL Execution Time = 3 ms


ManagedConnection COMMITTING : "$ManagedConnectionImpl@3563658 [conn=com.zaxxer.hikari.pool.HikariProxyConnection@12c8578f, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]"

User WHERE userId == USERID PARAMETERS Integer USERID" since the connection used is closing/committing



Logs while fetching Location via user.location:


processor Object "User@3a88a807" having fields "Location" fetched from table "`user`"

ManagedConnection OPENED : "$ManagedConnectionImpl@41428055 [conn=com.zaxxer.hikari.pool.HikariProxyConnection@4c142df1, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false

Using PreparedStatement "HikariProxyPreparedStatement

SELECT { join statement}

SQL Execution Time = 1 ms

Object "User@3a88a807" (id="User$PK@10b8a44e[userId=5303]") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYNYNYYY]")

Object "Location@45adce24" (id="Location$PK@28bb18c5[User=User$PK@4e0db6b8[userId=5303]]") taken from Level 1 cache

Closing PreparedStatement ""

ManagedConnection COMMITTING : "$ManagedConnectionImpl@41428055 [conn=com.zaxxer.hikari.pool.HikariProxyConnection@4c142df1, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]"

ManagedConnection CLOSED : "$ManagedConnectionImpl@41428055 [conn=com.zaxxer.hikari.pool.HikariProxyConnection@4c142df1, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]"

Re: Lazy loaded objects fetching from cache even cache is off


The LOG tells you where objects come from. So post the relevant bits of the LOG for the retrieve; then there is a basis for comment. And while you're at it, post the actual PM calls, since ignoreCache only applies in some situations.

Lazy loaded objects fetching from cache even cache is off



I have two tables, User and Location, and they are mapped one-to-many (Location is lazily loaded).

Here are the operations performed (same persistence manager for both requests):

1st request: fetch the user from the database (the persistence manager has the property "datanucleus.IgnoreCache" set to true). So the user is fetched from the database, bypassing the cache, and the location data is extracted by calling user.location.

Then values are changed manually in the database on both user and location.

2nd request: fetch the user from the database. As the cache is set to false, the user changes appear, but when getting the location from the user (user.location), the old data appears.

As the persistence manager cache is set to false, why does lazy loading fetch from the cache instead of the database?

Re: "No such database row" error when lazily loading a field of an entity


Yes, well aware, and I will dig deeper into it, since we would like to re-enable L2 caching. 

Another issue we observed when using the L2 cache in our Apache Isis app was that sometimes (?), when an object could not be found in L1 or L2 and was added to L1, the postLoad event was not triggered. Since in Apache Isis dependencies are injected into entities through this event, we had cases in which the dependencies were null. In such cases the state of the object was "hollow" (as opposed to "persistent clean"). We also haven't had time to dig deeper into this.

Re: "No such database row" error when lazily loading a field of an entity


The L2 cache performs a very simple task (providing access to field values across PMs) and is not the cause of "strange effects". If a field is required and is available in the L2 cache, then the object is populated from there. If some object in some other PM had the field, then it is made available to other PMs. Just enabling/disabling features isn't the way to debug your problems. Disabling the L2 cache will have an impact on speed; your choice.
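That "very simple task" can be sketched in plain Java (hypothetical names, not DataNucleus code): field values loaded by one PM land in a shared L2 map, from which other PMs can populate objects without another datastore read.

```java
import java.util.HashMap;
import java.util.Map;

public class L2CacheSketch {
    static final Map<Integer, String> DATASTORE = new HashMap<>();
    // L2 cache: shared across all PMs.
    static final Map<Integer, String> L2 = new HashMap<>();
    static int datastoreReads = 0;

    static class Pm {
        // L1 cache: private to this PM.
        final Map<Integer, String> l1 = new HashMap<>();

        String loadField(int id) {
            return l1.computeIfAbsent(id, k ->
                    L2.computeIfAbsent(k, k2 -> {
                        datastoreReads++; // only on an L1 *and* L2 miss
                        return DATASTORE.get(k2);
                    }));
        }
    }
}
```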

Re: "No such database row" error when lazily loading a field of an entity


Thanks, Andy. It turned out this (and other strange "effects") was seemingly caused by the L2 cache; when we disabled it, it went away.

Re: "No such database row" error when lazily loading a field of an entity


An exception is thrown because it can't find the object in the DB to get the fields from. And there is SQL in the log, so you know what it is invoking at that point. So work backwards to find out why it can't find it.
