
Re: Any way to configure "development" mode for DataNucleus to force it to reload class metadata?

ebenzacar@...
 

Thanks for the tip.  I was looking for a `clearCache` or `clearMetadata` method, or something to that effect, and didn't notice the `unmanage`/`unload` methods.

I also found the plugin from Dan Haywood for Eclipse that he wrote a long time ago.  Am including the links here for future reference.
- https://www.danhaywood.com/2014/01/23/isis-integration-with-jrebel/
- https://github.com/danhaywood/isis-jrebel-plugin

I'm working with IntelliJ, which has a different plugin architecture, so I will have to look at this more carefully.  It will lead to many additional questions about how to integrate DN with IJ more seamlessly, since currently the enhancement process must be run as a manual task each time a class is modified in IJ.  An IJ plugin that automatically triggers the enhancer for modified classes would be ideal.

Thanks,

Eric


Re: Any way to configure "development" mode for DataNucleus to force it to reload class metadata?

Andy
 
Edited

There is a MetaDataManager for each PMF; it loads metadata when it needs to. There is an unloadMetaDataForClass method to unload a class's metadata if you need to do so.
There is also a method on JDOPersistenceManagerFactory, unmanageClass(String className), which will call the underlying MetaDataManager method as well as the associated StoreManager method. Added in DN 4.0.
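The unload/unmanage calls amount to evicting an entry from a load-on-demand cache so that the next access re-reads the (re-enhanced) class. A stdlib-only sketch of that pattern; the class and field names here are illustrative, not the actual DataNucleus internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class MetaDataCache {
    private final Map<String, Object> byClassName = new ConcurrentHashMap<>();
    private final Function<String, Object> loader;

    MetaDataCache(Function<String, Object> loader) {
        this.loader = loader;
    }

    // Metadata is loaded lazily on first access, as the MetaDataManager does.
    Object getMetaData(String className) {
        return byClassName.computeIfAbsent(className, loader);
    }

    // "Unmanaging" a class just evicts its entry, so the next access
    // re-reads the metadata for the freshly re-enhanced class.
    void unmanageClass(String className) {
        byClassName.remove(className);
    }
}
```

On a class-reload event (e.g. from JRebel) one would call the real `JDOPersistenceManagerFactory.unmanageClass(className)` (DN 4.0+) rather than this mock.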

You could also ask on Apache ISIS support (which uses DataNucleus) because Dan Haywood played around with a JRebel addon to do something related some time ago. No idea if he published it.


Any way to configure "development" mode for DataNucleus to force it to reload class metadata?

ebenzacar@...
 

I'm working on a fairly large, slow-startup legacy application in which I am slowly trying to integrate DataNucleus.  I am also leveraging JRebel, which allows me to modify most classes on the fly without having to restart the application every time.  For most business-oriented logic, this works great.

However, with the DN persistence classes, DN seems to load the metadata into a cache/singleton of some sort, which prevents me from modifying/reloading a class on the fly and forces me to stop and relaunch the application server every time.  I've sifted through the code and couldn't see anything to enable this, nor did I notice anything in the DataNucleus Metadata Properties, so I'm trying to find out if there is another mechanism I can use to clear the metadata cache and have DN reload it on the fly.

Similarly, I would be happy with any process that allows me to modify a persistence class and re-enhance it without needing to restart the application every time.

Thanks,

Eric


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

ebenzacar@...
 

Hi,

I'm running into a similar problem to yours, where I also need to create an ugly JDOHelper.getObjectIdAsLong() hack due to some legacy Kodo code.

Did you ever get something fully functional?  Can you share your plugin?  I've looked through the `org.datanucleus.store.rdbms.sql.method.JDOHelperGetObjectIdMethod` implementation, but am frankly a little lost by the way it processes everything.  I'm a little unclear on the use of a NumericExpression vs an ObjectLiteral, etc.

Thanks,

Eric

 


Multitenancy query result error - query cache key (bug ?)

stephane <passignat@...>
 

Hi,

When 2 queries with the same tenant but different tenantReaders are executed, the second one returns the result of the first one.

The SQL generated is the same for the second query while it should be different.

The query cache key only uses the tenant, which may explain the behaviour:

String multiTenancyId = ec.getNucleusContext().getMultiTenancyId(ec);
if (multiTenancyId != null)
{
    queryCacheKey += (" " + multiTenancyId);
}

(no test case yet)
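If confirmed, one fix direction is to make the cache key discriminate on the tenant reader as well as the tenant id. A stdlib-only sketch of the idea; the names are illustrative, not the actual DataNucleus code:

```java
class QueryCacheKeys {
    // Build the query cache key from the base key plus BOTH the tenant id and
    // an identifier for the tenant reader, so two executions with the same
    // tenant but different readers never collide in the query cache.
    static String keyFor(String baseKey, String tenantId, String readerId) {
        StringBuilder key = new StringBuilder(baseKey);
        if (tenantId != null) {
            key.append(' ').append(tenantId);
        }
        if (readerId != null) {
            key.append(' ').append(readerId);
        }
        return key.toString();
    }
}
```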

--
Stephane


Re: Strange behaviour running in Wildfly 10 with EmulatedXAResource

Andy
 
Edited

You are using a JEE connection pool but are setting the resource type as "local", hence DN will try to commit the connection (since it commits local things). Your connection (JBoss JCA) then throws an exception because it wants to be the one to commit things, maybe? Use a local connection if using local transactions? But then I don't use JEE containers ...

You also have "datanucleus.connectionPoolingType" set, yet have already specified "connectionFactoryName" to come from JBoss. Either DataNucleus creates the connections to the datastore or JBoss does; you can't have both, clearly.
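Concretely, that advice would mean dropping the DN pooling property so the container-managed datasource is the only connection source. A sketch of the poster's persistence-unit with the conflicting line removed (illustrative only; JNDI names taken from the original post):

```xml
<persistence-unit name="cache" transaction-type="RESOURCE_LOCAL">
    <non-jta-data-source>java:jboss/datasources/DS</non-jta-data-source>
    <properties>
        <property name="datanucleus.connection.resourceType" value="RESOURCE_LOCAL"/>
        <property name="datanucleus.ConnectionFactoryName" value="java:jboss/datasources/DS"/>
        <!-- no datanucleus.connectionPoolingType: JBoss owns the pool -->
    </properties>
</persistence-unit>
```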


Strange behaviour running in Wildfly 10 with EmulatedXAResource

ebenzacar@...
 

I'm encountering a strange behaviour for which I have not been able to pinpoint the cause and was hoping that someone could help point me in the right direction.

I have an application running under  JBoss EAP 7.0 (Wildfly 10), but only having the datasource specified at the container level.  I am trying to fully manage the JDOPersistenceFactory from within the JEE (EJB) application.

My persistence.xml is configured as follows (located in my EJB jar in the META-INF/ folder):
<persistence-unit name="cache" transaction-type="RESOURCE_LOCAL">
    <non-jta-data-source>java:jboss/datasources/DS</non-jta-data-source>
    <mapping-file>package.jdo</mapping-file>
    <exclude-unlisted-classes/>
    <properties>
        <property name="datanucleus.connection.resourceType" value="RESOURCE_LOCAL"/>
        <property name="datanucleus.transaction.type" value="RESOURCE_LOCAL"/>
        <property name="datanucleus.storeManagerType" value="rdbms"/>
        <property name="datanucleus.schema.autoCreateAll" value="false"/>
        <property name="datanucleus.ConnectionFactoryName" value="java:jboss/datasources/DS"/>
        <property name="datanucleus.connectionPoolingType" value="dbcp2-builtin"/>
    </properties>
</persistence-unit>

I have instantiated my connection factory statically as follows:
public class DataNucleusPersistenceManagerFactory {

    // hack to provide access to the PMFs from a static context
    private static Map<PersistenceUnit, PersistenceManagerFactory> pmf = new ConcurrentHashMap<>();

    static {
        PersistenceManagerFactory persistenceManagerFactory = JDOHelper.getPersistenceManagerFactory("cache");
        pmf.put(PersistenceUnit.CACHE, persistenceManagerFactory);
    }

    /**
     * Retrieve PersistenceManagerFactory from legacy code
     * @param type
     * @return
     */
    public static PersistenceManagerFactory getPersistenceManagerFactory(PersistenceUnit type) {
        return pmf.get(type);
    }
}


All my Stateless EJBs are defined as BeanManagedTransactions by extending a base class with the following interceptor defined:
@TransactionManagement(TransactionManagementType.BEAN)
public class BaseManagerBean implements BaseManager {

    PersistenceManager pm;

    @AroundInvoke
    public Object log(InvocationContext inv) throws Exception {
        pm = DataNucleusPersistenceManagerFactory.getPersistenceManagerFactory(CACHE).getPersistenceManager();
        pm.currentTransaction().begin();

        Object ret = inv.proceed();
        pm.currentTransaction().commit();
        return ret;
    }

}

This allows the logic of my bean method to simply use the pm without needing to worry about transaction boundaries, etc.  I have simplified the interceptor for illustration purposes.

The issue I am encountering is with nested EJBs (i.e. one EJB which calls another).  In this context, I end up with nested transactions, since each EJB method call will create a new PM and manage its own transaction.
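One stdlib-only way to sketch an "only the outermost call commits" guard for such nested invocations; this is purely illustrative (not DataNucleus or EJB API), and container-managed transactions may be the cleaner answer:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Thread-local nesting counter: the interceptor would begin/commit the real
// transaction only when these methods return true.
class TxDepth {
    private static final ThreadLocal<AtomicInteger> DEPTH =
            ThreadLocal.withInitial(AtomicInteger::new);

    /** @return true if this call opened the outermost "transaction". */
    static boolean begin() {
        return DEPTH.get().getAndIncrement() == 0;
    }

    /** @return true if this call closed the outermost "transaction" (commit here). */
    static boolean end() {
        return DEPTH.get().decrementAndGet() == 0;
    }
}
```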

When this happens, the inner transaction completes successfully, but the outer one throws an exception:

Caused by: java.sql.SQLException: IJ031019: You cannot commit during a managed transaction
        at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.jdbcCommit(BaseWrapperManagedConnection.java:1063)
        at org.jboss.jca.adapters.jdbc.WrappedConnection.commit(WrappedConnection.java:834)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource.commit(ConnectionFactoryImpl.java:734)
        at org.datanucleus.transaction.ResourcedTransaction.commit(ResourcedTransaction.java:348)
        at org.datanucleus.transaction.ResourcedTransactionManager.commit(ResourcedTransactionManager.java:64)
        at org.datanucleus.TransactionImpl.internalCommit(TransactionImpl.java:417)
        at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:288)
        at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:99)
        ... 128 more
 


Stepping through the code, down into the IronJacamar libraries, I see that the connection wrappers are treating this as an XA transaction.  I can also see this from the `org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource` resource which the DN connection factory creates.

This is where I get confused.  I'm not sure why this is happening, nor if this is "normal"/expected behaviour.  I have tried to reproduce this in a small test application by embedding 2 txs, with the same PU defined, but it works properly there.  I have not yet tried embedding two EJBs in a test application.

I can only suspect that I have an errant configuration somewhere which is causing this, but cannot identify what.  My Datasource is configured the same for my test application.  Just for reference:

<datasource jndi-name="java:jboss/datasources/DS" pool-name="DS" enabled="true">
    <connection-url>jdbc:sqlserver://${db.tx.host};databaseName=${adams.db.tx.name}</connection-url>
    <driver>sqlserver</driver>
    <pool>
        <min-pool-size>5</min-pool-size>
        <max-pool-size>2000</max-pool-size>
    </pool>
    <security>
        <user-name>${adams.db.tx.username}</user-name>
        <password>${adams.db.tx.password}</password>
    </security>
    <validation>
        <check-valid-connection-sql>select getdate()</check-valid-connection-sql>
    </validation>
</datasource>
<drivers>
    <driver name="sqlserver" module="com.microsoft.sqlserver">
        <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</xa-datasource-class>
    </driver>
</drivers>


What am I missing or doing wrong?  Do I have an incorrect or missing configuration?  Why are DN / WF seeing this as an XA transaction?  Or is this expected behaviour when working with nested EJBs, due to the inherent stateless/pooling behaviour of EJBs?  If the latter, am I forced to move to Container Managed Transactions so that no bean commits its own transaction?  If not, how would I know which transaction is the outer transaction?

Thanks for any insights.

Eric


Re: How to configure the PersistenceManager for Bean Managed Transactions?

Andy
 

https://www.datanucleus.org/products/accessplatform_5_2/jdo/persistence.html#jboss7 says that "DataNucleus JCA adapter supports both Local and XA transaction types".


Re: Proper configuration with the JCA connector in Wildfly

Andy
 
Edited

FYI the JCA adapter was written by people who no longer develop DataNucleus, and it is dependent on volunteers. Consequently you're pretty much on your own. Clearly JDO is just as usable outside of JCA, creating a PMF and PMs as required.


Re: How to configure the PersistenceManager for Bean Managed Transactions?

ebenzacar@...
 

As a followup question, does the JCA adapter support anything other than XATransactions?
I see the following in ra.xml:

<transaction-support>XATransaction</transaction-support>

Does it support any other mode?  ie: NoTransaction or LocalTransaction?

Thanks,


Eric


How to configure the PersistenceManager for Bean Managed Transactions?

ebenzacar@...
 

I'm trying to integrate DN (JDO) into my JEE application via the JCA adapter, and configure the Persistence manager to understand BeanManagedTransactions.  I am retrofitting this into existing code (formerly using Kodo), and am having difficulties getting the configuration correct.

In my `persistence.xml` file, I have defined my PU as follows:

    <persistence-unit name="pu">
        <jta-data-source>java:jboss/datasources/POC_DN</jta-data-source>
        <properties>
            <property name="datanucleus.connection.resourceType" value="JTA"/>
            <property name="datanucleus.transaction.type" value="JTA"/>
            <property name="datanucleus.storeManagerType" value="rdbms"/>
            <property name="datanucleus.schema.autoCreateAll" value="false"/>
            <property name="datanucleus.ConnectionFactoryName" value="java:jboss/datasources/POC_DN"/>
 
            <property name="datanucleus.connectionPoolingType" value="dbcp2-builtin"/>
            <property name="javax.persistence.jta-data-source" value="java:jboss/datasources/POC_DN"/>
 
            <property name="datanucleus.jtaLocator" value="jboss"/>
        </properties>
    </persistence-unit>
 

In my bean code, my bean is defined with:
@TransactionManagement(TransactionManagementType.BEAN)

Finally, in my actual code, I am retrieving the persistence manager from the JNDI and getting the transaction from it (where jndiName is the name of my connection-definition defined in the container)
         PersistenceManagerFactory pmf = (PersistenceManagerFactory) context.lookup(jndiName);
         pm = pmf.getPersistenceManager();
         pm.currentTransaction().begin();
...
...
         pm.currentTransaction().commit();


However, in doing so, I get error messages that I cannot begin a transaction that has already started.  If I omit the begin(), then I get an error message that "IJ031019: You cannot commit during a managed transaction".

In both circumstances, it is clear to me that the PM does not understand that I want to use BeanManagedTransactions.

Is there a special configuration that I need to use?  Is retrieving the PM from the JNDI explicitly like this causing my issues?  How else can I access the Tx statically?  Am I forced/required to retrieve the `java:jboss/UserTransaction` from the JNDI instead?

Thanks for any insight/tips.

Eric


Proper configuration with the JCA connector in Wildfly

ebenzacar@...
 

I'm trying to integrate my instance of JBoss/Wildfly 10 with the DataNucleus JCA-JDO connector.  I saw an original post for JBoss AS7 (https://developer.jboss.org/docs/DOC-17094), as linked from the DataNucleus docs, but I find it somewhat incomplete.  I've been struggling to improve the configuration and will try to post a complete howto/installation guide once done.

At the moment, I am having trouble understanding how to define my JNDI datasource in my persistence unit.   I have my DataSource properly defined in my Wildfly standalone.xml configuration:

<subsystem xmlns="urn:jboss:domain:datasources:4.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/POC_DN" pool-name="POC_DN" enabled="true">
            <connection-url>jdbc:sqlserver://127.0.0.1:1433;databaseName=POC_DATANUCLEUS</connection-url>
            <driver>sqlserver</driver>
            <pool>
                <min-pool-size>5</min-pool-size>
                <max-pool-size>2000</max-pool-size>
            </pool>
            <security>
                <user-name>datanucleus</user-name>
                <password>datanucleus</password>
            </security>
            <validation>
                <check-valid-connection-sql>select getdate()</check-valid-connection-sql>
            </validation>
        </datasource>
    </datasources>
</subsystem>

I have the following configured in my persistence unit:
    <persistence-unit name="pu">
        <jta-data-source>java:jboss/datasources/POC_DN</jta-data-source>
        <properties>
            <property name="datanucleus.connection.resourceType" value="JTA"/>
            <property name="datanucleus.storeManagerType" value="rdbms"/>
            <property name="datanucleus.schema.autoCreateAll" value="false"/>
            <property name="datanucleus.connectionPoolingType" value="dbcp2-builtin"/>
            <property name="datanucleus.jtaLocator" value="jboss"/>
        </properties>
    </persistence-unit>

Finally, I have my rar configured as:
<resource-adapter id="datanucleus-jca-jdo">
    <archive>
        datanucleus-jdo-jca-5.2.7.rar
    </archive>
    <connection-definitions>
        <connection-definition class-name="org.datanucleus.jdo.connector.ManagedConnectionFactoryImpl" jndi-name="java:/adap_1" enabled="true" connectable="true" use-java-context="false" pool-name="adap_1" use-ccm="true" sharable="true" enlistment="true">
            <config-property name="PersistenceUnitName">pu</config-property>
            <config-property name="PersistenceXmlFilename">file:///dev/Projects/poc/datanucleus/etc/configuration/persistence.xml</config-property>
            <pool>
                <prefill>false</prefill>
                <use-strict-min>false</use-strict-min>
                <flush-strategy>FailingConnectionOnly</flush-strategy>
            </pool>
            <security>
                <application/>
            </security>
            <validation>
                <use-fast-fail>false</use-fast-fail>
                <validate-on-match>false</validate-on-match>
            </validation>
        </connection-definition>
    </connection-definitions>
</resource-adapter>

Now, in my Spring Java code, I am trying to retrieve the PMF via JNDI and retrieve a PersistenceManager in the following method:
@Bean
@RequestScope
PersistenceManager getPersistenceManager(PersistenceManagerFactory pmf) {
    return pmf.getPersistenceManager();
}

@Bean
public JndiObjectFactoryBean datanucleusPersistenceManagerFactory() {
    JndiObjectFactoryBean jndiObjectFactoryBean = new JndiObjectFactoryBean();
    jndiObjectFactoryBean.setJndiName("java:/adap_1");
    jndiObjectFactoryBean.setResourceRef(true);
    jndiObjectFactoryBean.setProxyInterface(PersistenceManagerFactory.class);
    return jndiObjectFactoryBean;
}

All these pieces work; I get a PM injected into my controller class:
@RestController
@RequestMapping(value = "/orgs")
public class OrgController {

    @Autowired
    PersistenceManager pm;

    @GetMapping("/{id}")
    public Organization get(@PathVariable Long id) {
        return (Organization) pm.getObjectById(Organization.class, id);
    }
}

However, when it actually tries to execute the `getObjectById()` call, I get an exception thrown:
Caused by: org.datanucleus.exceptions.NucleusUserException: Unable to create transactional datasource for connections due to invalid/insufficient input. Consult the log for details and/or review the settings of "datastore.connectionXXX" properties
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSource(ConnectionFactoryImpl.java:129)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:91)
... 125 more


Drilling more into things, I see that the PM is trying to initialise a connection via the `org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSource()` method, which delegates to the `generateDataSource` method.  However, the JNDI connection string here should come from `datanucleus.ConnectionFactoryName` and not the `jta-data-source`.  Am I missing a configuration somewhere?

If I override the value explicitly in my PU, and set my DS JNDI name as the `datanucleus.ConnectionFactoryName`, it seems to work as expected.  I see that the `JDOPersistenceManagerFactory` automatically copies the jta-data-source to the DN property, but the `org.datanucleus.jdo.connector.PersistenceManagerFactoryImpl` does not seem to do this.

Is this a configuration problem on my behalf, working as expected, or a bug/missing feature of the JCA adaptor? 

Finally, is there a "correct" configuration of the RAR adaptor to retrieve the persistence-unit definition(s) from my webapp's classpath instead of the RAR's classpath?  Right now, I can either put the persistence.xml in the RAR itself or point to a fixed path.  Ideally, though, I would want to include it within my application.  But the RAR gets deployed before my application(s) and consequently attempts to load the persistence.xml prior to the application loading.  How do I configure this to delegate loading/initialising only once my app is deployed?

Thanks,

Eric


Re: Any way to use JPA and JDO together easily?

ebenzacar@...
 

Thanks; I hadn't seen that extension.

I see that the JDO PM allows using JPQL while remaining within the JDO framework, which from a consistency perspective is great.  However, from a "retrofit" perspective, I still have the problem of my legacy code using all javax.persistence constructs and classes.  But hopefully it shouldn't be too bad - I see many of the javax.jdo.Query methods are analogous to the javax.persistence.Query methods.

I will post as I continue the efforts if I have additional JPQL questions/issues.  So far, a quick test has yielded great results.


Multitenancy disabled doesn't work (probably a bug)

stephane <passignat@...>
 
Edited

Hi,

The disable option doesn't work:
@MultiTenant(disable = true)

In the MultiTenantHandler, DN looks for the `disabled` attribute name instead of `disable`, and the disable extension is then added if it's *not* disabled:

Boolean disabled = (Boolean) annotationValues.get("disable");
if (disabled != null && !disabled)
{
    cmd.addExtension("multitenancy-disable", "true");
}

Suggestion:

Boolean disabled = (Boolean) annotationValues.get("disable");
if (disabled != null && disabled)
{
    cmd.addExtension(MetaData.EXTENSION_CLASS_MULTITENANCY_DISABLE, "true");
}
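The flipped condition can be sanity-checked with a stdlib-only mock of the annotation values (class and method names here are illustrative, not DataNucleus code):

```java
import java.util.Map;

class MultiTenantCheck {
    // Suggested logic: the disable extension should be added
    // only when the annotation actually says disable = true.
    static boolean shouldAddDisableExtension(Map<String, Object> annotationValues) {
        Boolean disabled = (Boolean) annotationValues.get("disable");
        return disabled != null && disabled;
    }
}
```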

--
Stephane


Re: Compilation error on ObjectExpression in generated Q class

Andy
 

Suggest that you work out what the Q class should be, get the code from here, and contribute an update.


Compilation error on ObjectExpression in generated Q class

mwhesse@...
 

I have a javax.validation.Min annotation on an Integer field in my JDO entity

@Column(allowsNull = "true")
@Min(0)
@Getter @Setter
private Integer scale;
 
The generated Q code for this field is this, and makes my build fail with a compilation error saying it is not able to find the symbol "class java".

public final ObjectExpression<@javax.validation.constraints.Min(0L) java.lang.Integer> scale;


Re: Any way to use JPA and JDO together easily?

Andy
 
Edited

Perhaps you would be better off looking at the DataNucleus (and JDO) documentation.

The JDO spec mandates being able to use JPA metadata (XML / annotations) with JDO persistence, and DataNucleus (JPA) also allows use of JDO metadata (XML / annotations) with JPA persistence.
DataNucleus seemingly allows use of JPQL with the JDO API (without the need to create any EntityManagerFactory), though I'm not sure what query needs JPQL that JDOQL can't express.

A query language like JPQL is in core code and hence available to both APIs.

Kodo doesn't seem to be as flexible as that ;-)


Any way to use JPA and JDO together easily?

ebenzacar@...
 

As I migrate my legacy Kodo-based application to DataNucleus, I've noticed that the application uses JDO for most persistence interaction, but has several JPQL queries which leverage a JPA EntityManager.  The app uses OpenJPA's `JPAFacadeHelper.toEntityManager()` to create a JPA entity manager from Kodo's persistence manager.  My application exclusively uses an RDBMS store (MSSQL, to be specific).

I've skimmed through the DataNucleus Javadocs and don't see anything similar; something to provide a JPA interface from a given JDO Persistence Manager.  Given that they are 2 completely different APIs, I'm not completely surprised, but was still hopeful that something might exist.

So the question becomes:
1) Does DN provide any kind of Facade/factory that allows me to create/generate a JPA Entity Manager from a JDO Persistence Manager?
2) Is there an easy way to use a DN JPA EntityManager with a JDO application?  Or is my only hope to identify all my persistence objects as `@Entity` and enhance them with the JPA enhancer?  I would rather stick with JDO definitions if possible (annotations and/or package.jdo), as JPA/JPQL is only used for queries (not for updates or deletes).  Similarly, maintaining both an orm.xml and a package.jdo is going to be risky and problematic.

Converting all JPQL queries to JDO based queries is not an option at this time; I need to find a solution to use them both together.

Thanks,

Eric


Re: Trouble understanding how to lookup objects that have a Datastore Identity by their PK (if the PK is not identified in the model)

ebenzacar@...
 

You could do something like
MyClass myObj = pm.getObjectById(MyClass.class, theIdValue);
This would likely work whether you are using (single-field) application identity OR datastore identity. And you don't need to cast anything ...


Works perfectly.  Didn't realize that the solution was that straightforward.  I had understood from the javadoc that `theIdValue` had to be an Identity object.

Thanks,

Eric


Re: Trouble understanding how to lookup objects that have a Datastore Identity by their PK (if the PK is not identified in the model)

Andy
 
Edited

Use of implementation-specific classes is always a bad idea IMHO. Imagine hardcoding Kodo-specific stuff and then finding that you need to change provider some time later ...

In the same way, reliance on things outside the JDO spec can be problematic.

You could do something like
MyClass myObj = pm.getObjectById(MyClass.class, theIdValue);
This would likely work whether you are using (single-field) application identity OR datastore identity. And you don't need to cast anything ...
I did add an enhancement to datanucleus-core for datastore-id newObjectId, so only you can find out whether you need that.