Re: "No such database row" error when lazily loading a field of an entity
mwhesse@...
Thanks Andy. Does this mean we should try to load these fields eagerly?
It also seems this only happens under higher load. What would be a good place to start looking for why this exception is being thrown? Thankful for any clue, really.
|
Re: "No such database row" error when lazily loading a field of an entity
This is an indicator that a field is NOT loaded and something wants it. Only you know your class, what field you have there, and what is invoked to try to get that field.
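As a toy illustration (plain Java, not DataNucleus code; the `lazyLoad` helper is a hypothetical stand-in for the enhanced field getter), this is the failure mode: an unloaded field is fetched by row id on first access, and if the row has gone away in the meantime the lazy load fails.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

public class LazyFieldDemo {
    // The "datastore": row id -> column value for the lazily loaded field.
    static final Map<Long, String> DATASTORE = new HashMap<>();

    // What a lazy getter effectively does: fetch the unloaded field by row id.
    static String lazyLoad(long id) {
        String value = DATASTORE.get(id);
        if (value == null) {
            // the analogue of NucleusObjectNotFoundException: the row the
            // field should come from no longer exists
            throw new NoSuchElementException("No such database row: " + id);
        }
        return value;
    }

    public static void main(String[] args) {
        DATASTORE.put(1L, "hello");
        System.out.println(lazyLoad(1L));   // field loads fine
        DATASTORE.remove(1L);               // concurrent delete between entity load and field access
        try {
            lazyLoad(1L);                   // lazy load now fails
        } catch (NoSuchElementException ex) {
            System.out.println(ex.getMessage());
        }
    }
}
```

Under higher load the window between the initial entity fetch and the later field access widens, which is consistent with the "random, under load" symptom; looking for concurrent deletes of those rows is one plausible place to start.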
|
|
Re: "No such database row" error when lazily loading a field of an entity
mwhesse@...
On Wed, Jul 7, 2021 at 08:24 AM, <mwhesse@...> wrote:
Regarding `loadUnloadedFieldsInFetchPlan`: wondering about that bit, is this an indicator that the field was loaded previously and has been unloaded? Under which circumstances would that happen? And why does reloading the field fail?
|
"No such database row" error when lazily loading a field of an entity
mwhesse@...
I'm getting the below error (rather "randomly") when loading certain fields of my entities. I don't know where to start looking. Any kind of advice is appreciated.
On DN 5.2 with SQL Server. Let me know if you need more info, thanks.
org.datanucleus.exceptions.NucleusObjectNotFoundException
|
Re: DN 6.0.0-M1 released
It is all utterly inconsistent, of course, with javax.persistence developed outside of Oracle (under Eclipse) and so being renamed (just to add pain to its users), and javax.jdo developed outside of Oracle (under Apache) and so not being renamed. Anyway, DataNucleus supports any reasonable persistence spec wherever it is developed.
|
|
Re: DN 6.0.0-M1 released
Page bloom
So on further reading I realized Oracle has just separated out JEE stuff to Eclipse. I had previously misunderstood Oracle's stance to imply that all non "core Java" libraries using javax.* packaging were being separated out to jakarta.* land.
|
|
Re: DN 6.0.0-M1 released
Page bloom
Ah cool. It was my understanding that any 'javax.*' packages would have to be renamed.
|
|
Re: DN 6.0.0-M1 released
No. It's nothing to do with 'Java EE'.
|
|
Re: DN 6.0.0-M1 released
Page bloom
Will the JDO API have to undergo a similar Jakartification?
|
|
ResultClass cannot inherit class with same field names?
Hi,
I'm not sure if this is a bug or by design. I have raised an issue in the datanucleus-rdbms project at https://github.com/datanucleus/datanucleus-rdbms/issues/384, but I am not sure which forum is better suited to discuss the finding(s).
From the exception, I would expect that fields with the same name and the same case are permitted; however, this indicates otherwise.
Is this a bug or by design? If a bug, I have submitted a proposed PR to resolve the issue. Thanks, Eric
|
|
Re: How to use Queries that do not start with "SELECT" but return a data set?
Since RDBMS datastores each have their own dialect, and since they have such a wide variety of random stuff you can put into an SQL statement, it is necessary to categorise statements and use the appropriate JDBC execute method. As a result, "non-standard" stuff like your statement won't be caught.
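A minimal sketch (an assumption about the approach, not the actual DataNucleus parser) of why first-keyword categorisation puts a CTE query into the OTHER bucket:

```java
public class SqlClassifier {
    // Categorise a statement by its first keyword, roughly how a simple
    // parser would. Anything unrecognised falls into OTHER and ends up on a
    // generic JDBC execute path rather than a result-set path.
    static String classify(String sql) {
        String first = sql.trim().split("\\s+")[0].toUpperCase();
        switch (first) {
            case "SELECT": return "SELECT";
            case "UPDATE": return "BULK_UPDATE";
            case "DELETE": return "BULK_DELETE";
            case "INSERT": return "BULK_INSERT";
            default:       return "OTHER";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("SELECT 1"));                             // SELECT
        System.out.println(classify("WITH t AS (SELECT 1 c) SELECT c FROM t")); // OTHER
    }
}
```

A statement landing in OTHER goes through a generic JDBC execute(), which is why the CTE query in the question came back as a boolean rather than a result set.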
If you want to make that linked example (the call to "getInternalQuery().setType(...)") accessible via a standard JDO method, you could contribute a GitHub pull request that provides an "extension" for the query type; the type could then be specified via "query.addExtension(...)".
|
How to use Queries that do not start with "SELECT" but return a data set?
My DBA has prepared some custom SQL queries that he does not want to put into StoredProcs. So I want to execute them via a normal Query.executeResultList(). However, I noticed that if the query does not start with "SELECT", then DN only returns a boolean "TRUE".
For instance, the query looks something like the following:
WITH
tab1 AS (SELECT 1 col1),
tab2 AS (SELECT 2 col2)
SELECT
tab1.col1
,tab2.col2
FROM tab1
JOIN tab2 ON tab1.col1 <> tab2.col2
From a quick look at the code, I see that the `org.datanucleus.store.query.Query#executeQuery` first parses the SQL Query to identify if it is a SELECT, BULK_UPDATE, BULK_DELETE, BULK_INSERT or OTHER. I presume a fix would be to update the `org.datanucleus.store.rdbms.query.SQLQuery` parser to somehow detect this, but it seems like that could be a little difficult to parse appropriately (many edge cases).
Alternatively, is there another JDO way to execute the query and get the result set? I noticed there is an example here (https://www.datanucleus.org/products/accessplatform/jdo/query.html#stored_procedures_as_sql), but that uses internal DataNucleus libs/dependencies. Is there any way to do this while remaining implementation agnostic? Thanks, Eric
|
|
Re: How to use PreparedStatements with the Persistence Manager?
ebenzacar@...
Thanks for the link. I had tried a GitHub search for PreparedStatement on the samples repo, but hadn't thought of searching for tests. I had not realized that the JDOConnection could simply be cast to a Connection; that resolves my issue cleanly. Eric
|
Re: How to use PreparedStatements with the Persistence Manager?
The JDO spec is public, as is the JDO TCK which will have tests. Also this test.
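A toy model (plain Java, not the JDO API; the class names are illustrative) of the contract behind the exception in the question below: the developer-acquired wrapper (the JDOConnection, in JDO terms) must be closed to release the allocation, and closing only the underlying native connection does not.

```java
public class ManagedConnectionDemo {
    // Stand-in for a pooled native JDBC connection.
    static class NativeConnection {
        boolean closed;
        void close() { closed = true; }
    }

    // Stand-in for the JDOConnection wrapper handed to the developer.
    static class ManagedConnection {
        final NativeConnection nativeConn = new NativeConnection();
        boolean allocated = true;

        NativeConnection getNativeConnection() { return nativeConn; }

        // Closing the WRAPPER is what releases the allocation back to the
        // persistence layer.
        void close() { allocated = false; }
    }

    // Stand-in for the connection-manager check that produced
    // "The Connection was acquired by the developer and must be closed ...".
    static void getManagedConnection(ManagedConnection mc) {
        if (mc.allocated) {
            throw new IllegalStateException(
                "Connection acquired by the developer and must be closed first");
        }
    }

    public static void main(String[] args) {
        ManagedConnection mc = new ManagedConnection();
        mc.getNativeConnection().close();  // closing only the native connection...
        try {
            getManagedConnection(mc);      // ...still trips the check
        } catch (IllegalStateException ex) {
            System.out.println(ex.getMessage());
        }
        mc.close();                        // closing the wrapper releases it
        getManagedConnection(mc);          // now fine
    }
}
```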
|
|
How to use PreparedStatements with the Persistence Manager?
I have a few pieces of code that use native PreparedStatements, but am having trouble using them in conjunction with the PersistenceManager. The current approach is:
Connection conn = (Connection) persistenceManager.getDataStoreConnection().getNativeConnection();
This results in the following exception:
org.datanucleus.exceptions.NucleusUserException: The Connection was acquired by the developer and must be closed before using the persistence API.
at org.datanucleus.store.connection.ConnectionManagerImpl.getManagedConnection(ConnectionManagerImpl.java:310)
at org.datanucleus.store.connection.ConnectionManagerImpl.allocateManagedConnection(ConnectionManagerImpl.java:349)
at org.datanucleus.store.connection.ConnectionManagerImpl.getConnection(ConnectionManagerImpl.java:213)
at org.datanucleus.store.connection.ConnectionManager.getConnection(ConnectionManager.java:62)
at org.datanucleus.store.rdbms.scostore.JoinSetStore.iterator(JoinSetStore.java:888)
at org.datanucleus.store.types.wrappers.backed.Set.loadFromStore(Set.java:292)
at org.datanucleus.store.types.wrappers.backed.Set.iterator(Set.java:468)
I have managed to track the issue down to the fact that the NativeConnection is being closed instead of the DataStoreConnection being closed.
Does the JDO spec identify the requirements/contract for using PreparedStatements, or is this part of the DN implementation only? Are there any examples that I can review to help identify best practices for using PreparedStatements within DataNucleus (I couldn't find anything in the datanucleus/samples-jdo repo)? I did find a snippet of code on the DN Persistence pages, but I'm not sure if I need to close the native connection as well or not.
If, instead of using PM.getDataStoreConnection().getNativeConnection(), I were to use the PMF to get a brand new PM and new NativeConnection, and I only close the native connection (i.e. nativeConnection.close()), will that cause a memory leak? Am I always required to close the JDOConnection as well? Thanks, Eric
|
|
Re: Problems with Java 16
claudio_rosati@...
Thank you Andy for your reply. I've modified my master POM file to use the pluginManagement section and magically everything now works (probably there was some wrong dependency somewhere, possibly with ASM, which I'll now use in ver. 9.1). Now that DN v6.0.0.m1 is released I'm able to use DN directly from Maven rather than using my local build. Thank you for your help. If I have problems with the new version I'll let you know.
|
Re: Does JDO provide a hook/trigger to identify a transaction commit/boundary?
Look at JDO spec 13.4.3: you can call transaction.setSynchronization(...) and be notified via the (standard) beforeCompletion/afterCompletion methods. Note that this is for a transaction itself. It will not include NON-TRANSACTIONAL ops, which are ... not transactional.
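A minimal sketch (plain Java mirroring the javax.transaction.Synchronization callback shape, not DataNucleus internals) of the ordering: beforeCompletion runs before the datastore commit, and afterCompletion runs after it with the completion status, which is where post-commit work such as publishing an MQ message belongs.

```java
import java.util.ArrayList;
import java.util.List;

public class TxSynchronizationDemo {
    // Mirror of javax.transaction.Synchronization, which JDO's
    // Transaction.setSynchronization(...) accepts.
    interface Synchronization {
        void beforeCompletion();
        void afterCompletion(int status); // javax.transaction.Status codes
    }

    static final int STATUS_COMMITTED = 3; // value of Status.STATUS_COMMITTED

    // Toy transaction that invokes the registered synchronization around commit.
    static class Tx {
        Synchronization sync;
        void setSynchronization(Synchronization s) { this.sync = s; }
        void commit() {
            if (sync != null) sync.beforeCompletion();
            // ... flush and commit to the datastore here ...
            if (sync != null) sync.afterCompletion(STATUS_COMMITTED);
        }
    }

    public static void main(String[] args) {
        final List<String> events = new ArrayList<>();
        Tx tx = new Tx();
        tx.setSynchronization(new Synchronization() {
            public void beforeCompletion() { events.add("beforeCompletion"); }
            public void afterCompletion(int status) {
                if (status == STATUS_COMMITTED) {
                    events.add("afterCompletion: safe to publish the MQ message");
                }
            }
        });
        tx.commit();
        System.out.println(events);
    }
}
```

Guarding on the committed status in afterCompletion matters for the MQ use case: a rolled-back transaction also triggers the callback, with a different status value.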
|
Does JDO provide a hook/trigger to identify a transaction commit/boundary?
ebenzacar@...
I'm working on migrating legacy code that was using the OpenJPA Transaction Listener to identify a transaction boundary (commit) to trigger some post-commit functionality. The logic is a bit messy, but the idea is that after a commit is completed, an MQ message must be published to trigger some asynchronous processing. It is critical that the MQ message only be published after the commit, as the MQ client only has access to the persisted data and must only be triggered once the commit is confirmed.
The legacy code is leveraging the OpenJPA `org.apache.openjpa.event.EndTransactionListener#afterCommit()` listener to trigger the message publishing. Are there any similar constructs that I can use with JDO? The closest I have found are the JDO lifecycle listeners, but those unfortunately provide triggers on the individual objects and not the transaction as a whole. I noticed that the DN TransactionImpl has listeners (org.datanucleus.TransactionEventListener) that I could likely leverage, but that would then imply that I need to have dependencies on the Impl instead of just the JDO standard. Am I overlooking anything?
|
DN 6.0.0-M1 released
DN v6.0.0.m1 is released.
Notable changes are
Report any problems in the normal way with an associated testcase or, better still, get involved and provide pull requests.
|
Re: Problems with Java 16
Thanks for the info. Since I don't have Java 16, and am unlikely to ever have it (not an extended-lifetime release AFAIK), I can't confirm or otherwise. You can provide a GitHub pull request for the maven-bundle-plugin, or anything else. I (currently) use Java 11 (for DN v6), so don't meet such things.
No need to rebuild the Maven plugin since there is nothing in there post 6.0.0-m1. You need to talk to the ASM people if you want to resolve bytecode issues. The "datanucleus-core" master branch uses (repackaged) ASM v9.1 (which is the latest version according to https://repo1.maven.org/maven2/org/ow2/asm/asm/ ), so supports what that handles. You could look at the setting of this line; I simply left it unchanged at the upgrade of ASM, but it likely needs bumping; try it and see.
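For background on why each Java release needs bytecode-library support (this is the standard class-file layout, nothing DataNucleus-specific): every release bumps the class-file major version stored at bytes 6 and 7 of the header, and ASM must recognise the new value. Java 11 classes carry major version 55; Java 16 classes carry 60.

```java
public class ClassFileVersion {
    // Read the major version from a class-file header:
    // bytes 0-3 are the magic 0xCAFEBABE, 4-5 the minor version,
    // 6-7 the major version (big-endian).
    static int majorVersion(byte[] classFile) {
        if (classFile.length < 8
                || classFile[0] != (byte) 0xCA || classFile[1] != (byte) 0xFE
                || classFile[2] != (byte) 0xBA || classFile[3] != (byte) 0xBE) {
            throw new IllegalArgumentException("not a class file");
        }
        return ((classFile[6] & 0xFF) << 8) | (classFile[7] & 0xFF);
    }

    public static void main(String[] args) {
        // Header bytes as emitted by javac under Java 16 (major 60 = 0x3C).
        byte[] header = { (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE,
                          0x00, 0x00, 0x00, 0x3C };
        System.out.println(majorVersion(header)); // 60
    }
}
```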
|