Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
Thanks; will look into that more closely. Eric
|
|
Re: JDBC Batch
stephane
Open source development on such a complex product (closely comparable to database software development) requires deep expertise in the product's design, constraints, ... which I don't have (yet) on DN.
Step by step ... -- Stephane
|
|
Re: JDBC Batch
JDO/JPA API calls are processed in the appropriate order. If a required statement is batchable then it is batched. Referential integrity is always assumed.
https://www.datanucleus.org/products/accessplatform_5_2/datastores/datastores.html#statement_batching There is no mode to do "all table X followed by all table Y followed by all table Z". Open source ...
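(For reference, the batch size is controlled by a persistence property when creating the PMF; a minimal sketch, with a hypothetical connection URL:)

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManagerFactory;

    Properties props = new Properties();
    props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:mysql://localhost/mydb"); // hypothetical datastore
    props.setProperty("datanucleus.rdbms.statementBatchLimit", "100"); // max statements per batch (default 50)
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);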
|
|
Re: JDBC Batch
stephane
I mean, when performing modifications, do something like:

1. Set constraints deferred => disable foreign-key, unique constraints, ...
2. Batch updates table per table (addBatch, ...):
   - perform all Deletes from table A if any
   - perform all Inserts into table A if any
   - perform all Updates of table A if any
   - perform all Deletes from table B if any
   - perform all Inserts into table B if any
   - perform all Updates of table B if any
   - ...
3. Commit (the constraints are checked at commit time).

A rough JDBC sketch of this is below. -- Stephane
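(A minimal JDBC sketch of that scheme, assuming a datastore whose constraints are declared DEFERRABLE, e.g. Oracle or PostgreSQL; table and variable names are made up:)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    void flushBatched(Connection con, long[] idsToDeleteFromA) throws Exception {
        con.setAutoCommit(false);
        try (Statement s = con.createStatement()) {
            // 1. only affects constraints declared DEFERRABLE; on MySQL you would
            //    use "SET foreign_key_checks=0" instead
            s.execute("SET CONSTRAINTS ALL DEFERRED");
        }
        // 2. batch table per table: all deletes from A ...
        try (PreparedStatement del = con.prepareStatement("DELETE FROM A WHERE ID = ?")) {
            for (long id : idsToDeleteFromA) {
                del.setLong(1, id);
                del.addBatch();
            }
            del.executeBatch();
        }
        // ... then all inserts into A, all updates of A, then table B, and so on
        con.commit(); // 3. deferred constraints are checked here
    }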
|
|
Re: JDBC Batch
You mean DELETE them in the datastore, do random operations, then re-CREATE them? Or do a datastore-specific disable (such as MySQL's SET foreign_key_checks=0)? Nope.
But then it's open source, so anyone can contribute "features", not that I personally would be using the one you describe.
|
|
Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
That metadata extension works for me in this sample. How you apply it is up to you
|
|
JDBC Batch
stephane
Hi,
Are there any options to deactivate foreign-key constraints, then insert/update/delete table per table using batches, and then reactivate the foreign-key constraints at commit? I used a product with that feature, which worked at least on MySQL and Oracle. Thanks -- Stephane
|
|
Re: How do the Query/Result caches and the L1 cache work together?
An L1 cache is an object cache, as defined by the JDO spec: you have an "id", you get the equivalent persistable object. It is used anywhere a persistable object is accessible, and that includes queries if the query returns persistable objects. The log tells you where objects come from.
Query caching (in DataNucleus, not defined by the JDO spec) is a 3-level process ... generic compilation, datastore compilation, results. Compilations are always cached. Results are only cached when you tell it to cache them, via the query extension datanucleus.query.results.cached. It would make no sense to cache the results of every query for all types of software that use DataNucleus, hence it is off by default.
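(For example, to opt a single query's results into the results cache; a minimal sketch against a hypothetical Person class:)

    import java.util.List;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    // pm is a javax.jdo.PersistenceManager
    Query q = pm.newQuery("SELECT FROM mydomain.Person WHERE age > :minAge");
    q.addExtension("datanucleus.query.results.cached", "true"); // cache this query's results
    List results = (List) q.execute(21);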
|
|
Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
ebenzacar@...
I've tried to add the extension definition to my package.jdo file, but either it isn't working as I expect, or I've defined it incorrectly. This is what I added:
<field name="addPerson" column="ADDPERSON" default-fetch-group="true">

With this definition, I was expecting that the object retrieval would basically retrieve the FK value for addPerson from my Manufacturer object. However, it seems to be trying to retrieve the full object, and puts itself in an endless loop for some reason, causing a StackOverflow. The datamodel isn't exactly what I listed above, but rather has a bit of a circular design (i.e. the AuditPerson actually contains an instance of Audit).

When trying to debug the issue further, I see that DN is calling the `dnReplaceFields()` method on my Manufacturer object, but it is difficult to identify which field it is. I can see the field index value which is causing the endless loop, but can't determine which field name the index maps to. I cannot find the metadata/mapping information where the field name is mapped to the field number. In my case, when I put the debugger on the dnReplaceField call in the stack, I see that it is trying to load field 56, and that this field is causing the infinite loop. How/where can I identify which field name field 56 is mapped to?

Similarly, the DN logs show me what I think is a list of fields that are loaded/not loaded when the object is retrieved from the cache (loadedFlags), e.g.: (id="zzzz" taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYNNNNNNNNNNN]"). But how do I identify which index is which field?

I would have expected that setting datanucleus.maxFetchDepth=1 would prevent any recursion from happening, but it is still endlessly trying to reload that same field #56. Removing `addPerson`/`modPerson` from the default-fetch-group stops the infinite loop.

Any suggestions on what/how to investigate next would be appreciated.
Thanks, Eric
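(One way to map an absolute field number back to a field name is via the class metadata; a sketch using DataNucleus internal APIs, so exact method names may vary by version, with Manufacturer being the class from the post:)

    import org.datanucleus.ExecutionContext;
    import org.datanucleus.api.jdo.JDOPersistenceManager;
    import org.datanucleus.metadata.AbstractClassMetaData;
    import org.datanucleus.metadata.AbstractMemberMetaData;

    ExecutionContext ec = ((JDOPersistenceManager) pm).getExecutionContext();
    AbstractClassMetaData cmd = ec.getMetaDataManager()
            .getMetaDataForClass(Manufacturer.class, ec.getClassLoaderResolver());
    AbstractMemberMetaData mmd = cmd.getMetaDataForManagedMemberAtAbsolutePosition(56);
    System.out.println("field 56 = " + mmd.getName()); // the field driving the loop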
|
|
How do the Query/Result caches and the L1 cache work together?
ebenzacar@...
I've been encountering something I don't understand with the Query caches and the L1 cache. I've tried reading the DN docs a few times, but I am still confused by the results I am seeing.
My persistence unit uses the default cache configuration, with no specific settings for the caches identified. To my understanding then, the queryCache and the queryResult cache should be using soft references with unlimited size. However, when I add profiling to my DB connection, I see the exact same query being executed multiple times. For example, I will see the following query executed 4 times in a row:
Normally, I would have expected this query to be added to the query cache, and the result to be in the queryResult cache. Similarly, I would also expect the retrieved object(s) to be added to the L1 cache, and used for subsequent retrievals.

Questions:
1) Is the L1 cache only used if retrieving objects by Identity?
2) How does the query cache work? What makes a query cacheable?
3) Why would the query/result cache not be responding instead of re-querying the DB multiple times?

Thanks, Eric
|
|
Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
With a single-object relation with the FK on the selected object's side, there are only 3 things that would be feasible.
If you always want to load the related object with just the PK field(s), then you can mark that field as "fetch-fk-only" in the metadata (sketched below). If you only sometimes want to load the related object with just the PK fields, then put the field in the fetch plan for the SELECT, but also remove all other fields of the related object from the fetch plan (so it doesn't add a join to the related object).
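(In package.jdo form that would look something like the following, using the field name from the thread:)

    <field name="addPerson">
        <extension vendor-name="datanucleus" key="fetch-fk-only" value="true"/>
    </field>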
|
|
Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
ebenzacar@...
Thanks Andy. Fair enough (embedded vs related).
Eric
|
|
Re: Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
Audit fields "addPerson" and "modPerson" aren't embedded (exist in the same table), they are related (have their own table), using JDO terminology ... or at least without seeing your mapping (XML/annotations) that's what it looks like.
Your mapping definition controls where things are stored, and what is in the default fetch group. Your runtime API usage defines what is in the active fetch group. What is in the fetch group defines what is fetched ... clue is in the name. Kodo maybe didn't have JDO (2) fetch groups. You rectify it by specifying the appropriate fetch group at the appropriate moment in the API usage.
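(A minimal sketch of the runtime side, assuming a named fetch group "withAudit" has been defined in the metadata to include addPerson/modPerson; group name and id are hypothetical:)

    import javax.jdo.PersistenceManager;

    // pm is a javax.jdo.PersistenceManager
    pm.getFetchPlan().addGroup("withAudit"); // activate the group for subsequent fetches
    Person p = (Person) pm.getObjectById(id); // audit fields now fetched up front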
|
|
Problems with extra N+1 requests - likely due to poor mapping defns, but not sure how to rectify
ebenzacar@...
I'm having some strange RDBMS N+1 calls most likely due to my mappings, but am not sure if this is a DN Persistence configuration issue or a mapping issue. I'm using MSSQL.
My problem is that I have an object "Person" which has some audit fields "addPerson" and "modPerson". Both "addPerson" and "modPerson" are simple N-1 relations. Conceptually like the following:

// InheritanceStrategy = new-table
class Person extends Audit {
    String name;
    int age;
}

// InheritanceStrategy = subclass-table
class Audit {
    AuditPerson addPerson;
    AuditPerson modPerson;
}

class AuditPerson {
    String username;
}

When I retrieve the Person (using a datastore identity), I see an SQL query which selects the primitive fields but not the embedded objects. Something like: "SELECT name, age FROM Person WHERE ID = 123". When I try to convert to a DTO and read the `addPerson` and `modPerson` fields, DN then launches the next queries:
|
|
Re: Any way to configure "development" mode for DataNucleus to force it to reload class metadata?
ebenzacar@...
Thanks for the tip. I was looking for a `clearCache` or a `clearMetadata` method or something to that effect, so I didn't notice the `unmanage/unload` methods. Eric
|
|
Re: Any way to configure "development" mode for DataNucleus to force it to reload class metadata?
There is a MetaDataManager for each PMF. It loads metadata when it needs to. There is a method unloadMetaDataForClass to unload a class's metadata if you need to do so.
There is also a method on JDOPersistenceManagerFactory, unmanageClass(String className), which will call the underlying MetaDataManager method as well as the associated StoreManager method. Added in DN 4.0. You could also ask on Apache ISIS support (which uses DataNucleus), because Dan Haywood played around with a JRebel addon to do something related some time ago. No idea if he published it.
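(A minimal sketch of calling it after a hot reload; the class name is hypothetical:)

    import javax.jdo.PersistenceManagerFactory;
    import org.datanucleus.api.jdo.JDOPersistenceManagerFactory;

    // after JRebel reloads the class, drop DN's cached metadata so it is re-read
    ((JDOPersistenceManagerFactory) pmf).unmanageClass("com.example.Person");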
|
|
Any way to configure "development" mode for DataNucleus to force it to reload class metadata?
ebenzacar@...
I'm working on a fairly large legacy application with a slow startup, into which I am gradually integrating DataNucleus. I am also leveraging JRebel, which allows me to modify most classes on the fly without having to restart the application every time. For most business-oriented logic, this works great. Eric
|
|
Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?
ebenzacar@...
Hi, I'm running into a similar problem to yours, where I also need to create an ugly JDOHelper.getObjectIdAsLong() hack due to some legacy Kodo code.
|
|
Multitenancy query result error - query cache key (bug?)
stephane
Hi,
When 2 queries with the same tenant but different tenantReaders are executed, the second one returns the result of the first one. The SQL generated for the second query is the same as the first's, while it should be different. The query cache key only uses the tenant, which may explain the case:

String multiTenancyId = ec.getNucleusContext().getMultiTenancyId(ec);

(No test case yet.) -- Stephane
|
|
Re: Strange behaviour running in Wildfly 10 with EmulatedXAResource
You are using a JEE connection pool but are setting the resource type as "local", hence DN will try to commit the connection (since it commits local things). Your connection (JBoss JCA) then throws an exception because it wants to commit things itself, maybe? Use a local connection if using local? But then I don't use JEE containers ...
You also have "datanucleus.connectionPoolingType" yet have already specified "connectionFactoryName" to be from JBoss. Either DataNucleus creates the connections to the datastore or JBoss does, but you can't have both, clearly.
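(To illustrate the either/or, a sketch of the two mutually exclusive setups; names, URL and pool type are placeholders:)

    import java.util.Properties;

    Properties props = new Properties();
    // EITHER: JBoss owns the connections - reference its pool via JNDI ...
    props.setProperty("datanucleus.ConnectionFactoryName", "java:/MyDS");
    // OR: DataNucleus owns the connections - give it a URL plus a pool type ...
    // props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:mysql://localhost/mydb");
    // props.setProperty("datanucleus.connectionPoolingType", "HikariCP");
    // ... but never both at once.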
|
|