
Re: JPA PreUpdate fired "without" changes

Andy
 
Edited

An object is not "made dirty" when setting a field to the same value ... except when the value is a floating-point (float/double, i.e. imprecise) value.
NucleusJPAHelper.getObjectState(obj) tells you the state at any time.
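As a pure-Java illustration (my own example, not DataNucleus code) of why "is this the same value?" checks are unreliable for imprecise types: a naive == comparison of doubles can both report a change where there is none and miss a real bit-level difference.

```java
// Sketch (not DataNucleus code): why same-value checks on float/double are unreliable.
public class FloatDirtyDemo {
    // A naive "has the field changed?" check using ==.
    static boolean changed(double oldVal, double newVal) {
        return oldVal != newVal;
    }

    public static void main(String[] args) {
        // NaN is never == to itself, so re-setting a NaN field looks like a change.
        System.out.println(changed(Double.NaN, Double.NaN)); // true
        // -0.0 == 0.0, so a real bit-level change can look like "no change".
        System.out.println(changed(-0.0, 0.0)); // false
        // Double.compare sees the difference that == misses.
        System.out.println(Double.compare(-0.0, 0.0)); // -1
    }
}
```

This is only one reason a persistence engine might conservatively mark floating-point fields dirty on any write.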


JPA PreUpdate fired "without" changes

passignat@...
 

Hi,

When I update an attribute with the same value (let's say I replace it with an equal value), the object is made dirty. Not a big deal, but a side effect is that PreUpdate callbacks are triggered, which I don't want because no changes need to be flushed to the storage.

Ex:
em.detach(client);
client.setName(client.getName());
em.merge(client);
=> PreUpdate callback is triggered.

Looking at the state manager, the related field (name in the example) is made dirty, which I think is the reason the PreUpdate is triggered. Is there any solution to avoid making the object dirty in such a situation? Would doing so offend the spec?

thanks,

--
Stephane


Re: Multi Tenancy : user belonging to several tenant

Andy
 

DataNucleus does standard multitenancy, as defined in the docs. A user is running the PMF/EMF, so on creation of a record the record is tagged against that user. Reads return data for that user ... an exact match.

The only difference (extension) to that is what is in this issue regarding datanucleus.TenantReadIds.

Requiring anything else means you get to write the code ...
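For reference, a hedged sketch of what that setup might look like as persistence properties (property names per the DN docs for your version; the tenant id values here are purely illustrative):

```properties
# Each PMF/EMF runs as one tenant; created records are tagged with this id.
datanucleus.TenantID=AcmeSubsidiaryA
# Extension from the referenced issue: reads may match any of several tenant ids.
datanucleus.TenantReadIds=AcmeSubsidiaryA,AcmeHQ
```

Check the DataNucleus multitenancy docs for the exact property names supported by your release.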


Multi Tenancy : user belonging to several tenant

passignat@...
 

Hi,

I'm looking again at how to segment data based on user "profile". An easy-to-understand example is an organisation with subsidiaries, each having departments, with departments organised into teams.
A user can belong to any, and potentially several, of the organisational units. The CEO belongs to the root organisation, a director to a subsidiary, and a worker to one or several teams.

The idea is that a user can see everything belonging to their organisational unit and its sub-units.

I had a look at multitenancy. That looks great, but the filtering seems to be based on an exact match (=).

Is there anything else which could help?

thanks


--
Stephane


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

dosiwelldi
 

Ah ok, thanks a lot.

I have a script going from Kodo mapping to DN mapping. I could not map the "natural" keys until I tried "mapped-by"; then DN was happy with the mapping. So that will not work, and I have to correct our DB schema...


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

Andy
 

The mapped-by refers to the FIELD in the other class that is an object of this object's type. Joining is on the PK of this class, always. There are no "special" field names.
JDO does not support "natural" keys, assuming that is what you're expecting it to offer.
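To illustrate the point (a hypothetical bidirectional 1-N using the thread's class names, not the poster's actual schema): mapped-by names a field of the related class, and it belongs on the <field> element of the collection side, not on a <column>:

```xml
<class name="Article">
    <!-- "article" is a FIELD of ClearingItem, not a column name -->
    <field name="clearingItems" mapped-by="article"/>
</class>
<class name="ClearingItem">
    <field name="article">   <!-- joined on Article's PK, always -->
        <column name="ClitItemID"/>
    </field>
</class>
```

This sketch assumes a clearingItems collection field exists on Article; it is here only to show where mapped-by goes.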


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

dosiwelldi
 

Does mapped-by="id" have special semantics? Can a field that is not the primary key not be named "id"? If I have fields named "id", then mapped-by="id" will use the primary key instead of the field for joining.

Example:

<class name="Article" table="Article">
    <datastore-identity strategy="identity" column="ArtID"/>
    <field name="id">
        <column name="ArtItemID" allows-null="true"/>
    </field>
</class>

<class name="ClearingItem" table="ClearingItem">
    <datastore-identity strategy="identity" column="ClitActivityID"/>
    <field name="article">                                             <!-- field of type Article -->
        <column name="ClitItemID" mapped-by="id" allows-null="true"/>
    </field>
</class>

clearingItem.getArticle() will try to join with Article on ArtID instead of ArtItemID.

Query:
SELECT B0.ArtItemName,B0.ArtItemID,B0.ArtPrice,B0.ArtRevenueAccount,B0.ArtERPID,B0.ArtID FROM ClearingItem A0 LEFT OUTER JOIN Article B0 ON A0.ClitItemID = B0.ArtID WHERE A0.ClitActivityID = <3872418>


Re: JPA @ManyToMany @ForeignKey name not used

Andy
 
Edited

You're wrong in the mapping definition. @JoinTable has foreignKey and inverseForeignKey.

Take it up with the people behind JPA as to why their annotation definitions are unintuitive ... but then you won't find anyone accountable for them; they ran away many years ago.
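A sketch of the corrected mapping (using the same names as the question below): in JPA 2.1+, foreignKey and inverseForeignKey are attributes of @JoinTable itself, not of the nested @JoinColumns:

```java
@ManyToMany
@JoinTable(name = "iam_user_groups",
    joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id"),
    inverseJoinColumns = @JoinColumn(name = "group_id", referencedColumnName = "id"),
    foreignKey = @ForeignKey(name = "fk_user_groups"),
    inverseForeignKey = @ForeignKey(name = "fk_groups_user"))
private Set<Group> groups = new HashSet<>();
```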


JPA @ManyToMany @ForeignKey name not used

passignat@...
 

Hi,
I'm trying to specify foreign-key names. This helps reduce the effort of DB schema comparison at the end of development, when preparing the schema upgrade script.
It actually works very well for most relationships, but I can't find the solution for join tables.

Here is an example of a User class containing a collection of Group:

@JoinTable(name = "iam_user_groups",
    joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id", foreignKey = @ForeignKey(name = "fk_user_groups")),
    inverseJoinColumns = @JoinColumn(name = "group_id", referencedColumnName = "id", foreignKey = @ForeignKey(name = "fk_groups_user")))
private Set<Group> groups = new HashSet<>();

SchemaTool and DN don't seem to care about the foreignKey = @ForeignKey(name="fk_user_groups") and still generate iam_user_groups_fk1.

Am I wrong in the mapping definition, or is it not yet supported by DataNucleus? (I haven't seen any restriction in the JPA specification.)

thanks,
--
Stephane


Re: Envers-style Auditing in DataNucleus?

Andy
 
Edited

I can't say there'll be an easy way of doing any of this. You have to create separate tables, hence table-creation handling. You have to intercept inserts, updates, deletes, hence persistence handling. You have to provide a way of retrieving objects of a particular revision, hence new API calls.

The first step is to define what tables would be created, mentioning how relations will be handled / stored. Without an outline "design" of what is required, and whether that caters for all needs, there is little point in going further. You could create an issue (datanucleus-core plugin, for example) and write the details there, giving examples: persistable class X with a 1-N join-table relation to persistable class Y, so what tables are created, what is entered in these tables with sample persistence operations, etc.

All persistable objects are of type Persistable. That has dnGetStateManager.

But then there may not be adequate callbacks to do all that is needed, and it could be better to have an Audit interface with suitable methods on it, and an implementation for each datastore it is required for. StateManagerImpl has a savedLoadedFields/savedImage which would have the "old" values, though I don't get why you would need them. But your design would demonstrate what is needed and why.


Re: Envers-style Auditing in DataNucleus?

ebenzacar@...
 

I would suspect non-RDBMS datastores would also be supportable. The concept is that each time an object is modified, the dirty fields of the given object (before and after) are persisted in a document/table, along with any custom contextual information that can be provided to the persistence manager.

Envers essentially provides this behaviour by creating separate tables in a different connection/database to track changes to given entities. I realize that JPA/JDO do not specify any of this, but was wondering if there was an easy way to create/implement something like this with DataNucleus. I would essentially see it as putting a listener on the 'commit' phase, checking for dirty fields, and persisting the change of the fields with the object identifier in a table/document. My immediate use case is RDBMS, but NoSQL could be interesting to support as well.

My problem is multi-fold.
1) I'm not sure how to put a listener on 'commit'. Rather, there seem to be separate listeners for 'create', 'update', 'delete'.
2) Once the listener triggers, I'm not sure how to retrieve which fields are 'dirty'. I see there is a `JDOStateManager`, but I'm not sure how to access it from an `InstanceLifecycleEvent`. I suspect it must be via `getPersistentInstance()`, but I cannot find a helper method anywhere that retrieves a `JDOStateManager`.
3) Even once I am able to get the dirty fields via the `JDOStateManager`, I do not know how to retrieve the original value of a field without re-reading the object from the DB.
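For points 1 and 2, a hedged sketch of one possible shape (none of this is from the thread; it assumes DataNucleus' NucleusJDOHelper.getDirtyFields helper and the standard JDO listener API — check the javadocs for your version):

```java
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.StoreLifecycleListener;
import org.datanucleus.api.jdo.NucleusJDOHelper;

// Sketch: observe dirty fields just before each store (covers insert and update).
public class AuditListener implements StoreLifecycleListener {
    public void preStore(InstanceLifecycleEvent event) {
        Object pc = event.getPersistentInstance();
        PersistenceManager pm = JDOHelper.getPersistenceManager(pc);
        // DataNucleus-specific helper; verify its signature in your DN version.
        String[] dirty = NucleusJDOHelper.getDirtyFields(pc, pm);
        // ... write JDOHelper.getObjectId(pc) plus the dirty field names
        // to an audit table/document here.
    }
    public void postStore(InstanceLifecycleEvent event) {}
}

// Registration (null class array = all persistable classes):
// pm.addInstanceLifecycleListener(new AuditListener(), (Class[]) null);
```

Point 3 (original values without a re-read) has no obvious public API; as Andy notes below, that is internal StateManager territory.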

Any suggestions would be greatly appreciated.

Thanks,

Eric


Re: Add default value to column

passignat@...
 

On Fri, Jan 29, 2021 at 10:08 AM, Andy wrote:
Why not decide if you are using JDO mapping files or not? You seem to be trying to use JPA format for an attribute that is only defined for JDO.
The doc doesn't make any mention of "default-value". With JPA you have to dump it all into "column-definition".
Thanks Andy, I probably mixed up the mappings... my bad...

Why not JDO? For many bad reasons...
 
--
Stephane


Re: Add default value to column

Andy
 

Why not decide if you are using JDO mapping files or not? You seem to be trying to use JPA format for an attribute that is only defined for JDO.
The doc doesn't make any mention of "default-value". With JPA you have to dump it all into "column-definition".
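For JPA that would look something like this (a sketch based on the above; the column-definition content is passed through to the DDL verbatim, so it is database-specific):

```xml
<basic name="currency">
    <column name="CURRENCY" column-definition="VARCHAR(3) DEFAULT 'GBP'"/>
</basic>
```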


Add default value to column

passignat@...
 

I would like to add the default value of a field to the database column definition, in order to have the SchemaTool generating it in the database.

I tried it as in the doc, but without success:

<mapping-file>META-INF/orm-mysql.xml</mapping-file>

<basic name="currency">
    <column name="CURRENCY" default-value="GBP"/>
</basic>

any suggestion ?

--
Stephane


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

Andy
 

1). A CAST has syntax "(identifier)", as does your use of parentheses around a field (just like in Java). Hence the compiler would need to know all about the candidate class, imports, etc. at the point of the generic compile to guess which you mean. It currently doesn't, hence that is classified as a cast. See the code at this link. You could develop an update to support both if you really want to put "(myBooleanField)" into JDOQL queries.

2). Fetch size is used when extracting results from a query, as shown in the code at this link. I suggest you debug your usage around that.
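On point 1, a couple of hedged workarounds (my suggestions, not from the thread) that express the same filter without tripping the cast ambiguity:

```java
// "(active)" alone is parsed as a cast; these equivalent filters avoid that:
pm.newQuery(Enterprise.class, "active");            // no parentheses needed
pm.newQuery(Enterprise.class, "(active == true)");  // explicit comparison inside parens
```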


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

dosiwelldi
 

Hi again, 2 more things:

1) Parser Bug

I think I found a bug in the JDOQL query parser: boolean fields in parentheses lead to javax.jdo.JDOUserException: Method/Identifier expected

While these are OK (`active` is a boolean field on class Enterprise):
pm.newQuery(Enterprise.class, "active");
pm.newQuery(Enterprise.class, "!active");
pm.newQuery(Enterprise.class, "(!active)");

This one will throw an exception:
pm.newQuery(Enterprise.class, "(active)");     //Method/Identifier expected

Stack trace:
    [junit] Method/Identifier expected at character 9 in "(active)"
    [junit] org.datanucleus.store.query.QueryCompilerSyntaxException: Method/Identifier expected at character 9 in "(active)"
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processPrimary(JDOQLParser.java:802)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processUnaryExpression(JDOQLParser.java:643)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processMultiplicativeExpression(JDOQLParser.java:569)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processAdditiveExpression(JDOQLParser.java:540)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processRelationalExpression(JDOQLParser.java:454)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processAndExpression(JDOQLParser.java:437)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processExclusiveOrExpression(JDOQLParser.java:423)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processInclusiveOrExpression(JDOQLParser.java:409)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processConditionalAndExpression(JDOQLParser.java:395)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processConditionalOrExpression(JDOQLParser.java:376)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.processExpression(JDOQLParser.java:365)
    [junit]     at org.datanucleus.query.compiler.JDOQLParser.parse(JDOQLParser.java:88)
    [junit]     at org.datanucleus.query.compiler.JavaQueryCompiler.compileFilter(JavaQueryCompiler.java:600)
    [junit]     at org.datanucleus.query.compiler.JDOQLCompiler.compile(JDOQLCompiler.java:103)
    [junit]     at org.datanucleus.store.query.AbstractJDOQLQuery.compileGeneric(AbstractJDOQLQuery.java:392)
    [junit]     at org.datanucleus.store.query.AbstractJDOQLQuery.compileInternal(AbstractJDOQLQuery.java:450)
    [junit]     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:263)
    [junit]     at org.datanucleus.store.query.Query.executeQuery(Query.java:1936)
    [junit]     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1864)
    [junit]     at org.datanucleus.store.query.Query.execute(Query.java:1846)
    [junit]     at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:439)
    [junit]     at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:263)


2) query.getFetchPlan().setFetchSize(1) will still retrieve the whole table

In short:
I probably misunderstood what query.getFetchPlan().setFetchSize(1) does. I wanted to prevent queries from loading whole tables up front, and instead fetch rows as needed. But I see in the debugger that JDOFetchPlan.getFetchSize() is never called for my code, so it is probably not meant for that.

In long:
I wrote a script that extracts most of our queries into a unit test to see if they still work. I only care whether they compile, so I do not iterate over the result collection. Kodo takes about 1 minute to execute all of them. If I run the same test with DN, I get an out-of-memory exception (I think from JUnit, which tries to cache the log file). Without logging it runs but never finishes.

I then added query.getFetchPlan().setFetchSize(1) to each query, so it should only get one row. But it still gets the whole table.

//Example: Takes 68s with DN and reads 710k rows:
Query query = pm.newQuery(OperationalEvents.class);
query.getFetchPlan().setFetchSize(1);
query.execute();

I have now added query.setRange(0, 1) to all tests as a workaround. This works fine for my test. But we have some queries in the real code that ask for a lot of data and then only access the first row... It will be fun trying to find them all.

Log after many seconds: now getting rows 101542 and 101543:
    [junit] 2021-01-28 19:36:20,481 DEBUG [main] DataNucleus.Persistence - Retrieved object with OID "xxx.yyy.OperationalEvents:101542"
    [junit] 2021-01-28 19:36:20,481 DEBUG [main] DataNucleus.Cache - Object with id "xxx.yyy.OperationalEvents:101542" not found in Level 1 cache
    [junit] 2021-01-28 19:36:20,481 DEBUG [main] DataNucleus.Cache - Object with id "xxx.yyy.OperationalEvents:101542" not found in Level 2 cache
    [junit] 2021-01-28 19:36:20,481 DEBUG [main] DataNucleus.Cache - Object "xxx.yyy.OperationalEvents@394851d7" (id="xxx.yyy.OperationalEvents:101542") added to Level 1 cache (loadedFlags="[NNNNNNN]")
    [junit] 2021-01-28 19:36:20,481 DEBUG [main] DataNucleus.Cache - Object "xxx.yyy.OperationalEvents@394851d7" (id="xxx.yyy.OperationalEvents:101542") added to Level 2 cache (fields="[0, 1, 2, 3, 4, 5, 6]", version="")
    [junit] 2021-01-28 19:36:20,482 DEBUG [main] DataNucleus.Persistence - Retrieved object with OID "xxx.yyy.OperationalEvents:101543"
    [junit] 2021-01-28 19:36:20,482 DEBUG [main] DataNucleus.Cache - Object with id "xxx.yyy.OperationalEvents:101543" not found in Level 1 cache
    [junit] 2021-01-28 19:36:20,482 DEBUG [main] DataNucleus.Cache - Object with id "xxx.yyy.OperationalEvents:101543" not found in Level 2 cache


Re: Table not found in MySQL8

passignat@...
 

Sure, it's an amazing amount of good-quality work you have done these past 17 years!

Here I don't know how to make a test case. It's linked to the database settings at schema creation time, and how they are translated into JDBC metadata by the drivers. My feeling is that the upgrade process from 5.7 to 8 doesn't give the same result as creating the schema on a fresh install of MySQL 8.

What I saw is that the driver's mapping of the lower_case_table_names variable to the JDBC metadata properties changed over time. Now, with MySQL 8, there are also changes in the dictionary and joins ... and a recommendation to upgrade the JDBC driver...

Thanks for the option, I missed it in the code. I finally rolled back to MySQL 5.7 and recreated everything with lower_case_table_names = 1. The application seems to work again, but I don't know yet how to upgrade to MySQL 8...

thanks
--
Stephane


Re: Table not found in MySQL8

Andy
 
Edited

DN applies whatever it is told to use by the user, such as via "datanucleus.identifier.case", as defined in the docs.

I've never had a need to change the code for MariaDB, so I have to assume that your use of persistence properties is the issue. The only way code will get changed in such an area (one that has been in use for the last 17 years) is to provide a valid testcase, including start-up database data.
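For example, a hedged sketch of the property Andy refers to (the exact value tokens are listed in the DN docs for your version):

```properties
# Force DN-generated identifiers to lower case, to match MySQL 8's on-disk names
datanucleus.identifier.case=LowerCase
```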


Table not found in MySQL8

passignat@...
 
Edited

Hi,

I'm facing an issue using a fresh new MySQL 8 server. DN doesn't find the tables because of case sensitivity.

Tables are created in lower case while DN searches for upper case.

Debugging schema introspection:
- First, DN loads the database options/features: the JDBC metadata calls storesUpperCaseQuotedIdentifiers and supportsMixedCaseIdentifiers return TRUE, which forces DN to upper-case table, column, ... names (in DatastoreIdentifierImpl)

- Then DN loads the database objects (tables, columns, ...), matching these upper-case names against the lower-case names returned by the database (RDBMSSchemaHandler.getTableType), and doesn't find any match


There are probably a lot of good reasons for this algorithm, but on one hand (table names coming from the mapping) DN forces upper case, while on the other hand (table names coming from the database) the names are left as returned.

Could it make sense to force the names of database objects (tables, columns, indexes, ...) in the same way DN forces the names coming from the mapping, applying IdentifierFactory.getIdentifierInAdapterCase on rs.getString(3) here in RDBMSSchemaHandler.getTableType:

if ((insensitive && tableName.equalsIgnoreCase(rs.getString(3))) ||
    (!insensitive && tableName.equals(rs.getString(3))))

thanks,

So far I haven't seen any setting to override this behaviour, to force lower case, force upper case, or just leave names unchanged?

I use DN 5.2.1 and MySQL 8.0.23, and tested the two JDBC drivers 5.1.40 and 8.0.23. The connection pool is HikariCP.

--
Stephane


Re: Envers-style Auditing in DataNucleus?

Andy
 

Neither JDO nor JPA provide any standardised "auditing" capability.

DataNucleus provides first level support for auditing only, namely the ability to tag the create / update user/timestamp against each record, as per this link.

Providing second-level support for auditing (being able to see records at particular points in time, to see what was updated in which change, etc.) would presumably require an audit table for each persistable entity table. It would require a definition of what the handling would be, and an implementation. Presumably you would only be looking at supporting it for RDBMS datastores.
