Re: Breaking change in CLOB columns in PostgreSQL coming from DN 5.2.2

Andy
 
Edited

Hi,
Define what you had before and what you have now.
Was the String field stored in a CLOB column (defined in the generated DDL as "CLOB")? And presumably it was previously read/written using ClobColumnMapping?
And now (current code), is it read/written using LongVarcharColumnMapping or ClobColumnMapping?

Yes, it would be better if JDBC drivers actually provided proper support for all types, so that when a user says CLOB they get a JDBC CLOB, not something else behind the scenes, which is the source of many such "problems".

FWIW I have a field marked as jdbcType="CLOB", using PostgreSQL v13 with JDBC driver v42.2. It creates a column of type "text" and uses LongVarcharColumnMapping. Read and write work fine. Maybe you have something different to that ...
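For reference, the kind of mapping referred to here would look roughly like this (a minimal sketch, assuming the JDO @Column annotation; the class and field names are just illustrative, not from an actual test):

import javax.jdo.annotations.Column;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable
public class Note
{
    // String field requested as CLOB; on current PostgreSQL + DN this becomes
    // a "text" column handled by LongVarcharColumnMapping
    @Column(jdbcType = "CLOB")
    String body;
}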


A project where there is but one person to maintain it, and who uses a subset of what is available, will cater for what they see as best, and that won't necessarily include your project's requirements. That is why this project has encouraged people to actually get involved and contribute tests etc. so that their requirements are met.


Re: Multi Tenancy : find fails with a NPE on DN 2.5.6

stephane
 
Edited

On Sat, Feb 27, 2021 at 01:22 AM, <passignat@...> wrote:
multiTenancyProvider

Case: https://github.com/datanucleus/datanucleus-core/issues/365
I think the fix for this is available in DN version 5.2.7.

thanks Andy 
--
Stephane


Error when using the jdbcType NVARCHAR with H2

mwhesse@...
 

When mapping a column with Java type String and using jdbcType="NVARCHAR" in the @Column annotation, we receive an error that the type is not supported by our H2 datastore.
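For clarity, the mapping looks roughly like this (simplified sketch, assuming the JDO @Column annotation; the class and field names are just placeholders):

import javax.jdo.annotations.Column;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable
public class Message
{
    // String field that we want stored as NVARCHAR in H2
    @Column(jdbcType = "NVARCHAR", length = 255)
    String title;
}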

We looked at the H2Adapter and it does register NVARCHAR as a supported JDBCType, but when DN queries the store metadata and retrieves the supported JDBCTypeInfos NVARCHAR is not returned by the H2 driver and subsequently DN unregisters the type from the supported types. Yet H2 claims to support NVARCHAR.

How can we use the JDBC type NVARCHAR with H2 and DN?

Using the latest DN 5.2 version and H2 1.4.200. 


Breaking change in CLOB columns in PostgreSQL coming from DN 5.2.2

keil@...
 

Hi,

I'm using Dependency-Track (which is built on DN) with PostgreSQL. Dependency-Track recently switched from DN 5.2.2 to DN 5.2.6, making data in String fields annotated as CLOB inaccessible when stored in PostgreSQL. As far as I have debugged in the corresponding Dependency-Track issue, the breaking change seems to be the transition from DN 5.2.2 to 5.2.3, more specifically issue 338 and the corresponding commit. The actual problem for Dependency-Track can be seen in this example project: data that was persisted with 5.2.2 cannot be read again after upgrading DN, since the column mapping changes from ClobColumnMapping to LongVarcharColumnMapping and the returned data only contains PostgreSQL's large object ID.

Reading the DN issue, it sounds to me like the previous behavior was considered a hack and probably even buggy, but the change - if it is responsible and I didn't miss anything - breaks a previously working Dependency-Track. I didn't find anything regarding this in the migration guide.

In summary - and please correct me if I'm wrong - the pre-5.2.3 behavior is considered wrong, and applications that have stored data that way need to either migrate their data or explicitly revert to the previous behavior by overriding the mapping via an orm file.


Re: JPA PreUpdate fired "without" changes

Andy
 
Edited

The user makes changes while the object is detached. The persistence provider is not involved in the process when the object is detached. The dirty flag is then set on ALL fields that have their setters called - read the bytecode enhancement contract. When an object is "merged" the persistence provider has no original value to compare against, so it just takes the dirty flags. Consequently it updates all dirty fields.


Multi Tenancy : find fails with a NPE on DN 2.5.6

stephane
 

I'm trying to use datanucleus.TenantReadIds at the EntityManager level, setting this property entityManager.setProperty(PropertyNames.PROPERTY_MAPPING_TENANT_READ_IDS, tenant);

As it doesn't seem to be applied, I looked at PersistenceNucleusContextImpl.getMultiTenancyReadIds:

public String[] getMultiTenancyReadIds(ExecutionContext ec)
{
    if (multiTenancyProvider != null)
    {
        String[] tenantReadIds = multiTenancyProvider.getTenantReadIds(ec);
        return (tenantReadIds != null) ? tenantReadIds : new String[] {multiTenancyProvider.getTenantId(ec)};
    }

    String readIds = config.getStringProperty(PropertyNames.PROPERTY_MAPPING_TENANT_READ_IDS);
    if (readIds != null)
    {
        // Return the tenant read ids if defined (for context)
        return readIds.split(",");
    }

    // Fallback to just the current tenant id for this execution context
    return new String[] {ec.getStringProperty(PropertyNames.PROPERTY_MAPPING_TENANT_ID)};
}

Is it possible to read PROPERTY_MAPPING_TENANT_READ_IDS from the EntityManager settings before doing the fallback?
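Something along these lines is what I have in mind (just a sketch of the idea, not code taken from the DN codebase):

// Hypothetical extra step before the final fallback in getMultiTenancyReadIds():
// consult the ExecutionContext's own properties (set via EntityManager.setProperty)
// before falling back to the single tenant id.
String ecReadIds = ec.getStringProperty(PropertyNames.PROPERTY_MAPPING_TENANT_READ_IDS);
if (ecReadIds != null)
{
    return ecReadIds.split(",");
}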

thanks,
--
Stephane


Multi Tenancy : find fails with a NPE on DN 2.5.6

stephane
 

Hi,

I'm testing datanucleus.TenantReadIds and receive a NullPointerException at PersistenceNucleusContextImpl.getMultiTenancyReadIds, line 1928:
return new String[] {ec.getStringProperty(PropertyNames.PROPERTY_MAPPING_TENANT_ID)};

I receive this error because LocateRequest does this:
String[] tenantReadIds = storeMgr.getNucleusContext().getMultiTenancyReadIds(null);
The provided ExecutionContext (ec) is null.

Context:
When I perform a find, it seems the tenant is not used, and I guess it could be related to caches. So I'm forcing DN to validate objects in the datastore using datanucleus.findObject.validateWhenCached=true at the EntityManagerFactory level.

I attach a JUnit test case. Just have a look at testInsertFind -> checkFind -> entityManager.find(Element.class, id).

emf properties are in resources/persistence/

thanks
--
Stephane


Re: JPA PreUpdate fired "without" changes

stephane
 
Edited

Please use this attachment.

Failing tests:
testUpdateNoChangeDetach
testUpdateNoChangeCopy

Workaround for simple situation:
testUpdateNoChangeApply

--
Stephane


Re: JPA PreUpdate fired "without" changes

stephane
 

Hi Andy,

my issue seems related to the merge operation. I attached a JUnit test case.

thanks
--
Stephane


Re: JPA PreUpdate fired "without" changes

Andy
 
Edited

An object is not "made dirty" when setting a field to the same value ... except if the value is a floating point / double (aka imprecise) value.
NucleusJPAHelper.getObjectState(obj) defines the state at any time.
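For example, a quick way to check (a rough illustration, not a tested snippet; it reuses the Client entity from the question below and assumes org.datanucleus.api.jpa.NucleusJPAHelper):

import org.datanucleus.api.jpa.NucleusJPAHelper;

Client client = em.find(Client.class, id);
System.out.println(NucleusJPAHelper.getObjectState(client));   // e.g. "persistent-clean"

client.setName(client.getName());                              // same value, object still managed
System.out.println(NucleusJPAHelper.getObjectState(client));   // still "persistent-clean", not dirty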


JPA PreUpdate fired "without" changes

stephane
 

Hi,

When I update an attribute with the same value (let's say I replace it with an equal value), the object is made dirty. Not a big deal, but a side effect is that PreUpdate callbacks are triggered, which I don't want because no changes need to be flushed to the storage.

Ex:
em.detach(client);
client.setName(client.getName());
em.merge(client);
=> PreUpdate callback is triggered.

Looking at the StateManager, the related field (name in the example) is made dirty, which I think is the reason for triggering the PreUpdate. Is there any solution to avoid making the object dirty in such a situation? Would doing so violate any spec?
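For now the only workaround I see for the simple case is to guard the setter calls myself (newName here is just whatever value comes from the caller):

if (!newName.equals(client.getName()))
{
    client.setName(newName);
}
em.merge(client);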

thanks,

--
Stephane


Re: Multi Tenancy : user belonging to several tenant

Andy
 

DataNucleus does standard multitenancy, as defined in the docs. A user is running the PMF/EMF, so on creation of a record the record is tagged against that user. A read returns data for that user ... an exact match.

The only difference (extension) to that is what is in this issue regarding datanucleus.TenantReadIds.

Requiring anything else means you get the code ...


Multi Tenancy : user belonging to several tenant

stephane
 

Hi,

I'm looking again at how to segment data based on user "profile". An easy-to-understand concept is to imagine an organisation with subsidiaries, each having departments, with departments organised in teams.
A user can belong to any, and potentially several, of the organisational units. The CEO belongs to the root organisation, a director to a subsidiary, and a worker to one or several teams.

The idea is that a user can see everything belonging to their organisational unit and its sub-units.

I had a look at Multitenancy. That looks great, but the filtering seems to be based on an exact match (=).

Is there anything else which can help?

thanks


--
Stephane


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

dosiwelldi
 

Ah ok, thanks a lot.

I have a script converting the Kodo mapping to a DN mapping. I could not map the "natural" keys until I tried "mapped-by"; then DN was happy with the mapping. So that will not work and I have to correct our DB schema...


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

Andy
 

The mapped-by refers to the FIELD in the other class that is an object of this object's type. Joining is on the PK of this class, always. There are no "special" field names.
JDO does not support "natural" keys, assuming that is what you're expecting it to offer.


Re: Migration from Kodo: Any way to make it store foreign keys to later get already loaded objects from cache (for performance)?

dosiwelldi
 

Does mapped-by="id" have special semantics? Can a field that is not the primary key not be named "id"? If I have fields named "id" then mapped-by="id" will use the primary key instead of the field for joining.

Example:

<class name="Article" table="Article">
    <datastore-identity strategy="identity" column="ArtID"/>
    <field name="id">
        <column name="ArtItemID" allows-null="true"/>
    </field>
</class>

<class name="ClearingItem" table="ClearingItem">
    <datastore-identity strategy="identity" column="ClitActivityID"/>
    <field name="article">                                             <!-- field of type Article -->
        <column name="ClitItemID" mapped-by="id" allows-null="true"/>
    </field>
</class>

clearingItem.getArticle() will try to join with Article on ArtID instead of ArtItemID.

Query:
SELECT B0.ArtItemName,B0.ArtItemID,B0.ArtPrice,B0.ArtRevenueAccount,B0.ArtERPID,B0.ArtID FROM ClearingItem A0 LEFT OUTER JOIN Article B0 ON A0.ClitItemID = B0.ArtID WHERE A0.ClitActivityID = <3872418>


Re: JPA @ManyToMany @ForeignKey name not used

Andy
 
Edited

You're wrong in the mapping definition. @JoinTable has foreignKey and inverseForeignKey.

Take it up with the people behind JPA as to why their definitions of the annotations are unintuitive ... but then you won't find anyone accountable for them; they ran away many years ago.
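That is, something along these lines (untested sketch; it just moves the @ForeignKey definitions from the original post up to the @JoinTable level):

@JoinTable(name = "iam_user_groups",
    joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id"),
    inverseJoinColumns = @JoinColumn(name = "group_id", referencedColumnName = "id"),
    foreignKey = @ForeignKey(name = "fk_user_groups"),          // FK from the join table to User
    inverseForeignKey = @ForeignKey(name = "fk_groups_user"))   // FK from the join table to Group
private Set<Group> groups = new HashSet<>();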


JPA @ManyToMany @ForeignKey name not used

stephane
 

Hi,
I'm trying to specify foreign-key names. This helps to reduce the effort of DB schema comparison at the end of development, to prepare the schema upgrade script.
It works very well on most relationships, but I can't find the solution for join tables.

Here is an example of User class containing a collection of Group
@JoinTable(name = "iam_user_groups",
    joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id", foreignKey = @ForeignKey(name = "fk_user_groups")),
    inverseJoinColumns = @JoinColumn(name = "group_id", referencedColumnName = "id", foreignKey = @ForeignKey(name = "fk_groups_user")))
private Set<Group> groups = new HashSet<>();

SchemaTool and DN don't seem to care about the foreignKey = @ForeignKey(name = "fk_user_groups") and still generate iam_user_groups_fk1.

Am I wrong in the mapping definition, or is it not yet supported by DataNucleus? (I haven't seen a restriction in the JPA specification.)

thanks,
--
Stephane


Re: Envers-style Auditing in DataNucleus?

Andy
 
Edited

I can't say there'll be an easy way of doing any of this. You have to create separate tables, hence table creation handling. You have to intercept inserts, updates and deletes, hence persistence handling. You have to provide a way of retrieving objects of a particular revision, hence new API calls.

The first step is to define what tables would be created, mentioning how relations will be handled / stored. Without an outline "design" of what is required, and whether that caters for all needs, there is little point in going further. You could create an issue (on the datanucleus-core plugin for example) and write the details there, giving examples: persistable class X with a 1-N join table relation to persistable class Y, what tables are created, what is entered in these tables for sample persistence operations, etc.

All persistable objects are of type Persistable. That has dnGetStateManager.

But then there may not be adequate callbacks to do all that is needed, and it could be better to have an Audit interface with suitable methods on it, and an implementation for each datastore that it is required for. StateManagerImpl has a savedLoadedFields/savedImage which would have the "old" values, though why you would need them I don't get. But your design would demonstrate what is needed and why.


Re: Envers-style Auditing in DataNucleus?

ebenzacar@...
 

I would suspect non-RDBMS datastores would also be supportable. The concept is that each time an object is modified, the dirty fields of the given object (before and after) are persisted in a document/table, along with any custom contextual information that can be provided to the persistence manager.

Envers essentially provides this behaviour by creating separate tables in a different connection/database to track changes to given entities. I realize that JPA/JDO do not specify any of this, but I was wondering if there was an easy way to create/implement something like this with DataNucleus. I would essentially see it as putting a listener on the 'commit' phase, checking for dirty fields, and persisting the changed fields with the object identifier in a table/document. My immediate use case is RDBMS, but NoSQL could be interesting to support as well.

My problem is multi-fold.
1) I'm not sure how to put a listener on any 'commit'. Rather, there seem to be separate listeners for 'create', 'update', 'delete' (see the rough sketch after this list for where I've got to).
2) Once the listener triggers, I'm not sure how to retrieve which fields are 'dirty'. I see there is a `JDOStateManager`, but I'm not sure how to access it from an `InstanceLifecycleEvent`. I suspect it must be via `getPersistentInstance()`, but I cannot seem to find a helper method anywhere that retrieves a `JDOStateManager`.
3) Even once I am able to get the dirty fields via the `JDOStateManager`, I do not know how to retrieve the original value of the field without needing to re-read the object from the DB.
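For what it's worth, this is roughly what I have been experimenting with for points 1 and 2; a sketch only (the registrar class is mine), and the dirty-field part is still guesswork on my side:

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.StoreLifecycleListener;

public class AuditListenerRegistrar
{
    // Register a store listener for all persistable classes (null = all classes)
    public static void register(PersistenceManagerFactory pmf)
    {
        pmf.addInstanceLifecycleListener(new StoreLifecycleListener()
        {
            public void preStore(InstanceLifecycleEvent event)
            {
                Object pc = event.getPersistentInstance();
                if (JDOHelper.isDirty(pc))
                {
                    // TODO: work out which fields are dirty and write the audit record
                    // (presumably via the object's StateManager, which is exactly question 2).
                }
            }

            public void postStore(InstanceLifecycleEvent event)
            {
                // nothing needed after the store
            }
        }, null);
    }
}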

Any suggestions would be greatly appreciated.

Thanks,

Eric
