Breaking change for CLOB columns in PostgreSQL when upgrading from DN 5.2.2



I'm using Dependency-Track with PostgreSQL; Dependency-Track is built on DN. It recently switched from DN 5.2.2 to DN 5.2.6, making data in String fields annotated as CLOB inaccessible when stored in PostgreSQL. As far as I could debug in the corresponding Dependency-Track issue, the breaking change seems to be the transition from DN 5.2.2 to 5.2.3, more specifically issue 338 and the corresponding commit. The actual problem for Dependency-Track can be seen in this example project: data that was persisted with 5.2.2 cannot be read again after upgrading DN, since the column mapping changes from ClobColumnMapping to LongVarcharColumnMapping and the returned data contains only PostgreSQL's large object ID.

Reading the DN issue, it sounds to me like the previous behavior was considered a hack and probably even buggy, but the change - if it is responsible and I didn't miss anything - breaks previously working Dependency-Track installations. I didn't find anything about this in the migration guide.

In summary - and please correct me if I'm wrong - the pre-5.2.3 behavior is considered wrong, and applications that have stored data that way need to either migrate their data or explicitly revert to the previous behavior by overriding the mapping via an ORM file.
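For the data-migration route, a minimal sketch of what the conversion could look like. The table/column names (MYTABLE, MYCOLUMN) and the all-digits heuristic are my assumptions; lo_get and convert_from are standard PostgreSQL functions:

```java
import java.util.regex.Pattern;

public class ClobMigrationCheck {
    // After the 5.2.2 -> 5.2.3 switch, rows written via ClobColumnMapping
    // contain only PostgreSQL's large-object OID (a bare integer) in the
    // text column, while rows written via LongVarcharColumnMapping contain
    // the actual text. A bare-digits check can flag rows that still need
    // migrating. (Heuristic only: a legitimate value consisting entirely
    // of digits would be a false positive.)
    private static final Pattern OID_PATTERN = Pattern.compile("\\d+");

    public static boolean looksLikeLoOid(String storedValue) {
        return storedValue != null && OID_PATTERN.matcher(storedValue).matches();
    }

    // One-off SQL that rewrites such rows in place, dereferencing the
    // large object and storing its content as plain text. MYTABLE and
    // MYCOLUMN are hypothetical names; the leftover large objects can
    // afterwards be removed with lo_unlink or the vacuumlo utility.
    public static final String MIGRATION_SQL =
        "UPDATE MYTABLE SET MYCOLUMN = convert_from(lo_get(MYCOLUMN::oid), 'UTF8') "
      + "WHERE MYCOLUMN ~ '^[0-9]+$'";

    public static void main(String[] args) {
        System.out.println(looksLikeLoOid("16408"));     // prints "true" (OID-looking value)
        System.out.println(looksLikeLoOid("Some text")); // prints "false" (real CLOB text)
    }
}
```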


Define what you had before and what you have now.
Was the String field stored in a CLOB column (defined in the generated DDL as "CLOB")? And presumably it was previously read/written using ClobColumnMapping?
And now (with the current code), is it read/written using LongVarcharColumnMapping or ClobColumnMapping?

Yes, it would be better if JDBC drivers actually provided proper support for all types, so that when a user says CLOB they get a JDBC CLOB, not something else behind the scenes; that is the source of many such "problems".

FWIW I have a field marked as jdbcType="CLOB", using PostgreSQL v13 and JDBC driver v42.2. It creates a column of type "text" and uses LongVarcharColumnMapping. Read and write work fine. Maybe you have something different to that ...

A project where there is but one person maintaining it, who uses a subset of what is available, will cater for what they see best, and that won't necessarily include your project's requirements. That is why this project has encouraged people to actually get involved and contribute tests etc. so that their requirements are met.


If you are indeed seeing use of ClobColumnMapping on 5.2.2 and LongVarcharColumnMapping on 5.2.6, then you could add
@Column(jdbcType="CLOB", extensions=@Extension(vendorName="datanucleus", key="column-mapping-class", value=""))
or, in XML metadata:
<column jdbc-type="CLOB"><extension vendor-name="datanucleus" key="column-mapping-class" value=""/></column>

which should override any internally chosen column mapping class.
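Put together as a complete (hypothetical) persistent class, this could look as follows; the fully-qualified mapping class name in value is my assumption for the 5.2.x location of ClobColumnMapping and should be checked against the DataNucleus jar in use:

```java
import javax.jdo.annotations.Column;
import javax.jdo.annotations.Extension;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable
public class MyEntity { // hypothetical class name
    // Force the pre-5.2.3 mapping so existing rows stay readable.
    // The value is assumed to be where ClobColumnMapping lives in 5.2.x.
    @Column(jdbcType = "CLOB",
            extensions = @Extension(vendorName = "datanucleus",
                                    key = "column-mapping-class",
                                    value = "org.datanucleus.store.rdbms.mapping.column.ClobColumnMapping"))
    private String description;
}
```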


Hi Andy,

yes, as in the example project, the column is a String field annotated as CLOB with no further annotation information. With DN 5.2.2 this is stored in a text column using a ClobColumnMapping and works fine. After the upgrade, DN changes to using a LongVarcharColumnMapping, which by itself also works fine. The problem is the transition, which stops previously stored data from being readable.

Your point about the requirements is perfectly reasonable from my point of view. I also think that both mappings are reasonable for storing String values; the change in the default mapping was the surprise =). The question basically is: is the change in the default a bug, or is it considered the correct default, in which case I think it should be mentioned in the migration guide for upgrades from 5.2.2?




Both are stored as a "TEXT" column in the table.
Postgres doesn't support the "CLOB" type in the same way as other RDBMS, so DN simply uses "TEXT" when "CLOB" is specified.
DataNucleus doesn't support non-standard, non-JDBC Postgres "large object" API usage.
The only difference is whether to access the "large string" column using the JDBC LONGVARCHAR or CLOB accessor. LongVarcharColumnMapping makes more sense (to me), and it also means that if PostgreSQL ever bothers to support JDBC CLOB using the JDBC API then it can handle both.
Hence no it is not a "bug" (to me). Now added to migration notes.


Hi Andy,

thanks for the clarification and the added note.