DataNucleus JDO 5: problems with limited max fetch depth


jacek.czerwinski.ext@...
 

After updating DataNucleus from version 3.3 to 5.2, I get the following error from a query that limits the max fetch depth. The same query works correctly with DataNucleus 3.3.

java.lang.ArrayIndexOutOfBoundsException: 26
     at org.datanucleus.store.rdbms.sql.SQLTableAlphaNamer.getLettersForNumber(SQLTableAlphaNamer.java:153)

The char array for table aliases is the same in both DataNucleus versions 3.3 and 5.2:

public class SQLTableAlphaNamer implements SQLTableNamer
{
    static String[] CHARS = new String[]{"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
                                         "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"};
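For comparison, an alias scheme that cannot overflow would have to switch to multi-letter names once the single letters run out (A..Z, then AA, AB, ...). The following is my own illustration of such a scheme, not DataNucleus code:

```java
public class AliasNamer {
    // Sketch (hypothetical, not DataNucleus's implementation): spreadsheet-style
    // base-26 naming that never runs out of aliases.
    // 0 -> "A", 25 -> "Z", 26 -> "AA", 27 -> "AB", ...
    static String lettersForNumber(int number) {
        StringBuilder sb = new StringBuilder();
        int n = number;
        while (n >= 0) {
            sb.insert(0, (char) ('A' + (n % 26)));   // prepend current "digit"
            n = n / 26 - 1;                          // -1 because "A" acts as both 0 and a prefix
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(lettersForNumber(0));   // A
        System.out.println(lettersForNumber(25));  // Z
        System.out.println(lettersForNumber(26));  // AA
    }
}
```

With a fixed 26-entry array, any query plan that needs a 27th table group indexes past the end of CHARS, which matches the exception above.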


If I set the max fetch depth to unlimited (-1), the query runs without exception on DataNucleus 5.2 too. However, the unlimited fetch depth seems to slow the query down.

The query method:

public <T> T getByQueryOrderedWithFetchGroups(Class<?> clazz, String resultSpecification, String queryString,
        String orderSpecification, int maxFetchDepth, String[] fetchGroups, Object... parameterValues)
{
    PersistenceManager pm = getPersistenceManager();
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();

        // IMPORTANT: maximum fetch depth
        if (maxFetchDepth != 0) {
            pm.getFetchPlan().setMaxFetchDepth(maxFetchDepth);
        }

        // fetch groups
        boolean firstFetchGroup = true;
        for (String fetchGroup : fetchGroups) {
            if (firstFetchGroup) {
                pm.getFetchPlan().setGroup(fetchGroup);
                firstFetchGroup = false;
            }
            else {
                pm.getFetchPlan().addGroup(fetchGroup);
            }
        }

        Query query = pm.newQuery(clazz, null, queryString);
        if (resultSpecification != null) {
            query.setResult(resultSpecification);
        }
        if (orderSpecification != null) {
            query.setOrdering(orderSpecification);
        }

        T result = (T) query.executeWithArray(parameterValues);
        tx.commit();
        return result;
    }
    finally {
        if (tx.isActive()) {
            _log.warn("transaction failed! query:\n{}", queryString);
            tx.rollback();
        }
        pm.close();
    }
}


Usage of this method that throws the java.lang.ArrayIndexOutOfBoundsException with DataNucleus 5.2 (no exception with DataNucleus 3.3):

 

List<Project> projects = getByQueryOrderedWithFetchGroups(Project.class, null,
        "_zsv == :zsv" + " && (_status == 'AR' || _status == 'A' || _status == 'TR' || _status == 'CR' "
                + "|| ((_status == 'T' || _status == 'C') "
                + "&& (_startDate <= :fiscalYearEnd) && (_developmentEndDate >= :fiscalYearStart)))",
        "_prjNr asc", 10 /* limited max fetch depth */,
        new String[] { WPConstants.FETCH_GROUP_BUDGET, DataServiceConstants.MILESTONES },
        parameters);


Usage of this method without exception (DataNucleus 5.2):


List<Project> projects = getByQueryOrderedWithFetchGroups(Project.class, null,
        "_zsv == :zsv" + " && (_status == 'AR' || _status == 'A' || _status == 'TR' || _status == 'CR' "
                + "|| ((_status == 'T' || _status == 'C') "
                + "&& (_startDate <= :fiscalYearEnd) && (_developmentEndDate >= :fiscalYearStart)))",
        "_prjNr asc", -1 /* unlimited max fetch depth */,
        new String[] { WPConstants.FETCH_GROUP_BUDGET, DataServiceConstants.MILESTONES },
        parameters);

How can I solve this problem? 


Andy
 

The only answer is to debug things (by getting the code and comparing). If you run it with some ancient version then you can look at what SQL is generated with that. And if it is trying (with current code) to go beyond the number of table groups (26, the number of letters in the Latin alphabet) then it suggests something is not right somewhere. Only you know your model and why it would think it needs more than 26 table groups.
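To compare the SQL generated by the two versions, DataNucleus can log the native statements it issues. With a Log4j configuration this is the DataNucleus.Datastore.Native log category (check the logging documentation for your DataNucleus version; the appender name A1 below is just a placeholder):

```
# Log the actual SQL statements sent to the datastore
log4j.category.DataNucleus.Datastore.Native=DEBUG, A1
```

Running the failing query once under 3.3 and once under 5.2 with this logging enabled should show where the 5.2 query plan starts joining in far more tables (and hence table aliases) than before.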