
I have a Spring integration test that is wrapped in @Transactional. The Hibernate/JPA database interface extends JpaRepository and uses PESSIMISTIC_READ and PESSIMISTIC_WRITE locks on its methods. Within the test, the following steps occur:

  1. An @Entity object is read from the repo.
  2. The method under test runs; it updates that same @Entity and writes it to the repo inside another @Service-level transaction.
  3. The @Entity object is read from the repo again and compared to the first object.

The problem is that, after the write in step 2, the first @Entity object has also been updated locally. When the @Entity objects are compared in step 3, they are equal instead of showing the expected "before and after" differences.

How is this "syncing" of local @Entity objects happening, and is this expected behavior?

wrapperapps
  • What do you mean by "another `@Service`-level transaction"? Is the service method configured to require a **new** transaction (i.e., `REQUIRES_NEW`)? – Sam Brannen Jan 13 '18 at 13:59
  • If not, then the use case you describe is likely covered by this _note_ in the Spring Reference Manual: https://docs.spring.io/spring/docs/current/spring-framework-reference/testing.html#testcontext-tx-false-positives – Sam Brannen Jan 13 '18 at 14:00
  • @SamBrannen the service just uses the default `@Transactional`. Thanks for the reference, I'll look into that for my tests – wrapperapps Jan 13 '18 at 20:16

1 Answer


After some research using the right terms (e.g. caching in the persistence context), this appears to be expected behavior, as described here: https://vladmihalcea.com/how-does-hibernate-store-second-level-cache-entries/

A viable workaround is to manually refresh the entity or to separate the two reads into different transactions, as mentioned here: Force hibernate to read database and not returning cached entity
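For illustration, here is a detach/clear variant of that idea, applied to the made-up test sketched in the question (EntityManager.detach(), flush() and clear() are standard JPA operations; the class and field names are the hypothetical ones from the sketch above). Detaching the first copy keeps it as a true "before" snapshot, and flushing and clearing the persistence context before the second read forces a fresh read from the database. Splitting the reads into their own transactions is the other option.

```java
// Additions to the test class from the question's sketch; also needs
// import javax.persistence.EntityManager; and import javax.persistence.PersistenceContext;

@PersistenceContext
private EntityManager entityManager;

@Test
public void updateShouldShowBeforeAndAfterDifferences() {
    Account before = accountRepository.findById(1L).get();   // step 1: read
    entityManager.detach(before);                             // 'before' keeps its original state

    accountService.incrementBalance(1L);                      // step 2: update + save in the service

    entityManager.flush();                                    // push the pending update to the database
    entityManager.clear();                                    // evict managed instances from the persistence context
    Account after = accountRepository.findById(1L).get();     // step 3: fresh read from the database

    assertNotEquals(before.balance, after.balance);           // now shows the expected difference
}
```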

wrapperapps