
As I understand it, one of the key rules of Hexagonal Architecture is that the Domain Layer is isolated from everything except the Application Layer, which works with it (the Domain Layer sits at the core and has no dependencies at all):

[Diagram: hexagonal architecture layers, with the Domain Layer at the core]

My question is then, does the Domain Layer ever do any work or have any knowledge of data persistence whatsoever? Suppose we have some business logic which depends on data being retrieved and then persisted, should it always be the Application Layer which is orchestrating this?

Load everything required for the business logic to run -> Tell the Domain Layer to run all the business logic -> Extract the results of the business logic and tell the Infrastructure Layer to persist them.

In this sense, wouldn't the Application Layer always need to keep track of any result the Domain Layer has calculated, and therefore always have to implement some kind of Unit of Work pattern to track those results?

Would the Domain Layer ever work with a repository, or even an interface to a repository? Some sources seem to suggest this is fine, which, from my perspective, completely contradicts the diagram.

FBryant87

1 Answer


Suppose we have some business logic which depends on data being retrieved and then persisted, should it always be the Application Layer which is orchestrating this?

In an idealized setting, you have a clean separation of concerns: the domain model computes things using information already available in local memory, and the application code orchestrates the copying of information to and from local memory.

Expressed somewhat differently: we should be able to replace all of the plumbing without disturbing the implementation of the domain model at all.

In the easy version, we know right away what information we need locally, so the application fetches copies of everything, the domain model computes the changes, and then the application code copies the local changes to wherever they need to be.
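
As a rough illustration of that easy version (the names here - Order, OrderRepository, ApproveOrderHandler - are invented for the sketch, and each type would live in its own file):

```java
// Domain model: pure, in-memory computation, no knowledge of persistence.
public class Order {
    private final int totalCents;
    private boolean approved;

    public Order(int totalCents) {
        this.totalCents = totalCents;
    }

    public void approve(int creditLimitCents) {
        // Business rule evaluated entirely against state already in memory.
        if (totalCents > creditLimitCents) {
            throw new IllegalStateException("Order exceeds credit limit");
        }
        this.approved = true;
    }

    public boolean isApproved() {
        return approved;
    }
}

// Port the application depends on; infrastructure supplies the implementation.
public interface OrderRepository {
    Order findById(String orderId);
    void save(Order order);
}

// Application layer: orchestrates copying information in and out.
public class ApproveOrderHandler {
    private final OrderRepository orders;

    public ApproveOrderHandler(OrderRepository orders) {
        this.orders = orders;
    }

    public void handle(String orderId, int creditLimitCents) {
        Order order = orders.findById(orderId); // 1. fetch a copy into memory
        order.approve(creditLimitCents);        // 2. domain model computes the change
        orders.save(order);                     // 3. copy the change back out
    }
}
```

Swapping the OrderRepository implementation (SQL, in-memory, HTTP) changes nothing inside Order; that's the "replace all of the plumbing" property.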

That gets trickier in cases where we don't necessarily know up front all of the information we are going to need. In those cases, you end up choosing between asking the domain model what information it needs and then fetching it, or passing to the domain model the capability to look up the information itself.

You probably wouldn't work with a REPOSITORY directly from the domain model - not exactly; you'd be more likely to see a DOMAIN SERVICE. In other words, the capability to fetch some information might have some representation that you pass to your domain model as an argument, so that it can, for example, perform its own queries.

Note: in the original book by Evans, domain services are a pattern that appears when modeling a domain (chapter 5), whereas repositories are a pattern that shows up in lifecycle management (chapter 6).
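
For example, a minimal sketch of passing such a capability in (ExchangeRateService and Invoice are invented names; the interface is assumed to live in the domain, in two separate files, while infrastructure provides the implementation):

```java
import java.math.BigDecimal;

// Domain service interface, expressed purely in domain terms.
// How rates are actually obtained (database, HTTP, cache) is an infrastructure detail.
public interface ExchangeRateService {
    BigDecimal rateFor(String fromCurrency, String toCurrency);
}

// The capability is passed in as an argument, so the domain model can perform
// its own lookups without knowing anything about persistence or transport.
public class Invoice {
    private final BigDecimal amount;
    private final String currency;

    public Invoice(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public BigDecimal totalIn(String targetCurrency, ExchangeRateService rates) {
        return amount.multiply(rates.rateFor(currency, targetCurrency));
    }
}
```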

Distributed information usually involves failure modes, and we would normally prefer that our domain code not be cluttered with a bunch of failure-management logic, in much the same way that we don't want it cluttered with persistence concerns.


VoiceOfUnreason