
I've got the opportunity to rewrite the core of an internally-developed application that my employer uses for document control. My "core" requirements list goes something like this:

  • Make it easier to import/export to various formats (collection of files + fairly extensive metadata being the common factor)
  • Make it easier to add new fields (whose presence is data-driven rather than global) at multiple levels
  • Introduce several new pieces of functionality which violate the fundamental premise of the old system (basically, the structure of metadata surrounding documents is undergoing a radical change)
  • Maintain the ability to tightly control document and metadata relations and conventions

I've been playing around with an architecture that uses serialization as its primary means of communication with the world, and so far I'm pleased with the results - I can serialize to & deserialize from a user interface, an XML store, and a database with ease, without modifying the core classes to accommodate the various sources and sinks. I consider this to be fundamentally a hexagonal architecture - it treats every serialization target the same way, as an injectable dependency for the Serialize method.
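
Roughly, the shape I'm playing with looks something like this - a stripped-down Java sketch of the idea rather than my real code, with invented names (DocumentSink, XmlSink) standing in for the actual classes:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    /** Port: the core only ever sees this interface, never a concrete target. */
    interface DocumentSink {
        void writeField(String name, String value);
        void writeChild(String name, Document child);
    }

    /** Core class: the serialization target is an injectable dependency. */
    class Document {
        private final Map<String, String> fields = new LinkedHashMap<>();
        private final List<Document> attachments = new ArrayList<>();

        void setField(String name, String value) { fields.put(name, value); }
        void addAttachment(Document child) { attachments.add(child); }

        // The core knows nothing about XML, SQL, or widgets; it just pushes
        // its state through whatever sink it is handed.
        void serialize(DocumentSink sink) {
            fields.forEach(sink::writeField);
            attachments.forEach(child -> sink.writeChild("attachment", child));
        }
    }

    /** Adapter: one per target (naive XML shown; no escaping, illustration only). */
    class XmlSink implements DocumentSink {
        private final StringBuilder out = new StringBuilder();

        public void writeField(String name, String value) {
            out.append('<').append(name).append('>')
               .append(value)
               .append("</").append(name).append(">\n");
        }

        public void writeChild(String name, Document child) {
            out.append('<').append(name).append(">\n");
            child.serialize(this);
            out.append("</").append(name).append(">\n");
        }

        String toXml() { return out.toString(); }
    }

The database and UI adapters are just sibling implementations of the same interface (with a mirror-image "source" interface for deserialization), which is what lets the core classes stay untouched when a new source or sink is added.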

This is my first go-around with this approach, however, and I'm wondering if anyone has experience with it and, if so, any insights or advice to share.

Mike Burton

1 Answer


My first instinct is that anything that depends heavily on serialization of your core classes is likely to run into hairy versioning issues - changes to your core will require simultaneous modification of all of your serialization providers & consumers (and probably all of your persistent stores), whereas a service/contract-based approach would allow the interface to remain static where possible.
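
To make that concrete, the alternative I have in mind is something like the following - a rough Java sketch with invented names, not a prescribed design - where external stores and consumers depend on an explicitly versioned contract rather than on the core type itself:

    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Stand-in for the freely-evolving core class (illustrative only). */
    class CoreDocument {
        String title;
        Map<String, String> metadata = new LinkedHashMap<>();
    }

    /** A stable, explicitly versioned contract that stores and consumers code against. */
    class DocumentContractV1 {
        static final int SCHEMA_VERSION = 1;
        String title;
        Map<String, String> metadata = new LinkedHashMap<>();
    }

    /** Mapping happens once, at the boundary, so core changes don't ripple outward. */
    class ContractMapper {
        static DocumentContractV1 toV1(CoreDocument doc) {
            DocumentContractV1 out = new DocumentContractV1();
            out.title = doc.title;
            out.metadata.putAll(doc.metadata);
            return out;
        }
    }

When the core changes, only the mapper changes; when the contract itself has to change, you add a V2 alongside V1 instead of breaking every existing store and consumer at once.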

However, it's really difficult to give any sort of opinion without making a large set of assumptions about how the system is going to be used & evolve over time - if you're happy with the approach, continue with it & let us know how it goes.

Sam
  • I think I can at least avoid major issues with consumers; I'm looking at an ORM-ish scheme for dealing with persistence and UI elements (we already have one in place for UI, so it makes sense to re-use that structure). To some extent "custom" fields should help with that. My initial impulse is to tear out a lot of the metadata from the core tables and put it in a hooked-in custom field structure. You're certainly right that versioning isn't going to be the simplest thing to deal with - hard to keep that in mind when the current system is such a mess! – Mike Burton Feb 03 '10 at 18:38
  • I can't see you avoiding a custom field structure if the metadata is radically changing (esp. if users can modify it) - it will give you more options for building something more change-tolerant at the front and back ends. You will probably lose strong typing in your core classes and the db schema will be a little more obscure; you'll also need to decide whether to model your metadata relations in code or in the db. You may want to consider leaving some metadata out of the custom fields (e.g. timestamps, doc types & the like). – Sam Feb 04 '10 at 03:50
  • Yeah, I'm building a prototype of it now, making heavy use of loose-linked metadata, and I'm currently trying to define the "core" fields. – Mike Burton Feb 11 '10 at 21:09
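
For illustration, the loose-linked custom-field structure discussed in the comments above is essentially an entity-attribute-value layout; a minimal Java sketch (class and field names invented, not anyone's actual schema) might look like this, with timestamps, doc types and the like kept as ordinary strongly-typed "core" fields as Sam suggests:

    import java.util.ArrayList;
    import java.util.List;

    /** Field definitions are data, not columns, so new fields need no schema change. */
    class FieldDefinition {
        final String name;   // e.g. "ReviewCycle"
        final String type;   // kept as a string; compile-time typing is traded away here
        FieldDefinition(String name, String type) { this.name = name; this.type = type; }
    }

    /** One value of one custom field on one document. */
    class FieldValue {
        final FieldDefinition definition;
        final String rawValue;   // validated against definition.type at the edges
        FieldValue(FieldDefinition definition, String rawValue) {
            this.definition = definition;
            this.rawValue = rawValue;
        }
    }

    /** A document keeps a few fixed core fields plus any number of custom ones. */
    class FlexibleDocument {
        String id;                  // core, strongly typed
        java.time.Instant created;  // core metadata (timestamps, doc type) stays out of the EAV part
        final List<FieldValue> customFields = new ArrayList<>();
    }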