
The way DUnit normally works is that you write some published methods and DUnit runs them as tests. What I want to do is a little different: I want to create tests at run time based on data. I'm trying to test a particular module that processes input files to create output files. I have a set of test input files with corresponding known-good output files. The idea is to dynamically create tests, one per input file, that process the inputs and check the outputs against the known-good ones.

The actual source of the data, however, isn't important; the difficulty is making DUnit behave in a data-driven way. For the sake of this question, suppose the data source is just a random number generator. Here is a concrete example problem that gets to the heart of the difficulty:

Create some test objects (TTestCase or whatever) at runtime, say 10 of them, where each one

  1. Is named at run time from a randomly generated integer. (By 'name' I mean the name of the test that appears in the test-runner tree.)
  2. Passes or fails based on a random integer. Pass for even, fail for odd.

DUnit looks as if it was designed with enough flexibility to make such things possible, but I'm not sure that it is. I tried to create my own test class by inheriting from TAbstractTest and ITest, but some crucial methods weren't accessible. I also tried inheriting from TTestCase, but that class is closely tied to the idea of running published methods, and the tests are named after those methods; I couldn't just have a single method called, say, 'go', because then all my tests would be called 'go', and I want each test to be individually named.

Alternatively, is there some other framework besides DUnit that could do what I want?

dan-gph

2 Answers

program UnitTest1;

{$IFDEF CONSOLE_TESTRUNNER}
{$APPTYPE CONSOLE}
{$ENDIF}

uses
  Forms, Classes, SysUtils,
  TestFramework,
  GUITestRunner,
  TextTestRunner;

{$R *.RES}

type
  // A test case constructed from a data value rather than discovered
  // through a published method name.
  TIntTestCase = class(TTestCase)
  private
    FValue: Integer;
  public
    constructor Create(AValue: Integer); reintroduce;
    // Overriding GetName lets each instance report its own name
    // in the test-runner tree.
    function GetName: string; override;
  published
    // The single published method that every instance executes.
    procedure Run;
  end;

{ TIntTestCase }

constructor TIntTestCase.Create(AValue: Integer);
begin
  inherited Create('Run'); // tell TTestCase which published method to run
  FValue := AValue;
end;

function TIntTestCase.GetName: string;
begin
  Result := Format('Run_%.3d', [FValue]);
end;

procedure TIntTestCase.Run;
begin
  Check(FValue mod 2 = 0, Format('%d is not an even value', [FValue]));
end;

procedure RegisterTests;
const
  TestCount = 10;
  ValueHigh = 1000;
var
  I: Integer;
begin
  Randomize;
  // Register TestCount independent test instances, each wrapping a
  // different random value between 1 and ValueHigh.
  for I := 0 to TestCount - 1 do
    RegisterTest(TIntTestCase.Create(Random(ValueHigh) + 1));
end;

begin
  Application.Initialize;
  RegisterTests;
  if IsConsole then
    TextTestRunner.RunRegisteredTests
  else
    GUITestRunner.RunRegisteredTests;
end.
Ondrej Kelle
  • That is awesome. Thanks. I was trying something similar, but a few missteps along the way added up to make it not work. Thanks again. – dan-gph Apr 01 '09 at 12:26
  • What do you suggest to have both data-driven cases and ordinary ones in the same test class? – Thijs van Dien Feb 23 '16 at 23:53
  • @Thijs van Dien Just add class function Suite: ITestSuite; override; to your normal TTestCase class containing the ordinary (published) tests, with Result := inherited Suite; as its first line. Then, after that line, run your test registration code against the resulting suite to register the dynamic tests as shown in OK's post, e.g. Suite.AddTest(TIntTestCase.Create(...)), etc. – W.Prins Aug 07 '17 at 11:40
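
A minimal sketch of that suggestion, reusing TIntTestCase from the code above; the class name TMixedTests and the method TestOrdinary are placeholders for whatever your normal test class contains:

type
  TMixedTests = class(TTestCase)
  public
    // Extend the suite built from the published methods with
    // dynamically created, data-driven cases.
    class function Suite: ITestSuite; override;
  published
    procedure TestOrdinary;
  end;

class function TMixedTests.Suite: ITestSuite;
var
  I: Integer;
begin
  Result := inherited Suite;                 // the ordinary published tests
  for I := 1 to 5 do                         // then the data-driven ones
    Result.AddTest(TIntTestCase.Create(I));
end;

procedure TMixedTests.TestOrdinary;
begin
  Check(True, 'an ordinary published test');
end;

Registering the class with RegisterTest(TMixedTests.Suite); then shows the ordinary test and the dynamic ones under the same node in the test-runner tree.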

I'd say you basically want a single "super-test" method that calls the other tests, one for each data file. This is what we do with one of our DUnit tests: you just load each available file in turn in a loop and run the test with Check calls as appropriate.
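
A rough sketch of that approach, assuming each known-good output sits next to its input file with a '.expected' extension; TFileDrivenTests (an ordinary TTestCase descendant with TestAllDataFiles published), ProcessInputFile and the TestData folder are placeholders for the real module and layout:

procedure TFileDrivenTests.TestAllDataFiles;
var
  SR: TSearchRec;
  InputFile: string;
  Expected: TStringList;
begin
  Expected := TStringList.Create;
  try
    // One published "super-test" that walks every available input file.
    if FindFirst('TestData\*.in', faAnyFile, SR) = 0 then
    try
      repeat
        InputFile := 'TestData\' + SR.Name;
        Expected.LoadFromFile(ChangeFileExt(InputFile, '.expected'));
        // ProcessInputFile stands in for the module under test and is
        // assumed to return the output it produced for this input.
        CheckEquals(Expected.Text, ProcessInputFile(InputFile),
          'Mismatch for ' + SR.Name);
      until FindNext(SR) <> 0;
    finally
      FindClose(SR);
    end;
  finally
    Expected.Free;
  end;
end;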

The alternative, which we also use in the same project to test the final app and its data loading and analysis, is to use something like FinalBuilder to loop on the application (presumably you could loop on the DUnit app too and pass it a parameter) with the various data files. The app runs, does its analysis, and quits after saving. Another app then compares the resulting data with the ideal data and reports a failure if appropriate.
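
If you did loop on the DUnit app itself, the main block of the program in the first answer could take the data file as a parameter. A sketch, where DataFile and RegisterTestsFor are assumptions rather than existing DUnit pieces:

var
  DataFile: string;

begin
  Application.Initialize;
  // e.g. the build tool runs: UnitTest1.exe TestData\case001.in
  DataFile := ParamStr(1);
  RegisterTestsFor(DataFile);    // hypothetical variant of RegisterTests
  TextTestRunner.RunRegisteredTests;
end.

The build tool then inspects each run's console output (or the runner's exit behaviour, depending on your DUnit version) to decide whether the overall build fails.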

mj2008
  • The super test idea won't work in this case because each test takes some time to run and typically only a few will break at a time. We need to be able to isolate only those tests in order to fix the code. Running the whole super test every time would unfortunately be very slow. – dan-gph Apr 01 '09 at 11:57
  • @dangph: That's where the FinalBuilder test comes in. The individual test may fail, but I use its try/catch facility to record the single failure, carry on with the rest, and then fail the build right after the last one. – mj2008 Apr 01 '09 at 12:11
  • @mj2008, what we are doing is refactoring this rather gnarly module. As we refactor things, tests will break. We want to run just the broken tests (via DUnit checkboxes) while we fix the code because the tests are pretty slow, and we will typically have to loop a few times. – dan-gph Apr 01 '09 at 12:44