
I am working with an Arduino Mega 2560. I have two (2) GLCD displays attached to the processor. The display library is U8g2. I am using the Arduino IDE as a C++ compiler.

The bitmap file format in use is "xmp" and the files are generated using the GIMP software package.

I need to be able to reduce the size of a monochrome bitmap image from about 600x300 to 128x64. In researching the issue, I found several methods to work with, including bilinear, cubic, and bicubic interpolation. Most of these articles deal with color BMP files, which are structured completely differently from the xbm files I am working with.

I tried to use a bilinear method and that did not work at all.

Referenced article: Image downscaling algorithm

Recently, I found an article on this site covering the issue, written by Mark Ransom back in March of 2012. It looks like a good way to go, as it uses an averaging technique and seems to work with the same type of image I am working with.

  1. In Mark's post, he uses a two-dimensional array to store the source and destination bitmap images. I don't understand why. The images I have been working with are all stored in a single-dimensional array. The libraries I have been using all use a single-dimensional array with two variables (width, height) to tell the library how to use the database.

  2. If I were to use the two-dimensional array, I would have to convert the single-dimensional database into a two-dimensional array. Once I had run the routine to downsize the image, I would have to convert the data back into a single-dimensional array to be used by the display drivers. Seems like a lot of work. Any thoughts?

  3. In Mark's code, one dimension of the array represents the "x" axis and the other the "y" axis. But since the database I am working with provides the image data top to bottom, left to right, with the "width" variable providing the carriage-return/line-feed function (so to speak), there is really not much need for the "y" axis array. It looks to me like the addition of the second dimension only doubles the size of the database. A two-dimensional database would consume 2,048 bytes of RAM, and currently I only have 8k to work with. How would I load the arrays? x1 = 1, y1 = 1 for an "On" pixel and x1 = 0, y1 = 0 for an "Off" pixel?

  4. It is my understanding that the routine in Mark's article is an averaging method. It appears to me that the code is working with the image data one byte at a time. Considering that each byte of data in the xmp data structure represents eight (8) pixels, how does this work?

  5. Since each bit in the database represents one pixel, do I really need to do the averaging at the bit level? If that were the case, would we do the four-point averages? Also, how do you deal with crossing over from one byte to another? I have done some bit splicing, but nothing with this level of complexity.

I am not a programmer but I do understand some of what I am trying to do. I am willing to put whatever effort and time is required on my part to understand this (I'm retired).

  • Not sure this really fits on StackOverflow. Not really a programming question. And, you've actually asked many questions about the article in question. I'd suggest thinking about on-topic ways to ask specific questions (and one question per post). As written, this looks like more of a discussion kickoff (which doesn't fit on StackOverflow). Also, not saying it would make it on-topic, but you didn't even include any code from, or a link to, the article you mentioned – David Makogon Apr 25 '17 at 00:10
  • Sorry if my question is not in conformance with Stackoverflow's policies. This is my first shot at asking for help on this web site. I did add a link to the article I found on stackoverflow. As for a lot of questions, I have a lot of questions about the process in question. It is a very complicated task. I was not aware that there was a one question per post policy. – Ken Kloster Apr 25 '17 at 00:46
  • There isn't necessarily a 1-question-per-post policy. But this question is enormous. And an answer would then need to encompass all 5 questions. But again - it's not a specific programming question. You're asking for an explanation for someone's implementation. And any answer would just be guessing. Or would require someone to learn the code and do an analysis. – David Makogon Apr 25 '17 at 02:22
  • Good point. I was kind of hoping Mark would pick up the question, as he wrote the code in question. I have not found a way to contact him directly. Thanks for your input. I will set up another thread asking for help with this issue directly. – Ken Kloster Apr 25 '17 at 14:16

1 Answer


Preface for the lazy:

The final (working) algorithm can be found in the last source code listing.

I documented the intermediate steps as well, because the questioner stated that he is "not a programmer". (Besides, I don't feel able to explain the final code without the preceding steps...)

Introduction

Some years ago, I found an article in the German computer journal c't about up-scaling and down-scaling of RGB images. These algorithms became part of my personal library, and I used them from time to time, e.g. for adjusting the size of images in our software – mostly to prepare OpenGL textures properly.

The basic idea of this article was to consider the spatial ratio with which a source pixel (imagined as a square) covers a destination pixel (or vice versa). Therefore, the author distinguished between up-scaling and down-scaling. Partly covering pixels were accounted for using float values.

When reading the question, I realized two special cases:

  1. the requirement to deal with bitmaps (due to the monochrome LCD output)

  2. the ratio of source to destination width and height is 75/16.

The ratio 75/16 means that 75×75 source pixels map to 16×16 destination pixels, i.e. 4.6875×4.6875 source pixels to one destination pixel. Therefore, there are pixels in the source image which map partly to two or even four neighbouring destination pixels.

Concerning your special requirements, I got the idea that in this special case it should be possible to do it with integer arithmetic only. (Given your hint that the destination platform is an embedded CPU, this should be welcome, as such CPUs usually don't provide native floating-point instructions.)

Mastering 1D

For warm-up, I started with

  1. bytes instead of bits, and

  2. the down-scale of a one-line image:

The idea is to accumulate source pixel values into a gray level in the range [0,75], which is then binarized again using a binary threshold.

#include <iostream>

// convenience type for a byte
typedef unsigned char uint8;

// ratio of source image size and destination image size
enum { nR = 75, dR = 16 };

// source image size
enum { wSrc = 1 * nR };

// destination image size
enum { wDst = dR * wSrc / nR };

// binary threshold
enum { tBin = nR / 2 };

// source image
static uint8 imgSrc[wSrc] = {
  1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 0 ... 15
  1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 16 ... 31
  1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 32 ... 47
  1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 48 ... 63
  1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0                 // 64 ... 74
};

// destination image
static uint8 imgDst[wDst];

// returns a source pixel.
inline int getPixel(int x) { return imgSrc[x]; }

// stores a destination pixel
inline void setPixel(int x, int value)
{
  imgDst[x] = !!value; // forces destination value to 0 or 1
}

// prints an image.
void printImg(
  int w, // width of image
  const uint8 *img) // the image data
{
  for (int x = 0; x < w; ++x) std::cout << (char)('0' + img[x]);
  std::cout << std::endl;
}

// main function.
int main()
{
  // print source image for visual check
  std::cout << "Source image (" << wSrc << "):" << std::endl;
  printImg(wSrc, imgSrc);
  // scale x
  int xSrc = 0; int n = 0;
  for (int xDst = 0; xDst < wDst; ++xDst) {
    int value = 0; // destination pixel accumulator
    // process right of cut pixel
    if (n) { value += (dR - n) * getPixel(xSrc); ++xSrc; n -= dR; }
    n += nR;
    // process full pixels
    for (; n >= dR; ++xSrc, n -= dR) value += dR * getPixel(xSrc);
    // process left of cut pixel
    if (n) value += n * getPixel(xSrc);
    // store value: 0 ... tBin - 1 -> 0, tBin ... nR -> 1
    setPixel(xDst, value >= tBin);
  }
  // print destination image for visual check
  std::cout << "Destination image (" << wDst << "):"
    << std::endl;
  printImg(wDst, imgDst);
  // done
  return 0;
}

I compiled and tested in VisualStudio 2013 and got the following output:

Source image (75):
111111110000000011111111000000001111111100000000111111110000000011111111000
Destination image (16):
1101100110110010

Remembering that roughly 5 source pixels map to 1 destination pixel, the output looks quite reasonable to me.

Extension to 2D

The next step was to extend the first sample to two-dimensional images. I soon realized that my accumulation approach had to be extended to a full destination image row. This was achieved using a values array instead of a single value. Following my first approach, source image rows which are split have to be processed twice. To prevent code duplication, I introduced helper functions for this: accuPixel() and accuRow().

#include <cassert>
#include <iostream>

// convenience type for a byte
typedef unsigned char uint8;

// convenience type for an image
struct Image {
  int w, h; // width and height of image
  uint8 *data; // image data

  int getPixel(int x, int y) const
  {
    assert(x >= 0 && x < w);
    assert(y >= 0 && y < h);
    return data[y * w + x]; // row-major indexing, consistent with print()
  }

  void setPixel(int x, int y, int value)
  {
    assert(x >= 0 && x < w);
    assert(y >= 0 && y < h);
    data[y * w + x] = !!value; // '!!' forces dest. value to 0 or 1
  }

  void print() const
  {
    for (int y = 0; y < h; ++y) {
      for (int x = 0; x < w; ++x) {
        std::cout << (char)('0' + data[y * w + x]);
      }
      std::cout << std::endl;
    }
  }
};

// ratio of source image size and destination image size
enum { nR = 75, dR = 16 };

// source image size
enum { wSrc = 1 * nR, hSrc = 1 * nR };

// destination image size
enum { wDst = dR * wSrc / nR, hDst = dR * hSrc / nR };

// binary threshold
enum { tBin = nR * nR / 2 };

// source image
static uint8 dataSrc[wSrc * hSrc];
static Image imgSrc = {
  /* int w, h: */ wSrc, hSrc,
  /* uint8 *data: */ dataSrc
};

// destination image
static uint8 dataDst[wDst * hDst];
static Image imgDst = {
  /* int w, h: */ wDst, hDst,
  /* uint8 *data: */ dataDst
};

/* accumulates value for a destination pixel from the according number
 * of source pixels in one source image row.
 */
void accuPixel(
  int &value, // the accumulation value (updated)
  const Image &imgSrc, // the source image
  int &xSrc, // column index of source pixels (updated)
  int ySrc, // row index of source pixels
  int &n, // counter of accumulated values (updated)
  int fY) // vertical weight of row
{
  // process right part of cut pixel
  if (n) {
    value += fY * (dR - n) * imgSrc.getPixel(xSrc, ySrc);
    ++xSrc; n -= dR;
  }
  n += nR;
  // process full pixels
  for (; n >= dR; ++xSrc, n -= dR) {
    value += fY * dR * imgSrc.getPixel(xSrc, ySrc);
  }
  // process left part of cut pixel
  if (n) value += fY * n * imgSrc.getPixel(xSrc, ySrc);
}

/* accumulates values for one destination image row from one source
 * image row.
 */
void accuRow(
  int wDst, // width of destination image
  int *values, // accumulation values for destination row
  const Image &imgSrc, // the source image
  int ySrc, // row index of source pixels
  int fY) // vertical weight of row
{
  for (int xSrc = 0, n = 0, xDst = 0; xDst < wDst; ++xDst) {
    accuPixel(values[xDst], imgSrc, xSrc, ySrc, n, fY);
  }
}

// main function
int main()
{
  // fill source image with a chess board pattern
  for (int y = 0; y < hSrc; ++y) {
    for (int x = 0; x < wSrc; ++x) {
      imgSrc.setPixel(x, y, (x % 16 < 8) == (y % 16 < 8));
    }
  }
  // print source image for visual check
  std::cout << "Source image (" << wSrc << 'x' << hSrc << "):"
    << std::endl;
  imgSrc.print();
  // scale source image to destination image
  int ySrc = 0; int m = 0;
  for (int yDst = 0; yDst < hDst; ++yDst) {
    int values[wDst];
    for (int &value : values) value = 0; // init accu values
    // process bottom of cut row
    if (m) {
      accuRow(imgDst.w, values, imgSrc, ySrc, dR - m);
      ++ySrc; m -= dR;
    }
    m += nR;
    // process full rows
    for (; m >= dR; ++ySrc, m -= dR) {
      accuRow(imgDst.w, values, imgSrc, ySrc, dR);
    }
    // process top of cut row
    if (m) accuRow(imgDst.w, values, imgSrc, ySrc, m);
    // process accumulated values
    for (int xDst = 0; xDst < wDst; ++xDst) {
      imgDst.setPixel(xDst, yDst, values[xDst] >= tBin);
    }
  }
  // print destination image for visual check
  std::cout << "Destination image (" << wDst << 'x' << hDst << "):"
    << std::endl;
  imgDst.print();
  // done
  return 0;
}

The output of the program is (like the input) a checker board. However, the output checker board does not have equally sized cells, due to the interpolation and the subsequent binarization: each 8-pixel half cell maps to 8 * 16 / 75 ≈ 1.7 destination pixels, which cannot be rendered evenly.

The Actual Down-Scaling of Bit Maps

After the scaling did what I expected, I gave the sample code its finishing touches:

The Image class was modified to support bit maps. This would have been easy if I had used std::vector<bool> (the specialized version of std::vector<>), which packs values as bits, and it would probably have simplified parts of the code. I decided against std::vector<bool> because I'm uncertain how the data is provided in the OP. I believe my "explicit" C++ sample code is simpler to adapt to the existing data model on the questioner's platform.

I considered file I/O to make the sample more flexible. I'm not sure about the image format in the OP. My first thought was that XMP was simply a typo meaning XPM. But then I became suspicious and googled a little. Thus, I found XMP. Could this be what's meant? If I understood it correctly, XMP is a standard for metadata which may be added to certain image formats like JPEG and TIFF. So, I'm still uncertain...

To work around this, I decided to use a file format for which loading and saving need only a few lines of code: PBM.
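For reference, a PBM file needs almost no parsing. The ASCII variant (P1) of a 5×5 "plus" sign looks like this (the binary variant P4, used below, instead stores each row as byte-aligned packed bits after the same kind of header):

```
P1
# a 5x5 sample image; 1 = black, 0 = white
5 5
0 0 1 0 0
0 0 1 0 0
1 1 1 1 1
0 0 1 0 0
0 0 1 0 0
```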

Once I had implemented the PBM I/O, I stumbled over two issues which IMHO are worth noting:

  1. If the length of an image row is not a multiple of 8: are the rows byte-aligned or not? Thus, I added the _bPR ("bits per row") member to my Image class. In the case of PBM, the rows are byte-aligned. (I converted the Wikipedia 'J' sample image with GIMP from the ASCII to the RAW version to check this.)

  2. The first working version (which didn't crash) produced an output image which looked not completely wrong but somehow "wrong in stripes". I came to the conclusion that I had stored the bits within a byte in the wrong order. (Of the two possible solutions, I initially chose the wrong one.) The correct way is that the leftmost pixel of a byte has to be stored in its most significant bit. (For the opposite case, the bit shifting in Image::getPixel() and Image::setPixel() would have to be changed. I left the, in case of PBM, wrong versions as disabled code, just in case.)

The final sample code:

#include <cassert>
#include <cstdlib> // strtol
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>

// convenience type for bytes
typedef unsigned char uint8;

// image helper class
class Image {
  private: // variables:
    int _w, _h; // image size
    int _bPR; // bits per row
    uint8 *_data; // image data

  public: // methods:
    // constructor.
    Image(): _w(0), _h(0), _bPR(0), _data(nullptr) { }
    // destructor.
    ~Image() { free(); }
    // returns width of image.
    int w() const { return _w; }
    // returns height of image.
    int h() const { return _h; }
    // returns data.
    const uint8* data() const { return _data; }
    // returns data size (in bytes).
    size_t size() const { return (_h * _bPR + 7) / 8; }
    // clears image.
    void free()
    {
      delete[] _data; _data = 0; _w = _h = _bPR = 0;
    }
    // allocates image data.
    uint8* alloc( // returns allocated buffer or 0 in case of error
      int w, // image width
      int h, // image height
      int bPR) // bits per row
    {
      assert(w >= 0 && w <= bPR);
      assert(h >= 0);
      free();
      size_t size = (h * bPR + 7) / 8;
      if (size && (_data = new uint8[size])) {
        _w = w; _h = h; _bPR = bPR;
      }
      return _data;
    }
    // returns pixel.
    int getPixel(
      int x, // column
      int y) // row
    const {
      assert(x >= 0 && x < _w);
      assert(y >= 0 && y < _h);
#if 0 // wrong for PBM
      int b = y * _bPR + x, bit = b % 8; // most left pixel is LSB
#else // correct for PBM
      int b = y * _bPR + x, bit = 7 - b % 8; // most left pixel is MSB
#endif // 0
      return _data[b / 8] >> bit & 1;
    }
    // sets pixel.
    void setPixel(
      int x, // column
      int y, // row
      int value) // value (should be 0 or 1)
    {
      assert(x >= 0 && x < _w);
      assert(y >= 0 && y < _h);
      int b = y * _bPR + x;
#if 0 // wrong for PBM
      uint8 *pB = _data + b / 8, bit = b % 8; // most left pixel is LSB
#else // correct for PBM
      uint8 *pB = _data + b / 8, bit = 7 - b % 8; // most left pixel is MSB
#endif // 0
      *pB &= (uint8)~(1 << bit); *pB |= !!value << bit; // bit fiddling
    }
};

// reads a PBM binary file.
void readPBM(
  std::istream &in, // input stream (to read from)
  Image &img) // image to store read data into
{
  std::string buffer;
  std::getline(in, buffer);
  if (buffer != "P4") {
    throw "ERROR! File is not a PBM binary file.";
  }
  do {
    std::getline(in, buffer);
  } while (buffer[0] == '#');
  std::istringstream sIn(buffer);
  int w = 0, h = 0;
  sIn >> w >> h;
  // PBM stores rows aligned to bytes
  int bitsPerRow = (w + 7) & ~0x7;
  // allocate data memory
  char *data = (char*)img.alloc(w, h, bitsPerRow);
  // read rest of file at once
  in.read(data, img.size());
}

// writes a PBM binary file.
void writePBM(
  std::ostream &out, // output stream (to write to)
  const Image &img) // image which shall be written
{
  out << "P4" << std::endl
    << img.w() << ' ' << img.h() << std::endl;
  out.write((const char*)img.data(), img.size());
}

// converts a text to an integer.
int strToI( // returns the integer or throws
  const char *text) // text to convert
{
  const char *end = text; int value = strtol(text, (char**)&end, 0);
  if (end == text || *end != '\0') throw "Not a number.";
  return value;
}

/* accumulates value for a destination pixel from the according number
 * of source pixels in one source image row.
 */
void accuPixel(
  int &value, // the accumulation value (updated)
  const Image &imgSrc, // the source image
  int &xSrc, // column index of source pixels (updated)
  int ySrc, // row index of source pixels
  int &n, // counter of accumulated values (updated)
  int fY, // vertical weight of row
  int nR, // numerator of ratio (source to destination image size)
  int dR) // denominator of ratio (source to destination image size)
{
  // process right part of cut pixel
  if (n) {
    value += fY * (dR - n) * imgSrc.getPixel(xSrc, ySrc);
    ++xSrc; n -= dR;
  }
  n += nR;
  // process full pixels
  for (; n >= dR; ++xSrc, n -= dR) {
    value += fY * dR * imgSrc.getPixel(xSrc, ySrc);
  }
  // process left part of cut pixel
  if (n) value += fY * n * imgSrc.getPixel(xSrc, ySrc);
}

/* accumulates values for one destination image row from one source
 * image row.
 */
void accuRow(
  int wDst, // width of destination image
  int *values, // accumulation values for destination row
  const Image &imgSrc, // the source image
  int ySrc, // row index of source pixels
  int fY, // vertical weight of row
  int nR, // numerator of ratio (source to destination image size)
  int dR) // denominator of ratio (source to destination image size)
{
  for (int xSrc = 0, n = 0, xDst = 0; xDst < wDst; ++xDst) {
    accuPixel(values[xDst], imgSrc, xSrc, ySrc, n, fY, nR, dR);
  }
}

// scales source image to destination image.
void scale(
  const Image &imgSrc, // source image
  Image &imgDst, // destination image
  int nR, // numerator of ratio (source to destination image size)
  int dR, // denominator of ratio (source to destination image size)
  int tBin) // binary threshold e.g. nR * nR / 2
{
  // allocate space for destination image
  const int wDst = dR * imgSrc.w() / nR;
  const int hDst = dR * imgSrc.h() / nR;
  if (!imgDst.alloc(wDst, hDst, (wDst + 7) & ~7)) {
    throw "ERROR! Allocation of destination image failed!";
  }
  int *values = new int[wDst]; // aux. buffer to accumulate values
  for (int ySrc = 0, m = 0, yDst = 0; yDst < hDst; ++yDst) {
    // init accu values
    for (int i = 0; i < wDst; ++i) values[i] = 0;
    // process bottom of cut row
    if (m) {
      accuRow(wDst, values, imgSrc, ySrc, dR - m, nR, dR);
      ++ySrc; m -= dR;
    }
    m += nR;
    // process full rows
    for (; m >= dR; ++ySrc, m -= dR) {
      accuRow(wDst, values, imgSrc, ySrc, dR, nR, dR);
    }
    // process top of cut row
    if (m) accuRow(wDst, values, imgSrc, ySrc, m, nR, dR);
    // process accumulated values
    for (int xDst = 0; xDst < wDst; ++xDst) {
      imgDst.setPixel(xDst, yDst, values[xDst] > tBin);
    }
  }
  delete[] values; // free aux. buffer
}

// main function
int main( // returns 0 on success and another value in error case
  int argc, // number of command line arguments
  char **argv) // command line arguments
{
  // check for sufficient number of arguments
  if (argc <= 4) {
    std::cerr << "ERROR! Missing command line arguments." << std::endl;
    std::cout
      << "Usage:" << std::endl
      << argv[0] << " INFILE OUTFILE NR DR" << std::endl
      << "where" << std::endl
      << "INFILE ... file name of PBM input file (must exist)" << std::endl
      << "OUTFILE ... file name of PBM output file (overwritten if existing)" << std::endl
      << "NR ... numerator of ratio (src. to dest. image size)" << std::endl
      << "DR ... denominator of ratio (src. to dest. image size)" << std::endl
      << "NR and DR must be (not too large) positive integers: 0 < DR < NR" << std::endl;
    return 1; // ERROR!
  }
  try {
    // read command line arguments
    const char *fileIn = argv[1];
    const char *fileOut = argv[2];
    int nR;
    try {
      nR = strToI(argv[3]);
    } catch (const char*) {
      throw "ERROR in $3! (Not a number.)";
    }
    int dR;
    try {
      dR = strToI(argv[4]);
    } catch (const char*) {
      throw "ERROR in $4! (Not a number.)";
    }
    int tBin = nR * nR / 2; // might become cmd. line arg. also
    // read input file
    Image imgSrc;
    std::ifstream fIn(fileIn, std::ios::in | std::ios::binary);
    fIn.exceptions(std::ifstream::badbit);
    readPBM(fIn, imgSrc);
    // scale source image to destination image
    Image imgDst;
    scale(imgSrc, imgDst, nR, dR, tBin);
    // write output file
    std::ofstream fOut(fileOut, std::ios::out | std::ios::binary);
    fOut.exceptions(std::ofstream::badbit);
    writePBM(fOut, imgDst);
  } catch (const char *error) {
    std::cerr << error << std::endl;
    return 1; // ERROR!
  } catch (const std::exception &error) {
    std::cerr << error.what() << std::endl;
    return 1; // ERROR!
  }
  // done (probably successfully)
  return 0;
}

To test the sample code, I prepared a photo of mine as a sample image. The original photo shows cat Moritz playing with a screw:

Photo of cat Moritz with screw

I GIMPed it a little bit to get an appropriate sample image (mainly because GIMP can write, load, and display PBM files):

The PBM test image (converted back to PNG for proper display)

Although I did all development and tests in Visual Studio 2013, the following sample session was done with g++ (in Cygwin on Windows 10 (64 bit)):

$ g++ --version
g++ (GCC) 5.4.0

$ g++ -std=c++11 -o scale-bitmap scale-bitmap.cc 

$ ./scale-bitmap cat.bin.pbm out.bin.pbm 75 16

$

This produced the following output:

The output PBM image (converted back to PNG for proper display)

If I'm not wrong, the sample code just implements bilinear filtering, which is probably the second-worst approach after simply removing rows and columns from the source image.

As the sample output illustrates, the quality of the output is rather limited. Better results might be achieved with more sophisticated processing:

  1. A better interpolation may help. The Wikipedia articles Image scaling and Pixel art scaling algorithms probably provide a good start.

  2. Especially for monochrome images, Dithering could be an option.

All these nice things will definitely require more development effort and code (if not used out of a library).

However, a little improvement might be achieved by changing the binary threshold tBin. I didn't try it, but I can imagine it helping, because I played with the binary threshold in GIMP when I prepared the test image...

Last but not least

While writing this answer, I also found a similar question, SO: Image downscaling algorithm ...and saw, after posting, that the questioner had already mentioned it...

If I had separated nR and dR for horizontal and vertical scaling, the algorithm could be applied to non-uniform scaling as well. It shouldn't be too hard to change this, but it wasn't required in the OP.

Finally, I thought about limitations of the target platform. Concerning the source and destination image dimensions described in the OP, the highest accumulated value will be 75 * 75 = 5625 (down-scaling an area where all source pixels are 1 – a completely white (or black?) area). This is good news, because even if the C/C++ compiler for the Atmel ATmega provides only 16-bit integers, the sample code should work without harm.

Scheff's Cat
  • I would like to say thanks for your time. I am impressed with the quality of the reduction obtained. The Atmel 2560 processor does provide a 32 bit floating point function but it is quite slow. The processor has 256k of RAM for the program, but only allows 8k of RAM for dynamic variables. The algorithm I am using makes an average of 9 bits around each bit and deletes every other row and column. The quality of the reduction is poor. I am working with the Floyd-Steinberg dithering algorithm to clean up the image. I will give this technique a try. – Ken Kloster Apr 26 '17 at 15:17
  • @KenKloster I have two additional ideas for possible performance improvement about this: 1. It could be tried whether the explicit usage of 16 bit integers provides faster code. 2. The code could be ported to C. If your Atmel-C compiler supports C11 (or at least "dynamic" local arrays) then the `new int[]` could be replaced with an `int[]`. (C++ does not support "dynamic" local arrays.) May be the C port could gain a little extra speed-up/memory saving. There is nothing in this implementation which should/must be done in C++. I just followed your tag/requirement. – Scheff's Cat Apr 26 '17 at 16:26