22

I'm trying to use reduce() to combine a set of arrays in a "collated" order, so items with similar indexes are together. For example:

input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["first","second","third"]]

output = [ 'first','1','uno','one','second','2','dos','two','third','3','three','4' ]

It doesn't matter what order the items with the same index go in as long as they are together, so a result of 'one','uno','1'... is as good as what's above. I would like to do it using only immutable variables if possible.

I have a way that works:

    const output = input.reduce((accumulator, currentArray, arrayIndex)=>{
        currentArray.forEach((item,itemIndex)=>{
            const newIndex = itemIndex*(arrayIndex+1);
            accumulator.splice(newIndex<accumulator.length?newIndex:accumulator.length,0,item);
        })
        return accumulator;
    })

But it's not very pretty and I don't like it, especially because of the way it mutates the accumulator in the forEach method. I feel there must be a more elegant method.

I can't believe no one has asked this before but I've tried a bunch of different queries and can't find it, so kindly tell me if it's there and I missed it. Is there a better way?

To clarify, per the question in the comments: I would like to be able to do this without mutating any variables or arrays, as I'm doing with the accumulator.splice, and to only use functional methods such as .map or .reduce, not a mutating loop like .forEach.

jimboweb
  • 3,694
  • 3
  • 18
  • 40
  • _"Is there a better way?"_ What do you mean by _"better"_? _"I don't like it"_ is not an objective coding problem. What is the requirement? – guest271314 Feb 20 '19 at 23:06
  • Well for one thing I'd like to do it without mutating any variables, or introducing a `.forEach` loop which isn't really functional. And I think it can be done much more concisely as well. If I could zip a set of arrays together like in Python and then reduce(concat) them together, that would work too. – jimboweb Feb 20 '19 at 23:08
  • I think I pretty clearly said I'd like the result to not mutate any arrays or variables, as I am doing with the `accumulator.splice`. That is an objective requirement. – jimboweb Feb 20 '19 at 23:13
  • Does the requirement include a restriction on using `JSON.parse(JSON.stringify(input))` to avoid mutating the original array? – guest271314 Feb 21 '19 at 00:07
  • Wow, lots of great answers, thanks. I'm a little torn on which one to accept as a solution, so I have to try them out. I'll choose a solution in a day or so. but I voted you all up. Thanks again. – jimboweb Feb 21 '19 at 02:47
  • 1
    I've learned a lot from all of these answers. I had a very hard time picking one as "the" solution because there are so many interesting ways to approach the problem, so I feel bad picking only one. The one I picked is closest to what I had in mind, though not necessarily the best. Thanks everyone. – jimboweb Feb 22 '19 at 03:05

10 Answers

8

Maybe just a simple for...i loop that checks each array for an item at position i:

var input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["1st","2nd","3rd"]]

var output = []
var maxLen = Math.max(...input.map(arr => arr.length));

for (let i = 0; i < maxLen; i++) {
  input.forEach(arr => { if (arr[i] !== undefined) output.push(arr[i]) })
}

console.log(output)

Simple, but predictable and readable


Avoiding For Each Loop

If you need to avoid forEach, here's a similar approach where you could: get the max child array length, build the range of indexes the for loop would have produced ([0,1,2,3]), map over each index to pivot the arrays, flatten the multi-dimensional array, and then filter out the undefined cells.

First in discrete steps, and then as a one liner:

var input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["1st","2nd","3rd"]];

Multiple Steps:

var maxLen = Math.max(...input.map(arr => arr.length));
var indexes = Array(maxLen).fill().map((_,i) => i);
var pivoted = indexes.map(i => input.map(arr => arr[i] ));
var flattened = pivoted.flat().filter(el => el !== undefined);

One Liner:

var output = Array(Math.max(...input.map(arr => arr.length))).fill().map((_,i) => i)
               .map(i => input.map(arr => arr[i] ))
               .flat().filter(el => el !== undefined)
KyleMit
  • 45,382
  • 53
  • 367
  • 544
  • 1
    Thank you this a great and detailed answer and I learned a lot. You're right, a for loop is the most predictable outcome and I guess there's no reason not to do it. Your functional solution is great. I didn't think of using Math.max to start with the biggest array, then filling out the indexes. – jimboweb Feb 22 '19 at 04:37
7

Use Array.from() to create a new array with the length of the longest sub array. To get the length of the longest sub array, get an array of the lengths with Array.map() and take the max item.

In the callback of Array.from() use Array.reduceRight() or Array.reduce() (depending on the order you want) to collect items from each sub array. Take the item if the current index exists in the sub array. Flatten the sub arrays with Array.flat().

const input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["first","second","third"]]

const result = Array.from(
    { length: Math.max(...input.map(o => o.length)) },
    (_, i) => input.reduceRight((r, o) =>
      i < o.length ? [...r, o[i]] : r
    , [])
  )
  .flat();

console.log(result);
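
For reference, the reduce variant only changes the order within each index group. A minimal sketch of that variant (my illustration, not part of the original answer; the variable name byOriginalOrder is made up):

// Same construction with reduce instead of reduceRight (illustration only):
// each index group now comes out in the original sub-array order.
const byOriginalOrder = Array.from(
    { length: Math.max(...input.map(o => o.length)) },
    (_, i) => input.reduce((r, o) => i < o.length ? [...r, o[i]] : r, [])
  )
  .flat();

console.log(byOriginalOrder);
// [ "one", "uno", "1", "first", "two", "dos", "2", "second", "three", "3", "third", "4" ]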
Ori Drori
  • 145,770
  • 24
  • 170
  • 162
  • Cool! I like it! – KyleMit Feb 21 '19 at 00:22
  • -4 bytes `[...Array(Math.max(...input.map(o => o.length)))].map((_,i)=>{})` – guest271314 Feb 21 '19 at 00:33
  • @guest271314 - you'll need to fill the array afterwards, because you can't map sparse items. – Ori Drori Feb 21 '19 at 06:07
  • 1
    @OriDrori Not sure what you mean. You must not have tried the code. `[...Array(N)]` has the same output as `Array.from({length:N})`. The only difference is that `[...Array(N)].map((_,i)=>{})` is 4 less bytes than `Array.from({length:N},(_,i)=>{})`. – guest271314 Feb 21 '19 at 08:14
  • 1
    Thank you for suggesting reduceRight if I want to change the order that items with the same index go together. This is a really nice solution. – jimboweb Feb 22 '19 at 04:39
6

I made it with a recursive approach to avoid mutation.

let input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["first","second","third"]]

function recursion(input, idx = 0) {
  let tmp = input.map(elm => elm[idx])
                 .filter(e => e !== undefined)
  return tmp[0] ? tmp.concat(recursion(input, idx + 1)) : []
}

console.log(recursion(input))
bird
  • 1,609
  • 11
  • 23
6

Here's a recursive solution that meets the standards of elegance that you specified:

const head = xs => xs[0];

const tail = xs => xs.slice(1);

const notNull = xs => xs.length > 0;

console.log(collate([ ["one", "two", "three"]
                    , ["uno", "dos"]
                    , ["1", "2", "3", "4"]
                    , ["first", "second", "third"]
                    ]));

function collate(xss) {
    if (xss.length === 0) return [];

    const yss = xss.filter(notNull);

    return yss.map(head).concat(collate(yss.map(tail)));
}

It can be directly translated into Haskell code:

collate :: [[a]] -> [a]
collate []  = []
collate xss = let yss = filter (not . null) xss
              in map head yss ++ collate (map tail yss)

The previous solution took big steps to compute the answer. Here's a recursive solution that takes small steps to compute the answer:

console.log(collate([ ["one", "two", "three"]
                    , ["uno", "dos"]
                    , ["1", "2", "3", "4"]
                    , ["first", "second", "third"]
                    ]));

function collate(xss_) {
    if (xss_.length === 0) return [];

    const [xs_, ...xss] = xss_;

    if (xs_.length === 0) return collate(xss);

    const [x, ...xs] = xs_;

    return [x, ...collate(xss.concat([xs]))];
}

Here's the equivalent Haskell code:

collate :: [[a]] -> [a]
collate []           = []
collate ([]:xss)     = collate xss
collate ((x:xs):xss) = x : collate (xss ++ [xs])

Hope that helps.

Aadit M Shah
  • 67,342
  • 26
  • 146
  • 271
  • 1
    `let yss = filter (not . null) xss in map head yss ++ collate (map tail yss)` === `concatMap (take 1) xss ++ collate [t | (_:t) – Will Ness Feb 21 '19 at 18:08
  • 1
    also, as a curiosity, `collate = concat . takeWhile (not . null) . unfoldr (Just . traverse (splitAt 1))`. – Will Ness Feb 21 '19 at 18:14
  • Yes this definitely meets the standards of elegance I was hoping for. I wish I could pick more than one solution! Thank you for putting it in Haskell code. – jimboweb Feb 22 '19 at 04:28
  • 1
    (also, `concat . transpose`, but (re)inventing is much more fun than just learning...) – Will Ness Feb 23 '19 at 10:31
4

Funny solution

  1. Add the index as a prefix to each item of the inner arrays
  2. Flatten the array
  3. Sort the array
  4. Remove the prefix

let input = [["one","two","three"],["uno","dos"],["1","2","3","4"],["first","second","third"]]
let ranked=input.map(i=>i.map((j,k)=>k+'---'+j)).slice()
console.log(ranked.flat().sort().map(i=>i.split('---')[1]));
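
One caveat, and a variant that is my own sketch rather than part of the original answer: the default .sort() compares strings lexicographically, so once an inner array has more than ten items the prefix '10---' would sort before '2---'. Zero-padding the index keeps the lexicographic order numeric:

// Sketch only: zero-pad the index prefix so the string sort stays in numeric
// order (a padding width of 3 is an arbitrary assumption, good for up to
// 1000 items per inner array).
let padded = input.map(arr => arr.map((item, idx) => String(idx).padStart(3, '0') + '---' + item))
console.log(padded.flat().sort().map(s => s.split('---')[1]));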
sumit
  • 13,148
  • 10
  • 57
  • 103
  • Clever answer. I did consider doing something like this. I was thinking of something more like making the inner map return something like `{ind:k,val:j}` followed by a `.sort((a,b)=>a.ind-b.ind)` but it's the same idea. I decided against it because creating unnecessary objects (or strings) felt like doing unnecessary work. But still a good answer, thanks. – jimboweb Feb 21 '19 at 02:57
4

I've seen the problem called round-robin, but maybe interleave is a better name. Sure, map, reduce, and filter are functional procedures, but not all functional programs need to rely on them. When these are the only functions we know how to use, the resulting program is sometimes awkward because there is often a better fit.

  • map produces a one-to-one result. If we have 4 subarrays, our result will have 4 elements. interleave should produce a result equal to the length of the combined subarrays, so map could only possibly get us part way there. Additional steps would be required to get the final result.

  • reduce iterates through the input elements one-at-a-time to produce a final result. In the first reduce, we will be given the first subarray, but there's no straightforward way to process the entire subarray before moving on to the next one. We can force our program to use reduce, but in doing so, it makes us think about our collation procedure as a reducing procedure instead of what it actually is.

The reality is you're not limited to the use of these primitive functional procedures. You can write interleave in a way that directly encodes its intention. I think interleave has a beautiful recursive definition. The use of deep destructuring assignment is nice here because the function's signature shows the shape of the data that interleave is expecting: an array of arrays. Mathematical induction allows us to naturally handle the branches of our program -

const None =
  Symbol ('None')

const interleave =
  ( [ [ v = None, ...vs ] = []  // first subarray
    , ...rest                   // rest of subarrays
    ]
  ) =>
    v === None
      ? rest.length === 0
        ? vs                                   // base: no `v`, no `rest`
        : interleave (rest)                    // inductive: some `rest`
      : [ v, ...interleave ([ ...rest, vs ]) ] // inductive: some `v`, some `rest`

const input =
  [ [ "one", "two", "three" ]
  , [ "uno", "dos" ]
  , [ "1", "2", "3", "4" ]
  , [ "first", "second", "third" ]
  ]

console.log (interleave (input))
// [ "one", "uno", "1", "first", "two", "dos", "2", "second", "three", "3", "third", "4" ]

interleave has released us from the shackles of close-minded thinking. I no longer need to think about my problem in terms of misshapen pieces that awkwardly fit together – I'm not thinking about array indexes, or sort, or forEach, or mutating state with push, or making comparisons using > or Math.max. Nor am I having to think about perverse things like array-like – wow, we really do take for granted just how much we've come to know about JavaScript!

Above, it should feel refreshing that there are no dependencies. Imagine a beginner approaching this program: he/she would only need to learn 1) how to define a function, 2) destructuring syntax, 3) ternary expressions. Programs cobbled together with countless small dependencies will require the learner to familiarize him/herself with each before an intuition for the program can be acquired.

That said, JavaScript's destructuring syntax is not the prettiest, and sometimes convenience is traded for increased readability -

const interleave = ([ v, ...vs ], acc = []) =>
  v === undefined
    ? acc
: isEmpty (v)
    ? interleave (vs, acc)
: interleave
    ( [ ...vs, tail (v) ]
    , [ ...acc, head (v) ]
    )

The dependencies that evolved here are isEmpty, tail, and head -

const isEmpty = xs =>
  xs.length === 0

const head = ([ x, ...xs ]) =>
  x

const tail = ([ x, ...xs ]) =>
  xs

Functionality is the same -

const input =
  [ [ "one", "two", "three" ]
  , [ "uno", "dos" ]
  , [ "1", "2", "3", "4" ]
  , [ "first", "second", "third" ]
  ]

console.log (interleave (input))
// [ "one", "uno", "1", "first", "two", "dos", "2", "second", "three", "3", "third", "4" ]


If you start thinking about interleave by using map, filter, and reduce, then it's likely they will be a part of the final solution. If this is your approach, it should surprise you that map, filter, and reduce are nowhere to be seen in the two programs in this answer. The lesson here is you become a prisoner to what you know. You sometimes need to forget map and reduce in order to observe that other problems have a unique nature and thus a common approach, although potentially valid, is not necessarily the best fit.

Thank you
  • 107,507
  • 28
  • 191
  • 224
  • The `v === undefined` pattern is a terrible way to test whether an array is empty. What if the input was a sparse array such as `[,2,3]`? The only reliable way to test whether an array is empty is to check its `length` property. – Aadit M Shah Feb 21 '19 at 06:51
  • That depends on whether sparse arrays are possible in your program. I think sparse arrays are dangerous, so I don't use them. If my program has an `undefined` behaviour due to an `undefined` value, I'm OK with that; it means something _undefined_ will happen. – Thank you Feb 21 '19 at 15:20
  • True. Sometimes it's better not to think about such rare edge cases, and being able to pattern match might be worth not accounting for sparse arrays. It would be nice to have a way to do proper pattern matching in JavaScript. Here's a solution that I came up with: `const collate = (vvs, acc = [], [v, ...vs] = vvs) => vvs.length === 0 ? acc : ....`. It's a gross abuse of default parameters but it enables succinct pattern matching while still working with sparse arrays. – Aadit M Shah Feb 21 '19 at 15:39
  • By the way, another way to define your function (using big steps rather than small steps) is as follows: `const collate = (xss, yss = xss.filter(xs => xs.length > 0)) => yss.length === 0 ? [] : yss.map(([x, ...xs]) => x).concat(collate(yss.map(([x, ...xs]) => xs)));`. – Aadit M Shah Feb 21 '19 at 15:55
  • @AaditMShah Thank you for the discussion. I think it's harmful to think of every problem as a series of `map`, `filter`, or `reduce` calls. To me, it signals the programmer may be yielding to his/her tools rather than thinking about his/her problem in an unconstrained way. I made an edit that elaborates on this a bit more. – Thank you Feb 21 '19 at 16:42
  • 1
    Thanks for the reference to the round-robin problem. You're right, by focusing on the existing methods like map and reduce I was limiting myself. This is very elegant. I am going to study it a bit to make sure I really understand what you did with your roll-your-own `tail` and `rest` methods. Thanks for taking time to give this great solution and explanation. – jimboweb Feb 22 '19 at 03:23
  • ugh typo, I meant roll your own `head` and `tail`methods – jimboweb Feb 22 '19 at 03:34
  • @jimboweb I'm delighted to help. For a good intuition, try `head ([ 1, 2, 3, 4 ])` and `tail ([ 1, 2, 3, 4 ])`. If you have any other questions, feel free to ask. – Thank you Feb 22 '19 at 03:43
  • @jimboweb, I would like to share [this Q&A](https://stackoverflow.com/a/54242632/633183) in hopes it will be of interest; particularly the `merge` function near the end - there's all sorts of ways to solve the problem creatively once we learn to sharpen our focus on writing simple functions. – Thank you Feb 22 '19 at 04:33
  • Thank you! That q&a looks really interesting, I'm going to study it. – jimboweb Feb 22 '19 at 04:41
  • Using `map`, `filter`, or `reduce` for every problem is certainly not the way to go. However, in this particular case, it definitely makes sense to use `map` and `filter`. In a round-robin algorithm, you want to take the first element of each producer. Hence, you use `map head xss`. After that, you want to start over but with the first element removed. Hence, you recurse on `map tail xss`. You also want to remove producers which don't have any more elements. Hence, you filter those producers which have elements yet to be consumed. As I said, it's big steps vs small steps. Both are valid means. – Aadit M Shah Feb 22 '19 at 05:19
  • I never meant to imply it doesn't _make sense_ to think of the problem in terms of `map head` and `map tail` - I am suggesting to let intentions direct the program and not the other way around. If it so happens that my recursive function takes the form of `map`, `reduce`, etc then I will make that substitution. If I start by saying "this is a `reduce`" or "this is a `map` and `filter`", then I've pigeonholed myself into thinking the problem can only be solved using these means. – Thank you Feb 22 '19 at 05:25
3

Here I have provided a generator function that will yield the values in the desired order. You could easily turn this into a regular function returning an array if you replace the yield with a push to a results array that you return at the end (a sketch of that variant follows the generator below).

The algorithm takes in all the arrays as arguments, then gets an iterator for each of them. Then it enters the main loop, where it treats the iters array like a queue: it takes the iterator at the front, yields its next value, then places it back at the end of the queue unless it is exhausted. The efficiency would improve if you used a linked list instead of an array, since adding to the front and back of a linked list takes constant time, whereas a shift on an array takes linear time to move everything down one spot.

function* collate(...arrays) {
  const iters = arrays.map(a => a.values());
  while(iters.length > 0) {
    const iter = iters.shift();
    const {done, value} = iter.next();
    if(done) continue;
    yield value;
    iters.push(iter);
  }
}
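
As described above, the same queue idea works as a regular function if the yield becomes a push into a results array. A minimal sketch of that variant, plus a usage line (both are my illustration, and the name collateToArray is made up):

// Array-returning variant of the generator above (sketch only): same queue
// of iterators, but values are pushed into a result array instead of yielded.
function collateToArray(...arrays) {
  const iters = arrays.map(a => a.values());
  const result = [];
  while (iters.length > 0) {
    const iter = iters.shift();         // take the iterator at the front of the queue
    const { done, value } = iter.next();
    if (done) continue;                 // drop exhausted iterators
    result.push(value);
    iters.push(iter);                   // re-queue iterators that still have values
  }
  return result;
}

// Usage with the question's input (spread the outer array into arguments):
// collateToArray(...input)    -> [ "one", "uno", "1", "first", "two", ... ]
// and for the generator above:   [...collate(...input)]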
kamoroso94
  • 1,550
  • 1
  • 16
  • 18
  • 1
    This is a great answer, thanks. But the while loop felt kind of like the .forEach I was trying to get around. But it's still a good answer so I voted it up. – jimboweb Feb 21 '19 at 02:59
3

What about this?

  • Sort arrays to put the longest one first.
  • flatMap the longest array, pairing each of its items with the item at the same index in every other array in xs.
  • Filter out the undefined items in the flat array (those produced by indexes that are out of range for the shorter arrays)

const input = [
  ["one", "two", "three"],
  ["uno", "dos"],
  ["1", "2", "3", "4"],
  ["first", "second", "third"]
]

const arrayLengthComparer = (a, b) => b.length - a.length

const collate = xs => {
  const [xs_, ...ys] = [...xs].sort (arrayLengthComparer) // copy before sorting so the input array is not mutated
  
  return xs_.flatMap ((x, i) => [x, ...ys.map (y => y[i])])
            .filter (x => x)
}

const output = collate (input)

console.log (output)
   
Matías Fidemraizer
  • 59,064
  • 16
  • 107
  • 181
  • 1
    You could simplify your comparator function as follows: `const arrayLengthComparer = (a, b) => b.length - a.length;`. – Aadit M Shah Feb 21 '19 at 15:46
  • Oh, nice, someone suggested a flatMap solution earlier and deleted it before I could read it, was that you? This is a great solution. Sorting them into a head and tail section first was really smart. – jimboweb Feb 22 '19 at 04:25
  • @jimboweb Hey thanks. No, this wasn't my answer. Really, I didn't know that there was an answer using `flatMap` already – Matías Fidemraizer Feb 22 '19 at 20:28
2

This is what I came up with... although now after seeing the other answers, this solution seems to be bulkier... and it still uses a forEach. I'm interested to hear the benefit of not using a forEach.

var input = [["1a", "2a", "3a"], ["1b"], ["1c", "2c", "3c", "4c", "5c", "6c", "7c"],["one","two","three","four","five","six","seven"],["uno","dos","tres"],["1","2","3","4","5","6","7","8","9","10","11"],["first","second","third","fourth"]];

// sort input array by length
input = input.sort((a, b) => {
  return b.length - a.length;
});

let output = input[0];
let multiplier = 2;

document.writeln(output + "<br>");

input.forEach((arr, i1) => {
  if (i1 > 0) {
    let index = -1;
    arr.forEach((item) => {  
      index = index + multiplier;
      output.splice(index, 0, item);        
    });
    
    document.writeln(output + "<br>");
    multiplier++;
  }
});
Nick Young
  • 758
  • 1
  • 7
  • 18
  • Honestly I guess the forEach loop & variable mutation doesn't really hurt anything, since we're only mutating a disposable accumulator variable inside a loop. I was interested in using purely functional methods I guess because if you can use simple loops there's a pretty trivial solution involving a for loop inside a while loop so why use reduce at all? – jimboweb Feb 22 '19 at 03:10
0
Array.prototype.coallate = function (size) {
  return [...Array(Math.ceil(this.length / size)).keys()].map(i => this.slice(i * size, size * (i + 1)));
};
const result = [0,1,2,3,4,5,6,7,8,9].coallate(3)


console.log(JSON.stringify(result));

Result: [[0,1,2],[3,4,5],[6,7,8],[9]]

Christian Bongiorno
  • 3,923
  • 1
  • 25
  • 65