I think the problem might be a bit misstated here. First, from a performance standpoint:
Any method of hashing a list of strings will take longer as the number (and length) of the strings increases. The only way to avoid this would be to ignore some of the data in (at least some of) the strings, and then you lose the assurances that a hash should give you.
What you can do is try to make the whole thing faster, so that you can process more (and/or longer) strings in an acceptable time frame. Without knowing the performance characteristics of your hashing function, we can't say whether that's possible; but as farbiondriven's answer suggests, about the only plausible strategy is to assemble a single string and hash it once.
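As a minimal sketch of that strategy (assuming Python and its standard `hashlib`, which the question doesn't specify), you can feed every element into one SHA-512 object rather than computing a digest per element:

```python
import hashlib

def hash_string_list(strings):
    """Hash a list of strings with a single SHA-512 pass (sketch only)."""
    h = hashlib.sha512()
    for s in strings:
        # update() feeds data into the same running hash, so this is
        # equivalent to hashing the concatenation of all the strings
        # without building one giant intermediate string in memory.
        h.update(s.encode("utf-8"))
    return h.hexdigest()

print(hash_string_list(["element one and ", "element two"]))
```

Because this is equivalent to hashing the plain concatenation, it has exactly the ambiguity described next.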
The potential objection to this, I suppose, would be: does it affect the uniqueness of the hash? There are two factors to consider:
First, if you just concatenate all the strings together, then you would get the same output hash for
["element one and ", "element two"]
as for
["element one ", "and element two"]
because the concatenated data is the same. One way to correct this is to prefix each string with its length (with a delimiter to show where the length ends). For example, you could build
"16:element one and 11:element two"
for the first array above, and
"12:element one 15:and element two"
for the second.
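To make the difference concrete, here is a hedged sketch (again assuming Python/`hashlib`; the length-prefix format is just the one described above) showing that the naive join collides while the length-prefixed encoding does not:

```python
import hashlib

def naive_hash(strings):
    # Plain concatenation: ambiguous, so the two arrays below collide.
    return hashlib.sha512("".join(strings).encode("utf-8")).hexdigest()

def prefixed_hash(strings):
    # Length-prefix each element ("16:element one and ...") so the
    # element boundaries become part of the hashed data.
    encoded = "".join(f"{len(s)}:{s}" for s in strings)
    return hashlib.sha512(encoded.encode("utf-8")).hexdigest()

a = ["element one and ", "element two"]
b = ["element one ", "and element two"]

print(naive_hash(a) == naive_hash(b))        # True:  same digest
print(prefixed_hash(a) == prefixed_hash(b))  # False: boundaries preserved
```

Note that the prefixes here count characters; if your strings can contain non-ASCII text, you'd want to prefix the byte length of each encoded string instead.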
The other possible concern (though it isn't really valid) could arise if the individual strings are never longer than a single SHA512 hash, but the total amount of data in the array is. In that case, your method (hashing each string and concatenating the results) might seem safer, because whenever you hash data that's longer than the hash itself, it is mathematically possible for a collision to occur. But as I say, this concern is not valid, for at least one and possibly two reasons.
The biggest reason is: hash collisions in a 512-bit hash are ridiculously unlikely. Even though the math says it could happen, it is beyond safe to assume that it literally never will. If you're going to worry about a hash collision at that level, you might as well also worry about your data being spontaneously corrupted due to RAM errors that occur in just such a pattern as to avoid detection. At that level of improbability, you simply can't program around a vast number of catastrophic things that "could" (but won't) happen, and you really might as well count hash collisions among them.
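To put a rough number on "ridiculously unlikely" (an illustrative back-of-the-envelope estimate, not something from the question): the birthday bound says the chance of any collision among n random 512-bit digests is roughly n(n-1)/2^513.

```python
import math

n = 10**12  # hypothetical: a trillion arrays hashed
# Birthday bound for a 512-bit hash: P(collision) ~= n*(n-1) / 2^513.
log2_p = math.log2(n) + math.log2(n - 1) - 513
print(f"collision probability is about 2^{log2_p:.0f}")  # roughly 2^-433
```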
The second reason is: if you're paranoid enough not to buy the first reason, then how can you be sure that hashing shorter strings guarantees uniqueness?
What hashing each string and concatenating the results does do, if the individual strings are shorter than 512 bits, is make the output longer than the source data, which defeats the typical purpose of a hash. If that's acceptable, then you probably want an encryption algorithm instead of a hash.