I have a program that needs to allocate two integer arrays, each 1.5 billion elements long. It's for a coding challenge (https://projecteuler.net/problem=282) and there isn't a way around using such large arrays (if there is, please don't tell me; I'm supposed to find the answer on my own). The arrays need to hold 32-bit integers, since the values are between 0 and 1.5 billion. Three billion 4-byte integers take up about 12 gigabytes, so I decided to use an EC2 r5.xlarge instance with 32 gigabytes of memory, but my C code still dies with a segmentation fault there. When I test the code locally, it prints the correct value for smaller array lengths but fails with a "segmentation fault: 11" error on the full-length version.
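To be concrete about that arithmetic, here is a standalone check (this assumes int is 4 bytes, and _SC_PHYS_PAGES is, I believe, available on both Linux and macOS):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    long long n = 1475789056LL;              /* 14^8, the length of each array */
    long long need = 2LL * n * sizeof(int);  /* two arrays of 32-bit ints */
    long long pages = sysconf(_SC_PHYS_PAGES);
    long long page  = sysconf(_SC_PAGE_SIZE);
    printf("need %.1f GiB, machine has %.1f GiB of RAM\n",
           need / 1073741824.0, (double)pages * page / 1073741824.0);
    return 0;
}

This reports roughly 11 GiB needed against the r5.xlarge's 32 GiB, so on paper raw capacity shouldn't be the issue.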
I've looked online and tried raising the limits with ulimit:

ulimit -m 15000000
ulimit -v 15000000

(both numbers are in kilobytes). Both limits were already set to unlimited, though, so I don't think this changed anything.
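To rule the limits out more directly, they can also be queried from inside the process with getrlimit; a minimal sketch (as far as I know, RLIMIT_AS corresponds to ulimit -v and RLIMIT_RSS to ulimit -m):

#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource) {
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("%s: unlimited\n", name);
        else
            printf("%s: %llu bytes\n", name, (unsigned long long)rl.rlim_cur);
    }
}

int main(void) {
    show("RLIMIT_AS  (virtual memory)", RLIMIT_AS);
    show("RLIMIT_RSS (resident set)  ", RLIMIT_RSS);
    return 0;
}

Both print as unlimited for me, consistent with what ulimit reports in the shell.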
The C code (trimmed to the relevant parts):
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main(int argc, char* argv[]) {
    int magic = pow(14, 8); // 14**8 is 1,475,789,056
    // more lines
    int* a = malloc(4 * magic);
    int* b = malloc(4 * magic);
    if (a == NULL || b == NULL) {
        printf("malloc failed\n");
        exit(0);
    }
    for (int i = 0; i < magic; i++) a[i] = (2 * i + 3) % magic;
    // some more lines
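In case it's easier to test, here is the same allocation and fill loop as a self-contained program, with the solver-specific lines left out (built with gcc repro.c -lm); this cut-down version should reproduce the same crash:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    int magic = pow(14, 8);         // 14**8 is 1,475,789,056, as above
    int* a = malloc(4 * magic);     // same allocation as in the full program
    int* b = malloc(4 * magic);
    if (a == NULL || b == NULL) {
        printf("malloc failed\n");
        exit(0);
    }
    for (int i = 0; i < magic; i++)
        a[i] = (2 * i + 3) % magic; // same fill loop as in the full program
    printf("fill loop finished: a[0] = %d\n", a[0]); // only reached if nothing crashed
    free(a);
    free(b);
    return 0;
}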
So, to restate: both locally and on the EC2 instance, smaller array lengths work and print the correct value, but the full-length version dies with a segmentation fault. What is causing it, given that the machine has 32 gigabytes of memory?