C++ maximum float array size

MrWizard6600

I'm writing a program that includes a few dozen sorting algorithms, and to get some good data I started off with an array initialized to 100,000 floats long. I then changed it to 1,000,000 to see just how fast quicksort really was. Problem is, I got a general (read: you really screwed the pooch this time, Mr. Programmer) runtime error. I was scratching my head thinking that there had to be a memory access violation somewhere. I ran through my code for a few minutes before finally changing the array size back to 100,000. Lo and behold, no runtime error.

I'm wondering if there's some simple way to get:
Code:
using namespace std;
const int LIST_SIZE = 1000000;
int main(){
float list[LIST_SIZE];
//...
}
to run without this runtime error... without initializing 10 arrays of size 100,000.
(or... hell, if I could just bit-twiddle 'em so that they're back to back to back, that would be OK...)

Presumably it's Windows memory management that's shutting me down on account of trying to access more than a few megs of memory (32 bits per float @ 1,000,000 elements is about 4 megs). Is there any way I can get the man to stop shutting me down?
system("GOAWAY")? :p
 
What was the runtime error? How are you populating the array?
 
You can also just increase the stack size.
Or you could do the right thing and put it on the heap:
Code:
float *float_array = (float *) calloc(LIST_SIZE, sizeof(float));
Then, treat it just like any other float array and remember to free().

Edit: To be really clean, you should probably check the return value to make sure it's not NULL and do the proper error checking on errno, etc. I don't know off-hand what the maximum allowable heap allocation is, but it's probably orders of magnitude higher than what you should be using on the stack.
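Something like this is all I mean -- just a sketch, with the error reporting left up to you:
Code:
#include <stdlib.h>
#include <stdio.h>

#define LIST_SIZE 1000000

int main(void) {
    float *float_array = (float *) calloc(LIST_SIZE, sizeof(float));
    if (float_array == NULL) {
        /* allocation failed: report it and bail out instead of dereferencing NULL */
        perror("calloc");
        return 1;
    }

    /* ... fill and sort the array ... */

    free(float_array);
    return 0;
}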
 
Or you could do the right thing and put it on the heap:
If you're going to be picky and claim to know the guaranteed right thing, you'd do well to avoid the abomination that is C-style memory allocation and use the new operator....
 
If you're going to be picky and claim to know the guaranteed right thing, you'd do well to avoid the abomination that is C-style memory allocation and use the new operator....
Ah. Been doing a lot of C development as of late and haven't been availing myself of the warm comforts of C++; completely glossed over the "++" in the title. Indeed, you should just new it.
 
Wow; I can barely keep up with the rhetoric in this thread.

Or you could do the right thing and put it on the heap:
Why do you assert that using the heap is the "right" thing?

If you're going to be picky and claim to know the guaranteed right thing, you'd do well to avoid the abomination that is C-style memory allocation and use the new operator....
What makes malloc() an abomination? operator new() provides no benefit in this case, and makes for more complicated code.
 
I fail to see how
Code:
float *float_array = new float[1000000];

// code

delete float_array;
is more complex than
Code:
float *float_array = (float *) calloc(LIST_SIZE, sizeof(float));

// code

free(float_array);
If nothing else, the first example is a LOT more readable.
 
Neither example includes error checking. And the first example is incorrect, anyway. And the examples you've provided aren't equivalent. Again: where's the abomination? A "LOT" more readable? How so? Because you used a literal instead of the macro symbol? Wouldn't most people consider that less readable?
 
Neither example includes error checking. And the first example is incorrect, anyway. And the examples you've provided aren't equivalent. Again: where's the abomination? A "LOT" more readable? How so? Because you used a literal instead of the macro symbol? Wouldn't most people consider that less readable?

Why is it incorrect? It compiles without error or warning, executes, and does what's expected. Error checking is implicit; new will throw an exception if the allocation fails, and normally terminating the application in this situation is the desired behaviour, so catching the exception isn't necessary. If you wanted to catch it, you'd probably want to do it somewhere above it in the call stack anyway where you can handle it gracefully.

Replace the literal with the macro, and it's a lot more readable. There are far fewer operators, it reads clearly as pseudo-English, and there's a lot less text to type and read.
 
Why is it incorrect?
Because it uses the wrong delete operator. Since you call vector new (new[]) to allocate that pointer, you must use vector delete (delete[]) to release that memory. If it works without error, then you're relying on undefined behaviour doing what you happen to expect. If it compiles without error, you're relying on a lax compiler that doesn't emit a diagnostic for this case.
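For the record, the matched pair would look something like this:
Code:
int main() {
    const int LIST_SIZE = 1000000;
    float *float_array = new float[LIST_SIZE];  // array ("vector") new

    // ... code ...

    delete[] float_array;  // array delete to match the array new; plain delete here is undefined behaviour
    return 0;
}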

Error checking is implicit; new will throw an exception if the allocation fails, and normally terminating the application in this situation is the desired behaviour, so catching the exception isn't necessary. If you wanted to catch it, you'd probably want to do it somewhere above it in the call stack anyway where you can handle it gracefully.
There's no accessible place higher in the call stack; we're in main(). Ending the application with an unhandled exception is almost never desired behavior, and certainly not the case here.
 
Because it uses the wrong delete operator. Since you call vector new to allocate that pointer, you must use vector delete to delete that memory. If it works without error, then you're relying on undefined behaviour doing what you happen to expect. If it compiles without error, you're relying on a lax compiler that doesn't emit a diagnostic for this case.
Ah. I didn't copy/paste the code, or I'd have used the correct operator. It's an easy one to miss.

There's no accessible place higher in the call stack; we're in main(). Ending the application with an unhanded exception is almost never desired behavior, and certainly not the case here.

This is irrelevant. The error is handled properly, though perhaps not gracefully, whereas in the C case it must be handled manually or undefined behaviour can result.

And yes, I know we're in main in this trivial example, but obviously in real code that would not be the case.
 
If it's an easy one to miss, and you had to be led to the problem, is the code really "a LOT more readable" as you claim? After all, the C version doesn't require an operator that matches the allocation site. And you got it right in your C-based example.

The error isn't handled properly. It's unhandled. If you want to hide behind "real code", then that's even more important; when you don't handle an exception, no other destructors run. The objects they belong to might have allocated things that you won't get back -- like connections to other machines, or disk space -- unless you write a handler for the exception thrown by new.

In C++ out-of-memory conditions need to be handled just like in C.
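To make that concrete, here's a sketch only -- Connection is a made-up stand-in for a resource owner, and the allocation is sized so it will probably fail:
Code:
#include <new>
#include <cstdio>

struct Connection {
    // stand-in for something whose destructor gives a resource back
    ~Connection() { std::puts("connection closed"); }
};

void do_work() {
    Connection c;
    // deliberately enormous so the allocation is likely to fail
    float *big = new float[2000000000UL];
    delete[] big;
}

int main() {
    try {
        do_work();
    } catch (const std::bad_alloc &) {
        // because the exception is caught, the stack unwinds and ~Connection() runs;
        // with no handler anywhere, whether it runs at all is implementation-defined
        std::fprintf(stderr, "out of memory\n");
        return 1;
    }
    return 0;
}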
 
If it's an easy one to miss, and you had to be led to the problem, is the code really "a LOT more readable" as you claim? After all, the C version doesn't require an operator that matches the allocation site. And you got it right in your C-based example.
I'm not the one who started this discussion, I just agree with him. I don't understand how you can argue the C example is easier to read, it clearly is not. Without reading 'man calloc', a programmer unfamiliar with C would be hard-pressed to have any idea what it does. Nevermind that it's overly verbose. delete[] is a flaw of syntax, but less of one than malloc(); it's easy to catch at compile-time.

The error isn't handled properly. It's un-handled.
It may not be handled properly, but it's handled safely. Application termination is a legitimate response to an OOM condition. Doing random things with null pointers is not. Either way, the C++ case will fail in a defined way regardless of whether you catch the exception or not, and that's not true of C. C++ is better.

If you want to hide behind "real code", then that's even more important; when you don't handle an exception, no other destructors run. Those destructors might have allocated things that you won't get back -- like connections to other machines, or disk space -- without writing a handler for the exception thrown by new.
It's hardly the rule that you're going to need to clean up these kinds of things after an allocation failure, and certainly doing so is much more cumbersome in C than in C++. A lot of the time the things you need to clean up are out of scope and the solution in C is rather ugly. And a lot of the time you can just use a generic 'this operation failed' handler to clean up when things fail for any reason, and letting the OOM exception pass up the stack without explicit handling will trigger that cleanup.

In C++ out-of-memory conditions need to be handled just like in C.
They should be, but it's sure a lot nicer in C++, and even if you don't, it does the safe thing for you anyway 'for free'. Nevermind that new/delete works properly with OOP code and is type-safe.
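Roughly what I mean by a generic handler, as a sketch (run_one_test() is just a made-up stand-in for whatever operation is being attempted):
Code:
#include <exception>
#include <new>
#include <cstdio>

// hypothetical stand-in for whatever operation might fail
void run_one_test() {
    float *list = new float[1000000];  // bad_alloc propagates out on failure
    // ... sort and measure ...
    delete[] list;
}

int main() {
    try {
        run_one_test();
    } catch (const std::exception &e) {
        // one generic "this operation failed" handler; an OOM exception
        // reaches it without any explicit handling at the allocation site
        std::fprintf(stderr, "operation failed: %s\n", e.what());
        return 1;
    }
    return 0;
}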
 
Generally, the way it works is, if you run out of memory, you've got larger problems to deal with. Just crash the program, report this major flaw to your developers, and patch it. Though, of course, it's ideal for everything to run gracefully all the time, but reality doesn't seem to demonstrate that this is required (at least in the context we're discussing).
 
Generally, the way it works is, if you run out of memory, you've got larger problems to deal with. Just crash the program, report this major flaw to your developers, and patch it. Though, of course, it's ideal for everything to run gracefully all the time, but reality doesn't seem to demonstrate that this is required (at least in the context we're discussing).
What context is that, specifically?

Since we're the developers (evidenced by the fact that we're talking about code and coding practice) who will we report the problem to?

Do you think that, as you're typing along in an editor, it should just crash when the file gets too big? When you open a file that's too large for your system, you should just crash because you don't have enough available memory at the moment? When a user does a bunch of work, then takes the next step they should lose all of that work because of a crash, since they have no way to anticipate or measure the memory requirement of the feature they're about to use? That servers should just go down when they get too busy?

I'm not the one who started this discussion, I just agree with him.
Why do you think that memory allocation is an "abomination" in C, then?

I don't understand how you can argue the C example is easier to read, it clearly is not. Without reading 'man calloc', a programmer unfamiliar with C would be hard-pressed to have any idea what it does.
Your argument applies to C++, as well: a programmer unfamiliar with C++ would be hard pressed to have any idea what it does. Should we be speaking Chinese because English is hard to understand to someone who isn't familiar with it?

Nevermind that it's overly verbose. delete[] is a flaw of syntax, but less of one than malloc(); it's easy to catch at compile-time.
Huh? Is malloc() a flaw of syntax? Using delete instead of delete[] isn't a syntactical problem; it's a semantic problem, anyway. So, what is your actual point here?

It may not be handled properly, but it's handled safely. Application termination is a legitimate response to an OOM condition.
Ignoring it isn't safe, simply because dirty shutdown isn't a legitimate result.
Doing random things with null pointers is not.
Nobody here is doing random things with any pointers. If the result comes back NULL from calloc(), then the first de-reference of that pointer is going to cause the same unhandled exception. It's precisely the same result!

It's hardly the rule that you're going to need to clean up these kinds of things after an allocation failure,
What do you mean by "hardly the rule"? What we're talking about is the idea that allocating memory without handling the potential exception is acceptable coding practice. That coding practice was offered as justification for comparing apples and oranges while avoiding an explanation of the opinion that C memory management is an "abomination".
and certainly doing so is much more cumbersome in C than in C++. And a lot of the time you can just use a generic 'this operation failed' handler to clean up when things fail for any reason, and letting the OOM exception pass up the stack without explicit handling will trigger that cleanup
If you believe this, then you also believe that C++ memory management exceptions offer no real benefit. At this point, what you're asking us to compare is this:

Code:
float *fArray = (float*) calloc(LIST_SIZE, sizeof(float));
if (fArray == NULL)
	GenericThisOperationFailed();

with this:

Code:
float *fArray = new float[LIST_SIZE];
memset(fArray, 0, sizeof(float) * LIST_SIZE);

(Which incidentally gives a stronger hint at one of the other reasons why Arainach's first comparison was broken.) The C++ code isn't any more readable than the C code because the handler for the error is completely hidden. If calloc() fails, it returns NULL, and the next line of code handles that case. In the C++ code, we don't know if new throws or not (because some implementations don't -- you can always override operator new()), and if it does throw, we have to go searching around to try and figure out what handler it might hit and what that handler actually does.

In the original example, main() doesn't offer any handler, so the backstop handler in the runtime library or the OS itself is used, causing the program to end abnormally without running any local or static destructors, leaking whatever resources those objects might have allocated. For an academic example, this isn't much of an issue aside from reinforcing poor practices.
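For what it's worth, if you want the failure check to sit right at the allocation the way it does in the C version, nothrow new gives you that. A sketch, not a recommendation:
Code:
#include <new>
#include <cstring>
#include <cstdio>

int main() {
    const int LIST_SIZE = 1000000;
    // nothrow new returns NULL on failure, so the error path sits right here,
    // just like the calloc() version
    float *fArray = new (std::nothrow) float[LIST_SIZE];
    if (fArray == NULL) {
        std::fprintf(stderr, "allocation failed\n");
        return 1;
    }
    std::memset(fArray, 0, sizeof(float) * LIST_SIZE);  // calloc() zero-fills; new[] doesn't

    // ... code ...

    delete[] fArray;
    return 0;
}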

They should be, but it's sure a lot nicer in C++, and even if you don't, it does the safe thing for you anyway 'for free'.
In other words, nothing is happening for free.
 