For example, the immutability of strings allows a single string instance to be shared freely, rather than allocating multiple redundant copies of the same value.
That's possible even in C++, although there it's usually implemented with copy-on-write (COW), which can be a bottleneck in multi-threaded apps.
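A quick C# sketch of that sharing (illustrative only):

```csharp
using System;

// Because strings are immutable, handing the same instance to several
// variables is safe: nobody can mutate it out from under you.
string s1 = "hello";
string s2 = s1; // no copy, no new allocation: both reference one object

Console.WriteLine(object.ReferenceEquals(s1, s2)); // True
```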
.NET also supports string interning, which in theory is cool, but has to be used very carefully:
If you are trying to reduce the total amount of memory your application allocates, keep in mind that interning a string has two unwanted side effects. First, the memory allocated for interned String objects is not likely to be released until the common language runtime (CLR) terminates. The reason is that the CLR's reference to the interned String object can persist after your application, or even your application domain, terminates. Second, to intern a string, you must first create the string. The memory used by the String object must still be allocated, even though the memory will eventually be garbage collected.
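For illustration, a small sketch of those interning semantics:

```csharp
using System;

// String literals are interned automatically; strings built at runtime
// are not, unless you ask for it. Note the runtime-built instance still
// had to be allocated first, as the caveat above points out.
string literal = "ab";
string runtime = new string(new[] { 'a', 'b' });

Console.WriteLine(object.ReferenceEquals(literal, runtime));                 // False
Console.WriteLine(object.ReferenceEquals(literal, string.Intern(runtime))); // True
```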
Also, the CLR design of generics is much more efficient than that of any other language/platform I'm aware of. In some cases this allows the compiled code to be much smaller: the definition of MyGeneric&lt;MyClass&gt; only needs to be stored once, whereas C++, for example, must compile a separate instantiation for each MyClass that's used.
Might be true for the IL code generated, but what happens when the JIT'er runs?
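For what it's worth: at JIT time the CLR emits one shared native body that covers every reference-type argument, and a specialized body per distinct value type. A rough sketch (Wrapper is a made-up name):

```csharp
using System;

var a = new Wrapper<string>("x"); // reference type: shares JIT'ed code...
var b = new Wrapper<object>(42);  // ...with this one (a single native body)
var c = new Wrapper<int>(7);      // value type: gets its own specialized body

Console.WriteLine($"{a.Describe()} {b.Describe()} {c.Describe()}");

// One IL definition, no matter how many instantiations are used.
class Wrapper<T>
{
    private readonly T _value;
    public Wrapper(T value) => _value = value;
    public string Describe() => $"Wrapper<{typeof(T).Name}>";
}
```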
Also, there's object allocation overhead every time you use a delegate... which includes the very innocent-looking lambda expressions. Setting up the closures might be relatively inexpensive, but it isn't free (I measured a 10x speed hit in object serialization because of an INotifyPropertyChanged implementation using lambda expressions).
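A minimal sketch of where those hidden allocations come from (MakePredicate is a hypothetical helper):

```csharp
using System;

// Capturing 'threshold' forces the compiler to allocate a closure
// object on the heap, plus a delegate instance wrapping the lambda.
// Neither allocation is visible in the source.
static Func<int, bool> MakePredicate(int threshold)
{
    return x => x > threshold;
}

var isBig = MakePredicate(10);
Console.WriteLine(isBig(42)); // True
```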
And as mentioned earlier, we have to keep the difference between win32 memory usage and CLR memory usage in mind. There are reasons for it: not freeing win32 memory right away means subsequent CLR allocations can be done faster. But holding on to (win32) memory until system memory pressure is high enough might leave other apps deciding against, say, allocating more cache, because the available (win32) memory is low. Pros and cons.
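One way to see the two numbers side by side (a rough sketch; exact figures vary by runtime and GC mode):

```csharp
using System;
using System.Diagnostics;

// Allocate and drop a chunk of memory, then compare what the CLR
// reports as live managed heap vs. what the process holds from the OS.
byte[]? chunk = new byte[20 * 1024 * 1024];
chunk = null;

long clrBytes = GC.GetTotalMemory(forceFullCollection: true);
using var self = Process.GetCurrentProcess();

Console.WriteLine($"CLR heap (live objects): {clrBytes:N0} bytes");
Console.WriteLine($"Private bytes (win32):   {self.PrivateMemorySize64:N0} bytes");
// The gap is, roughly, memory the CLR keeps reserved/committed for
// future allocations rather than returning it to the OS right away.
```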
In general, then, some kinds of programs will take more memory, and some may take less. But that's really comparing the same program ported to different platforms. I'm betting that if you design your code from the ground up with an understanding of .NET (or whatever platform you're building for), you should be able to come up with a design that meshes well with whatever criteria are important to you.
Wise words. Idiomatic .NET (at least C#) programming does tend to involve a fair amount of object creation, though. Fortunately a lot of those objects are short-lived and get collected fast, so they don't put much pressure on the win32 memory. Still, there's a fair amount of memory overhead from the framework itself. This becomes pretty inconsequential in larger apps that need a lot of memory for &lt;whatever&gt; processing, but it can be noticeable in small tools. Whether this matters depends on the situation.
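That generational behavior is easy to observe (a toy sketch; generation numbers are an implementation detail):

```csharp
using System;

// Fresh allocations land in gen0, the cheap-to-collect generation;
// objects that survive a collection get promoted.
var survivor = new object();
Console.WriteLine(GC.GetGeneration(survivor)); // 0: freshly allocated

GC.Collect(); // 'survivor' is still referenced, so it gets promoted
Console.WriteLine(GC.GetGeneration(survivor)); // 1 (typically)
```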
The CLR memory model is really interesting when considering long-running server applications; in native apps, memory fragmentation can end up being a pretty big issue unless you're writing custom allocators. With .NET's compacting collector, you get address space defragmentation for free.
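A toy way to watch compaction move an object (requires compiling with /unsafe; whether the object actually moves depends on what the GC decides to do):

```csharp
using System;

unsafe
{
    byte[]? junk = new byte[50_000]; // garbage for the GC to compact away
    var data = new byte[1_000];
    junk = null;

    fixed (byte* p = data) Console.WriteLine($"before GC: 0x{(ulong)p:X}");
    GC.Collect(); // compacting collection; 'data' may be relocated
    fixed (byte* p = data) Console.WriteLine($"after GC:  0x{(ulong)p:X}");
}
```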