While programming in C#, I stumbled upon a strange language design decision that I just can't understand.
So, C# (and the CLR) has two aggregate data types: struct (a value type, typically stored on the stack, no inheritance) and class (a reference type, stored on the heap, supports inheritance).
This setup sounds nice at first, but when you encounter a method that takes an aggregate type as a parameter, the only way to tell whether it is actually a value type or a reference type is to find the type's declaration. It can get really confusing at times.
The generally accepted solution to the problem seems to be declaring all structs as "immutable" (setting their fields to readonly) to prevent possible mistakes, which limits structs' usefulness.
C++, for example, employs a much more usable model: it allows you to create an object instance either on the stack or on the heap and pass it by value, by reference, or by pointer. I keep hearing that C# was inspired by C++, and I just can't understand why it didn't adopt this technique. Combining class and struct into one construct with two allocation options (heap and stack), passed around by value or (explicitly) by reference via the ref and out keywords, seems like a nice design.
The question is: why did class and struct become separate concepts in C# and the CLR, instead of one aggregate type with two allocation options?