C# Knowledge Notes
////////////////////////Enum.GetUnderlyingType(Type) Method////////////
using System;

public class Example
{
    public static void Main()
    {
        Enum[] enumValues = { ConsoleColor.Red, DayOfWeek.Monday,
                              MidpointRounding.ToEven, PlatformID.Win32NT,
                              DateTimeKind.Utc, StringComparison.Ordinal };
        Console.WriteLine("{0,-10} {1, 18} {2,15}\n",
                          "Member", "Enumeration", "Underlying Type");
        foreach (var enumValue in enumValues)
            DisplayEnumInfo(enumValue);
    }

    static void DisplayEnumInfo(Enum enumValue)
    {
        Type enumType = enumValue.GetType();
        Type underlyingType = Enum.GetUnderlyingType(enumType);
        Console.WriteLine("{0,-10} {1, 18} {2,15}",
                          enumValue, enumType.Name, underlyingType.Name);
    }
}
// The example displays the following output:
//   Member            Enumeration Underlying Type
//
//   Red              ConsoleColor           Int32
//   Monday              DayOfWeek           Int32
//   ToEven       MidpointRounding           Int32
//   Win32NT            PlatformID           Int32
//   Utc              DateTimeKind           Int32
//   Ordinal      StringComparison           Int32
enum Foo : long { One, Two };

For typeof(Foo), GetUnderlyingType returns typeof(long). Note that the underlying type can be any integral type except char.
To get the size in bytes of the underlying type: Marshal.SizeOf(Enum.GetUnderlyingType(enumType))
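A minimal sketch tying the two together (the enum Foo and its members are just illustrative names):

using System;
using System.Runtime.InteropServices;

enum Foo : long { One, Two }

class UnderlyingTypeDemo
{
    static void Main()
    {
        // GetUnderlyingType reports the declared storage type of the enum.
        Type underlying = Enum.GetUnderlyingType(typeof(Foo));
        Console.WriteLine(underlying.Name);            // Int64

        // Marshal.SizeOf of that type gives the storage size in bytes.
        Console.WriteLine(Marshal.SizeOf(underlying)); // 8
    }
}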
///////////////////////////////// HashSet vs. List //////////////////////////////////////////
A lot of people say that once you reach a size where speed is actually a concern, HashSet<T> will always beat List<T>, but that depends on what you are doing.
Let's say you have a List<T> that will only ever hold about 5 items on average. Over a large number of cycles, if a single item is added or removed each cycle, you may well be better off using a List<T>.
I did a test for this on my machine, and the collection has to be very small for List<T> to keep an advantage. For a list of short strings, the advantage went away after size 5; for objects, after size 20.
Items   List<string>   HashSet<string>
1       617ms          1332ms
2       781ms          1354ms
3       950ms          1405ms
4       1126ms         1441ms
5       1370ms         1452ms
6       1481ms         1418ms
7       1581ms         1464ms
8       1726ms         1398ms
9       1901ms         1433ms

Items   List<object>   HashSet<object>
1       614ms          1993ms
4       837ms          1914ms
7       1070ms         1900ms
10      1267ms         1904ms
13      1494ms         1893ms
16      1695ms         1879ms
19      1902ms         1950ms
22      2136ms         1893ms
25      2357ms         1826ms
28      2555ms         1865ms
31      2755ms         1963ms
34      3025ms         1874ms
37      3195ms         1958ms
40      3401ms         1855ms
43      3618ms         1869ms
46      3883ms         2046ms
49      4218ms         1873ms
Here's the code:
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        int times = 10000000;

        // Strings: remove and re-add one item repeatedly at each collection size.
        for (int listSize = 1; listSize < 10; listSize++)
        {
            List<string> list = new List<string>();
            HashSet<string> hashset = new HashSet<string>();
            for (int i = 0; i < listSize; i++)
            {
                list.Add("string" + i.ToString());
                hashset.Add("string" + i.ToString());
            }

            Stopwatch timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < times; i++)
            {
                list.Remove("string0");
                list.Add("string0");
            }
            timer.Stop();
            Console.WriteLine(listSize.ToString() + " item LIST strs time: " + timer.ElapsedMilliseconds.ToString() + "ms");

            timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < times; i++)
            {
                hashset.Remove("string0");
                hashset.Add("string0");
            }
            timer.Stop();
            Console.WriteLine(listSize.ToString() + " item HASHSET strs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
            Console.WriteLine();
        }

        // Objects: same test, but equality is reference-based.
        for (int listSize = 1; listSize < 50; listSize += 3)
        {
            List<object> list = new List<object>();
            HashSet<object> hashset = new HashSet<object>();
            for (int i = 0; i < listSize; i++)
            {
                list.Add(new object());
                hashset.Add(new object());
            }
            object objToAddRem = list[0];

            Stopwatch timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < times; i++)
            {
                list.Remove(objToAddRem);
                list.Add(objToAddRem);
            }
            timer.Stop();
            Console.WriteLine(listSize.ToString() + " item LIST objs time: " + timer.ElapsedMilliseconds.ToString() + "ms");

            timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < times; i++)
            {
                hashset.Remove(objToAddRem);
                hashset.Add(objToAddRem);
            }
            timer.Stop();
            Console.WriteLine(listSize.ToString() + " item HASHSET objs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
            Console.WriteLine();
        }

        Console.ReadLine();
    }
}
It's essentially pointless to compare the performance of two structures that behave differently. Use the structure that conveys the intent. Even if you say your List<T> wouldn't have duplicates and iteration order doesn't matter, making it comparable to a HashSet<T>, it's still a poor choice to use List<T>, because it's relatively less fault tolerant.
Whether to use a HashSet<T> or List<T> comes down to how you need to access your collection. If you need to guarantee the order of items, use a List. If you don't, use a HashSet. Let Microsoft worry about the implementation of their hashing algorithms and objects.
A HashSet can locate an item without enumerating the collection (O(1), or close to it), whereas a List, which preserves insertion order and allows duplicates, may have to enumerate items to find one (O(n)).
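As a rough illustration of that lookup difference (a sketch, not a benchmark; the collection size is arbitrary):

using System;
using System.Collections.Generic;
using System.Linq;

class LookupDemo
{
    static void Main()
    {
        var list = new List<int>(Enumerable.Range(0, 1000000));
        var set = new HashSet<int>(list);

        // List<T>.Contains scans the elements one by one: O(n).
        Console.WriteLine(list.Contains(999999));

        // HashSet<T>.Contains hashes the value and probes a bucket: O(1) on average.
        Console.WriteLine(set.Contains(999999));
    }
}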
- HashSet<T>.Add will skip a new item if it is deemed equal to an existing item, and return false.
- Dictionary<TKey,TValue>.Add will throw an exception if the new key is deemed equal to an existing key. However, if you use the dictionary's indexer instead, it will replace the value stored for that key.
- List<T>.Add will simply add the same item twice.
- HashSet<T> also provides some very useful set methods such as IsSubsetOf and Overlaps; both can be achieved on the other collection types using LINQ, but HashSet<T> provides an optimized, ready-made solution (see the sketch after this list).
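A small sketch of these Add semantics and of the ready-made set operations (the collection contents are arbitrary examples):

using System;
using System.Collections.Generic;

class AddSemanticsDemo
{
    static void Main()
    {
        var set = new HashSet<string> { "a" };
        Console.WriteLine(set.Add("a"));     // False: the duplicate is skipped
        Console.WriteLine(set.Count);        // 1

        var dict = new Dictionary<string, int> { ["a"] = 1 };
        // dict.Add("a", 2);                 // would throw ArgumentException
        dict["a"] = 2;                       // indexer replaces the existing value

        var list = new List<string> { "a" };
        list.Add("a");                       // duplicates are allowed
        Console.WriteLine(list.Count);       // 2

        // Ready-made set operations on HashSet<T>:
        var small = new HashSet<int> { 1, 2 };
        var big = new HashSet<int> { 1, 2, 3 };
        Console.WriteLine(small.IsSubsetOf(big));   // True
        Console.WriteLine(small.Overlaps(big));     // True
    }
}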
The Original Collections: System.Collections namespace
The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, Microsoft indicates that for the most part you should favor the generic or concurrent collections, and use the original collections only when dealing with legacy .NET code.
Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:
- ArrayList
  - A dynamic, contiguous collection of objects.
  - Favor the generic collection List<T> instead.
- Hashtable
  - Associative, unordered collection of key-value pairs of objects.
  - Favor the generic collection Dictionary<TKey,TValue> instead.
- Queue
  - First-in-first-out (FIFO) collection of objects.
  - Favor the generic collection Queue<T> instead.
- SortedList
  - Associative, ordered collection of key-value pairs of objects.
  - Favor the generic collection SortedList<TKey,TValue> instead.
- Stack
  - Last-in-first-out (LIFO) collection of objects.
  - Favor the generic collection Stack<T> instead.
In general, the older collections are not type-safe and in some cases less performant than their generic counterparts. Once again, the only reason to fall back on them is backward compatibility with legacy code and libraries.
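A brief sketch of the type-safety difference (the variable names and values are just for illustration):

using System;
using System.Collections;
using System.Collections.Generic;

class TypeSafetyDemo
{
    static void Main()
    {
        // ArrayList stores everything as object: value types are boxed,
        // retrieval needs a cast, and mistakes only surface at runtime.
        ArrayList oldList = new ArrayList();
        oldList.Add(1);
        oldList.Add("two");                  // compiles fine
        int first = (int)oldList[0];         // explicit cast required
        // int second = (int)oldList[1];     // InvalidCastException at runtime

        // List<int> catches the same mistake at compile time and avoids boxing.
        List<int> newList = new List<int>();
        newList.Add(1);
        // newList.Add("two");               // compile-time error
        Console.WriteLine(first + newList[0]);
    }
}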
The Concurrent Collections: System.Collections.Concurrent namespace
The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired.
The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more specialized. Below is a summary of each, with a link to a blog post I did on each of them.
- ConcurrentQueue
  - Thread-safe version of a queue (FIFO).
  - For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
- ConcurrentStack
  - Thread-safe version of a stack (LIFO).
  - For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
- ConcurrentBag
  - Thread-safe unordered collection of objects.
  - Optimized for situations where a thread may be both reader and writer.
  - For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
- ConcurrentDictionary
  - Thread-safe version of a dictionary.
  - Optimized for multiple readers (allows multiple readers under the same lock).
  - For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
- BlockingCollection
  - Wrapper collection that implements the producer/consumer paradigm (see the sketch after this list).
  - Readers can block until items are available to read.
  - Writers can block until space is available to write (if bounded).
  - For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
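A minimal producer/consumer sketch using BlockingCollection (the capacity, item count, and names are arbitrary assumptions, not taken from the cited posts):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        // Bounded to 2 items: the producer blocks on Add when the buffer is full.
        using (var queue = new BlockingCollection<int>(boundedCapacity: 2))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 0; i < 5; i++)
                    queue.Add(i);          // blocks while the collection is full
                queue.CompleteAdding();    // signals consumers that no more items are coming
            });

            var consumer = Task.Run(() =>
            {
                // GetConsumingEnumerable blocks until an item arrives, and ends
                // once CompleteAdding has been called and the buffer is drained.
                foreach (int item in queue.GetConsumingEnumerable())
                    Console.WriteLine("consumed " + item);
            });

            Task.WaitAll(producer, consumer);
        }
    }
}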
////////////////////////////////////////////////////////////////////////////