So all other things being equal, you shouldn’t have any protected members at all. But that said, if you have too few, then your class may not be usable as a superclass, or at least not as an efficient superclass. Often you find out after the fact. My philosophy is to have as few protected members as possible when you first write the class. Then try to subclass it. You may find out that without a particular protected method, all subclasses will have to do some bad thing.
Protected data is even more dangerous in terms of messing up your data invariants. If you give someone else access to some internal data, they have free rein over it.
You shouldn’t make something extendable (inheritable) unless you have a good reason to do it. Minimize mutability.
My view is you can always add something, but you can’t take it away. Make it final. If somebody really needs to subclass it, they will call you. Listen to their argument. Find out if it’s legitimate. If it is, in the next release you can take out the final. In terms of binary compatibility, it is not something that you have to live with forever. You can take something that was final and make it non-final.
In the case of immutable classes, I think factory methods are great. In the case of classes that reasonably admit multiple implementations, for which you want the ability to change implementations over time, or change implementations at run time based on usage characteristics, I think they are good.
I think that to get the most robust programs, you want to do as much static type checking as possible.
Async Console Programs
You can work around this by providing your own async-compatible context. AsyncContext is a general-purpose context that can be used to enable an asynchronous MainAsync:
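A minimal sketch of this pattern, assuming the AsyncContext type from the Nito.AsyncEx NuGet package (the method names MainAsync and the delay are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Nito.AsyncEx; // third-party package providing AsyncContext

class Program
{
    static void Main(string[] args)
    {
        // AsyncContext.Run installs a single-threaded SynchronizationContext,
        // runs MainAsync to completion, and executes its continuations on
        // this same thread instead of on thread-pool threads.
        AsyncContext.Run(() => MainAsync(args));
    }

    static async Task MainAsync(string[] args)
    {
        await Task.Delay(100); // stand-in for real asynchronous work
        Console.WriteLine("Done");
    }
}
```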
Creating a new thread (or Task) for non-CPU-intensive work is relatively expensive when you consider that all the thread is doing is waiting for the activity to complete.
Strictly speaking, the return will always execute in the same synchronization context as the caller. However, given that this is WPF code and the synchronization context is single-threaded, the return will always be to the same thread.
Note that regardless of whether the awaits occur within an iteration or as separate entries, they’ll execute serially, one after the other and in the same order they were invoked from the calling thread. The underlying implementation is to queue await calls together in the semantic equivalent of Task.ContinueWith, except the code between the awaits will all execute in the caller’s synchronization context.
The first and most important problem here is that when you call Abort, whatever is happening in that loop is interrupted. Your program could be in the middle of a critical multi-part data update, could be holding a mutex, could have allocated critical system resources, and so on. When the thread is aborted, the update is left unfinished. Unless you’re religious about using try/finally, any mutex you held continues to be held, critical system resources aren’t released, and so on. Even if you do clean up allocated resources, you still have the problem of an unfinished data update. Your program is left in an unknown and therefore potentially corrupt state from which you can’t reliably recover.
Your best bet is to forget that Thread.Abort, Thread.Suspend, and Thread.Resume even exist. I can think of very few good uses for Abort (think of it as the goto of multi-threaded programming), and no good use for Suspend and Resume. Don’t use them. If you do, you’ll regret it.
Do not use the Suspend and Resume methods to synchronize the activities of threads. You have no way of knowing what code a thread is executing when you suspend it. If you suspend a thread while it holds locks during a security permission evaluation, other threads in the AppDomain might be blocked. If you suspend a thread while it is executing a class constructor, other threads in the AppDomain that attempt to use that class are blocked. Deadlocks can occur very easily.
Finally, Thread.Abort() is the wrong tool for the job (well, in fact, there isn’t a job for which Thread.Abort() is a good tool). Use either a semaphore or simply a global flag that tells the thread to exit. Trying to kill a thread with Thread.Abort() leads to all kinds of problems.
The approach I always recommend is dead simple. Have a volatile bool field that is visible both to your worker thread and your UI thread. If the user clicks cancel, set this flag. Meanwhile, on your worker thread, test the flag from time to time. If you see it get set, stop what you’re doing.
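A minimal sketch of this flag-based approach (the class and member names are illustrative):

```csharp
using System;
using System.Threading;

class Worker
{
    // volatile guarantees the worker thread sees the update made by the
    // UI thread, rather than a stale cached value.
    private volatile bool _cancelRequested;

    // Called from the UI thread when the user clicks cancel.
    public void Cancel() => _cancelRequested = true;

    // Runs on the worker thread; polls the flag once per iteration.
    public int DoWork()
    {
        int processed = 0;
        for (int i = 0; i < 1_000_000; i++)
        {
            if (_cancelRequested)   // test the flag from time to time
                break;
            processed++;            // stand-in for a unit of real work
        }
        return processed;
    }
}
```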
When a thread calls Abort on itself, the effect is similar to throwing an exception; the ThreadAbortException happens immediately, and the result is predictable. However, if one thread calls Abort on another thread, the abort interrupts whatever code is running. There is also a chance that a static constructor could be aborted. In rare cases, this might prevent instances of that class from being created in that application domain.
The CancellationTokenSource class implements the IDisposable interface. You should be sure to call the CancellationTokenSource.Dispose method when you have finished using the cancellation token source to free any unmanaged resources it holds.
The optimal frequency of polling depends on the type of application. It is up to the developer to determine the best polling frequency for any given program. Polling itself does not significantly impact performance. The following example shows one possible way to poll.
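One possible way to poll, sketched with illustrative names; the loop checks IsCancellationRequested once per iteration and exits cooperatively:

```csharp
using System;
using System.Threading;

static class PollingExample
{
    // Counts up to max, exiting early and quietly if cancellation is
    // requested. (Calling token.ThrowIfCancellationRequested() instead
    // would signal cancellation via an OperationCanceledException.)
    public static int CountUntilCanceled(CancellationToken token, int max)
    {
        int count = 0;
        for (int i = 0; i < max; i++)
        {
            if (token.IsCancellationRequested)
                break;
            count++;   // stand-in for a unit of real work
        }
        return count;
    }
}
```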
WaitHandle is an abstract class, so you don’t use it directly. Its concrete derived classes are ManualResetEvent, AutoResetEvent, Mutex, and Semaphore: important classes in your toolbox for implementing thread synchronization. They inherit the WaitOne, WaitAll, and WaitAny methods, which you use to detect that one or more threads have signaled the wait condition.
In some cases, a listener may have to listen to multiple cancellation tokens simultaneously. For example, a cancelable operation may have to monitor an internal cancellation token in addition to a token passed in externally as an argument to a method parameter. To accomplish this, create a linked token source that can join two or more tokens into one token, as shown in the following example.
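A sketch of joining an internal and an external token with CancellationTokenSource.CreateLinkedTokenSource (the method name IsCanceled is illustrative):

```csharp
using System;
using System.Threading;

static class LinkedTokenExample
{
    // Returns true if the linked token observes cancellation of EITHER
    // the internal source or the externally supplied token.
    public static bool IsCanceled(CancellationToken external)
    {
        using var internalCts = new CancellationTokenSource();
        using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(
            internalCts.Token, external);

        internalCts.Cancel();   // cancel only the internal source
        return linkedCts.Token.IsCancellationRequested;
    }
}
```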
Calling ThrowIfCancellationRequested is extremely fast and does not introduce significant overhead in loops.
Registering Callbacks for Cancellation Requests
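A minimal sketch of registering a callback with CancellationToken.Register; by default the callback runs synchronously on the thread that calls Cancel:

```csharp
using System;
using System.Threading;

var cts = new CancellationTokenSource();
bool callbackRan = false;

// Register runs the callback when the token is canceled (or immediately,
// if it is already canceled). Disposing the registration unregisters it.
using (cts.Token.Register(() => callbackRan = true))
{
    cts.Cancel();   // invokes the registered callback synchronously here
}

Console.WriteLine(callbackRan);   // prints True
```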
To achieve the benefits of asynchrony, can’t I just wrap my synchronous methods in calls to Task.Run?
It depends on your goals for why you want to invoke the methods asynchronously. If your goal is simply to offload the work you’re doing to another thread, so as to, for example, maintain the responsiveness of your UI thread, then sure. If your goal is to help with scalability, then no, just wrapping a synchronous call in a Task.Run won’t help.
Is “await task;” the same thing as “task.Wait()”?
“task.Wait()” is a synchronous, potentially blocking call: it will not return to the caller of Wait() until the task has entered a final state, meaning that it’s completed in the RanToCompletion, Faulted, or Canceled state. In contrast, “await task;” tells the compiler to insert a potential suspension/resumption point into a method marked as “async”, such that if the task has not yet completed when it’s awaited, the async method should return to its caller, and its execution should resume when and only when the awaited task completes. Using “task.Wait()” when “await task;” would have been more appropriate can lead to unresponsive applications and deadlocks. Task.Wait blocks until the task is complete: the calling thread does nothing else in the meantime. await keeps processing messages in the message queue, and when the task is complete, it enqueues a message that says “pick up where you left off after that await”.
Can I use “await” in console apps?
Sure. You can’t use “await” inside of your Main method, however, as entry points can’t be marked as async. Instead, you can use “await” in other methods in your console app, and then if you call those methods from Main, you can synchronously wait (rather than asynchronously wait) for them to complete, e.g.
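A minimal sketch of that pattern (the method name RunAsync and the delay are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // GetAwaiter().GetResult() blocks like Wait()/Result, but rethrows
        // the original exception instead of wrapping it in AggregateException.
        int result = RunAsync().GetAwaiter().GetResult();
        Console.WriteLine(result);   // prints 42
    }

    static async Task<int> RunAsync()
    {
        await Task.Delay(10);   // stand-in for real asynchronous work
        return 42;
    }
}
```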
Both SynchronizationContext and TaskScheduler are abstractions that represent a “scheduler”, something that you give some work to, and it determines when and where to run that work. There are many different forms of schedulers. For example, the ThreadPool is a scheduler: you call ThreadPool.QueueUserWorkItem to supply a delegate to run, that delegate gets queued, and one of the ThreadPool’s threads eventually picks up and runs that delegate. Your user interface also has a scheduler: the message pump. A dedicated thread sits in a loop, monitoring a queue of messages and processing each; that loop typically processes messages like mouse events or keyboard events or paint events, but in many frameworks you can also explicitly hand it work to do, e.g. the Control.BeginInvoke method in Windows Forms, or the Dispatcher.BeginInvoke method in WPF.
But there’s one common kind of application that doesn’t have a SynchronizationContext: console apps. When your console application’s Main method is invoked, SynchronizationContext.Current will return null. That means that if you invoke an asynchronous method in your console app, unless you do something special, your asynchronous methods will not have thread affinity: the continuations within those asynchronous methods could end up running “anywhere.”
The primary difference is that SynchronizationContext is a general mechanism for working with delegates, whereas TaskScheduler is specific to and catered to Tasks (you can get a TaskScheduler that wraps a SynchronizationContext using TaskScheduler.FromCurrentSynchronizationContext). This is why awaiting Tasks takes both into account, first checking a SynchronizationContext (as the more general mechanism that most UI frameworks support), and then falling back to a TaskScheduler. Awaiting a different kind of object might choose to first use a SynchronizationContext, and then fall back to some other mechanism specific to that particular type.
There are four conditions, known as the Coffman conditions, that are necessary for a deadlock to occur. Remove any one of these, and a deadlock can’t happen:
Mutual exclusion. There’s a resource that only one entity can use at a time.
Hold and wait. An entity is holding one resource while waiting for another.
No preemption. An entity gets to decide when it releases its hold on a resource.
Circular wait. There must be a cycle of waits, such that entity 1 is waiting on a resource held by entity 2; entity 2 is waiting on a resource held by entity 3; …; and entity N is waiting on a resource held by entity 1.
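The conditions above also suggest the standard cure: break circular wait by making every thread acquire locks in the same global order, so no cycle of waits can form. A minimal sketch (names illustrative):

```csharp
using System;
using System.Threading;

static class LockOrdering
{
    private static readonly object _a = new object();
    private static readonly object _b = new object();

    // Both threads take _a before _b. Because the acquisition order is
    // globally consistent, the circular-wait condition cannot arise and
    // this always runs to completion.
    public static int RunBoth()
    {
        int done = 0;
        var t1 = new Thread(() => { lock (_a) lock (_b) Interlocked.Increment(ref done); });
        var t2 = new Thread(() => { lock (_a) lock (_b) Interlocked.Increment(ref done); });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return done;
    }
}
```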
The cancellation token returned by this property cannot be canceled; that is, its CanBeCanceled property is false.
You can also use the C# default(CancellationToken) expression to create an empty cancellation token.
IDisposable instances should always be disposed.
Use Thread.Sleep when you want to block the current thread.
Use Task.Delay when you want a logical delay without blocking the current thread.
Using Wait on an uncompleted task is indeed blocking the thread until the task completes.
Using Thread.Sleep is clearer since you’re explicitly blocking a thread instead of implicitly blocking on a task.
The only respect in which Task.Delay is preferable is that it accepts a CancellationToken, so you can cancel the delay if you want to.
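A small sketch contrasting the two (names illustrative); note that Task.Delay honors a token while Thread.Sleep cannot be canceled:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class DelayDemo
{
    // Blocks the calling thread for the whole duration.
    public static void BlockFor(int ms) => Thread.Sleep(ms);

    // Returns to the caller immediately at the await; the delay completes
    // later without holding a thread, and can be canceled via the token.
    public static async Task LogicalDelay(int ms, CancellationToken token)
    {
        await Task.Delay(ms, token);
    }
}
```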
Another pitfall in C# 5.0 arises if you run an async method using Task.Factory.StartNew(). If that’s the case, you should either call the async method directly, or, if you want to run it on a background thread, use Task.Run(), which automatically unwraps the Task&lt;Task&gt; into a simple Task.
StartNew doesn’t understand asynchronous delegates. StartNew is an extremely low-level API that should not be used in 99.99% of production code. Use Task.Run instead.
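A sketch of the difference (the method name ComputeAsync is illustrative): StartNew with an async delegate yields a Task&lt;Task&lt;int&gt;&gt; that “completes” at the first await, so Unwrap is needed, whereas Task.Run handles the async delegate directly:

```csharp
using System;
using System.Threading.Tasks;

static class StartNewPitfall
{
    static async Task<int> ComputeAsync()
    {
        await Task.Delay(10);
        return 42;
    }

    public static (int viaRun, int viaUnwrap) Demo()
    {
        // Task.Run understands async delegates: this is a Task<int> that
        // completes when ComputeAsync has fully finished.
        Task<int> good = Task.Run(() => ComputeAsync());

        // StartNew does not: it returns Task<Task<int>>, whose outer task
        // completes as soon as ComputeAsync hits its first await.
        Task<Task<int>> nested = Task.Factory.StartNew(() => ComputeAsync());
        Task<int> unwrapped = nested.Unwrap();   // proxy for the inner task

        return (good.Result, unwrapped.Result);
    }
}
```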
The best way to share information with deriving classes is the read-only property:
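For example (a hypothetical Connection base class; all names are illustrative), a protected read-only property lets subclasses read a value without being able to replace it, so the base class keeps control of its own invariants:

```csharp
using System;

public class Connection
{
    private readonly string _connectionString;

    public Connection(string connectionString)
        => _connectionString = connectionString;

    // Subclasses can read but not reassign the value.
    protected string ConnectionString => _connectionString;
}

public class LoggingConnection : Connection
{
    public LoggingConnection(string cs) : base(cs) { }

    public string Describe() => $"Connecting via {ConnectionString}";
}
```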
A consumer should only be able to set an object’s state at initialization (via the constructor). Once the object has come to life, it should be internally responsible for its own state lifecycle. Allowing consumers to affect the state adds unnecessary complexity and risk.
Weak encryption algorithms and hashing functions are used today for a number of reasons, but they should not be used to guarantee the confidentiality or integrity of the data they protect. This rule triggers when it finds TripleDES, SHA1, or RIPEMD160 algorithms in the code.
Broken cryptographic algorithms are not considered secure and their use should be strongly discouraged. This rule triggers when it finds the MD5 hash algorithm or either the DES or RC2 encryption algorithms in code.
For DES and RC2, use Aes encryption.
For TripleDES encryption, use Aes encryption.
For SHA1 or RIPEMD160 hashing functions, use ones in the SHA-2 family (e.g. SHA512, SHA384, SHA256).
For MD5, use hashes in the SHA-2 family (e.g. SHA512, SHA384, SHA256).
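The replacements above can be sketched with the System.Security.Cryptography types (a minimal example, not a complete cryptographic design; key and IV management are out of scope here):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class ModernCrypto
{
    // SHA-256 in place of MD5, SHA1, or RIPEMD160.
    public static string HashHex(string input)
    {
        using var sha = SHA256.Create();
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
        return Convert.ToHexString(digest); // .NET 5+; use BitConverter on older runtimes
    }

    // AES in place of DES, RC2, or TripleDES. Defaults to CBC with PKCS7 padding.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        aes.Key = key;   // 16, 24, or 32 bytes
        aes.IV = iv;     // 16 bytes
        using var enc = aes.CreateEncryptor();
        return enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
    }
}
```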
As an example, consider wanting to spin up another process and then asynchronously wait for that process to complete, e.g.
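One common way to do this is to wrap the Process.Exited event in a TaskCompletionSource (the extension-method name here is illustrative; .NET 5 and later ship a built-in Process.WaitForExitAsync):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class ProcessExtensions
{
    // Completes the returned task when the process exits, instead of
    // blocking a thread in WaitForExit.
    public static Task<int> WaitForExitTaskAsync(this Process process)
    {
        var tcs = new TaskCompletionSource<int>(
            TaskCreationOptions.RunContinuationsAsynchronously);

        process.EnableRaisingEvents = true;
        process.Exited += (s, e) => tcs.TrySetResult(process.ExitCode);

        // Guard against the race where the process exited before the
        // handler was attached.
        if (process.HasExited)
            tcs.TrySetResult(process.ExitCode);

        return tcs.Task;
    }
}
```

A caller can then write `int code = await someProcess.WaitForExitTaskAsync();` without tying up a thread for the lifetime of the child process.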
Task.Wait and cancellation