Futures can result in faster execution if hardware resources are available for parallel execution. However, if all cores are otherwise occupied, futures will be evaluated without parallelism.
Dynamic task parallelism is also known as recursive decomposition or “divide and conquer.”
The following code demonstrates how to implement a pipeline that uses the BlockingCollection class for the buffers and tasks for the stages of the pipeline.
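The code itself is not included in these notes; the following is a minimal sketch of the idea, with illustrative stage logic (squaring numbers) standing in for real work:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stage 1 produces numbers into a bounded buffer; stage 2 consumes them,
// squares them, and writes to a second buffer; the main thread drains it.
var buffer = new BlockingCollection<int>(boundedCapacity: 8);
var results = new BlockingCollection<int>(boundedCapacity: 8);
var output = new List<int>();

var stage1 = Task.Run(() =>
{
    try { for (int i = 1; i <= 5; i++) buffer.Add(i); }
    finally { buffer.CompleteAdding(); }   // tell downstream no more items are coming
});

var stage2 = Task.Run(() =>
{
    try
    {
        foreach (var item in buffer.GetConsumingEnumerable())
            results.Add(item * item);
    }
    finally { results.CompleteAdding(); }
});

foreach (var r in results.GetConsumingEnumerable())
    output.Add(r);                         // drains until CompleteAdding

Task.WaitAll(stage1, stage2);
// output: 1, 4, 9, 16, 25
```

The bounded capacity keeps a fast producer from racing far ahead of a slow consumer, which is the usual reason to prefer BlockingCollection over an unbounded queue here.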
When there is an unhandled exception in one pipeline stage, you should cancel the other stages. If you don’t do this, deadlock can occur.
Use a special instantiation of the CancellationTokenSource class to allow your application to coordinate the shutdown of all the pipeline stages when an exception occurs in one of them. Here’s an example.
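The example is missing from these notes; here is a minimal sketch using CancellationTokenSource.CreateLinkedTokenSource (the "special instantiation" referred to above), with a hypothetical stage name:

```csharp
using System;
using System.Threading;

// Link an external token with a source the pipeline owns: either an outside
// cancellation request or a faulted stage can then stop every stage.
var externalCts = new CancellationTokenSource();
using var linkedCts =
    CancellationTokenSource.CreateLinkedTokenSource(externalCts.Token);

void Stage(string name)
{
    try
    {
        linkedCts.Token.ThrowIfCancellationRequested();
        if (name == "stage2")                       // simulate a failing stage
            throw new InvalidOperationException("stage2 failed");
    }
    catch
    {
        linkedCts.Cancel();                         // ask all other stages to stop
        throw;
    }
}

try { Stage("stage2"); } catch (InvalidOperationException) { }
bool allStagesCancelled = linkedCts.IsCancellationRequested;  // true
```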
The BlockingCollection<T> class allows you to read values from more than one producer.
It’s easy to forget to call GetConsumingEnumerable because the BlockingCollection class implements IEnumerable<T>. Enumerating over the blocking collection instance directly won’t consume values. Watch out for this!
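A small demonstration of the difference (the collection contents are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

var bc = new BlockingCollection<int>();
bc.Add(1); bc.Add(2); bc.Add(3);

// Plain enumeration walks a snapshot; nothing is removed.
int seen = bc.Count();       // LINQ Count() enumerates: 3 items seen
int stillThere = bc.Count;   // the property: still 3 items in the collection

// GetConsumingEnumerable removes items as it yields them.
bc.CompleteAdding();         // otherwise the consuming loop would block forever
int consumed = bc.GetConsumingEnumerable().Count();  // 3
int remaining = bc.Count;    // 0: the items were actually consumed
```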
In general, using the blocking collections provided by the .NET Framework is easier and safer than writing your own implementation.
The Pipeline pattern doesn’t automatically scale with the number of cores. This is one of its limitations, unless additional parallelism is introduced within a pipeline stage itself.
The Decorator pattern overrides the behavior of an underlying class. Decorators use a “contains” relationship and inheritance.
The Adapter pattern translates from one interface to another. This pattern is applicable to parallelism because, like with the Façade pattern, it allows you to hide complexity and to expose an interface that’s easier for developers to use. Use the adapter pattern to modify an existing interface to make it easier for developers who are not familiar with parallelism to use.
ADO.NET objects don’t lock resources and must be used on only a single thread. The ADO.NET Entity Framework (EF) is built on top of ADO.NET and has the same limitation.
Parallelism is appropriate for applications that merge the results of queries from several different data sources. Applications that run on Windows Azure are good examples. Windows Azure applications often store data in a mixture of table storage and blob storage and may break data up between databases for reasons of scalability and performance.
Here are some guidelines for accessing data:
● Don’t share ADO.NET connections between tasks. ADO.NET is not thread safe. Each task should use its own connection.
● Keep connections to the data source open for as little time as possible. Explicitly (manually) close connections to ensure that unused connections aren’t left open. Don’t rely on garbage collection to do this for you.
● Use tasks to parallelize connections to different databases. Using tasks to open multiple database connections to the same database may have significant performance implications.
Normally, instances of an object are created using the class constructor. This is shown in the following example.
Instead, you want to be able to obtain the single instance, as shown here:
Here, Instance is a static read-only property of the class, and each time it’s called, it’ll refer to the same single instance of MyClass. By adding this
special Instance property, MyClass is transformed into a singleton class.
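The code referred to above is not present in these notes; here is a minimal sketch of the pattern using a hypothetical MyClass, with Lazy<T> providing thread-safe initialization:

```csharp
using System;

// The single instance is created lazily and exactly once, even under
// concurrent first access, because Lazy<T> is thread safe by default.
Console.WriteLine(ReferenceEquals(MyClass.Instance, MyClass.Instance)); // True

public class MyClass
{
    private static readonly Lazy<MyClass> _instance =
        new Lazy<MyClass>(() => new MyClass());

    public static MyClass Instance => _instance.Value;

    private MyClass() { }  // private constructor: no outside instantiation
}
```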
The application uses continuation tasks instead of event handlers or other callback mechanisms. In fact, you can think of a continuation task as a kind of callback.
Here are some guidelines for using futures and continuations:
● Use continuation tasks that are attached to the model’s tasks to update the view model. This approach helps to decouple the model from the view.
● Use immutable types to pass results from the continuation tasks to the UI thread. This makes it easier to ensure that your code is correct.
● Use continuation tasks instead of background worker threads when you want to execute several different tasks in the background and merge the
results before passing them back to the UI thread.
● Use the correct task scheduler to make sure that modifications to UI objects, which have thread affinity, take place on the main UI thread.
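The guidelines above can be sketched as follows; this merges two background results with an attached continuation. In a real UI application the ContinueWhenAll call would also be given TaskScheduler.FromCurrentSynchronizationContext() so the continuation runs on the UI thread; that argument is omitted here so the sketch runs anywhere:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Two model tasks compute in the background.
var t1 = Task.Run(() => 2);
var t2 = Task.Run(() => 3);

// The continuation fires only when both antecedents finish, and merges
// their results before anything is handed back to the caller.
Task<int> merged = Task.Factory.ContinueWhenAll(
    new[] { t1, t2 },
    tasks => tasks.Sum(t => t.Result));

int total = merged.Result;  // 5
```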
Don’t overlook the importance of immutable types in parallel programming. They can improve reliability as well as performance.
Immutable types are a very useful form of scalable sharing. For example, when you append to a string, the result is a string that includes the addition. The original string is unmodified. This means that you can always use a particular string value in parallel code without worrying that another task or thread is modifying it. The combination of immutable types and the .NET Framework concurrent collection classes is particularly useful.
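A tiny illustration of the string example:

```csharp
// Appending produces a brand-new string; the original is never modified,
// so any thread can keep reading it safely.
string original = "parallel";
string appended = original + " code";
// original is still "parallel"; appended is "parallel code"
```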
The Adatum Dashboard example passes StockData and StockDataCollection objects between tasks. No locking is required because the types themselves are immutable and therefore free of side effects.
Collections from the namespace System.Collections.Generic are not thread safe. You must use the collections in System.Collections.Concurrent whenever collections are shared by more than one thread.
● Think twice before sharing data.
● Where possible, use shared data collections in preference to locks.
● Use the shared data classes provided by the .NET Framework 4 in preference to writing your own classes.
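A minimal sketch of the last two guidelines, using ConcurrentDictionary instead of a lock around an ordinary Dictionary:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Many iterations update one dictionary concurrently; ConcurrentDictionary
// makes this safe without any explicit lock.
var counts = new ConcurrentDictionary<string, int>();
Parallel.For(0, 1000, i =>
    counts.AddOrUpdate("hits", 1, (key, old) => old + 1));
int hits = counts["hits"];  // 1000
```

AddOrUpdate retries internally on contention, so every increment lands even though no lock is taken.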
Multicore vs multiprocessor
If both systems have the same clock speed, number of CPUs and cores, and amount of RAM, the multicore system will run a single program more efficiently. Because the cores share cache (L1, L2, or L3), they can cooperate cheaply on the instructions of one program, but that same sharing causes contention when the cores run multiple independent programs. This is where the multiprocessor comes in handy: with each CPU having its own cache, the CPUs can execute separate programs simultaneously, although a single program will typically take longer than on the multicore system.
To summarize, a multicore system is the more favorable system for ordinary users. It does not demand any support or extra configuration and will likely cost a bit less. Performance-wise, depending on how you run your programs, each has its pros and cons.
fork/join. A parallel computing pattern that uses task parallelism. Fork occurs when tasks start; join occurs when all tasks finish.
You probably should not return an array as the value of a public method or property, particularly when the information content of the array is logically immutable.
Returning an array means that you have to make a fresh defensive copy of the array every time you return it, since otherwise the caller could mutate your internal state. If you get called a hundred times, you had better make a hundred array instances, no matter how large they are. It’s a performance nightmare.
You can build yourself a nice read-only collection object once, and then just pass out references to it as much as you want.
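A sketch of that approach (names illustrative):

```csharp
using System;
using System.Collections;
using System.Collections.ObjectModel;

// Build the collection once, then hand out the same read-only view every time.
int[] data = { 1, 2, 3 };
ReadOnlyCollection<int> view = Array.AsReadOnly(data);

int first = view[0];                          // reading is fine
bool readOnly = ((IList)view).IsReadOnly;     // true: no mutation through the view
```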
A frequently-stated principle of good software design is that code which calls attention to bugs by throwing exceptions is better than code which hides bugs by muddling on through and doing the wrong thing.
multi-cast delegates are immutable; when you add or remove a handler, you replace the existing multi-cast delegate object with a different delegate object that has different behavior. You do not modify an existing object, you modify the variable that stores the event handler. Therefore, stashing away the current reference stored in that variable into a temporary variable effectively makes a copy of the current state.
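The copy-to-a-temporary idiom described above looks like this in practice (Ticker and Ticked are hypothetical names):

```csharp
using System;

var ticker = new Ticker();
int calls = 0;
ticker.Ticked += (sender, e) => calls++;
ticker.Tick();   // calls is now 1

public class Ticker
{
    public event EventHandler Ticked;

    public void Tick()
    {
        // Snapshot the current (immutable) delegate: even if another thread
        // unsubscribes right now, the copy still refers to the old list.
        EventHandler handler = Ticked;
        handler?.Invoke(this, EventArgs.Empty);
    }
}
```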
“Thread safety” is nothing more nor less than a code contract, like any other code contract. You agree to talk to an object in a particular manner, and it agrees to give you correct results if you do so; working out exactly what that manner is, and what the correct responses are, is a potentially tough problem.
Exceptions are not completely immutable in .NET. The exception object’s stack trace is set at the point where the exception is thrown (every time it is thrown), not at the point where it is created. A bare “throw;” does not reset the stack trace; “throw ex;” does.
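This behaviour can be demonstrated; the NoInlining attribute keeps the inner method visible in the trace:

```csharp
using System;
using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.NoInlining)]
static void Inner() => throw new InvalidOperationException("boom");

static string TraceOf(Action a)
{
    try { a(); } catch (Exception ex) { return ex.StackTrace ?? ""; }
    return "";
}

// "throw;" keeps the original throw point, so Inner stays in the trace.
string preserved = TraceOf(() => { try { Inner(); } catch { throw; } });

// "throw ex;" resets the trace to the re-throw point, losing Inner.
string reset = TraceOf(() => { try { Inner(); } catch (Exception ex) { throw ex; } });

bool innerVisible = preserved.Contains("Inner");
bool innerGone = !reset.Contains("Inner");
```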
Making exceptions is cheap, and you’re already in a fatal error situation; it doesn’t matter if the app crashes a few microseconds slower. Take the time to allocate as many exceptions as you plan on throwing.
How to: Write a Parallel.For Loop with Thread-Local Variables
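No code follows this heading in the notes; here is a minimal sketch of the technique, based on the Parallel.For overload that takes localInit and localFinally delegates:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Each task maintains a private running subtotal (the thread-local state);
// only the final merge touches the shared total.
int[] nums = new int[1000];
for (int i = 0; i < nums.Length; i++) nums[i] = i;

long total = 0;
Parallel.For<long>(
    0, nums.Length,
    () => 0,                                    // init each task's subtotal
    (i, state, subtotal) => subtotal + nums[i], // body: no shared writes
    subtotal => Interlocked.Add(ref total, subtotal)); // merge once per task

// total == 0 + 1 + ... + 999 == 499500
```

The point of the pattern is that synchronization (the Interlocked.Add) happens once per task instead of once per iteration.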
Interlocked helps with threaded programs: it safely changes the value of a shared variable from multiple threads. The same effect is possible with the lock statement, but you can instead use the Interlocked type for simpler and faster code.
Interlocked.Add. When using Interlocked, forget all you know about addition, subtraction and assignment operators. Instead, you will use the Add, Increment, Decrement, Exchange and CompareExchange methods.
Interlocked.Increment was several times faster, requiring only 6 nanoseconds versus 40 nanoseconds for the lock construct.
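A minimal example of Interlocked.Increment replacing both the ++ operator and a lock:

```csharp
using System.Threading;
using System.Threading.Tasks;

int counter = 0;
Parallel.For(0, 10_000, i => Interlocked.Increment(ref counter));
// counter == 10000; a plain counter++ here would lose updates under contention
```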
Cancel a Parallel.For or ForEach Loop
In a Parallel.For or Parallel.ForEach loop, you cannot use the same break or Exit statement that is used in a sequential loop because those language
constructs are valid for loops, and a parallel “loop” is actually a method, not a loop. Instead, you use either the Stop or Break method.
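A sketch of Break in action (Stop is used the same way, but makes no guarantee that iterations below the stopping index run):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Break guarantees that all iterations below the breaking index still run;
// iterations above it may or may not have started.
var processed = new ConcurrentBag<int>();
ParallelLoopResult result = Parallel.For(0, 1000, (i, state) =>
{
    if (i == 100) { state.Break(); return; }
    processed.Add(i);
});

bool completed = result.IsCompleted;              // false: the loop was broken
long? lowestBreak = result.LowestBreakIteration;  // 100
```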
C# 8 feature
Nullable reference types (reference types are non-nullable by default when the feature is enabled)
Here is a catalog of the operations that you often find in collection pipelines. Every language makes different choices on what operations are available and what they are called, but I’ve tried to look at them through their common capabilities.
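In C#, LINQ provides exactly such a pipeline; a small filter/map/reduce chain:

```csharp
using System;
using System.Linq;

// A typical collection pipeline: filter, then map, then reduce.
var words = new[] { "alpha", "beta", "gamma", "delta" };
int totalLetters = words
    .Where(w => w.Length > 4)   // filter: alpha, gamma, delta
    .Select(w => w.Length)      // map: 5, 5, 5
    .Sum();                     // reduce: 15
```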
Using the Caller Info Attributes and Logging
You can think of a delegate type as being a bit like an interface with a single method. Delegate types are declared with the delegate keyword. They can appear either on their own or nested within a class, as shown below.
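The declaration the text refers to is not included here; a minimal sketch (Transformer is an illustrative name):

```csharp
using System;

int Double(int x) => x * 2;

Transformer t = Double;  // any method matching the signature can be assigned
int result = t(21);      // 42

// A delegate type declared on its own:
delegate int Transformer(int input);
```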
It’s important to understand that delegate instances are always immutable. Anything which combines them together (or takes one away from the other) creates a new delegate instance to represent the new list of targets/methods to call. This is just like strings: if you call String.PadLeft for instance, it doesn’t actually change the string you call it on – it just returns a new string with the appropriate padding.
First things first: events aren’t delegate instances. Let’s try that again. Events aren’t delegate instances.
Event Handler Add/Remove
A shortcut: field-like events
C# provides a simple way of declaring both a delegate variable and an event at the same time. This is called a field-like event, and is declared very simply – it’s the same as the “longhand” event declaration, but without the “body” part:
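The declaration itself is missing from these notes; a minimal sketch with a hypothetical Publisher class:

```csharp
using System;

var publisher = new Publisher();
string received = "";
publisher.SomethingHappened += (sender, e) => received = "handled";
publisher.Raise();   // received is now "handled"

public class Publisher
{
    // Field-like event: the compiler generates the backing delegate field
    // and the add/remove accessors for us.
    public event EventHandler SomethingHappened;

    public void Raise() => SomethingHappened?.Invoke(this, EventArgs.Empty);
}
```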
Event handler sample.
What’s the point of having both delegates and events?
The answer is encapsulation. Suppose events didn’t exist as a concept in C#/.NET. How would another class subscribe to an event? Three options:
1. A public delegate variable
2. A delegate variable backed by a property
3. A delegate variable with AddXXXHandler and RemoveXXXHandler methods
Option 1 is clearly horrible, for all the normal reasons we abhor (regard with disgust and hatred) public variables.
Option 2 is slightly better, but allows subscribers to effectively override each other – it would be all too easy to write
someInstance.MyEvent = eventHandler; which would replace any existing event handlers rather than adding a new one. In addition, you still need to write the properties.
Option 3 is basically what events give you, but with a guaranteed convention (generated by the compiler and backed by extra flags in the IL) and a “free” implementation if you’re happy with the semantics that field-like events give you. Subscribing to and unsubscribing from events is encapsulated without allowing arbitrary access to the list of event handlers, and languages can make things simpler by providing syntax for both declaration and subscription.
The delegates are executed on threads created by the system thread-pool.
Async Delegate Example:
The calls to EndInvoke block until the delegate has completed, in much the same way that calls to Thread.Join block until the threads involved have terminated.
Async Delegate example
Here is the callback function:
Thread-pool threads are background threads – without the extra Sleep call, the application would terminate before the delegate calls finished executing.
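The async delegate example itself is missing from these notes. Note also that BeginInvoke/EndInvoke on delegates is supported only on .NET Framework; the sketch below therefore shows the Task-based equivalent, where Result blocks just as EndInvoke does:

```csharp
using System;
using System.Threading.Tasks;

// On .NET Framework you would call work.BeginInvoke/EndInvoke; on modern
// .NET, queue the delegate to the thread pool with Task.Run instead.
Func<int, int> work = n => n * n;

Task<int> pending = Task.Run(() => work(7));  // runs on a thread-pool thread
int answer = pending.Result;                  // blocks until done, then 49
```

Blocking on Result (or calling Wait) also keeps the process alive, which addresses the background-thread problem described above without the extra Sleep call.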
Closures allow you to encapsulate some behaviour, pass it around like any other object, and still have access to the context in which they were first declared.
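A minimal closure sketch (MakeCounter is an illustrative name):

```csharp
using System;

// The lambda captures the local 'count'; the variable outlives MakeCounter,
// so every call through the returned delegate sees the same context.
Func<int> MakeCounter()
{
    int count = 0;
    return () => ++count;
}

var counter = MakeCounter();
counter();
counter();
int third = counter();  // 3: the captured count persisted between calls
```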
The delegate is a type that defines a signature, that is, the return value type and parameter list types for a method. You can use the delegate type to declare a variable that can refer to any method with the same signature as the delegate.
The EventHandler delegate is a predefined delegate that specifically represents an event handler method for an event that does not generate data. If your event does generate data, you must use the generic EventHandler<TEventArgs> delegate class.
To associate the event with the method that will handle the event, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate.
Events and Inheritance
When creating a general component that can be derived from, a problem sometimes arises with events. Since events can only be invoked from within the class that declared them, derived classes cannot directly invoke events declared within the base class. Although this is sometimes what is desired, often it is appropriate to give the derived class the freedom to invoke the event. This is typically done by creating a protected invoking method for the event. By calling this invoking method, derived classes can invoke the event. For even more flexibility, the invoking method is often declared as virtual, which allows the derived class to override it. This allows the derived class to intercept the events that the base class is invoking, possibly doing its own processing of them.
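The protected virtual invoking method described above can be sketched like this (class and member names are illustrative):

```csharp
using System;

var derived = new Derived();
derived.Trigger();   // derived.Intercepted is now true

public class BaseComponent
{
    public event EventHandler Changed;

    // Derived classes cannot invoke Changed directly, so expose a protected
    // virtual invoking method they can call or override.
    protected virtual void OnChanged(EventArgs e) => Changed?.Invoke(this, e);

    public void Trigger() => OnChanged(EventArgs.Empty);
}

public class Derived : BaseComponent
{
    public bool Intercepted { get; private set; }

    protected override void OnChanged(EventArgs e)
    {
        Intercepted = true;   // do our own processing first...
        base.OnChanged(e);    // ...then let the base class raise the event
    }
}
```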
The event keyword indicates to the compiler that the delegate can be invoked only by the defining class, and that other classes can subscribe to and unsubscribe from the delegate using only the appropriate += and -= operators, respectively.
Adding the event keyword fixes both problems. Classes can no longer attempt to subscribe to the event using the assignment operator (=), as they could previously, nor can they invoke the event directly, as was done in the preceding example. Either of these attempts will now generate a compile error:
If you subscribe your method as anonymous, you cannot invoke the method except through the delegate; but that is exactly what you want.
In .NET, all event handlers return void, and take two parameters. The first parameter is of type object and is the object that raises the event; the second argument is an object of type EventArgs or of a type derived from EventArgs, which may contain useful information about the event.
A lambda expression is an expression using the operator => that returns an unnamed method. Lambda expressions are similar to anonymous methods, but aren’t restricted to being used as delegates.
Some examples of responsibilities to consider that may need to be separated include:
● Error Handling
● Class Selection / Instantiation
Usage errors should never occur in production code. For example, if passing a null reference as one of the method’s arguments causes an error state (usually represented by an ArgumentNullException exception), you can modify the calling code to ensure that a null reference is never passed. For all other errors, exceptions that occur when an asynchronous method is running should be assigned to the returned task, even if the asynchronous method happens to complete synchronously before the task is returned. Typically, a task contains at most one exception. However, if the task represents multiple operations (for example, WhenAll), multiple exceptions may be associated with a single task.
Consumers of a TAP method may safely assume that the returned task is active and should not try to call Start on any Task that is returned from a TAP method. Calling Start on an active task results in an InvalidOperationException exception.
If the cancellation request results in work being ended prematurely, the TAP method returns a task that ends in the Canceled state; there is no available result and no exception is thrown. The Canceled state is considered to be a final (completed) state for a task, along with the Faulted and RanToCompletion states.
For example, consider an asynchronous method that renders an image. The body of the task can poll the cancellation token so that the code may exit early if a cancellation request arrives during rendering. In addition, if the cancellation request arrives before rendering starts, you’ll want to prevent the rendering operation:
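The rendering example is not included here; a sketch with a hypothetical RenderAsync, covering both the early-arrival case and polling:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A TAP-style method: passing the token to Task.Run prevents the work from
// starting if cancellation has already been requested, and the body polls
// the token to exit early once rendering is under way.
Task<int> RenderAsync(int rows, CancellationToken token)
{
    return Task.Run(() =>
    {
        int rendered = 0;
        for (int row = 0; row < rows; row++)
        {
            token.ThrowIfCancellationRequested();  // poll during rendering
            rendered++;
        }
        return rendered;
    }, token);
}

var cts = new CancellationTokenSource();
cts.Cancel();                                // request arrives before rendering

var task = RenderAsync(10, cts.Token);
try { task.Wait(); } catch (AggregateException) { }
bool cancelled = task.IsCanceled;            // true: Canceled is its final state
```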
The OrdinalIgnoreCase property treats the characters in the strings to compare as if they were converted to uppercase using the conventions of the invariant culture, and then performs a simple byte comparison that is independent of language. This is most appropriate when comparing strings that are generated programmatically or when comparing case-insensitive resources such as paths and filenames.
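A small illustration:

```csharp
using System;

// Culture-independent, case-insensitive: the right choice for paths and
// file names, which are not linguistic text.
bool samePath = string.Equals(
    @"C:\Data\REPORT.TXT",
    @"c:\data\report.txt",
    StringComparison.OrdinalIgnoreCase);   // true
```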
For example, if you wanted to download a bunch of web pages asynchronously and then only do something when all of them were complete, with WebClient by itself you’d have to code up that logic manually; in contrast, if you had a Task<T> for each download, you could then use ContinueWhenAll to launch additional work when all of those downloads completed (and you’d get back a Task that represents that continuation). In short, once you have a Task<T> to represent the EAP operation, you can compose it with any other tasks you might have, and you have the full functionality afforded to tasks to process that EAP operation.
Usage of the returned task:
Another TaskCompletionSource example:
Execution of the method:
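The examples referred to above are missing from these notes; here is a minimal TaskCompletionSource sketch, with a hypothetical Worker class standing in for an EAP type such as WebClient:

```csharp
using System;
using System.Threading.Tasks;

var worker = new Worker();
Task<int> task = worker.RunTaskAsync(21);
int result = task.Result;   // 42: the event's completion surfaced via the task

public class Worker
{
    public event Action<int> Completed;     // EAP-style completion event

    public void RunAsync(int input) =>
        Task.Run(() => Completed?.Invoke(input * 2));  // simulated async work

    // Wrap the event-based operation in a task that completes with the event.
    public Task<int> RunTaskAsync(int input)
    {
        var tcs = new TaskCompletionSource<int>();
        Completed += value => tcs.TrySetResult(value);
        RunAsync(input);
        return tcs.Task;     // composes like any other task (WhenAll, etc.)
    }
}
```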