C# (Part 3)

Under the covers, the await functionality installs a callback on the task by using a continuation. This callback resumes the asynchronous method at the point of suspension.

The purpose of the synchronization model implemented by this class is to allow the internal asynchronous/synchronous operations of the common language runtime to behave properly with different synchronization models.

Task.Yield usage. In the example below, a for loop runs inside an async method. Even though the iterations may resume on different threads after each yield, they still execute in order.

Creates an awaitable task that asynchronously yields back to the current context when awaited. You can use await Task.Yield(); in an asynchronous method to force the method to complete asynchronously. Do not rely on await Task.Yield(); to keep a UI responsive.
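A minimal sketch of such a loop (names are mine; which pool thread each continuation lands on varies by scheduler):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class YieldLoop
{
    static async Task LoopAsync()
    {
        for (int i = 0; i < 5; i++)
        {
            // Force the rest of this iteration to run as an asynchronous
            // continuation, typically on a thread-pool thread.
            await Task.Yield();

            // The continuations may land on different threads, but the
            // iterations still complete strictly in order: 0, 1, 2, 3, 4.
            Console.WriteLine($"i={i} thread={Thread.CurrentThread.ManagedThreadId}");
        }
    }

    static void Main() => LoopAsync().GetAwaiter().GetResult();
}
```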

http://stackoverflow.com/questions/22645024/when-would-i-use-task-yield

When you use async/await, there is no guarantee that the method you call when you do await FooAsync() will actually run asynchronously. The internal implementation is free to return using a completely synchronous path.

If you’re making an API where it’s critical that you don’t block and you run some code asynchronously, and there’s a chance that the called method will run synchronously (effectively blocking), using await Task.Yield() will force your method to be asynchronous, and return control at that point. The rest of the code will execute at a later time (at which point, it still may run synchronously) on the current context.

This can also be useful if you write an asynchronous method that requires some “long running” initialization, i.e.:

Without the Task.Yield() call, the method will execute synchronously all the way up to the first call to await.
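A sketch of that shape (the method and helper names are placeholders; the point is only where the Task.Yield() sits):

```csharp
using System;
using System.Threading.Tasks;

class InitDemo
{
    static async Task LoadAsync()
    {
        // Return an incomplete Task to the caller right away; without this
        // line, everything up to the Task.Delay below would run
        // synchronously on the caller's thread.
        await Task.Yield();

        BuildExpensiveIndex();   // "long running" synchronous initialization
        await Task.Delay(10);    // first genuinely asynchronous operation
    }

    static void BuildExpensiveIndex() { /* CPU-bound setup */ }

    static void Main() => LoadAsync().GetAwaiter().GetResult();
}
```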

http://stackoverflow.com/a/23441833/929902

For a non-UI thread with no synchronization context, await Task.Yield() just switches the continuation to a random pool thread. There is no guarantee it is going to be a different thread from the current thread, it’s only guaranteed to be an asynchronous continuation. If ThreadPool is starving, it may schedule the continuation onto the same thread.

In ASP.NET, doing await Task.Yield() doesn’t make sense at all, except for the workaround mentioned in @StephenCleary’s answer. Otherwise, it will only hurt the web app performance with a redundant thread switch.

So, is await Task.Yield() useful? IMO, not much. It can be used as a shortcut to run the continuation via SynchronizationContext.Post or ThreadPool.QueueUserWorkItem, if you really need to impose asynchrony upon a part of your method.

Task.FromResult

Use the FromResult<TResult> method in scenarios where data may already be available and just needs to be returned from a task-returning method lifted into a Task<TResult>:
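A sketch of the "data may already be available" scenario: a cached download, where the hot path answers with a completed Task<string> instead of awaiting anything. The cache shape and class name are illustrative only.

```csharp
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

class DownloadCache
{
    private static readonly ConcurrentDictionary<string, string> s_cache = new();
    private static readonly HttpClient s_client = new();

    public Task<string> GetContentsAsync(string url)
    {
        // Hot path: the value is already here, so lift it into a
        // completed Task<string> rather than building an async state machine.
        if (s_cache.TryGetValue(url, out var cached))
            return Task.FromResult(cached);

        return DownloadAndCacheAsync(url);
    }

    private async Task<string> DownloadAndCacheAsync(string url)
    {
        var contents = await s_client.GetStringAsync(url);
        s_cache[url] = contents;
        return contents;
    }
}
```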

Interleaving

Consider a case where you’re downloading images from the web and processing each image (for example, adding the image to a UI control). You have to do the processing sequentially on the UI thread, but you want to download the images as concurrently as possible. Also, you don’t want to hold up adding the images to the UI until they’re all downloaded—you want to add them as they complete:
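The usual way to get this interleaving is Task.WhenAny in a loop. Below is a self-contained sketch; DownloadImageAsync and AddToUi are stand-ins for the real download and UI code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class Interleaving
{
    // Stand-ins for the real download and UI code.
    static async Task<string> DownloadImageAsync(string url)
    {
        await Task.Delay(20 + url.Length); // simulate varying download times
        return $"image:{url}";
    }

    static void AddToUi(string image) => Console.WriteLine($"Added {image}");

    static async Task DownloadAndShowAsync(IEnumerable<string> urls)
    {
        var downloads = urls.Select(DownloadImageAsync).ToList();
        while (downloads.Count > 0)
        {
            // WhenAny completes as soon as ANY download finishes...
            var finished = await Task.WhenAny(downloads);
            downloads.Remove(finished);

            // ...so each image is added the moment it's ready, while the
            // remaining downloads keep running concurrently.
            AddToUi(await finished);
        }
    }

    static Task Main() => DownloadAndShowAsync(new[] { "a.png", "b.png", "c.png" });
}
```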

 

The general pattern for implementing the cooperative cancellation model is:

  • Instantiate a CancellationTokenSource object, which manages and sends cancellation notification to the individual cancellation tokens.
  • Pass the token returned by the CancellationTokenSource.Token property to each task or thread that listens for cancellation.
  • Call the CancellationToken.IsCancellationRequested method from operations that receive the cancellation token. Provide a mechanism for each task or thread to respond to a cancellation request. Whether you choose to cancel an operation, and exactly how you do it, depends on your application logic.
  • Call the CancellationTokenSource.Cancel method to provide notification of cancellation. This sets the CancellationToken.IsCancellationRequested property on every copy of the cancellation token to true.

Call the Dispose method when you are finished with the CancellationTokenSource object.
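A minimal end-to-end sketch of the four steps above plus disposal (names are mine):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        // 1. Instantiate the source that owns cancellation (disposed by `using`).
        using var cts = new CancellationTokenSource();

        // 2. Pass the token to the operation that listens for cancellation.
        var worker = Task.Run(() => CountForever(cts.Token));

        Thread.Sleep(100);

        // 4. Signal cancellation; every copy of the token observes it.
        cts.Cancel();

        try { worker.Wait(); }
        catch (AggregateException ex) when (ex.InnerException is OperationCanceledException)
        {
            Console.WriteLine("Worker cancelled cooperatively.");
        }
    }

    static void CountForever(CancellationToken token)
    {
        long i = 0;
        while (true)
        {
            // 3. Observe the request (ThrowIfCancellationRequested is the
            // throwing form of checking IsCancellationRequested).
            token.ThrowIfCancellationRequested();
            i++;
        }
    }
}
```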

 

All public and protected members of CancellationTokenSource are thread-safe and may be used concurrently from multiple threads, with the exception of Dispose, which must only be used when all other operations on the CancellationTokenSource object have completed.
Real life scenarios for using TaskCompletionSource<T>?

The CancellationToken is a struct so many copies could exist due to passing it along to methods. The CancellationTokenSource sets the state of ALL copies of a token when calling Cancel on the source.

Convert synchronous zip operation to async

An ExecutionContext that is associated with a thread cannot be set on another thread. Attempting to do so will result in an exception being thrown. To propagate the ExecutionContext from one thread to another, make a copy of the ExecutionContext.

It’s important to make a distinction between two different types of concurrency. Asynchronous concurrency is when you have multiple asynchronous operations in flight (and since each operation is asynchronous, none of them are actually using a thread). Parallel concurrency is when you have multiple threads each doing a separate operation.

This is an essential truth of async in its purest form: There is no thread.

Some time after the write request started, the device finishes writing. It notifies the CPU via an interrupt. The device driver’s Interrupt Service Routine (ISR) responds to the interrupt. An interrupt is a CPU-level event, temporarily seizing control of the CPU away from whatever thread was running. You could think of an ISR as “borrowing” the currently-running thread, but I prefer to think of ISRs as executing at such a low level that the concept of “thread” doesn’t exist – so they come in “beneath” all threads, so to speak. Anyway, the ISR is properly written, so all it does is tell the device “thank you for the interrupt” and queue a Deferred Procedure Call (DPC).

The task has captured the UI context, so it does not resume the async method directly on the thread pool thread. Instead, it queues the continuation of that method onto the UI context, and the UI thread will resume executing that method when it gets around to it.

Device (Hard Disk) > notifies the CPU via an interrupt > the device driver’s Interrupt Service Routine (ISR) responds to the interrupt.

A process requesting I/O services is not notified of completion of the I/O services, but instead checks the I/O completion port’s message queue to determine the status of its I/O requests. The I/O completion port manages multiple threads and their concurrency.

  • epoll tells you when a file descriptor is ready to perform a requested operation – such as “you can read from this socket now”.
  • IOCP tells you when a requested operation is completed, or failed to complete – such as “the requested read operation has been completed”.

IOCP works by queuing the ReadFile and WriteFile operations, which will complete later. Both read and write operate on buffers, and require the buffers passed to them to be left intact until the operation completes. Moreover, you are not allowed to touch the data in those buffers while the operation is in flight.

Both epoll and IOCP are suitable for, and typically used to write high performance networking servers handling a large number of concurrent connections.

An interrupt handler, also known as an interrupt service routine or ISR, is a callback function in microcontroller firmware, an operating system, or a device driver whose execution is triggered by the reception of an interrupt.

For example, pressing a key on a computer keyboard, or moving the mouse, triggers interrupts that call interrupt handlers which read the key, or the mouse’s position, and copy the associated information into the computer’s memory.

An interrupt handler is the low-level counterpart of an event handler.

An interrupt service routine (ISR) is a software routine that hardware invokes in response to an interrupt. ISRs examine an interrupt and determine how to handle it. ISRs handle the interrupt, and then return a logical interrupt value.

A Deferred Procedure Call (DPC) is a Microsoft Windows operating system mechanism which allows high-priority tasks (e.g. an interrupt handler) to defer required but lower-priority tasks for later execution. This permits device drivers and other low-level event consumers to perform the high-priority part of their processing quickly, and schedule non-critical additional processing for execution at a lower priority.

Each processor has a separate DPC queue. DPCs have three priority levels: low, medium and high. DPCs execute directly on the CPU, “beneath” the threading system.

The DPC takes the IRP representing the write request and marks it as “complete”. However, that “completion” status only exists at the OS level; the process has its own memory space that must be notified. So the OS queues a special-kernel-mode Asynchronous Procedure Call (APC) to the thread owning the HANDLE.

When working with streaming audio or video that uses interrupts, DPCs are used to process the audio in each buffer as they stream in. If another DPC (from a poorly written driver) takes too long and another interrupt generates a new buffer of data, before the first one can be processed, a drop-out results.

An Interrupt Request Level (IRQL) is a hardware independent means with which Windows prioritizes interrupts that come from the system’s processors. On processor architectures which Windows runs on, hardware generates signals which are sent to an interrupt controller. The interrupt controller sends an interrupt request (or IRQ) to the CPU with a certain priority level, and the CPU sets a mask which causes any other interrupts with a lower priority to be put into a pending state, until the CPU releases control back to the interrupt controller. If a signal comes in at a higher priority, then the current interrupt will be put into a pending state, the CPU sets the interrupt mask to the priority and places any interrupts with a lower priority into a pending state until the CPU finishes handling the new, higher priority interrupt.

It’s better to think of an async operation as a ‘message’ that gets passed around, and that message changes forms several times (i.e. from a system call => IRP => APC => UI Dispatcher Queue item). Every time this message is processed, it ends up changing forms.

Now everything is at the mercy of the device-driver programmer to spin-lock the right memory locations and coordinate all the DPCs (not time-sliced, but genuinely simultaneous) so that I/O completes without incident and the more orderly sequence described in the article can return control to Windows user land.

The idea that “there must be a thread somewhere processing the asynchronous operation” is not the truth.

Free your mind. Do not try to find this “async thread” — that’s impossible. Instead, only try to realize the truth: There is no thread.

If you have CPU-bound code to run, then that code has to run on a thread.

Now, you can push the CPU-bound code to a thread pool thread (e.g., `Task.Run`) and then await it from the UI thread so that your UI thread is not blocked. But that’s not an actual asynchronous operation – it’s a synchronous operation on another thread.

A method marked as async may only become asynchronous when it performs an await. I assume that at the end of your call chain you’d have an async method without an await – and the compiler will warn you that it will execute synchronously.

The core idea to keep in mind is that a method should only be marked “async” if it has asynchronous work to do. If the method only has synchronous work to do, then it should have a synchronous API, not an asynchronous (Task-returning) one. So, in your example, the entire call chain should be synchronous, not asynchronous.

However, when you’re consuming an API, the situation is a bit different. An API method is either asynchronous or synchronous, and there’s nothing you can do as a consumer to change the nature of the API. So if you have a blocking API, you can’t consume it asynchronously – that doesn’t make sense. If you’re writing a UI app, you have the option of pushing the blocking to a background thread, but since it’s the actual API implementation that does the blocking, you can’t entirely avoid the blocking.

When a completion packet is released to a thread, the system releases the last (most recent) thread associated with that port, passing it the completion information for the oldest I/O completion. A single thread can be associated with, at most, one I/O completion port.

An I/O completion port is associated with the process that created it and is not sharable between processes. However, a single handle is shareable between threads in the same process.

The extra threads appear to be useless and never run, but that assumes that the running thread never gets put in a wait state by some other mechanism, terminates, or otherwise closes its associated I/O completion port. Consider all such thread execution ramifications (a consequence of an action or event, especially when complex or unwelcome)  when designing the application.

A good rule of thumb is to have at least twice as many threads in the thread pool as there are processors on the system. On a 4-core machine, that means at least 8 threads in the pool.

A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.

The thread context includes the thread’s set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread’s process. Threads can also have their own security context, which can be used for impersonating clients.

Preemptive multitasking is a model in which the operating system uses some criteria to decide how long to allocate to any one task before giving another task a turn to use the CPU. The act of taking control away from one task and giving it to another is called preempting.

Microsoft Windows supports preemptive multitasking, which creates the effect of simultaneous execution of multiple threads from multiple processes. On a multiprocessor computer, the system can simultaneously execute as many threads as there are processors on the computer.

An application can use the thread pool to reduce the number of application threads and provide management of the worker threads.

The length of the time slice depends on the operating system and the processor. Because each time slice is small (approximately 20 milliseconds), multiple threads appear to be executing at the same time. This is actually the case on multiprocessor systems, where the executable threads are distributed among the available processors. However, you must use caution when using multiple threads in an application, because system performance can decrease if there are too many threads.

It is typically more efficient for an application to implement multitasking by creating a single, multithreaded process, rather than creating multiple processes, for the following reasons:

  • The system can perform a context switch more quickly for threads than processes, because a process has more overhead than a thread does (the process context is larger than the thread context).
  • All threads of a process share the same address space and can access the process’s global variables, which can simplify communication between threads.
  • All threads of a process can share open handles to resources, such as files and pipes.

Summary: Single Process Multiple Threads > Multiple Process Single or Multiple Threads.

The recommended guideline is to use as few threads as possible, thereby minimizing the use of system resources. This improves performance. Multitasking has resource requirements and potential conflicts to be considered when designing your application.

A stack is freed when its thread exits. It is not freed if the thread is terminated by another thread.

For the threads of a single process, critical-section objects provide a more efficient means of synchronization than mutexes. A critical section is used like a mutex to enable one thread at a time to use the protected resource.

 

Entity Framework async features are there to support an asynchronous programming model, not to enable parallelism.

Func<T, TResult> Delegate

Encapsulates a method that has one parameter and returns a value of the type specified by the TResult parameter.

A lambda expression is an anonymous function that you can use to create delegates or expression tree types. By using lambda expressions, you can write local functions that can be passed as arguments or returned as the value of function calls. Lambda expressions are particularly helpful for writing LINQ query expressions.

IQueryable Interface

Provides functionality to evaluate queries against a specific data source wherein the type of the data is not specified.

The IQueryable<T> interface is intended for implementation by query providers.

Expression Class

Provides the base class from which the classes that represent expression tree nodes are derived. It also contains static factory methods to create the various node types. This is an abstract class.

The following code example shows how to create a block expression. The block expression consists of two MethodCallExpression objects and one ConstantExpression object.

The expression tree is an in-memory data representation of the lambda expression. The expression tree makes the structure of the lambda expression transparent and explicit. You can interact with the data in the expression tree just as you can with any other data structure.

Await usage

Why would you use Expression<Func<T>> rather than Func<T>?

The C# compiler turns a Func<> into an ordinary compiled method (MSIL), whereas an Expression<Func<>> is compiled into an in-memory data structure describing the lambda. An expression simply turns a delegate into data about itself.

The fix was simply to turn Func<T, bool> into Expression<Func<T, bool>>, so I googled why it needs an Expression instead of Func, ending up here.

Func didn’t work because my DbContext was blind to what was actually in the lambda expression to turn it into SQL, so it did the next best thing and iterated that conditional through each row in my table.

Func<string, int> length = s => s.Length;

But we can wrap our “Func<string,int>” in an Expression<T> like this:

Expression<Func<string, int>> length = s => s.Length;

But now we can’t call it anymore. Why is that? Well, “length” is no longer a delegate, but instead it is an expression tree. An expression tree is simply a tree structure that represents the lambda “s => s.Length”. Instead of the C# compiler turning this into an executable method, it simply goes through the syntax and forms a tree that expresses what the lambda is doing.

In fact, there is a method on the Expression type called “Compile” that lets us turn this expression tree into a Func<string, int> that we can run:

Func<string,int> lengthMethod = length.Compile();

int stringLength = lengthMethod(myString);

The Where extension method has two flavors. One extends IQueryable and takes an Expression parameter. The other extends IEnumerable and takes a Func.

You can use a lambda expression anywhere you can use a delegate.

IEnumerable<T> is great for working with sequences that are iterated in-memory, but

IQueryable<T> allows for out-of memory things like a remote data source, such as a database or web service.

The difference is that IQueryable<T> is the interface that allows LINQ-to-SQL (LINQ-to-anything, really) to work. So if you further refine your query on an IQueryable<T>, that query will be executed in the database, if possible.

For the IEnumerable<T> case, it will be LINQ-to-object, meaning that all objects matching the original query will have to be loaded into memory from the database.
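The two Where "flavors" can be seen side by side below. With LINQ to Objects both produce the same result; against a real query provider (EF, LINQ-to-SQL) only the Expression version can be inspected and translated to SQL.

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

class WhereFlavors
{
    static void Main()
    {
        var data = new[] { 1, 2, 3, 4, 5 };

        Func<int, bool> asDelegate = n => n > 3;          // compiled code
        Expression<Func<int, bool>> asTree = n => n > 3;  // data about the lambda

        // IEnumerable<T>.Where takes the delegate and filters in memory.
        var inMemory = data.Where(asDelegate).ToArray();

        // IQueryable<T>.Where takes the expression tree; a provider could
        // translate it, but LINQ to Objects just compiles and runs it.
        var viaQueryable = data.AsQueryable().Where(asTree).ToArray();

        Console.WriteLine(string.Join(",", inMemory));     // 4,5
        Console.WriteLine(string.Join(",", viaQueryable)); // 4,5
    }
}
```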

IObservable sample:

 

Observable example (Rx extensions)

You do not need to implement the IObservable<T>/IObserver<T> interfaces yourself. Rx provides internal implementations of these interfaces for you and exposes them through various extension methods provided by the Observable and Observer types.

You do not need to implement the IObservable<T> interface manually to create an observable sequence.

Similarly, you do not need to implement IObserver<T> to subscribe to a sequence.

 

It’s been a .NET design guideline that in cases like this, where an exception’s propagation needs to be interrupted, it should be wrapped in another exception object.

When an observer subscribes to an observable sequence, the thread calling the Subscribe method can be different from the thread in which the sequence runs till completion. Therefore, the Subscribe call is asynchronous in that the caller is not blocked until the observation of the sequence completes.

For example, the code

var x = Observable.Zip(a,b).Subscribe();

will subscribe x to both sequences a and b. If a throws an error, x will immediately be unsubscribed from b.

Using a timer

Using the ToObservable operator, you can convert a generic enumerable collection to an observable sequence and subscribe to it.
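A minimal sketch of ToObservable (requires the System.Reactive NuGet package; with the default current-thread scheduler the sequence drains synchronously):

```csharp
using System;
using System.Reactive.Linq; // System.Reactive NuGet package

class ToObservableDemo
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3 };

        // Convert the enumerable into a cold observable sequence and subscribe.
        using var subscription = numbers
            .ToObservable()
            .Subscribe(
                n => Console.WriteLine($"OnNext: {n}"),
                () => Console.WriteLine("OnCompleted"));
    }
}
```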

Cold observables start running upon subscription, i.e., the observable sequence only starts pushing values to the observers when Subscribe is called. Values are also not shared among subscribers. This is different from hot observables such as mouse move events or stock tickers which are already producing values even before a subscription is active.

Cold example:

Hot subscription example:

Rx does not aim at replacing existing asynchronous programming models such as .NET events, the asynchronous pattern or the Task Parallel Library.

The Entity Framework DbContext (or LINQ-to-SQL DataContext) is a Unit Of Work implementation. That means that the same DbContext should be used for all operations (both reading and writing) within a single web or service request.

What is the difference between ManualResetEvent and AutoResetEvent in .NET?

Yes. It’s like the difference between a tollbooth and a door. The ManualResetEvent is the door, which needs to be closed (reset) manually. The AutoResetEvent is a tollbooth, allowing one car to go by and automatically closing before the next one can get through.
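The door/tollbooth difference in code: Set() on a ManualResetEvent releases every waiter until someone calls Reset(), while Set() on an AutoResetEvent releases exactly one waiter and closes again automatically.

```csharp
using System;
using System.Threading;

class EventDemo
{
    static void Main()
    {
        var door = new ManualResetEvent(initialState: false);
        var tollbooth = new AutoResetEvent(initialState: false);

        door.Set();                              // open the door
        Console.WriteLine(door.WaitOne(0));      // True
        Console.WriteLine(door.WaitOne(0));      // True  (still open)

        tollbooth.Set();                         // let one car through
        Console.WriteLine(tollbooth.WaitOne(0)); // True
        Console.WriteLine(tollbooth.WaitOne(0)); // False (closed itself again)
    }
}
```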

Forcing developers to cast the value retrieved from a property on the EventArgs is never acceptable.

The lock statement is a good general-purpose tool, but the Interlocked class provides better performance for updates that must be atomic.
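A quick sketch contrasting the two; both counters end up correct, but the Interlocked increment avoids taking a full monitor lock for a single atomic update.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Counters
{
    static int _locked;
    static int _interlocked;
    static readonly object _gate = new object();

    static void Main()
    {
        Parallel.For(0, 100_000, _ =>
        {
            lock (_gate) { _locked++; }              // general-purpose tool
            Interlocked.Increment(ref _interlocked); // cheaper atomic update
        });

        Console.WriteLine(_locked);      // 100000
        Console.WriteLine(_interlocked); // 100000
    }
}
```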

Unlike ASP.NET, SQL Server uses only one process that cannot be recycled without taking a database down for an unacceptably long time.

Although it is theoretically possible to write managed code to handle ThreadAbortException, StackOverflowException, and OutOfMemoryException exceptions, expecting developers to write such robust code throughout an entire application is unreasonable.

Process-wide or cross-application domain mutable shared state is extremely difficult to alter safely and should be avoided whenever possible.

Out-of-memory conditions are not rare in SQL Server.

Observable.FromEventPattern and Throttle method.

To make this work on UI thread, use ObserveOnDispatcher() method.
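A sketch of that combination for a WPF search box (assumes the System.Reactive packages and a WPF project; the TextBox, handler, and 500 ms window are illustrative):

```csharp
using System;
using System.Reactive.Linq;
using System.Windows.Controls; // WPF

static class SearchWiring
{
    // Throttle TextChanged so onSearch fires only after 500 ms of silence,
    // and hop back to the UI thread before touching UI state.
    public static IDisposable WireUp(TextBox searchBox, Action<string> onSearch) =>
        Observable.FromEventPattern<TextChangedEventArgs>(searchBox, nameof(TextBox.TextChanged))
            .Select(_ => searchBox.Text)
            .Throttle(TimeSpan.FromMilliseconds(500)) // runs off the UI thread
            .ObserveOnDispatcher()                    // back onto the UI thread
            .Subscribe(onSearch);
}
```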

 

Task.Yield continues on the current synchronization context or on the current TaskScheduler if one is present. Task.Run does not do that. It always uses the thread-pool.

For example Task.Yield would stay on the UI thread.

Avoid Task.Yield. Its semantics are less clear. The linked answer is a code smell.

 

Do not throw System.Exception, System.SystemException, System.NullReferenceException, or System.IndexOutOfRangeException intentionally from your own source code.
Quartz basic example

SimpleTrigger is handy if you need ‘one-shot’ execution (just single execution of a job at a given moment in time), or if you need to fire a job at a given time, and have it repeat N times, with a delay of T between executions.

CronTrigger is useful if you wish to have triggering based on calendar-like schedules – such as “every Friday, at noon” or “at 10:15 on the 10th day of every month.”
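A minimal Quartz.NET sketch of both trigger types (fluent-API names as in Quartz.NET 3.x; HelloJob and the schedules are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class HelloJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"Hello at {DateTime.Now}");
        return Task.CompletedTask;
    }
}

class SchedulerDemo
{
    static async Task Main()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<HelloJob>().WithIdentity("job1").Build();

        // SimpleTrigger: fire now, then repeat 5 more times, 10 seconds apart.
        ITrigger simple = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(10).WithRepeatCount(5))
            .Build();
        await scheduler.ScheduleJob(job, simple);

        // CronTrigger: "every Friday, at noon".
        ITrigger cron = TriggerBuilder.Create()
            .ForJob(job)
            .WithCronSchedule("0 0 12 ? * FRI")
            .Build();
        await scheduler.ScheduleJob(cron);
    }
}
```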

DisallowConcurrentExecution is an attribute that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently. Notice the wording there, as it was chosen very carefully.

If you use the PersistJobDataAfterExecution attribute, you should strongly consider also using the DisallowConcurrentExecution attribute, in order to avoid possible confusion (race conditions) about what data was left stored when two instances of the same job (JobDetail) executed concurrently.

The only type of exception that you should throw from the execute method is the JobExecutionException.

Explicit Interface Implementation

If a class implements two interfaces that contain a member with the same signature, then implementing that member on the class will cause both interfaces to use that member as their implementation. In the following example, all the calls to Paint invoke the same method.

Resolution:
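The resolution is explicit interface implementation, which gives each interface its own method body (this mirrors the classic MSDN Paint example):

```csharp
using System;

interface IControl { void Paint(); }
interface ISurface { void Paint(); }

class SampleClass : IControl, ISurface
{
    // Explicit implementations: each interface now gets its own Paint,
    // and neither is reachable directly through the class instance.
    void IControl.Paint() => Console.WriteLine("IControl.Paint");
    void ISurface.Paint() => Console.WriteLine("ISurface.Paint");
}

class Program
{
    static void Main()
    {
        var sample = new SampleClass();
        // sample.Paint();          // compile error: must go through an interface
        ((IControl)sample).Paint(); // IControl.Paint
        ((ISurface)sample).Paint(); // ISurface.Paint
    }
}
```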

Protected Methods:

Short version: it breaks encapsulation but it’s a necessary evil that should be kept to a minimum.

So all other things being equal, you shouldn’t have any protected members at all. But that said, if you have too few, then your class may not be usable as a super class, or at least not as an efficient super class. Often you find out after the fact. My philosophy is to have as few protected members as possible when you first write the class. Then try to subclass it. You may find out that without a particular protected method, all subclasses will have to do some bad thing.

And protected data?

Josh Bloch: The same thing, but even more. Protected data is even more dangerous in terms of messing up your data invariants. If you give someone else access to some internal data, they have free rein over it.

There is boxing involved when using explicit implementation of an interface on value types so be aware of the performance cost.

Using explicit implementation to hide the details of IDisposable.

Program in tasks (chores), not threads (cores). Leave the mapping of tasks to threads or processor cores as a distinctly separate operation in your program, preferably an abstraction you are using that handles thread/core management for you. Create an abundance of tasks in your program, or a task that can be spread across processor cores automatically (such as an OpenMP loop). By creating tasks, you are free to create as many as you can without worrying about oversubscription.
Avoid using locks. Simply say “no” to locks. Locks slow programs, reduce their scalability, and are the source of bugs in parallel programs. Make implicit synchronization the solution for your program. When you still need explicit synchronization, use atomic operations. Use locks only as a last resort. Work hard to design the need for locks completely out of your program.

 

 

 

C# – walk-through – (Part 2)

Futures can result in faster execution if hardware resources are available for parallel execution. However, if all cores are otherwise occupied, futures will be evaluated without parallelism.

Dynamic task parallelism is also known as recursive decomposition or “divide and conquer.”

The following code demonstrates how to implement a pipeline that uses the BlockingCollection class for the buffers and tasks for the stages of the pipeline.

When there is an unhandled exception in one pipeline stage, you should cancel the other stages. If you don’t do this, deadlock can occur.
Use a special instantiation of the CancellationTokenSource class to allow your application to coordinate the shutdown of all the pipeline stages when an exception occurs in one of them. Here’s an example.

The BlockingCollection<T> class allows you to read values from more than one producer.

It’s easy to forget to call the GetConsumingEnumerable because the BlockingCollection class implements IEnumerable<T>. Enumerating over the blocking collection instance won’t consume values. Watch out for this!
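A small producer/consumer sketch showing where GetConsumingEnumerable belongs (the capacity and item counts are arbitrary):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PipelineStage
{
    static void Main()
    {
        var buffer = new BlockingCollection<int>(boundedCapacity: 10);

        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 5; i++) buffer.Add(i);
            buffer.CompleteAdding(); // tell consumers no more items are coming
        });

        // GetConsumingEnumerable removes items as it iterates and blocks
        // until CompleteAdding is called; a plain foreach over `buffer`
        // would enumerate without consuming.
        foreach (var item in buffer.GetConsumingEnumerable())
            Console.WriteLine(item);

        producer.Wait();
    }
}
```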

In general, using the blocking collections provided by the .NET Framework is easier and safer than writing your own implementation.

The Pipeline pattern doesn’t automatically scale with the number of cores. This is one of its limitations, unless additional parallelism is introduced within a pipeline stage itself.

The Decorator pattern overrides the behavior of an underlying class. Decorators use a “contains” relationship and inheritance.

The Adapter pattern translates from one interface to another. This pattern is applicable to parallelism because, like with the Façade pattern, it allows you to hide complexity and to expose an interface that’s easier for developers to use. Use the adapter pattern to modify an existing interface to make it easier for developers who are not familiar with parallelism to use.

ADO.NET objects don’t lock resources and must be used on only a single thread. The ADO.NET Entity Framework (EF) is built on top of ADO.NET and has the same limitation.

Parallelism is appropriate for applications that merge the results of queries from several different data sources. Applications that run on Windows Azure are good examples. Windows Azure applications often store data in a mixture of table storage and blob storage and may break data up between databases for reasons of scalability and performance.

Guidelines
Here are some guidelines for accessing data:
● Don’t share ADO.NET connections between tasks. ADO.NET is not thread safe. Each task should use its own connection.
● Keep connections to the data source for as little time as possible. Explicitly (manually) close connections to ensure that unused connections aren’t left open. Don’t rely on garbage collection to do this for you.
● Use tasks to parallelize connections to different databases. Using tasks to open multiple database connections to the same database may have significant performance implications.

Normally, instances of an object are created using the class constructor. This is shown in the following example.

Instead, you want to be able to obtain the single instance, as shown here

Here, Instance is a static read-only property of the class, and each time it’s called, it’ll refer to the same single instance of MyClass. By adding this special Instance property, MyClass is transformed into a singleton class.

LazySingleton
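The Instance property described above, implemented with Lazy&lt;T&gt; so the single instance is created on first use and in a thread-safe way (class name is illustrative):

```csharp
using System;

public sealed class LazySingleton
{
    // Lazy<T> defaults to thread-safe initialization, so the factory
    // lambda runs at most once even under concurrent first access.
    private static readonly Lazy<LazySingleton> s_instance =
        new Lazy<LazySingleton>(() => new LazySingleton());

    private LazySingleton() { } // private ctor: no outside instantiation

    public static LazySingleton Instance => s_instance.Value;
}
```

Every caller of `LazySingleton.Instance` receives the same object; `ReferenceEquals(LazySingleton.Instance, LazySingleton.Instance)` is true.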

The application uses continuation tasks instead of event handlers or other callback mechanisms. In fact, you can think of a continuation task as a kind of callback.

Here are some guidelines for using futures and continuations:
● Use continuation tasks that are attached to the model’s tasks to update the view model. This approach helps to decouple the model from the view.
● Use immutable types to pass results from the continuation tasks to the UI thread. This makes it easier to ensure that your code is correct.
● Use continuation tasks instead of background worker threads when you want to execute several different tasks in the background and merge the results before passing them back to the UI thread.
● Use the correct task scheduler to make sure that modifications to UI objects, which have thread affinity, take place on the main UI thread.

Don’t overlook the importance of immutable types in parallel programming. They can improve reliability as well as performance.

Immutable types are a very useful form of scalable sharing. For example, when you append to a string, the result is a string that includes the addition. The original string is unmodified. This means that you can always use a particular string value in parallel code without worrying that another task or thread is modifying it. The combination of immutable types and the .NET Framework concurrent collection classes is particularly useful.

The Adatum Dashboard example passes StockData and StockDataCollection objects between tasks. No locking is required because the types themselves are immutable and therefore free of side effects.

Collections from the namespace System.Collections.Generic are not thread safe. You must use the collections in System.Collections.Concurrent whenever collections are shared by more than one thread.

● Think twice before using sharing data.
● Where possible, use shared data collections in preference to locks.
● Use the shared data classes provided by the .NET Framework 4 in preference to writing your own classes.

Multicore vs multiprocessor
Regarding speed: if both systems have the same clock speed, number of CPUs and cores, and amount of RAM, the multicore system will run a single program more efficiently. This is because its cores can execute multiple instructions at the same time, but not multiple programs, owing to the shared cache (L1, L2, and L3). This is where the multiprocessor system comes in handy: with each CPU having its own cache, the CPUs can execute separate programs simultaneously, but a single program will take longer than on the multicore system.

To summarize, a multicore system is the more favorable system for ordinary users. It does not demand any support or extra configuration and will likely cost a bit less. Performance-wise, depending on how you run your programs, each has its pros and cons.

Source: http://theydiffer.com/difference-between-multicore-and-multiprocessor-systems/

fork/join. A parallel computing pattern that uses task parallelism. Fork occurs when tasks start; join occurs when all tasks finish.

You probably should not return an array as the value of a public method or property, particularly when the information content of the array is logically immutable.
Returning an array means that you have to make a fresh copy of the array every time you return it. If you get called a hundred times, you’d better make a hundred array instances, no matter how large they are. It’s a performance nightmare.

You can build yourself a nice read-only collection object once, and then just pass out references to it as much as you want.

A frequently-stated principle of good software design is that code which calls attention to bugs by throwing exceptions is better than code which hides bugs by muddling on through and doing the wrong thing.

multi-cast delegates are immutable; when you add or remove a handler, you replace the existing multi-cast delegate object with a different delegate object that has different behavior. You do not modify an existing object, you modify the variable that stores the event handler. Therefore, stashing away the current reference stored in that variable into a temporary variable effectively makes a copy of the current state.
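That stash-into-a-temporary pattern can be sketched like this (Publisher and SomethingHappened are illustrative names):

```csharp
using System;

// Because multicast delegates are immutable, copying the handler reference
// into a local variable snapshots the current subscriber list. A handler
// removed on another thread after the copy may still run one last time,
// but the raise can never hit a null between the check and the invocation.
public class Publisher
{
    public event EventHandler SomethingHappened;

    public void RaiseSomethingHappened()
    {
        EventHandler handler = SomethingHappened; // snapshot the current state
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```

Raising the event with no subscribers is safe: the snapshot is null and the invocation is simply skipped.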

Thread safety is nothing more nor less than a code contract, like any other code contract. You agree to talk to an object in a particular manner, and it agrees to give you correct results if you do so; working out exactly what that manner is, and what the correct responses are, is a potentially tough problem.

Exceptions are not completely immutable in .NET. The exception object’s stack trace is set at the point where the exception is thrown, every time it is thrown, not at the point where it is created. “throw;” does not reset the stack trace; “throw ex;” does.
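A small sketch that makes the difference observable (method names are illustrative; NoInlining keeps the throwing frame in the trace):

```csharp
using System;
using System.Runtime.CompilerServices;

public static class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void OriginalThrow() => throw new InvalidOperationException("boom");

    // "throw;" preserves the original frame: OriginalThrow stays in the trace.
    public static string RethrowPreserving()
    {
        try { try { OriginalThrow(); } catch { throw; } }
        catch (Exception ex) { return ex.StackTrace; }
        return null;
    }

    // "throw ex;" resets the trace to this method: the original frame is lost.
    public static string RethrowResetting()
    {
        try { try { OriginalThrow(); } catch (Exception ex) { throw ex; } }
        catch (Exception ex) { return ex.StackTrace; }
        return null;
    }
}
```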

Making exceptions is cheap, and you’re already in a fatal error situation; it doesn’t matter if the app crashes a few microseconds slower. Take the time to allocate as many exceptions as you plan on throwing.

How to: Write a Parallel.For Loop with Thread-Local Variables

Using interlocked.

Interlocked helps with threaded programs. It safely changes the value of a shared variable from multiple threads. This is also possible with the lock statement, but you can often use the Interlocked type instead for simpler and faster code.

Interlocked.Add. When using Interlocked, forget all you know about addition, subtraction and assignment operators. Instead, you will use the Add, Increment, Decrement, Exchange and CompareExchange methods.

Interlocked.Increment was several times faster, requiring only 6 nanoseconds versus 40 nanoseconds for the lock construct.
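A sketch of lock-free counting with Interlocked.Increment; without it, concurrent `counter++` calls would lose updates:

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class InterlockedDemo
{
    // Increment a shared counter from many threads without a lock.
    // Interlocked.Increment performs the read-modify-write atomically,
    // so no update is ever lost.
    public static int CountParallel(int iterations)
    {
        int counter = 0;
        Parallel.For(0, iterations, _ => Interlocked.Increment(ref counter));
        return counter;
    }
}
```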

Parallel ForEach

Cancel a Parallel.For or ForEach Loop

In a Parallel.For or Parallel.ForEach loop, you cannot use the same break or Exit statement that is used in a sequential loop, because those language constructs are valid for loops, and a parallel “loop” is actually a method call, not a loop. Instead, you use either the Stop or Break method.

C# language feature (preview)

Non-nullable reference types. (Proposed during the C# 7 timeframe; this ultimately shipped in C# 8.0 as the nullable reference types feature.)

Operation Catalog
http://martinfowler.com/articles/collection-pipeline/
Here is a catalog of the operations that you often find in collection pipelines. Every language makes different choices on what operations are available and what they are called, but I’ve tried to look at them through their common capabilities.

Using the Caller Info Attributes and Logging

You can think of a delegate type as being a bit like an interface with a single method. Delegate types are declared with the delegate keyword. They can appear either on their own or nested within a class, as shown below.
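A sketch of a standalone delegate type declaration and use (Transformer and Square are illustrative names):

```csharp
// A delegate type declared on its own; any method or lambda matching the
// signature "int (int)" can be assigned to a variable of this type.
public delegate int Transformer(int x);

public static class DelegateDemo
{
    static int Square(int x) => x * x;

    public static int Run(int value)
    {
        Transformer t = Square;   // method group conversion
        t += x => x + 1;          // multicast: both targets run in order
        return t(value);          // the last target's result is returned
    }
}
```

Note the multicast subtlety: when a multicast delegate has a non-void return type, invoking it returns the result of the last target in the list.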

It’s important to understand that delegate instances are always immutable. Anything which combines them together (or takes one away from the other) creates a new delegate instance to represent the new list of targets/methods to call. This is just like strings: if you call String.PadLeft for instance, it doesn’t actually change the string you call it on – it just returns a new string with the appropriate padding.

First things first: events aren’t delegate instances. Let’s try that again. Events aren’t delegate instances.

Event Handler Add/Remove

A shortcut: field-like events

C# provides a simple way of declaring both a delegate variable and an event at the same time. This is called a field-like event, and is declared very simply – it’s the same as the “longhand” event declaration, but without the “body” part:
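The two declaration styles side by side (Alarm and Beeped are illustrative names):

```csharp
using System;

public class Alarm
{
    // Longhand: an explicit delegate field plus add/remove accessors.
    private EventHandler _beeped;
    public event EventHandler BeepedLonghand
    {
        add { _beeped += value; }
        remove { _beeped -= value; }
    }

    // Field-like shorthand: same effect, but the compiler generates both the
    // backing delegate field and the add/remove accessors for you.
    public event EventHandler Beeped;

    public void Beep()
    {
        _beeped?.Invoke(this, EventArgs.Empty);
        Beeped?.Invoke(this, EventArgs.Empty);
    }
}
```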

Event handler sample.

http://csharpindepth.com/Articles/Chapter2/Events.aspx
What’s the point of having both delegates and events?

The answer is encapsulation. Suppose events didn’t exist as a concept in C#/.NET. How would another class subscribe to an event? Three options:
1. A public delegate variable
2. A delegate variable backed by a property
3. A delegate variable with AddXXXHandler and RemoveXXXHandler methods

Option 1 is clearly horrible, for all the normal reasons we abhor public variables.

Option 2 is slightly better, but allows subscribers to effectively override each other – it would be all too easy to write someInstance.MyEvent = eventHandler;, which would replace any existing event handlers rather than adding a new one. In addition, you still need to write the properties.

Option 3 is basically what events give you, but with a guaranteed convention (generated by the compiler and backed by extra flags in the IL) and a “free” implementation if you’re happy with the semantics that field-like events give you. Subscribing to and unsubscribing from events is encapsulated without allowing arbitrary access to the list of event handlers, and languages can make things simpler by providing syntax for both declaration and subscription.

The delegates are executed on threads created by the system thread-pool.

Async Delegate Example:
The calls to EndInvoke block until the delegate has completed, in much the same way as calls to Thread.Join block until the threads involved have terminated.

Async Delegate example

Here is the callback function:

Thread-pool threads are background threads – without the extra Sleep call, the application would terminate before the delegate calls finished executing.

Closures allow you to encapsulate some behaviour, pass it around like any other object, and still have access to the context in which they were first declared.

Predicate usage

Event Handler
https://www.dotnetperls.com/event

The delegate is a type that defines a signature, that is, the return value type and parameter list types for a method. You can use the delegate type to declare a variable that can refer to any method with the same signature as the delegate.

The EventHandler delegate is a predefined delegate that specifically represents an event handler method for an event that does not generate data. If your event does generate data, you must use the generic EventHandler&lt;TEventArgs&gt; delegate.
To associate the event with the method that will handle the event, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate.

Events and Inheritance
When creating a general component that can be derived from, what seems to be a problem sometimes arises with events. Since events can only be invoked from within the class that declared them, derived classes cannot directly invoke events declared within the base class. Although this is sometimes what is desired, often it is appropriate to give the derived class the freedom to invoke the event. This is typically done by creating a protected invoking method for the event. By calling this invoking method, derived classes can invoke the event. For even more flexibility, the invoking method is often declared as virtual, which allows the derived class to override it. This allows the derived class to intercept the events that the base class is invoking, possibly doing its own processing of them.

The event keyword indicates to the compiler that the delegate can be invoked only by the defining class, and that other classes can subscribe to and unsubscribe from the delegate using only the appropriate += and -= operators, respectively.

Adding the event keyword fixes both problems. Classes can no longer attempt to subscribe to the event using the assignment operator (=), as they could previously, nor can they invoke the event directly, as was done in the preceding example. Either of these attempts will now generate a compile error:

If you subscribe your method as anonymous, you cannot invoke the method except through the delegate; but that is exactly what you want.

In .NET, all event handlers return void, and take two parameters. The first parameter is of type object and is the object that raises the event; the second argument is an object of type EventArgs or of a type derived from EventArgs, which may contain useful information about the event.

A lambda expression is an expression using the operator => that returns an unnamed method. Lambda expressions are similar to anonymous methods, but aren’t restricted to being used as delegates.

Some examples of responsibilities to consider that may need to be separated include:

● Persistence
● Validation
● Notification
● Error Handling
● Logging
● Class Selection / Instantiation
● Formatting
● Parsing
● Mapping

Usage errors should never occur in production code. For example, if passing a null reference as one of the method’s arguments causes an error state (usually represented by an ArgumentNullException exception), you can modify the calling code to ensure that a null reference is never passed. For all other errors, exceptions that occur when an asynchronous method is running should be assigned to the returned task, even if the asynchronous method happens to complete synchronously before the task is returned. Typically, a task contains at most one exception. However, if the task represents multiple operations (for example, WhenAll), multiple exceptions may be associated with a single task.

Consumers of a TAP method may safely assume that the returned task is active and should not try to call Start on any Task that is returned from a TAP method. Calling Start on an active task results in an InvalidOperationException exception.

If the cancellation request results in work being ended prematurely, the TAP method returns a task that ends in the Canceled state; there is no available result and no exception is thrown. The Canceled state is considered to be a final (completed) state for a task, along with the Faulted and RanToCompletion states.

For example, consider an asynchronous method that renders an image. The body of the task can poll the cancellation token so that the code may exit early if a cancellation request arrives during rendering. In addition, if the cancellation request arrives before rendering starts, you’ll want to prevent the rendering operation:
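A sketch of that shape (the rendering body is simulated; Renderer and RenderAsync are illustrative names). The token is passed to Task.Run so a request that arrives before rendering starts moves the task straight to Canceled, and it is polled once per row so a request during rendering ends the work early:

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class Renderer
{
    public static Task RenderAsync(int rows, CancellationToken token)
    {
        return Task.Run(() =>
        {
            for (int row = 0; row < rows; row++)
            {
                token.ThrowIfCancellationRequested(); // poll each iteration
                // ... render one row of the image here ...
            }
        }, token); // token here prevents the work from starting at all
                   // if cancellation was requested before scheduling
    }
}
```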

The OrdinalIgnoreCase property treats the characters in the strings to compare as if they were converted to uppercase using the conventions of the invariant culture, and then performs a simple byte comparison that is independent of language. This is most appropriate when comparing strings that are generated programmatically or when comparing case-insensitive resources such as paths and filenames.

TaskCompletionSource example;

https://channel9.msdn.com/Blogs/philpenn/TaskCompletionSourceTResult

For example, if you wanted to download a bunch of web pages asynchronously and then only do something when all of them were complete, with WebClient by itself you’d have to code up that logic manually; in contrast, if you had a Task<T> for each download, you could then use ContinueWhenAll to launch additional work when all of those downloads completed (and you’d get back a Task that represents that continuation). In short, once you have a Task<T> to represent the EAP operation, you can compose it with any other tasks you might have, and you have the full functionality afforded to tasks to process that EAP operation.
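The general shape of wrapping an EAP operation in a Task looks like the sketch below. To keep it offline, a Timer’s Elapsed event stands in for WebClient.DownloadStringCompleted; the pattern (create a TaskCompletionSource, complete it from the completion event, hand out tcs.Task) is the same:

```csharp
using System;
using System.Threading.Tasks;

public static class EapWrapper
{
    public static Task<DateTime> AfterDelayAsync(double milliseconds)
    {
        var tcs = new TaskCompletionSource<DateTime>();
        var timer = new System.Timers.Timer(milliseconds) { AutoReset = false };
        timer.Elapsed += (s, e) =>
        {
            timer.Dispose();
            // For a real EAP wrapper you would inspect e for errors or
            // cancellation and call TrySetException / TrySetCanceled instead.
            tcs.TrySetResult(e.SignalTime);
        };
        timer.Start();
        return tcs.Task; // composable with WhenAll, ContinueWith, await, etc.
    }
}
```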

And usage of returned task;

TaskCompletionSource another example;

Execution of the method;


C# – walk-through – (Part 1)

http://www.quartz-scheduler.net/documentation/quartz-2.x/quick-start.html

There are three ways (which are not mutually exclusive) to supply Quartz.NET configuration information:

  • Programmatically via providing NameValueCollection parameter to scheduler factory
  • Via standard youapp.exe.config configuration file using quartz-element
  • quartz.config file in your application’s root directory

Quartz.NET comes with sane defaults.

NCrunch will religiously set the environment variable ‘NCrunch’ equal to ‘1’ inside each of its task runner processes. This applies to both build tasks and test tasks. You can make use of this environment variable to redirect your continuous tests to a different schema/database, for example:

WebJobs SDK

Azure Web Jobs Dashboard format for the URL to access is this: https://YOURSITE.scm.azurewebsites.net/azurejobs.

https://blogs.msdn.microsoft.com/ericlippert/2009/11/12/closing-over-the-loop-variable-considered-harmful/

var values = new List<int>() { 100, 110, 120 };
var funcs = new List<Func<int>>();
foreach (var v in values)
    funcs.Add(() => v);
foreach (var f in funcs)
    Console.WriteLine(f());

Most people expect it to be 100 / 110 / 120.  It is in fact 120 / 120 / 120. Why?

Because ()=>v means “return the current value of variable v”, not “return the value v was back when the delegate was created”. Closures close over variables, not over values. And when the methods run, clearly the last value that was assigned to v was 120, so it still has that value.

This is very confusing. The correct way to write the code is:

foreach (var v in values)
{
    var v2 = v;
    funcs.Add(() => v2);
}

Now what happens? Every time we re-start the loop body, we logically create a fresh new variable v2. Each closure is closed over a different v2, which is only assigned to once, so it always keeps the correct value. (Note: starting with C# 5, the foreach loop variable is itself scoped per iteration, so this particular foreach example behaves as expected without the workaround; the pitfall still applies to for loops.)

TaskCompletionSource sample;

You can use

await Task.Yield();

in an asynchronous method to force the method to complete asynchronously. If there is a current synchronization context (SynchronizationContext object), this will post the remainder of the method’s execution back to that context. However, the context will decide how to prioritize this work relative to other work that may be pending. The synchronization context that is present on a UI thread in most UI environments will often prioritize work posted to the context higher than input and rendering work. For this reason, do not rely on await Task.Yield(); to keep a UI responsive.

A task that’s not bound to a thread.

With that, you can write:
await p.ExitedAsync();
and you won’t be blocking any threads while asynchronously waiting for the process to exit.
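One possible sketch of such an ExitedAsync helper, built on TaskCompletionSource (the name follows the snippet above; this is an assumption about its shape, not the original code):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

public static class ProcessExtensions
{
    // TaskCompletionSource turns the Process.Exited event into an awaitable
    // task: no thread sits blocked while we wait for the process to exit.
    public static Task<int> ExitedAsync(this Process p)
    {
        var tcs = new TaskCompletionSource<int>();
        p.EnableRaisingEvents = true;
        p.Exited += (s, e) => tcs.TrySetResult(p.ExitCode);
        if (p.HasExited)
            tcs.TrySetResult(p.ExitCode); // race: process already gone
        return tcs.Task;
    }
}
```

TrySetResult (rather than SetResult) makes the already-exited race benign: whichever path completes the source first wins, and the other call is a no-op.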

If you use the standard TPL functionality, by default, it’ll use a ThreadPool thread.

Async methods do not create new threads. They compose asynchrony; they do not create it.

When you create a Task or Task&lt;TResult&gt; object to perform some task asynchronously, by default the task is scheduled to run on a thread pool thread.

The threads in the managed thread pool are background threads. That is, their IsBackground properties are true. This means that a ThreadPool thread will not keep an application running after all foreground threads have exited.

The thread pool uses background threads, which do not keep the application running if all foreground threads have terminated.
There is no way to cancel a work item after it has been queued.
There is one thread pool per process.
In most cases the thread pool will perform better with its own algorithm for allocating threads.

When you work with tasks, they run their code using underlying threads (software threads, scheduled on certain hardware threads or logical cores). However, there isn’t a 1-to-1 relationship between tasks and threads. This means you’re not creating a new thread each time you create a new task. The CLR creates the necessary threads to support the tasks’ execution needs.

Think of creating and starting a Task (passing it a delegate) as the equivalent of calling QueueUserWorkItem on the ThreadPool.

The other worker thread completes Task1 and then goes to its local queue and finds it empty; it then goes to the global queue and finds it empty. We don’t want it sitting there idle, so a beautiful thing happens: work stealing. The thread goes to the local queue of another thread, “steals” a Task, and executes it!

.NET threads take up at least 1MB of memory (because they set aside 1MB for their stack)

Parallel ForEach

LINQ Parallel

The correct number of threads is, of course, equal to the number of cores on the box.
The problem with the current ThreadPool API is that it has almost no API. You simply throw items at it in a “fire and forget” manner. You get back no handle to the work item: no way of cancelling it, waiting on it, composing a group of items in a structured way, handling exceptions thrown concurrently, or any other richer construct built on top of it.
Never explicitly use threads for anything at all (not just in the context of parallelism, but under no circumstances whatsoever).
Any class that deals with unmanaged code is supposed to implement the IDisposable interface and provide a Dispose() method that explicitly cleans up the memory usage from any unmanaged code.

Implement IDisposable only if you are using unmanaged resources directly. If your app simply uses an object that implements IDisposable, don’t provide an IDisposable implementation. Instead, you should call the object’s IDisposable.Dispose implementation when you are finished using it.
The following code fragment reflects the dispose pattern for base classes. It assumes that your type does not override the Object.Finalize method.

The following code fragment reflects the dispose pattern for derived classes. It assumes that your type does not override the Object.Finalize method.
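A minimal sketch of both shapes of the pattern (all type names are placeholders; neither type overrides Object.Finalize):

```csharp
using System;

// Base class shape: Dispose() delegates to a protected virtual Dispose(bool).
public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        // No finalizer is defined, so tell the GC not to look for one.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Free managed resources here (dispose owned IDisposable objects).
        }
        // Free unmanaged resources (handles, pointers) here.
        _disposed = true;
    }
}

// Derived class shape: override Dispose(bool) only, then chain to the base.
public class DerivedResourceHolder : ResourceHolder
{
    private bool _disposed;

    protected override void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Free resources owned by the derived class here.
        }
        _disposed = true;
        base.Dispose(disposing); // let the base clean up its own state
    }
}
```

The `_disposed` guard makes double disposal a harmless no-op, which the pattern requires.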

Difference Between References and Pointers
A reference encapsulates a memory address, limiting the operations that can be performed on the address value to a language-specified subset.
A pointer gives you unfettered access to the address itself, enabling all operations that can legally be performed on a native integer.

Whenever you use anything that implements IDisposable, there’s a good chance that there’s some interop going on behind the scenes.
Important CLR memory concepts:
Each process has its own, separate virtual address space.
By default, on 32-bit computers, each process has a 2-GB user-mode virtual address space.
Before a garbage collection starts, all managed threads are suspended except for the thread that triggered the garbage collection.

Server garbage collection can be resource-intensive. For example, if you have 12 processes running on a computer that has 4 processors, there will be 48 dedicated garbage collection threads (each of the 12 processes has 4 GC threads, one per processor) if they are all using server garbage collection. In a high memory load situation, if all the processes start doing garbage collection, the garbage collector will have 48 threads to schedule. If you are running hundreds of instances of an application, consider using workstation garbage collection with concurrent garbage collection disabled. This will result in less context switching, which can improve performance.
The overall goal is to decompose the problem into independent tasks that do not share data, while providing sufficient tasks to occupy the number of cores available.

Keep in mind that tasks are not threads. Tasks and threads take very different approaches to scheduling. Tasks are much more compatible with the concept of potential parallelism than threads are. While a new thread immediately introduces additional concurrency to your application, a new task introduces only the potential for additional concurrency. A task’s potential for additional concurrency will be realized only when there are enough available cores.

Every form of synchronization is a form of serialization. Your tasks can end up contending over the locks instead of doing the work you want them to do. Programming with locks is also error-prone.

Locks can be thought of as the goto statements of parallel programming: they are error-prone but necessary in certain situations, and they are best left, when possible, to compilers and libraries.

Parallel Break
The Parallel.For method has an overload that provides a ParallelLoopState object as a second argument to the loop body. You can ask the loop to break by calling the Break method of the ParallelLoopState object. Here’s an example.
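A sketch of the Break pattern (BreakDemo is an illustrative name). Break guarantees that all iterations below the breaking index still complete, while iterations above it may or may not have started; ParallelLoopResult.LowestBreakIteration reports where the break occurred:

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public static class BreakDemo
{
    public static (long? lowest, int below) Run(int n, int breakAt)
    {
        var seen = new ConcurrentBag<int>();
        ParallelLoopResult result = Parallel.For(0, n, (i, state) =>
        {
            if (i == breakAt) { state.Break(); return; }
            seen.Add(i);
        });
        // Iterations above breakAt may have run before the break was observed,
        // but every iteration below it is guaranteed to have completed.
        int below = seen.Count(x => x < breakAt);
        return (result.LowestBreakIteration, below);
    }
}
```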

LowestBreakIteration

The Parallel.For and Parallel.ForEach methods include overloaded versions that accept parallel loop options as one of the arguments. You can specify a cancellation token as one of these options. If you provide a cancellation token as an option to a parallel loop, the loop will use that token to look for a cancellation request. Here’s an example.
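A sketch of the options-based approach, with the token canceled up front so the outcome is deterministic (CancelLoopDemo is an illustrative name):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancelLoopDemo
{
    // Passing the token via ParallelOptions makes the loop observe external
    // cancellation and surface it as an OperationCanceledException.
    public static bool RunCancelled()
    {
        var cts = new CancellationTokenSource();
        cts.Cancel(); // request cancellation before the loop starts
        var options = new ParallelOptions { CancellationToken = cts.Token };
        try
        {
            Parallel.For(0, 1000, options, i => { /* loop body */ });
            return false; // not reached: the loop throws
        }
        catch (OperationCanceledException)
        {
            return true;
        }
    }
}
```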

If the body of a parallel loop throws an unhandled exception, the parallel loop no longer begins any new steps. By default, iterations that are executing at the time of the exception, other than the iteration that threw the exception, will complete. After they finish, the parallel loop will throw an exception in the context of the thread that invoked it.

The .NET Framework Random class does not support multi-threaded access. Therefore, you need a separate instance of the random number generator for each thread.
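One common approach is ThreadLocal&lt;Random&gt; (a sketch; the Guid-derived seeding is just one way to avoid identical sequences when several thread-local instances are created in the same clock tick):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class RandomDemo
{
    // The factory runs once per thread, so each thread gets its own Random
    // and no instance is ever touched from two threads at once.
    private static readonly ThreadLocal<Random> _rng =
        new ThreadLocal<Random>(() => new Random(Guid.NewGuid().GetHashCode()));

    public static double[] Sample(int n)
    {
        var values = new double[n];
        Parallel.For(0, n, i => values[i] = _rng.Value.NextDouble());
        return values;
    }
}
```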

Arbitrarily increasing the degree of parallelism puts you at risk of processor oversubscription, a situation that occurs when there are many more compute-intensive worker threads than there are cores.

In most cases, the built-in load balancing algorithms in the .NET Framework are the most effective way to manage tasks. They coordinate resources among parallel loops and other tasks that are running concurrently.

The Parallel class and PLINQ work on slightly different threading models in the .NET Framework 4.
PLINQ uses a fixed number of tasks to execute a query; by default, it creates the same number of tasks as there are logical cores in the computer.

Conversely, by default, the Parallel.ForEach and Parallel.For methods can use a variable number of tasks. The idea is that the system can use fewer threads than requested to process a loop.

You can also use the Parallel.Invoke method to achieve parallelism. The Parallel.Invoke method has very convenient syntax. This is shown in the following code.
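A minimal sketch (InvokeDemo is an illustrative name):

```csharp
using System.Threading.Tasks;

public static class InvokeDemo
{
    // Parallel.Invoke runs the supplied actions, potentially in parallel,
    // and returns only when all of them have finished (an implicit WaitAll).
    public static (int, int, int) Run()
    {
        int a = 0, b = 0, c = 0;
        Parallel.Invoke(
            () => a = 1,
            () => b = 2,
            () => c = 3);
        return (a, b, c); // all three assignments are guaranteed complete
    }
}
```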

For example, in an interactive GUI-based application, checking for cancellation more than once per second is probably a good idea. An application that runs in the background could poll for cancellation much less frequently, perhaps every two to ten seconds. Profiling your application can give you performance data that you can use when determining the best places to test for cancellation requests in your code.

In many cases, unhandled task exceptions will be observed in a different thread than the one that executed the task.

The Parallel.Invoke method includes an implicit call to WaitAll. Exceptions from all of the tasks are grouped together in an AggregateException object and thrown in the calling context of the WaitAll or Wait method.

The Flatten method of the AggregateException class is useful when tasks are nested within other tasks. In this case, it’s possible that an aggregate exception can contain other aggregate exceptions as inner exceptions.

Speculative Execution

In C#, a closure can be created with a lambda expression in the form args => body that represents an unnamed (anonymous) delegate. A unique feature of closures is that they may refer to variables defined outside their lexical scope, such as local variables that were declared in a scope that contains the closure.

Terminating tasks with the Thread.Abort method leaves the AppDomain in a potentially unusable state. Also, aborting a thread pool worker thread is never recommended. If you need to cancel a task, use the technique described in the section, “Canceling a Task,” earlier in this chapter. Do not abort the task’s thread.

Never attempt to cancel a task by calling the Abort method of the thread that is executing the task.

There is one more task status, TaskStatus.Created. This is the status of a task immediately after it’s created by the Task class’s constructor; however, it’s recommended that you use a factory method to create tasks instead of the new operator.

Implementing parallel aggregation with PLINQ doesn’t require adding locks in your code. Instead, all the synchronization occurs internally, within PLINQ.

Here’s how to use PLINQ to apply map/reduce to the social networking example.

SelectMany flattens queries that return lists of lists. For example
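A minimal sketch (SelectManyDemo is an illustrative name):

```csharp
using System.Linq;

public static class SelectManyDemo
{
    // SelectMany projects each element to a sequence and flattens the
    // resulting sequence of sequences into a single sequence.
    public static int[] Flatten(int[][] listsOfLists) =>
        listsOfLists.SelectMany(inner => inner).ToArray();
}
```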

The syntax for locking in C# is lock ( object ) { body }. The object uniquely identifies the lock. All cooperating threads must use the same synchronizing object, which must be a reference type such as Object and not a value type such as int or double. When you use lock with Parallel.For or Parallel.ForEach you should create a dummy object and set it as the value of a captured local variable dedicated to this purpose. (A captured variable is a local variable from the enclosing scope that is referenced in the body of a lambda expression.) The lock’s body is the region of code that will be protected by the lock. The body should take only a small amount of execution time. Which shared variables are protected by the lock object varies by application and is something that all programmers whose code accesses those variables must be careful not to contradict.
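A sketch of that convention (LockDemo is an illustrative name): a dedicated dummy object is captured by the loop body, and the protected region is kept as small as possible.

```csharp
using System.Threading.Tasks;

public static class LockDemo
{
    public static long Sum(int n)
    {
        object sync = new object(); // dummy object dedicated to locking
        long total = 0;
        Parallel.For(0, n, i =>
        {
            lock (sync) { total += i; } // short critical section
        });
        return total;
    }
}
```

(For simple counters, Interlocked.Add would avoid the lock entirely; the lock form generalizes to updates that touch more than one shared variable.)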

Task is guaranteed to execute from start to finish on only one thread.