C# – walk-through – (Part 2)

Futures can result in faster execution if hardware resources are available for parallel execution. However, if all cores are otherwise occupied, futures will be evaluated without parallelism.

Dynamic task parallelism is also known as recursive decomposition or “divide and conquer.”

The following code demonstrates how to implement a pipeline that uses the BlockingCollection class for the buffers and tasks for the stages of the pipeline.
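The original code isn't shown; here is a minimal sketch of such a pipeline (the stage names and the squaring transform are my own illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PipelineDemo
{
    static void Main()
    {
        var buffer1 = new BlockingCollection<int>(32); // bounded capacity throttles producers
        var buffer2 = new BlockingCollection<int>(32);

        var stage1 = Task.Run(() =>
        {
            for (int i = 0; i < 10; i++) buffer1.Add(i); // produce
            buffer1.CompleteAdding();                    // signal end of stream
        });

        var stage2 = Task.Run(() =>
        {
            foreach (var item in buffer1.GetConsumingEnumerable()) // consume stage 1's output
                buffer2.Add(item * item);                          // transform
            buffer2.CompleteAdding();
        });

        var stage3 = Task.Run(() =>
        {
            foreach (var item in buffer2.GetConsumingEnumerable())
                Console.WriteLine(item);                           // sink
        });

        Task.WaitAll(stage1, stage2, stage3);
    }
}
```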

When there is an unhandled exception in one pipeline stage, you should cancel the other stages. If you don’t do this, deadlock can occur.
Use a special instantiation of the CancellationTokenSource class to allow your application to coordinate the shutdown of all the pipeline stages when an exception occurs in one of them. Here’s an example.
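A sketch of coordinated shutdown using a shared CancellationTokenSource (the failing-stage condition is contrived for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PipelineShutdown
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        var buffer = new BlockingCollection<int>(32);

        var producer = Task.Run(() =>
        {
            try
            {
                for (int i = 0; ; i++) buffer.Add(i, cts.Token); // Add honors cancellation
            }
            catch (Exception)
            {
                cts.Cancel(); // tell the other stages to shut down
                throw;
            }
            finally { buffer.CompleteAdding(); }
        });

        var consumer = Task.Run(() =>
        {
            try
            {
                foreach (var item in buffer.GetConsumingEnumerable(cts.Token))
                    if (item > 5) throw new InvalidOperationException("stage failed");
            }
            catch (Exception)
            {
                cts.Cancel(); // unblocks the producer's pending Add
                throw;
            }
        });

        try { Task.WaitAll(producer, consumer); }
        catch (AggregateException) { Console.WriteLine("pipeline shut down"); }
    }
}
```

Without the shared token, the producer would block forever in Add once the consumer died, which is exactly the deadlock described above.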

The BlockingCollection<T> class allows you to read values from more than one producer.

It’s easy to forget to call GetConsumingEnumerable because the BlockingCollection<T> class implements IEnumerable<T>. Enumerating over the blocking collection instance directly won’t consume values. Watch out for this!
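The difference is easy to demonstrate:

```csharp
using System;
using System.Collections.Concurrent;

var queue = new BlockingCollection<int> { 1, 2 };
queue.CompleteAdding();

foreach (var _ in queue) { }                          // plain enumeration: a snapshot, nothing is removed
Console.WriteLine(queue.Count);                       // 2

foreach (var _ in queue.GetConsumingEnumerable()) { } // consuming enumeration: items are dequeued
Console.WriteLine(queue.Count);                       // 0
```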

In general, using the blocking collections provided by the .NET Framework is easier and safer than writing your own implementation.

The Pipeline pattern doesn’t automatically scale with the number of cores. This is one of its limitations, unless additional parallelism is introduced within a pipeline stage itself.

The Decorator pattern overrides the behavior of an underlying class. Decorators use a “contains” relationship and inheritance.

The Adapter pattern translates from one interface to another. This pattern is applicable to parallelism because, like the Façade pattern, it allows you to hide complexity and to expose an interface that’s easier for developers to use. Use the Adapter pattern to modify an existing interface so that developers who are not familiar with parallelism can use it more easily.

ADO.NET objects don’t lock resources and must be used on only a single thread. The ADO.NET Entity Framework (EF) is built on top of ADO.NET and has the same limitation.

Parallelism is appropriate for applications that merge the results of queries from several different data sources. Applications that run on Windows Azure are good examples. Windows Azure applications often store data in a mixture of table storage and blob storage and may break data up between databases for reasons of scalability and performance.

Guidelines
Here are some guidelines for accessing data:
● Don’t share ADO.NET connections between tasks. ADO.NET is not thread safe. Each task should use its own connection.
● Keep connections to the data source open for as little time as possible. Explicitly close connections to ensure that unused connections aren’t left open. Don’t rely on garbage collection to do this for you.
● Use tasks to parallelize connections to different databases. Using tasks to open multiple connections to the same database may have significant performance implications.

Normally, instances of an object are created using the class constructor. This is shown in the following example.

Instead, you want to be able to obtain the single instance, as shown here.

Here, Instance is a static read-only property of the class, and each time it’s called, it’ll refer to the same single instance of MyClass. By adding this
special Instance property, MyClass is transformed into a singleton class.
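The snippets aren’t shown above; a minimal sketch of the described transformation (MyClass is the name used in the text):

```csharp
public sealed class MyClass
{
    private static readonly MyClass instance = new MyClass();

    // A private constructor prevents callers from writing "new MyClass()".
    private MyClass() { }

    // Every caller receives the same single instance.
    public static MyClass Instance => instance;
}

// Usage: obtain the instance instead of constructing one.
// MyClass c = MyClass.Instance;
```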

LazySingleton
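A lazy variant can be sketched with Lazy<T>, which defers construction until first access and is thread safe by default (LazyThreadSafetyMode.ExecutionAndPublication):

```csharp
using System;

public sealed class LazySingleton
{
    // The factory delegate runs at most once, on the first access to Instance.
    private static readonly Lazy<LazySingleton> lazy =
        new Lazy<LazySingleton>(() => new LazySingleton());

    private LazySingleton() { }

    public static LazySingleton Instance => lazy.Value;
}
```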

The application uses continuation tasks instead of event handlers or other callback mechanisms. In fact, you can think of a continuation task as a kind of callback.

Here are some guidelines for using futures and continuations:
● Use continuation tasks that are attached to the model’s tasks to update the view model. This approach helps to decouple the model from the view.
● Use immutable types to pass results from the continuation tasks to the UI thread. This makes it easier to ensure that your code is correct.
● Use continuation tasks instead of background worker threads when you want to execute several different tasks in the background and merge the
results before passing them back to the UI thread.
● Use the correct task scheduler to make sure that modifications to UI objects, which have thread affinity, take place on the main UI thread.

Don’t overlook the importance of immutable types in parallel programming. They can improve reliability as well as performance.

Immutable types are a very useful form of scalable sharing. For example, when you append to a string, the result is a string that includes the addition. The original string is unmodified. This means that you can always use a particular string value in parallel code without worrying that another task or thread is modifying it. The combination of immutable types and the .NET Framework concurrent collection classes is particularly useful.

The Adatum Dashboard example passes StockData and StockDataCollection objects between tasks. No locking is required because the types themselves are immutable and therefore free of side effects.

Collections from the namespace System.Collections.Generic are not thread safe. You must use the collections in System.Collections.Concurrent whenever collections are shared by more than one thread.

● Think twice before sharing data.
● Where possible, use shared data collections in preference to locks.
● Use the shared data classes provided by the .NET Framework 4 in preference to writing your own classes.

Multicore vs multiprocessor
If both systems have the same clock speed, number of CPUs/cores, and RAM, the multicore system will run a single program more efficiently. Its cores share cache (L1, L2, and L3), so they can execute multiple instructions of the same program at the same time, but they are less suited to running multiple separate programs. This is where the multiprocessor comes in handy: with each CPU having its own cache, the CPUs can execute separate programs simultaneously, though each individual program will take longer than on a multicore system.

To summarize, a multicore system is the more favorable system for ordinary users. It does not demand any extra support or configuration and will likely cost a bit less. Performance-wise, depending on how you run your programs, each has its pros and cons.

Source: http://theydiffer.com/difference-between-multicore-and-multiprocessor-systems/

fork/join. A parallel computing pattern that uses task parallelism. Fork occurs when tasks start; join occurs when all tasks finish.

You probably should not return an array as the value of a public method or property, particularly when the information content of the array is logically immutable.
Returning an array means that you have to make a fresh copy of the array every time you return it. If you get called a hundred times, you’d better make a hundred array instances, no matter how large they are. It’s a performance nightmare.

You can build yourself a nice read-only collection object once, and then just pass out references to it as much as you want.

A frequently-stated principle of good software design is that code which calls attention to bugs by throwing exceptions is better than code which hides bugs by muddling on through and doing the wrong thing.

Multicast delegates are immutable; when you add or remove a handler, you replace the existing multicast delegate object with a different delegate object that has different behavior. You do not modify an existing object; you modify the variable that stores the event handler. Therefore, stashing the current reference stored in that variable into a temporary variable effectively makes a copy of the current state.

Thread safety is nothing more nor less than a code contract, like any other code contract. You agree to talk to an object in a particular manner, and it agrees to give you correct results if you do so; working out exactly what that manner is, and what the correct responses are, is a potentially tough problem.

Exceptions are not completely immutable in .NET. The exception object’s stack trace is set at the point where the exception is thrown (every time it is thrown), not at the point where it is created. “throw;” does not reset the stack trace; “throw ex;” does.
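A small demonstration of the difference:

```csharp
using System;

class StackTraceDemo
{
    static void Fail() => throw new InvalidOperationException("boom");

    static void Main()
    {
        try
        {
            try { Fail(); }
            catch (InvalidOperationException)
            {
                // "throw;" rethrows the same object and preserves the original trace,
                // so Fail still appears in it. "throw ex;" would reset the trace here.
                throw;
            }
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.StackTrace.Contains("Fail")); // True with "throw;"
        }
    }
}
```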

Creating exceptions is cheap, and you’re already in a fatal error situation; it doesn’t matter if the app crashes a few microseconds slower. Take the time to allocate as many exceptions as you plan on throwing.

How to: Write a Parallel.For Loop with Thread-Local Variables
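A sketch of the thread-local-variable overload of Parallel.For, summing an array without a lock in the loop body:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadLocalSum
{
    static void Main()
    {
        int[] nums = new int[1_000_000];
        for (int i = 0; i < nums.Length; i++) nums[i] = 1;

        long total = 0;
        Parallel.For<long>(0, nums.Length,
            () => 0,                                   // initialize each task's local subtotal
            (i, loop, subtotal) => subtotal + nums[i], // accumulate without locking
            subtotal => Interlocked.Add(ref total, subtotal)); // merge once per task

        Console.WriteLine(total); // 1000000
    }
}
```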

Using interlocked.

Interlocked helps with threaded programs: it safely changes the value of a shared variable from multiple threads. The same result is possible with the lock statement, but the Interlocked type often gives simpler and faster code.

Interlocked.Add. When using Interlocked, forget all you know about addition, subtraction and assignment operators. Instead, you will use the Add, Increment, Decrement, Exchange and CompareExchange methods.

Interlocked.Increment was several times faster, requiring only 6 nanoseconds versus 40 nanoseconds for the lock construct.
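For instance, a shared counter updated from a parallel loop:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedDemo
{
    static int counter;

    static void Main()
    {
        Parallel.For(0, 100_000, _ =>
        {
            Interlocked.Increment(ref counter); // atomic; a plain "counter++" would lose updates
        });
        Console.WriteLine(counter);             // 100000

        int old = Interlocked.Exchange(ref counter, 0); // atomically read and reset
        Console.WriteLine(old);                 // 100000
    }
}
```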

Parallel ForEach

Cancel a Parallel.For or ForEach Loop

In a Parallel.For or Parallel.ForEach loop, you cannot use the same break or Exit statement that is used in a sequential loop because those language
constructs are valid for loops, and a parallel “loop” is actually a method, not a loop. Instead, you use either the Stop or Break method.

C# feature

Non-nullable reference types (discussed at the time as a possible C# 7 feature; the capability ultimately shipped as nullable reference types in C# 8)

Operation Catalog
http://martinfowler.com/articles/collection-pipeline/
Here is a catalog of the operations that you often find in collection pipelines. Every language makes different choices on what operations are available and what they are called, but I’ve tried to look at them through their common capabilities.

Using the Caller Info Attributes and Logging

You can think of a delegate type as being a bit like an interface with a single method. Delegates types are declared with the delegate keyword. They can appear either on their own or nested within a class, as shown below.
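For instance (a minimal sketch; the Logger name is my own illustration):

```csharp
using System;

// A delegate type: any method taking a string and returning void matches.
delegate void Logger(string message);

class DelegateDemo
{
    static void WriteToConsole(string message) => Console.WriteLine(message);

    static void Main()
    {
        Logger log = WriteToConsole; // a delegate instance referring to a method
        log("hello");                // invoking it calls the target method
    }
}
```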

It’s important to understand that delegate instances are always immutable. Anything which combines them together (or takes one away from the other) creates a new delegate instance to represent the new list of targets/methods to call. This is just like strings: if you call String.PadLeft for instance, it doesn’t actually change the string you call it on – it just returns a new string with the appropriate padding.

First things first: events aren’t delegate instances. Let’s try that again. Events aren’t delegate instances.

Event Handler Add/Remove

A shortcut: field-like events

C# provides a simple way of declaring both a delegate variable and an event at the same time. This is called a field-like event, and is declared very simply – it’s the same as the “longhand” event declaration, but without the “body” part:
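A field-like event in use (the names are illustrative):

```csharp
using System;

class Download
{
    // Field-like event: the compiler generates the backing delegate field
    // plus add/remove accessors.
    public event EventHandler Completed;

    public void Finish() =>
        Completed?.Invoke(this, EventArgs.Empty); // raise, if anyone subscribed
}

class Program
{
    static void Main()
    {
        var d = new Download();
        d.Completed += (sender, e) => Console.WriteLine("done"); // subscribe with +=
        d.Finish();
    }
}
```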

Event handler sample.

http://csharpindepth.com/Articles/Chapter2/Events.aspx
What’s the point of having both delegates and events?

The answer is encapsulation. Suppose events didn’t exist as a concept in C#/.NET. How would another class subscribe to an event? Three options:
1. A public delegate variable
2. A delegate variable backed by a property
3. A delegate variable with AddXXXHandler and RemoveXXXHandler methods

Option 1 is clearly horrible, for all the normal reasons we abhor public variables.

Option 2 is slightly better, but allows subscribers to effectively override each other – it would be all too easy to write
someInstance.MyEvent = eventHandler; which would replace any existing event handlers rather than adding a new one. In addition, you still need to write the properties.

Option 3 is basically what events give you, but with a guaranteed convention (generated by the compiler and backed by extra flags in the IL) and a “free” implementation if you’re happy with the semantics that field-like events give you. Subscribing to and unsubscribing from events is encapsulated without allowing arbitrary access to the list of event handlers, and languages can make things simpler by providing syntax for both declaration and subscription.

The delegates are executed on threads created by the system thread-pool.

Async Delegate Example:
The calls to EndInvoke block until the delegate has completed in much the same way as calls to Thread.Join block until the threads involved have
terminated.

Async Delegate example

Here is the callback function:

Thread-pool threads are background threads – without the extra Sleep call, the application would terminate before the delegate calls finished executing.

Closures allow you to encapsulate some behaviour, pass it around like any other object, and still have access to the context in which they were first declared.

Predicate usage

Event Handler
https://www.dotnetperls.com/event

The delegate is a type that defines a signature, that is, the return value type and parameter list types for a method. You can use the delegate type to declare a variable that can refer to any method with the same signature as the delegate.

The EventHandler delegate is a predefined delegate that specifically represents an event handler method for an event that does not generate data. If your event does generate data, you must use the generic EventHandler<TEventArgs> delegate class.
To associate the event with the method that will handle the event, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate.

Events and Inheritance
When creating a general component that can be derived from, what seems to be a problem sometimes arises with events. Since events can only be invoked from within the class that declared them, derived classes cannot directly invoke events declared within the base class. Although this is sometimes what is desired, often it is appropriate to give the derived class the freedom to invoke the event. This is typically done by creating a protected invoking method for the event. By calling this invoking method, derived classes can invoke the event. For even more flexibility, the invoking method is often declared as virtual, which allows the derived class to override it. This allows the derived class to intercept the events that the base class is invoking, possibly doing its own processing of
them.
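The protected virtual invoker described above can be sketched like this (names are illustrative):

```csharp
using System;

class BaseComponent
{
    public event EventHandler Changed;

    // Protected virtual invoker: derived classes can both raise and intercept the event.
    protected virtual void OnChanged(EventArgs e) => Changed?.Invoke(this, e);
}

class DerivedComponent : BaseComponent
{
    public void Touch() => OnChanged(EventArgs.Empty); // derived class raises the base event

    protected override void OnChanged(EventArgs e)
    {
        Console.WriteLine("intercepted");              // pre-processing before subscribers run
        base.OnChanged(e);
    }
}

class Program
{
    static void Main()
    {
        var c = new DerivedComponent();
        c.Changed += (s, e) => Console.WriteLine("subscriber ran");
        c.Touch(); // prints "intercepted" then "subscriber ran"
    }
}
```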

The event keyword indicates to the compiler that the delegate can be invoked only by the defining class, and that other classes can subscribe to and unsubscribe from the delegate using only the appropriate += and -= operators, respectively.

Adding the event keyword fixes both problems. Classes can no longer attempt to subscribe to the event using the assignment operator (=), as they could previously, nor can they invoke the event directly, as was done in the preceding example. Either of these attempts will now generate a compile error:

If you subscribe an anonymous method, you cannot invoke that method except through the delegate; but that is exactly what you want.

In .NET, all event handlers return void, and take two parameters. The first parameter is of type object and is the object that raises the event; the second argument is an object of type EventArgs or of a type derived from EventArgs, which may contain useful information about the event.

A lambda expression is an expression using the operator => that returns an unnamed method. Lambda expressions are similar to anonymous methods, but aren’t restricted to being used as delegates.
Some examples of responsibilities to consider that may need to be separated include:

● Persistence
● Validation
● Notification
● Error Handling
● Logging
● Class Selection / Instantiation
● Formatting
● Parsing
● Mapping

Usage errors should never occur in production code. For example, if passing a null reference as one of the method’s arguments causes an error state (usually represented by an ArgumentNullException exception), you can modify the calling code to ensure that a null reference is never passed. For all other errors, exceptions that occur when an asynchronous method is running should be assigned to the returned task, even if the asynchronous method happens to complete synchronously before the task is returned. Typically, a task contains at most one exception. However, if the task represents multiple operations (for example, WhenAll), multiple exceptions may be associated with a single task.

Consumers of a TAP method may safely assume that the returned task is active and should not try to call Start on any Task that is returned from a TAP method. Calling Start on an active task results in an InvalidOperationException exception.

If the cancellation request results in work being ended prematurely, the TAP method returns a task that ends in the Canceled state; there is no available result and no exception is thrown. The Canceled state is considered to be a final (completed) state for a task, along with the Faulted and RanToCompletion states.

For example, consider an asynchronous method that renders an image. The body of the task can poll the cancellation token so that the code may exit early if a cancellation request arrives during rendering. In addition, if the cancellation request arrives before rendering starts, you’ll want to prevent the rendering operation:
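A sketch of that pattern (the rendering method is hypothetical, not a real API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class RenderDemo
{
    static Task RenderAsync(CancellationToken token)
    {
        return Task.Run(() =>
        {
            for (int row = 0; row < 100_000; row++)
            {
                token.ThrowIfCancellationRequested(); // poll: exit early mid-render
                // ... render one row ...
            }
        }, token); // passing the token also prevents the work from starting if already canceled
    }

    static void Main()
    {
        var cts = new CancellationTokenSource();
        cts.Cancel();                   // cancellation arrives before rendering starts
        Task task = RenderAsync(cts.Token);
        try { task.Wait(); }
        catch (AggregateException) { }
        Console.WriteLine(task.Status); // Canceled
    }
}
```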

The OrdinalIgnoreCase comparison treats the characters in the strings to compare as if they were converted to uppercase using the conventions of the invariant culture, and then performs a simple byte comparison that is independent of language. This is most appropriate when comparing strings that are generated programmatically or when comparing case-insensitive resources such as paths and file names.

TaskCompletionSource example;

https://channel9.msdn.com/Blogs/philpenn/TaskCompletionSourceTResult
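A minimal sketch of TaskCompletionSource, wrapping a callback-based Timer in a Task (the scenario is my own, not taken from the linked video):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TcsDemo
{
    // The callback completes the task; callers just await/wait on tcs.Task.
    static Task<DateTime> DelayedTimeAsync(int milliseconds)
    {
        var tcs = new TaskCompletionSource<DateTime>();
        var timer = new Timer(_ => tcs.TrySetResult(DateTime.UtcNow));
        timer.Change(milliseconds, Timeout.Infinite); // fire once
        return tcs.Task;
    }

    static void Main()
    {
        DateTime t = DelayedTimeAsync(100).Result; // blocks here only for the demo
        Console.WriteLine(t);
    }
}
```

(A production version would also dispose the timer when the task completes.)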

For example, if you wanted to download a bunch of web pages asynchronously and then only do something when all of them were complete, with WebClient by itself you’d have to code up that logic manually; in contrast, if you had a Task<T> for each download, you could then use ContinueWhenAll to launch additional work when all of those downloads completed (and you’d get back a Task that represents that continuation). In short, once you have a Task<T> to represent the EAP operation, you can compose it with any other tasks you might have, and you have the full functionality afforded to tasks to process that EAP operation.


C# – walk-through – (Part 1)

http://www.quartz-scheduler.net/documentation/quartz-2.x/quick-start.html

There are three ways (which are not mutually exclusive) to supply Quartz.NET configuration information:

  • Programmatically via providing NameValueCollection parameter to scheduler factory
  • Via standard youapp.exe.config configuration file using quartz-element
  • quartz.config file in your application’s root directory

Quartz.NET comes with sane defaults.

NCrunch will religiously set the environment variable ‘NCrunch’ equal to ‘1’ inside each of its task runner processes. This applies to both build tasks and test tasks. You can make use of this environment variable to redirect your continuous tests to a different schema/database, for example:

WebJobs SDK

Azure Web Jobs Dashboard format for the URL to access is this: https://YOURSITE.scm.azurewebsites.net/azurejobs.

https://blogs.msdn.microsoft.com/ericlippert/2009/11/12/closing-over-the-loop-variable-considered-harmful/

var values = new List<int>() { 100, 110, 120 };
var funcs = new List<Func<int>>();
foreach (var v in values)
    funcs.Add(() => v);
foreach (var f in funcs)
    Console.WriteLine(f());

Most people expect it to be 100 / 110 / 120. With the C# 4 (and earlier) compilers it is in fact 120 / 120 / 120. Why?

Because ()=>v means “return the current value of variable v”, not “return the value v had back when the delegate was created”. Closures close over variables, not over values. And when the methods run, the last value that was assigned to v was 120, so it still has that value. (Note: C# 5 changed foreach so that each iteration gets a fresh loop variable, so this particular example now prints 100 / 110 / 120; the pitfall still applies to for loops.)

This is very confusing. The correct way to write the code, which behaves the same on every compiler version, is:

foreach (var v in values)
{
    var v2 = v;
    funcs.Add(() => v2);
}

Now what happens? Every time we re-start the loop body, we logically create a fresh new variable v2. Each closure is closed over a different v2, which is only assigned to once, so it always keeps the correct value.


You can use

await Task.Yield();

in an asynchronous method to force the method to complete asynchronously. If there is a current synchronization context (a SynchronizationContext object), this will post the remainder of the method’s execution back to that context. However, the context decides how to prioritize this work relative to other work that may be pending. The synchronization context that is present on a UI thread in most UI environments will often prioritize work posted to the context higher than input and rendering work. For this reason, do not rely on await Task.Yield(); to keep a UI responsive.

A task that’s not bound to a thread:

With that, you can write:
await p.ExitedAsync();
and you won’t be blocking any threads while asynchronously waiting for the process to exit.
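The ExitedAsync helper isn’t shown above; here is one plausible implementation built on TaskCompletionSource (on .NET 5+ the built-in Process.WaitForExitAsync serves the same purpose):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

static class ProcessExtensions
{
    public static Task ExitedAsync(this Process p)
    {
        var tcs = new TaskCompletionSource<bool>();
        p.EnableRaisingEvents = true;
        p.Exited += (s, e) => tcs.TrySetResult(true); // the event completes the task
        if (p.HasExited) tcs.TrySetResult(true);      // handle a race with an already-exited process
        return tcs.Task;                              // no thread is blocked while waiting
    }
}
```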

If you use the standard TPL functionality, by default, it’ll use a ThreadPool thread.

Async methods do not create new threads. They compose asynchrony; they do not create asynchrony.

When you create a Task or Task<TResult> object to perform some task asynchronously, by default the task is scheduled to run on a thread pool
thread.

The threads in the managed thread pool are background threads. That is, their IsBackground properties are true. This means that a
ThreadPool thread will not keep an application running after all foreground threads have exited.

The thread pool uses background threads, which do not keep the application running if all foreground threads have terminated.
There is no way to cancel a work item after it has been queued.
There is one thread pool per process.
In most cases the thread pool will perform better with its own algorithm for allocating threads.

When you work with tasks, they run their code using underlying threads (software threads, scheduled on certain hardware threads or logical cores). However, there isn’t a 1-to-1 relationship between tasks and threads. This means you’re not creating a new thread each time you create a new task. The CLR creates the necessary threads to support the tasks’ execution needs.

Think of creating and starting a Task (passing it a delegate) as the equivalent of calling QueueUserWorkItem on the ThreadPool.

The other worker thread completes Task1 and then goes to its local queue and finds it empty; it then goes to the global queue and finds it empty. We don’t want it sitting there idle, so a beautiful thing happens: work stealing. The thread goes to the local queue of another thread, “steals” a Task, and executes it!

.NET threads take up at least 1MB of memory (because they set aside 1MB for their stack)

Parallel ForEach

LINQ Parallel

The correct number of threads is, of course, equal to the number of cores on the box.
The problem with the current ThreadPool API is that it has almost no API. You simply throw items at it in a “fire and forget” manner. You get back no handle to the work item: no way of cancelling it, waiting on it, composing a group of items in a structured way, handling exceptions thrown concurrently, or any other richer construct built on top of it.
Never explicitly use threads for anything at all (not just in the context of parallelism, but under no circumstances whatsoever).
Any class that deals with unmanaged code is supposed to implement the IDisposable interface and provide a Dispose() method that explicitly cleans up the memory used by any unmanaged code.

Implement IDisposable only if you are using unmanaged resources directly. If your app simply uses an object that implements IDisposable, don’t provide an IDisposable implementation of your own. Instead, call the object’s IDisposable.Dispose implementation when you are finished using it.
The following code fragment reflects the dispose pattern for base classes. It assumes that your type does not override the Object.Finalize method.

The following code fragment reflects the dispose pattern for derived classes. It assumes that your type does not override the Object.Finalize method.
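The fragments aren’t shown above; a sketch of the standard pattern for both cases (assuming, as stated, no Object.Finalize override; the class names are illustrative):

```csharp
using System;

// Dispose pattern for a base class.
public class Resource : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no finalizer needs to run for this instance
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Free managed resources here (e.g. dispose owned IDisposable fields).
        }
        // Free unmanaged resources here, if any.
        disposed = true;
    }
}

// Dispose pattern for a derived class: override Dispose(bool), never Dispose().
public class DerivedResource : Resource
{
    private bool disposed;

    protected override void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Free this class's managed resources.
        }
        disposed = true;
        base.Dispose(disposing); // always let the base class clean up too
    }
}
```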

Difference Between References and Pointers
A reference encapsulates a memory address, limiting the operations that can be performed on the address value to a language­specified subset.
A pointer gives you unfettered access to the address itself, enabling all operations that can legally be performed on a native integer.

Whenever you use anything that implements IDisposable, there’s a good chance that there’s some interop going on behind the scenes.
Important CLR memory concepts:
Each process has its own, separate virtual address space.
By default, on 32-bit computers, each process has a 2-GB user-mode virtual address space.
Before a garbage collection starts, all managed threads are suspended except for the thread that triggered the garbage collection.

Server garbage collection can be resource-intensive. For example, if you have 12 processes running on a computer that has 4 processors, there will be 48 dedicated garbage collection threads (each of the 12 processes has 4 GC threads, one per processor) if they are all using server garbage collection. In a high memory load situation, if all the processes start doing garbage collection, the garbage collector will have 48 threads to schedule. If you are running hundreds of instances of an application, consider using workstation garbage collection with concurrent garbage collection disabled. This will result in less context switching, which can improve performance.
The overall goal is to decompose the problem into independent tasks that do not share data, while providing sufficient tasks to occupy the number of cores available.

Keep in mind that tasks are not threads. Tasks and threads take very different approaches to scheduling. Tasks are much more compatible with the concept of potential parallelism than threads are. While a new thread immediately introduces additional concurrency to your application, a new task introduces only the potential for additional concurrency. A task’s potential for additional concurrency will be realized only when there are enough available cores.

Every form of synchronization is a form of serialization. Your tasks can end up contending over the locks instead of doing the work you want them to do. Programming with locks is also error-prone.

Locks can be thought of as the goto statements of parallel programming: they are error-prone but necessary in certain situations, and they are best left, when possible, to compilers and libraries.

Parallel Break
The Parallel.For method has an overload that provides a ParallelLoopState object as a second argument to the loop body. You can ask the loop to break by calling the Break method of the ParallelLoopState object. Here’s an example.
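The example isn’t shown; a minimal sketch (the negative value as a stop condition is my own illustration):

```csharp
using System;
using System.Threading.Tasks;

class BreakDemo
{
    static void Main()
    {
        int[] data = { 1, 2, 3, -1, 5, 6 };

        ParallelLoopResult result = Parallel.For(0, data.Length, (i, loopState) =>
        {
            if (data[i] < 0)
                loopState.Break(); // finish iterations below this index; start no higher ones
            else
                Console.WriteLine(data[i]); // iterations above the break index may still run
        });

        Console.WriteLine(result.LowestBreakIteration); // 3, the index where Break was called
    }
}
```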

LowestBreakIteration

The Parallel.For and Parallel.ForEach methods include overloaded versions that accept parallel loop options as one of the arguments. You can specify a
cancellation token as one of these options. If you provide a cancellation token as an option to a parallel loop, the loop will use that token to look for a
cancellation request. Here’s an example.
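The example isn’t shown; a minimal sketch using ParallelOptions (the timings are arbitrary):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancelLoopDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        cts.CancelAfter(50); // in a real app, cancellation would come from e.g. a UI button

        var options = new ParallelOptions { CancellationToken = cts.Token };
        try
        {
            Parallel.For(0, int.MaxValue, options, i =>
            {
                Thread.SpinWait(10_000); // simulate work; the loop itself polls the token
            });
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("loop canceled");
        }
    }
}
```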

If the body of a parallel loop throws an unhandled exception, the parallel loop no longer begins any new steps. By default, iterations that are executing at
the time of the exception, other than the iteration that threw the exception, will complete. After they finish, the parallel loop will throw an exception in the
context of the thread that invoked it.

The .NET Framework Random class does not support multi-threaded access. Therefore, you need a separate instance of the random number generator for each thread.

Arbitrarily increasing the degree of parallelism puts you at risk of processor oversubscription, a situation that occurs when there are many more compute-intensive worker threads than there are cores.

In most cases, the built-in load balancing algorithms in the .NET Framework are the most effective way to manage tasks. They coordinate resources among parallel loops and other tasks that are running concurrently.

The Parallel class and PLINQ work on slightly different threading models in the .NET Framework 4.
PLINQ uses a fixed number of tasks to execute a query; by default, it creates the same number of tasks as there are logical cores in the computer.

Conversely, by default, the Parallel.ForEach and Parallel.For methods can use a variable number of tasks. The idea is that the system can use fewer
threads than requested to process a loop.

You can also use the Parallel.Invoke method to achieve parallelism. The Parallel.Invoke method has very convenient syntax. This is shown in the following
code.
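The code isn’t shown; a minimal sketch:

```csharp
using System;
using System.Threading.Tasks;

class InvokeDemo
{
    static void Main()
    {
        // Runs the three delegates in parallel and waits for all of them (implicit WaitAll).
        Parallel.Invoke(
            () => Console.WriteLine("task A"),
            () => Console.WriteLine("task B"),
            () => Console.WriteLine("task C")); // A/B/C may print in any order

        Console.WriteLine("all done"); // always printed last
    }
}
```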

For example, in an interactive GUI-based application, checking for cancellation more than once per second is probably a good idea. An application that runs in the background could poll for cancellation much less frequently, perhaps every two to ten seconds. Profiling your application can give you performance data that you can use when determining the best places to test for cancellation requests in your code.

In many cases, unhandled task exceptions will be observed in a different thread than the one that executed the task.

The Parallel.Invoke method includes an implicit call to WaitAll. Exceptions from all of the tasks are grouped together in an AggregateException object and
thrown in the calling context of the WaitAll or Wait method.

The Flatten method of the AggregateException class is useful when tasks are nested within other tasks. In this case, it’s possible that an aggregate exception
can contain other aggregate exceptions as inner exceptions.

Speculative Execution

In C#, a closure can be created with a lambda expression in the form args => body that represents an unnamed (anonymous) delegate.
A unique feature of closures is that they may refer to variables defined outside their lexical scope, such as local variables that were declared in a
scope that contains the closure.

Terminating tasks with the Thread.Abort method leaves the AppDomain in a potentially unusable state. Also, aborting a thread pool worker thread is never
recommended. If you need to cancel a task, use the technique described in the section, “Canceling a Task,” earlier in this chapter. Do not abort the task’s
thread.

Never attempt to cancel a task by calling the Abort method of the thread that is executing the task.

There is one more task status, TaskStatus.Created. This is the status of a task immediately after it’s created by the Task class’s constructor; however, it’s
recommended that you use a factory method to create tasks instead of the new operator.

Implementing parallel aggregation with PLINQ doesn’t require adding locks in your code. Instead, all the synchronization occurs internally, within PLINQ.

Here’s how to use PLINQ to apply map/reduce to the social networking example.

SelectMany flattens queries that return lists of lists. For example
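The example isn’t shown; a sketch loosely in the spirit of the social networking scenario (the data is made up):

```csharp
using System;
using System.Linq;

class SelectManyDemo
{
    static void Main()
    {
        var subscribers = new[]
        {
            new { Name = "Ann", Friends = new[] { "Bob", "Cho" } },
            new { Name = "Bob", Friends = new[] { "Cho", "Dex" } },
        };

        // Select would yield a sequence of arrays; SelectMany flattens them into one sequence.
        var allFriends = subscribers.SelectMany(s => s.Friends);
        Console.WriteLine(string.Join(", ", allFriends)); // Bob, Cho, Cho, Dex
    }
}
```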

The syntax for locking in C# is lock ( object ) { body }. The object uniquely identifies the lock. All cooperating threads must use the same synchronizing object,
which must be a reference type such as Object and not a value type such as int or double. When you use lock with Parallel.For or Parallel.ForEach you should
create a dummy object and set it as the value of a captured local variable dedicated to this purpose. (A captured variable is a local variable from the
enclosing scope that is referenced in the body of a lambda expression.) The lock’s body is the region of code that will be protected by the lock. The body
should take only a small amount of execution time. Which shared variables are protected by the lock object varies by application and is something that all
programmers whose code accesses those variables must be careful not to contradict.
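Putting those rules together, a dummy lock object captured by the loop body looks like this:

```csharp
using System;
using System.Threading.Tasks;

class LockDemo
{
    static void Main()
    {
        double sum = 0;
        object lockObject = new object(); // captured local dedicated to guarding "sum"

        Parallel.ForEach(new[] { 1.0, 2.0, 3.0, 4.0 }, value =>
        {
            double work = value * value; // do the expensive part outside the lock
            lock (lockObject)
            {
                sum += work;             // keep the protected region small
            }
        });

        Console.WriteLine(sum); // 30
    }
}
```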

A task’s delegate is guaranteed to execute from start to finish on only one thread. (An async method’s continuations, by contrast, may resume on different threads.)

HttpClient SendAsync method usage for POST request

Here is a very basic example of using HttpClient for your REST calls.

public static class UserApiClient
{
    // HttpClient is intended to be created once and reused; a new instance
    // per call can exhaust sockets under load.
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> UpdateUser(string endpoint, string accessToken, string displayName)
    {
        using (var message = new HttpRequestMessage(HttpMethod.Post, endpoint))
        {
            message.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-www-form-urlencoded"));
            message.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            var nameValueCollection = new List<KeyValuePair<string, string>>
            {
                new KeyValuePair<string, string>("display_name", displayName)
            };
            message.Content = new FormUrlEncodedContent(nameValueCollection);
            try
            {
                HttpResponseMessage httpResponseMessage = await client.SendAsync(message);
                httpResponseMessage.EnsureSuccessStatusCode();
                return await httpResponseMessage.Content.ReadAsStringAsync();
            }
            catch (HttpRequestException ex)
            {
                // Preserve the original exception as InnerException instead of discarding it.
                throw new Exception(ex.GetType() + ": " + ex.Message, ex);
            }
        }
    }
}