Solr – walk-through – Part 2

Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are “stacked”
on top of each other when building a ClassLoader – so if you have
plugin jars with dependencies on other jars, the “lower level”
dependency jars should be loaded first.
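For instance (directory paths here are illustrative):

```xml
<!-- loaded first: jars that the plugin jars depend on -->
<lib dir="../../lib/third-party" />
<!-- loaded after its dependencies -->
<lib dir="../../lib/my-plugins" />
```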

If a “./lib” directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax…

<lib dir="./lib" />

Documents Screen

The first step is to define the RequestHandler to use (aka ‘qt’). By default, /update will be defined. To use Solr Cell, for example, change the request handler to /update/extract.

The Files screen lets you browse & view the various configuration files (such as solrconfig.xml and the schema file) for the collection you selected.

Configuration files cannot be edited with this screen, so a text editor of some kind must be used.


Android – Part 1

Full Source: https://developer.android.com

Android Studio uses Gradle to compile and build your app. This is where your app’s build dependencies are set, including the defaultConfig settings.

Hyper-V and Android emulators

gradlew.bat assembleDebug

The at sign (@) is required when you’re referring to any resource object from XML. It is followed by the resource type (id in this case), a slash, then the resource name (edit_message).

By default, your Android project includes a string resource file at res > values > strings.xml. Here, you’ll add two new strings.

Make the Input Box Fill the Screen Width
In activity_main.xml, modify the <EditText> so that the attributes look like this:
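The modified element was not captured in these notes; in the classic version of this tutorial, the key change is a layout_weight of 1 combined with a zero layout_width, along these lines (attribute values other than the weight/width pair are assumptions):

```xml
<EditText
    android:id="@+id/edit_message"
    android:layout_weight="1"
    android:layout_width="0dp"
    android:layout_height="wrap_content"
    android:hint="@string/edit_message" />
```

Setting layout_width to 0dp while giving the view all the weight lets it absorb the remaining horizontal space instead of computing its own width.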

An Intent is an object that provides runtime binding between separate components (such as two activities). The Intent represents an app’s “intent to do something.” You can use intents for a wide variety of tasks.

Every Activity is invoked by an Intent, regardless of how the user navigated there.

To add support for more languages, create additional values directories inside res/ that include a hyphen and the ISO language code at the end of the directory name. For example, values-es/ is the directory containing simple resources for locales with the language code “es”.
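For example (file path per the convention above; the string names are placeholders):

```xml
<!-- res/values-es/strings.xml -->
<resources>
    <string name="title">Mi Aplicación</string>
    <string name="hello_world">¡Hola Mundo!</string>
</resources>
```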

In your source code, you can refer to a string resource with the syntax R.string.<string_name>. There are a variety of methods that accept a string resource this way.

Avoid specifying dimensions with px units, since they do not scale with screen density. Instead, specify dimensions with density-independent pixel (dp) units.

To create a dynamic and multi-pane user interface on Android, you need to encapsulate UI components and activity behaviors into modules that you can swap into and out of your activities. You can create these modules with the Fragment class, which behaves somewhat like a nested activity that can define its own layout and manage its own lifecycle.

Just like an activity, a fragment should implement other lifecycle callbacks that allow you to manage its state as it is added or removed from the activity and as the activity transitions between its lifecycle states. For instance, when the activity’s onPause() method is called, any fragments in the activity also receive a call to onPause().

Here is an example layout file that adds two fragments to an activity when the device screen is considered “large” (specified by the large qualifier in the directory name).
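The layout file itself was not captured here; a sketch following the standard fragments training example (the fragment class names and file name are assumptions):

```xml
<!-- res/layout-large/news_articles.xml -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment
        android:name="com.example.android.fragments.HeadlinesFragment"
        android:id="@+id/headlines_fragment"
        android:layout_weight="1"
        android:layout_width="0dp"
        android:layout_height="match_parent" />

    <fragment
        android:name="com.example.android.fragments.ArticleFragment"
        android:id="@+id/article_fragment"
        android:layout_weight="2"
        android:layout_width="0dp"
        android:layout_height="match_parent" />
</LinearLayout>
```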

When you add a fragment to an activity layout by defining the fragment in the layout XML file, you cannot remove the fragment at runtime. If you plan to swap your fragments in and out during user interaction, you must add the fragment to the activity when the activity first starts, as shown in the next lesson.

C# – walk-through – (Part 1)

http://www.quartz-scheduler.net/documentation/quartz-2.x/quick-start.html

There are three ways (which are not mutually exclusive) to supply Quartz.NET configuration information:

  • Programmatically, by providing a NameValueCollection parameter to the scheduler factory
  • Via the standard yourapp.exe.config configuration file, using the quartz element
  • Via a quartz.config file in your application’s root directory
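For example, a minimal quartz.config in properties format (the values here are illustrative; the property keys are the standard Quartz.NET ones):

```
# quartz.config in the application root
quartz.scheduler.instanceName = ExampleScheduler
quartz.threadPool.threadCount = 3
quartz.jobStore.type = Quartz.Simpl.RAMJobStore, Quartz
```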

Quartz.NET comes with sane defaults.

NCrunch will religiously set the environment variable ‘NCrunch’ equal to ‘1’ inside each of its task runner processes. This applies to both build tasks and test tasks. You can make use of this environment variable to redirect your continuous tests to a different schema/database, for example:
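A minimal sketch of that redirection (the connection strings are hypothetical):

```csharp
using System;

static class DbConfig
{
    // NCrunch sets the 'NCrunch' environment variable to "1" inside its
    // build and test runner processes; use it to point continuous tests
    // at a test database instead of the real one.
    public static string ConnectionString =>
        Environment.GetEnvironmentVariable("NCrunch") == "1"
            ? "Server=localhost;Database=TestDb"   // hypothetical test DB
            : "Server=prod;Database=ProdDb";       // hypothetical prod DB
}
```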

WebJobs SDK

The URL format for accessing the Azure WebJobs Dashboard is: https://YOURSITE.scm.azurewebsites.net/azurejobs.

https://blogs.msdn.microsoft.com/ericlippert/2009/11/12/closing-over-the-loop-variable-considered-harmful/

var values = new List<int>() { 100, 110, 120 };
var funcs = new List<Func<int>>();
foreach (var v in values)
    funcs.Add(() => v);
foreach (var f in funcs)
    Console.WriteLine(f());

Most people expect it to be 100 / 110 / 120. It is in fact 120 / 120 / 120. Why?

Because () => v means “return the current value of variable v”, not “return the value v had back when the delegate was created”. Closures close over variables, not over values. By the time the delegates run, the last value assigned to v was 120, so each one returns 120. (Note: C# 5 changed foreach to give the loop variable a fresh copy per iteration, so this particular example now prints 100 / 110 / 120; the pitfall still applies to for loops.)

This is very confusing. The correct way to write the code is:

foreach (var v in values)
{
    var v2 = v;
    funcs.Add(() => v2);
}

Now what happens? Every time we re-start the loop body, we logically create a fresh new variable v2. Each closure is closed over a different v2, which is only assigned to once, so it always keeps the correct value.

TaskCompletionSource

You can use

await Task.Yield();

in an asynchronous method to force the method to complete asynchronously. If there is a current synchronization context (a SynchronizationContext object), this will post the remainder of the method’s execution back to that context. However, the context will decide how to prioritize this work relative to other work that may be pending. The synchronization context present on a UI thread in most UI environments will often prioritize work posted to the context higher than input and rendering work. For this reason, do not rely on await Task.Yield(); to keep a UI responsive.
A TaskCompletionSource gives you a Task that’s not bound to a thread.

With that, you can write:
await p.ExitedAsync();
and you won’t be blocking any threads while asynchronously waiting for the process to exit.
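ExitedAsync is not a built-in API; a common way to build it is to wrap the Process.Exited event in a TaskCompletionSource, roughly like this (a sketch, not the author’s exact code):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class ProcessExtensions
{
    // Wraps the Process.Exited event in a Task via TaskCompletionSource,
    // so callers can 'await' process exit without blocking a thread.
    public static Task ExitedAsync(this Process p)
    {
        var tcs = new TaskCompletionSource<object>();
        p.EnableRaisingEvents = true;
        p.Exited += (sender, args) => tcs.TrySetResult(null);
        // Guard against the race where the process already exited
        // before the event handler was attached.
        if (p.HasExited) tcs.TrySetResult(null);
        return tcs.Task;
    }
}
```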

If you use the standard TPL functionality, by default, it’ll use a ThreadPool thread.

Async methods do not create new threads. They compose asynchrony; they do not create it.

When you create a Task or Task<TResult> object to perform some task asynchronously, by default the task is scheduled to run on a thread pool
thread.

The threads in the managed thread pool are background threads. That is, their IsBackground properties are true. This means that a
ThreadPool thread will not keep an application running after all foreground threads have exited.

The thread pool uses background threads, which do not keep the application running if all foreground threads have terminated.
  • There is no way to cancel a work item after it has been queued.
  • There is one thread pool per process.
  • In most cases the thread pool will perform better with its own algorithm for allocating threads.

When you work with tasks, they run their code using underlying threads (software threads, scheduled on certain hardware threads or logical cores). However, there isn’t a 1-to-1 relationship between tasks and threads. This means you’re not creating a new thread each time you create a new task. The CLR creates the necessary threads to support the tasks’ execution needs.

Creating and starting a Task (passing it a delegate) is the equivalent of calling QueueUserWorkItem on the ThreadPool.

The other worker thread completes Task1 and then goes to its local queue and finds it empty; it then goes to the global queue and finds it empty. We don’t want it sitting there idle, so a beautiful thing happens: work stealing. The thread goes to the local queue of another thread, “steals” a Task, and executes it!

.NET threads take up at least 1MB of memory (because they set aside 1MB for their stack)

Parallel.ForEach
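A small sketch of a Parallel.ForEach loop (the squaring workload is just a placeholder):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class Squares
{
    // Squares each input in parallel. Iterations may run on several thread
    // pool threads at once, so results go into a ConcurrentBag, which is
    // safe for concurrent writes; order of completion is not deterministic,
    // so we sort before returning.
    public static int[] Compute(int[] inputs)
    {
        var results = new ConcurrentBag<int>();
        Parallel.ForEach(inputs, n => results.Add(n * n));
        var arr = results.ToArray();
        Array.Sort(arr);
        return arr;
    }
}
```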

Parallel LINQ (PLINQ)
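A minimal PLINQ sketch (the even-number filter is a placeholder workload):

```csharp
using System.Linq;

static class PlinqDemo
{
    // AsParallel() partitions the source across cores; by default PLINQ
    // uses as many tasks as there are logical cores. AsOrdered() preserves
    // source order in the results, at some cost.
    public static int[] Evens(int upTo) =>
        Enumerable.Range(0, upTo)
                  .AsParallel()
                  .AsOrdered()
                  .Where(n => n % 2 == 0)
                  .ToArray();
}
```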

The correct number of threads is, of course, equal to the number of cores on the box.
The problem with the current ThreadPool API is that it has almost no API. You simply throw items to it in a “fire and forget” manner. You get back no handle
to the work item. No way of cancelling it, waiting on it, composing a group of items in a structured way, handling exceptions thrown concurrently or any other
richer construct built on top of it.
Never explicitly use threads for anything at all (not just in the context of parallelism, but under no circumstances whatsoever).
Any class that deals with unmanaged code is supposed to implement the IDisposable interface and provide a Dispose() method that explicitly cleans up
the memory usage from any unmanaged code.
Implement IDisposable only if you are using unmanaged resources directly. If your app simply uses an object that implements IDisposable, don’t provide
an IDisposable implementation. Instead, you should call the object’s IDisposable.Dispose implementation when you are finished using it.
The following code fragment reflects the dispose pattern for base classes. It assumes that your type does not override the Object.Finalize method.
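The fragment itself was not captured in these notes; a sketch of the standard base-class dispose pattern, under the stated assumption that the type does not override Object.Finalize:

```csharp
using System;

public class DisposableBase : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        // There is no finalizer here, but suppressing finalization is part
        // of the pattern so derived types that add one behave correctly.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Free managed resources (e.g. dispose owned IDisposables) here.
        }
        // Free unmanaged resources here.
        _disposed = true;
    }
}
```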

The following code fragment reflects the dispose pattern for derived classes. It assumes that your type does not override the Object.Finalize method.
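Likewise, the derived-class fragment was not captured; in the standard pattern the derived type only overrides Dispose(bool) and chains to the base (DisposableBase here is a stand-in for whatever base type you actually have):

```csharp
using System;

public class DisposableBase : IDisposable
{
    public void Dispose() { Dispose(true); GC.SuppressFinalize(this); }
    protected virtual void Dispose(bool disposing) { }
}

public class DisposableDerived : DisposableBase
{
    private bool _disposed;

    protected override void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Free managed resources owned by the derived class here.
        }
        _disposed = true;
        base.Dispose(disposing); // always let the base clean up too
    }
}
```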

Difference Between References and Pointers
A reference encapsulates a memory address, limiting the operations that can be performed on the address value to a language-specified subset.
A pointer gives you unfettered access to the address itself, enabling all operations that can legally be performed on a native integer.

Whenever you use anything that implements IDisposable, there’s a good chance that there’s some interop going on behind the scenes.
Important CLR memory concepts
Each process has its own, separate virtual address space.
By default, on 32-bit computers, each process has a 2-GB user-mode virtual address space.
Before a garbage collection starts, all managed threads are suspended except for the thread that triggered the garbage collection.

Server garbage collection can be resource-intensive. For example, if you have 12 processes running on a computer that has 4 processors, there will be 48 dedicated garbage collection threads (each of the 12 processes has 4 GC threads, one per processor) if they are all using server garbage collection. In a high memory load situation, if all the processes start doing garbage collection, the garbage collector will have 48 threads to schedule. If you are running hundreds of instances of an application, consider using workstation garbage collection with concurrent garbage collection disabled. This will result in less context switching, which can improve performance.
The overall goal is to decompose the problem into independent tasks that do not share data, while providing sufficient tasks to occupy the number of cores
available.
Keep in mind that tasks are not threads. Tasks and threads take very different approaches to scheduling. Tasks are much more compatible with the concept
of potential parallelism than threads are. While a new thread immediately introduces additional concurrency to your application, a new task introduces only
the potential for additional concurrency. A task’s potential for additional concurrency will be realized only when there are enough available cores.
Every form of synchronization is a form of serialization. Your tasks can end up contending over the locks instead of doing the work you want them to do.
Programming with locks is also error-prone.
Locks can be thought of as the go to statements of parallel programming: they are error prone but necessary in certain situations, and they are best left,
when possible, to compilers and libraries.

Parallel Break
The Parallel.For method has an overload that provides a ParallelLoopState object as a second argument to the loop body. You can ask the loop to break by calling the Break method of the ParallelLoopState object. Here’s an example.
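The example itself was not captured here; a sketch of the pattern (the stop condition and data are placeholders):

```csharp
using System;
using System.Threading.Tasks;

static class BreakDemo
{
    // Negates values until it meets one >= limit, then calls Break.
    // Break guarantees that every iteration *below* the breaking index
    // still runs to completion; no iteration above it will start.
    public static ParallelLoopResult NegateUntil(int[] data, int limit)
    {
        return Parallel.For(0, data.Length, (i, loopState) =>
        {
            if (data[i] >= limit)     // hypothetical stop condition
            {
                loopState.Break();
                return;
            }
            data[i] = -data[i];
        });
    }
}
```

The returned ParallelLoopResult tells you whether and where the loop broke.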

LowestBreakIteration: the ParallelLoopResult returned by Parallel.For/Parallel.ForEach exposes this property, which reports the lowest iteration index on which Break was called (or null if the loop ran to completion).

The Parallel.For and Parallel.ForEach methods include overloaded versions that accept parallel loop options as one of the arguments. You can specify a
cancellation token as one of these options. If you provide a cancellation token as an option to a parallel loop, the loop will use that token to look for a
cancellation request. Here’s an example.
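The example itself was not captured here; a sketch of the pattern (the CancelAfter call stands in for a real cancellation source such as a Cancel button):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class CancelLoopDemo
{
    public static void Run()
    {
        using var cts = new CancellationTokenSource();
        var options = new ParallelOptions { CancellationToken = cts.Token };

        cts.CancelAfter(50); // stand-in for an external cancellation request

        try
        {
            Parallel.For(0, 1_000_000, options, i =>
            {
                Thread.SpinWait(10_000); // simulated work
            });
        }
        catch (OperationCanceledException)
        {
            // The loop observes the token between iterations and throws.
            Console.WriteLine("Loop canceled.");
        }
    }
}
```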

If the body of a parallel loop throws an unhandled exception, the parallel loop no longer begins any new steps. By default, iterations that are executing at
the time of the exception, other than the iteration that threw the exception, will complete. After they finish, the parallel loop will throw an exception in the
context of the thread that invoked it.

The .NET Framework Random class does not support multithreaded access. Therefore, you need a separate instance of the random number
generator for each thread.
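One way to get a per-thread instance is ThreadLocal<Random>; the seeding strategy below is just one option, chosen so threads created at the same tick don’t produce identical sequences:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

static class PerThreadRandom
{
    // One Random per thread. Random itself is not thread-safe, so sharing
    // a single instance across parallel iterations would corrupt its state.
    private static readonly ThreadLocal<Random> Rng = new ThreadLocal<Random>(() =>
        new Random(unchecked(Environment.TickCount * 31 + Environment.CurrentManagedThreadId)));

    public static int[] Sample(int count, int max)
    {
        var bag = new ConcurrentBag<int>();
        Parallel.For(0, count, i => bag.Add(Rng.Value.Next(max)));
        return bag.ToArray();
    }
}
```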

Arbitrarily increasing the degree of parallelism puts you at risk of processor oversubscription, a situation that occurs when there are many more compute-intensive worker threads than there are cores.

In most cases, the built-in load balancing algorithms in the .NET Framework are the most effective way to manage tasks. They coordinate resources
among parallel loops and other tasks that are running concurrently.

The Parallel class and PLINQ work on slightly different threading models in the .NET Framework 4.
PLINQ uses a fixed number of tasks to execute a query; by default, it creates the same number of tasks as there are logical cores in the computer.

Conversely, by default, the Parallel.ForEach and Parallel.For methods can use a variable number of tasks. The idea is that the system can use fewer
threads than requested to process a loop.

You can also use the Parallel.Invoke method to achieve parallelism. The Parallel.Invoke method has very convenient syntax. This is shown in the following
code.
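The code itself was not captured in these notes; a minimal sketch (the delegate bodies are placeholders):

```csharp
using System;
using System.Threading.Tasks;

static class InvokeDemo
{
    public static void Run()
    {
        // Runs the delegates in parallel and blocks until all complete
        // (an implicit WaitAll).
        Parallel.Invoke(
            () => Console.WriteLine("task A"),
            () => Console.WriteLine("task B"),
            () => Console.WriteLine("task C"));
        Console.WriteLine("all done");
    }
}
```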

For example, in an interactive GUI-based application, checking for cancellation more than once per second is probably a good idea. An application that
runs in the background could poll for cancellation much less frequently, perhaps every two to ten seconds. Profiling your application can give you performance
data that you can use when determining the best places to test for cancellation requests in your code.

In many cases, unhandled task exceptions will be observed in a different thread than the one that executed the task.

The Parallel.Invoke method includes an implicit call to WaitAll. Exceptions from all of the tasks are grouped together in an AggregateException object and
thrown in the calling context of the WaitAll or Wait method.

The Flatten method of the AggregateException class is useful when tasks are nested within other tasks. In this case, it’s possible that an aggregate exception
can contain other aggregate exceptions as inner exceptions.

Speculative Execution: doing work in anticipation of needing its result, for example starting several equivalent tasks and keeping only the first one to complete, then canceling the rest.

In C#, a closure can be created with a lambda expression in the form args => body that represents an unnamed (anonymous) delegate.
A unique feature of closures is that they may refer to variables defined outside their lexical scope, such as local variables that were declared in a
scope that contains the closure.

Terminating tasks with the Thread.Abort method leaves the AppDomain in a potentially unusable state. Also, aborting a thread pool worker thread is never
recommended. If you need to cancel a task, use the technique described in the section, “Canceling a Task,” earlier in this chapter. Do not abort the task’s
thread.

Never attempt to cancel a task by calling the Abort method of the thread that is executing the task.

There is one more task status, TaskStatus.Created. This is the status of a task immediately after it’s created by the Task class’s constructor; however, it’s
recommended that you use a factory method to create tasks instead of the new operator.
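A small sketch of the difference (the delegate bodies are placeholders):

```csharp
using System;
using System.Threading.Tasks;

static class TaskCreation
{
    public static void Demo()
    {
        // The constructor leaves the task unscheduled:
        var cold = new Task(() => Console.WriteLine("cold"));
        Console.WriteLine(cold.Status); // TaskStatus.Created
        cold.Start();                   // only now is it queued to the pool
        cold.Wait();

        // Preferred: Task.Run creates and schedules in one step.
        Task.Run(() => Console.WriteLine("hot")).Wait();
    }
}
```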

Implementing parallel aggregation with PLINQ doesn’t require adding locks in your code. Instead, all the synchronization occurs internally, within PLINQ.

Here’s how to use PLINQ to apply map/reduce to the social networking example.
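The social networking code itself was not captured in these notes; as a stand-in, here is a generic PLINQ map/reduce sketch (word counting) showing the same shape:

```csharp
using System.Collections.Generic;
using System.Linq;

static class WordCount
{
    // Map: split each line into words (in parallel);
    // Reduce: group by word and count per group. All synchronization
    // happens inside PLINQ; no explicit locks are needed.
    public static Dictionary<string, int> Count(IEnumerable<string> lines) =>
        lines.AsParallel()
             .SelectMany(line => line.Split(' '))
             .GroupBy(word => word)
             .ToDictionary(g => g.Key, g => g.Count());
}
```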

SelectMany flattens queries that return lists of lists. For example
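A minimal illustration:

```csharp
using System.Linq;

static class Flatten
{
    // SelectMany projects each inner list and concatenates the results
    // into one flat sequence.
    public static int[] Run(int[][] listOfLists) =>
        listOfLists.SelectMany(inner => inner).ToArray();
}
```

Run(new[] { new[] { 1, 2 }, new[] { 3, 4 } }) yields 1, 2, 3, 4.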

The syntax for locking in C# is lock (object) { body }. The object uniquely identifies the lock. All cooperating threads must use the same synchronizing object, which must be a reference type such as Object, not a value type such as int or double. When you use lock with Parallel.For or Parallel.ForEach, you should create a dummy object and set it as the value of a captured local variable dedicated to this purpose. (A captured variable is a local variable from the enclosing scope that is referenced in the body of a lambda expression.) The lock’s body is the region of code that the lock protects, and it should take only a small amount of execution time. Which shared variables a given lock object protects varies by application; it is a convention that every programmer whose code accesses those variables must follow consistently.
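A sketch of that pattern, a locked accumulation inside Parallel.For (the doubling workload is a placeholder):

```csharp
using System.Threading.Tasks;

static class LockedSum
{
    public static double Sum(double[] data)
    {
        object lockObject = new object(); // dummy object captured by the lambda
        double sum = 0;
        Parallel.For(0, data.Length, i =>
        {
            double value = data[i] * 2;   // do the real work outside the lock
            lock (lockObject)
            {
                sum += value;             // keep the protected region tiny
            }
        });
        return sum;
    }
}
```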

A Task’s delegate is guaranteed to execute from start to finish on a single thread.