ASP.NET MVC 5 – walk-through – (Part 3)

Performing Raw SQL Queries

The Entity Framework Code First API includes methods that enable you to pass SQL commands directly to the database. You have the following options:

  • Use the DbSet.SqlQuery method for queries that return entity types. The returned objects must be of the type expected by the DbSet object, and they are automatically tracked by the database context unless you turn tracking off.
  • Use the Database.SqlQuery method for queries that return types that aren’t entities. The returned data isn’t tracked by the database context, even if you use this method to retrieve entity types.
  • Use the Database.ExecuteSqlCommand method for non-query commands.
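The three options above can be sketched roughly as follows. The context and entity names (SchoolContext, Student) are hypothetical placeholders, not part of this walkthrough:

```csharp
// Sketch only: SchoolContext/Student are assumed names, not from this article.
using (var db = new SchoolContext())
{
    var someDate = new DateTime(2015, 1, 1);

    // 1. DbSet.SqlQuery: returns entity types, tracked by the context.
    var students = db.Students
        .SqlQuery("SELECT * FROM dbo.Student WHERE EnrollmentDate > @p0", someDate)
        .ToList();

    // 2. Database.SqlQuery: returns non-entity types; results are not tracked.
    var count = db.Database
        .SqlQuery<int>("SELECT COUNT(*) FROM dbo.Student")
        .Single();

    // 3. Database.ExecuteSqlCommand: non-query commands (UPDATE/DELETE/DDL).
    int rows = db.Database.ExecuteSqlCommand(
        "UPDATE dbo.Student SET LastName = @p0 WHERE ID = @p1", "Smith", 1);
}
```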

One of the advantages of using the Entity Framework is that it avoids tying your code too closely to a particular method of storing data. It does this by generating SQL queries and commands for you, which also frees you from having to write them yourself.

You can disable tracking of entity objects in memory by using the AsNoTracking method. Typical scenarios in which you might want to do that include the following:

  • A query retrieves such a large volume of data that turning off tracking might noticeably enhance performance.
  • You want to attach an entity in order to update it, but you earlier retrieved the same entity for a different purpose. Because the entity is already being tracked by the database context, you can’t attach the entity that you want to change. One way to handle this situation is to use the AsNoTracking option with the earlier query.
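Both scenarios above boil down to adding one call to the query. A minimal sketch, again assuming hypothetical SchoolContext/Student types:

```csharp
// Retrieve without tracking, so the context holds no reference to this entity.
var student = db.Students
    .AsNoTracking()
    .FirstOrDefault(s => s.ID == id);

// Later, a modified copy of the same entity can be attached without conflict:
db.Entry(modifiedStudent).State = EntityState.Modified;
db.SaveChanges();
```

AsNoTracking lives in the System.Data.Entity namespace, so the file needs the corresponding using directive.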

Many developers write code to implement the repository and unit of work patterns as a wrapper around code that works with the Entity Framework. These patterns are intended to create an abstraction layer between the data access layer and the business logic layer of an application. Implementing these patterns can help insulate your application from changes in the data store and can facilitate automated unit testing or test-driven development (TDD). However, writing additional code to implement these patterns is not always the best choice for applications that use EF, for several reasons:

  • The EF context class itself insulates your code from data-store-specific code.
  • The EF context class can act as a unit-of-work class for database updates that you do using EF.
  • Features introduced in Entity Framework 6 make it easier to implement TDD without writing repository code.

Most of the time you don’t need to be aware of this use of proxies, but there are exceptions:

  • In some scenarios you might want to prevent the Entity Framework from creating proxy instances. For example, when you’re serializing entities you generally want the POCO classes, not the proxy classes. One way to avoid serialization problems is to serialize data transfer objects (DTOs) instead of entity objects; another way is to disable proxy creation.
  • When you instantiate an entity class using the new operator, you don’t get a proxy instance. This means you don’t get functionality such as lazy loading and automatic change tracking. This is typically okay; you generally don’t need lazy loading, because you’re creating a new entity that isn’t in the database, and you generally don’t need change tracking if you’re explicitly marking the entity as Added. However, if you do need lazy loading and you need change tracking, you can create new entity instances with proxies using the Create method of the DbSet class.
  • You might want to get an actual entity type from a proxy type. You can use the GetObjectType method of the ObjectContext class to get the actual entity type of a proxy type instance.
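The last two points can be sketched in a couple of lines (entity names are hypothetical):

```csharp
// DbSet.Create returns a proxy-enabled instance, unlike "new Student()",
// so lazy loading and automatic change tracking still work.
var student = db.Students.Create();

// ObjectContext.GetObjectType maps a proxy type back to the real entity type.
Type entityType = ObjectContext.GetObjectType(student.GetType());
```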

To create a SQL Server Database Project, you can download the template from:

post deployment scripts

SQL Server Object Explorer

Models are classes that you will use to work with the data. Each model mirrors a table in the database and contains properties that correspond to the columns in the table.
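For example, a table of students would typically be mirrored by a class like the following (the Student table and its columns here are assumed for illustration):

```csharp
// Hypothetical model class: one property per column of a Student table.
public class Student
{
    public int ID { get; set; }                 // primary key column
    public string LastName { get; set; }
    public string FirstMidName { get; set; }
    public DateTime EnrollmentDate { get; set; }
}
```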

New Scaffolded Item



HTTP Error 403.14 – Forbidden – The Web server is configured to not list the contents of this directory

data annotations and validation

You can add a metadata class that contains the attributes. When you associate the model class to the metadata class, those attributes are applied to the model. In this approach, the model class can be regenerated without losing all of the attributes that have been applied to the metadata class.


These attributes will not be lost when you regenerate the model classes because the metadata attribute is applied in partial classes that are not regenerated.
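A minimal sketch of the pattern, using an assumed Student model and the attributes from System.ComponentModel.DataAnnotations:

```csharp
// The generated class stays untouched; the attributes live on a "buddy" class.
[MetadataType(typeof(StudentMetadata))]
public partial class Student
{
}

public class StudentMetadata
{
    [Required]
    [StringLength(50)]
    public string LastName { get; set; }

    [DataType(DataType.Date)]
    public DateTime EnrollmentDate { get; set; }
}
```

Because Student is a partial class, this file can live alongside the generated one and survives regeneration.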

enabling SSL for web projects

RegisterBundles, adding Bootstrap CSS files into an existing bundle.

How to seed data with AddOrUpdate with a complex key

Group By Multiple Columns

EF: Include with where clause

Default configuration settings are specified in the Machine.config file located in the %SystemRoot%\Microsoft.NET\Framework\versionNumber\CONFIG\ directory. Values are inherited by child sites and applications. If there is a configuration file in a child site or application, the inherited values do not appear, but can be overridden and are available to the configuration API.

The following default <anonymousIdentification> element is not explicitly configured in the Machine.config file or in the root Web.config file. However, it is the default configuration that is returned by an application.
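For reference, the default values look like the following (as documented for ASP.NET; check the documentation for your framework version):

```xml
<anonymousIdentification
  enabled="false"
  cookieless="UseCookies"
  cookieName=".ASPXANONYMOUS"
  cookiePath="/"
  cookieProtection="Validation"
  cookieRequireSSL="false"
  cookieSlidingExpiration="true"
  cookieTimeout="100000"
  domain="" />
```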




Solr – walk-through – (Part 1)

Full Resource:

to start solr

java -jar start.jar

alternative way to start solr

C:\solr-6.0.0\bin>solr start -p 8984



cat field added as an extra:


adding text/csv into solr


advanced query

solr response


Solr builds on another open source search technology: Lucene, a Java library that provides indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.

The default port when running Solr is 8983. The Lucene search library currently ranks among the top 15 open source projects and is one of the top 5 Apache projects.

checking java version

Start Solr with a Specific Example Configuration

solr status

Create a Core

Solr is built to find documents that match queries. Solr’s schema provides an idea of how content is structured (more on the schema later), but without documents there is nothing to find. Solr needs input before it can do much.

java -Dc=SampleCore -jar post.jar C:\solr-6.0.0\example\exampledocs\*.xml



example docs:



Faceted browsing is one of Solr’s key features. It allows users to narrow search results in ways that are meaningful to your application.

For example, a shopping site could provide facets to narrow search results by manufacturer or price.



Here is an example of how Solr might be integrated into an application:

Solr queries are RESTful, which means, in essence, that a query is a simple HTTP request URL and the response is a structured document: mainly XML, but it could also be JSON, CSV, or some other format.
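For example, a faceted query against the SampleCore core created earlier might look like this (the field names video, cat, name and price are assumed to exist in the example schema):

```
http://localhost:8983/solr/SampleCore/select?q=video&fl=id,name,price&facet=true&facet.field=cat&wt=json
```

Changing the wt parameter (json, xml, csv, …) changes the response format without touching the query itself.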

If you have so much data, or so many queries, that a single Solr server is unable to handle your entire workload, you can scale up the capabilities of your application using SolrCloud to better distribute the data, and the processing of requests, across many servers. Multiple options can be mixed and matched depending on the type of scalability you need.

solr home directory


It is highly recommended that you fully re-index after changing this setting as it can affect both how text is indexed and queried.

index location for SampleCore

restart solr

The start and restart commands have several options to allow you to run in SolrCloud mode, use an example configuration set, start with a hostname or port that is not the default and point to a local ZooKeeper ensemble.

To emphasize how the default settings work, take a moment to understand that the following commands are equivalent:

Setting Java System Properties

The bin/solr script will pass any additional parameters that begin with -D to the JVM, which allows you to set arbitrary Java system properties. For example, to set the auto soft-commit frequency to 3 seconds, you can do:
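Following that description, the soft-commit example would look like this:

```shell
# Pass a Java system property through to the JVM at startup:
# sets the auto soft-commit frequency to 3 seconds (3000 ms).
bin/solr start -Dsolr.autoSoftCommit.maxTime=3000
```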

stop all solr instances:

sample configsets

bin/solr status

solr healthcheck

In cloud mode, all the configuration is stored in ZooKeeper, and the create script does not need to make directories or copy configuration files. Solr itself will create all the necessary directories.

default configurations

creating core with config parameters

Notice that we used the -d option to specify a different configuration than the default. Solr provides several built-in configurations under server/solr/configsets. However, you can also provide the path to your own configuration directory using the -d option.

The following command will create a new collection that shares the basic configuration created previously (referenced by name with the -n option):
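A sketch of that command; the collection name and the configuration name are placeholders here, not values taken from this walkthrough:

```shell
# -c names the new collection, -n points at an existing named configuration.
bin/solr create -c mycollection -n basic_configs
```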

Do not share data-driven configurations between collections unless you are certain that all collections should inherit the changes made when indexing data into one of the collections.

solr logging level by classes

A very simple two-node cluster can be created using the bin/solr -e cloud command.

If you are running a single node Solr instance, you will not see a Collections option in the left nav menu of the Admin UI.

solr status after cloud created

Analysis Screen

dll missing in JDBC

Dataimport Screen

dataimport section under SampleCore

solrconfig.xml changes for required libraries

include libraries for data import and sql connection.

additional solrconfig.xml changes for requestHandler section.

db-data-config.xml changes for dataSource. In this case this is sql server.

We should place sqljdbc_auth.dll under C:\Windows\System32.

click on execute again

and here is the result!


PowerShell – Part 1

Full Resource:

PowerShell says “execution of scripts is disabled on this system.”

How to run a PowerShell script?

Calling a specific PowerShell function from the command line

How to call a function in another PowerShell script when executing PowerShell script using ‘Run With PowerShell’

To use functions from another script in the ISE, you first need to load that script into the current context. You can do that by opening the file in the ISE and clicking the Run button; this adds the file to the context, and you can then call its functions from other files.

To find which version of Windows you are running, enter the following commands in the Command Prompt or PowerShell:

PowerShell – Backup SQL Server System Databases

This is just an image and you cannot copy/paste the code. Please visit the source for the full article.

Creating new IIS website with Powershell

Nano Server Powershell support

Azure CDN, purging cache

Purge-AzureCdnEndpointContent -EndpointName {@endpointName} -ProfileName {@profileName} -ResourceGroupName {@resourceGroupName} -PurgeContent @(“/samplefolder/panda.jpg”)

How to get IIS AppPool Worker Process ID

also try

Determine installed PowerShell version

Using the Move-Item Cmdlet

Get processes with Window Title

Get-alias dir

Pipe and sort

Get-winevent and where statement

Get-Member will list possible output properties.

Format-Table will help us to customize output

Get-service sort, where, format-table example

Get-Help -ShowWindow

Help command


Help get-service -full

Help get-process -online





The Get-WmiObject cmdlet gets instances of Windows Management Instrumentation (WMI) classes or information about the available WMI classes.




(Get-WmiObject Win32_ComputerSystem).Name
(Get-WmiObject Win32_ComputerSystem).Domain
(Get-WmiObject Win32_ComputerSystem) | Get-Member

start and stop App Pool


Displays and modifies entries in the Address Resolution Protocol (ARP) cache, which contains one or more tables that are used to store IP addresses and their resolved Ethernet or Token Ring physical addresses.

Get-ChildItem | sort -Descending lastwritetime | Select-Object -First 5

When you create a function, PowerShell stores it in memory for the duration of your session. During that session, you can call the function at any time by simply entering the function’s name.
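A minimal example (the function name and parameter are made up for illustration):

```powershell
# Define a function once in the session...
function Get-Greeting {
    param([string]$Name = "world")
    "Hello, $Name!"
}

# ...then call it by name, with or without parameters.
Get-Greeting            # Hello, world!
Get-Greeting -Name Dev  # Hello, Dev!
```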

How to get the current directory of the cmdlet being executed

How do I pass multiple parameters into a function in PowerShell?

How to run a PowerShell script?

If you are executing that line from PowerShell rather than from CMD, you can use the PowerShell environment variable syntax:
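For example, where CMD would expand %USERPROFILE%, PowerShell uses the $env: prefix instead (the script path below is a hypothetical example):

```powershell
# CMD:        %USERPROFILE%\scripts\build.ps1
# PowerShell: use $env: syntax, and & to invoke a quoted path.
& "$env:USERPROFILE\scripts\build.ps1"
```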