Great India Shopping

Sunday, August 2, 2009

Microsoft Visual Studio 2010 and .NET 4.0 CTP features and download link

Hi folks. Try out the latest release of Microsoft Visual Studio 2010 and the .NET 4.0 CTP. Visual Studio has many enhanced features, which include:
1) Enhanced user experience, with better support for floating documents and windows, enhanced document targeting, and improved animation support.
2) Parallel programming, with IDE support and native C++ libraries that use lambda functions. Resource management of hardware and parallel debugging views and windows are also provided.
3) Better application lifecycle management, which helps you design and share multiple diagram types such as use case and sequence diagrams. It also provides better tooling for documenting test scenarios and includes a new Test Impact View.
4) An improved C++ development experience that helps developers navigate and understand complex C++ source code.
5) For web development, enriched JavaScript IntelliSense, one-click deployment, and full support for Silverlight.
6) Windows Azure Tools for Visual Studio, providing C# and VB templates for building cloud services, tools to change service role configuration, and support for building packages of cloud services.
7) Multiple database support, which helps developers work with IBM DB2 and Oracle in addition to SQL Server.

The download link is Get Visual Studio 2010 Files from Here

Layouts in Windows Presentation Foundation (WPF)

In this post let me give you a detailed view of Layout controls in WPF.
Layout Principles
The developers of WPF knew that layout was going to be an intrinsic part of the system. The goal was to define a single layout system that could span from
paginated documents to traditional application UIs. Eventually we realized that a single, monolithic system for this wide span of scenarios was impossible (or at least very difficult) and moved to a model of layout composition. We ended up solving the bulk of the layout problems in the same way that we tackled the control library: by allowing layouts to be nested inside of other layouts.
Given this approach, it seemed that the most critical thing to determine was how a child control would communicate with the parent layout. This “contract” between the parent and child would, we hoped, enable any control to be hosted in any layout.
Layout Library
On top of the basic layout control and the more complete framework layout model, WPF provides a suite of layout panels. These panels implement a set of basic layouts that try to represent the most common layout needs. We will cover some of the more common layout panels: Canvas, StackPanel, DockPanel, WrapPanel, and UniformGrid. The most complex layout, Grid, will be covered next, in a section all to itself.
1) Canvas
Canvas is the simplest layout included in WPF. Canvas offers four attached properties: Top, Left, Right, and Bottom. Canvas lets us position its child elements at any offset from one corner of the panel. Notice that I said “position”; Canvas does not introduce any sizing constraints on child elements. Canvas simply takes the desired size from an element and positions it relative to one of the four corners. Only two properties can be used: one horizontal coordinate and one vertical coordinate. If more are specified, the extra properties will be ignored.
Canvas doesn’t place any interesting constraints on the width or height of the layout slot, so the HorizontalAlignment and VerticalAlignment properties are irrelevant. Margins are still honored, but they are behaviorally identical to setting the Canvas layout properties. For this reason, Canvas is sometimes referred to as the “slotless” layout panel.
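A minimal sketch of Canvas positioning (element names and offsets are illustrative):

```xml
<Canvas>
  <!-- Position is relative to a corner; no size constraint is imposed -->
  <Button Canvas.Left="10" Canvas.Top="10" Content="Near the top-left" />
  <Button Canvas.Right="10" Canvas.Bottom="10" Content="Near the bottom-right" />
</Canvas>
```

Each button takes its own desired size; Canvas only decides where that size is placed.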
2) StackPanel
Up to this point in the article, StackPanel is the only layout panel that we’ve seen, so you’ve probably figured out how it works by now, but it’s still useful to talk about it. As the name implies, StackPanel stacks things up in a row. Through the Orientation property we can control whether the stack is horizontal or vertical.
The slot for each child in StackPanel is given the entire width or height of the control (depending on its orientation), and StackPanel determines its preferred size according to the maximum size of its children. To show this, we can nest a couple of StackPanel controls and see how they behave. The outermost instance has a border around it (so that it’s visible), and each inner panel has a background:
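A sketch of the nesting described above (the brushes and button labels are illustrative):

```xml
<Border BorderBrush="Black" BorderThickness="1">
  <StackPanel>
    <!-- Each inner panel's slot spans the outer panel's full width -->
    <StackPanel Orientation="Horizontal" Background="LightGray">
      <Button Content="One" />
      <Button Content="Two" />
    </StackPanel>
    <StackPanel Background="LightBlue">
      <Button Content="Three" />
    </StackPanel>
  </StackPanel>
</Border>
```

The outer border makes the outer panel visible, and the backgrounds show how wide each inner panel's slot really is.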
StackPanel must be used carefully because it measures children using an infinite width or height based on the orientation. This lack of a control on size can break other child layout panels, specifically causing problems for TextBlock with wrapping.
3) DockPanel
DockPanel is fairly similar to StackPanel, except that it allows mixing of stacking from different edges within the same layout container.
DockPanel is probably one of the most common UI layouts today. Windows Forms natively supported docking, and Java supports docking with its BorderLayout class. Docking allows elements to be stacked at any edge of a container, with the final element filling the remaining space.
We can break down Windows Explorer into its major structural elements: menu, toolbar, folder list, and details pane. Implementing this layout in WPF is relatively simple: DockPanel offers a single attached property, Dock, which allows us to specify the edge to which a control is docked. The declaration order of the elements determines the order in which they're placed, and by default the last child fills the remaining space.
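An Explorer-like skeleton might look like this (a sketch; the specific controls and widths are illustrative):

```xml
<DockPanel LastChildFill="True">
  <Menu DockPanel.Dock="Top">
    <MenuItem Header="File" />
  </Menu>
  <ToolBar DockPanel.Dock="Top" />
  <ListBox DockPanel.Dock="Left" Width="150" />
  <!-- By default the last child fills whatever space remains -->
  <TextBlock Text="Details pane" />
</DockPanel>
```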
4) WrapPanel
If DockPanel is a stack panel with multiple edges, then WrapPanel is a stack panel with wrapping support. Remember that StackPanel positions elements with infinite width or height (depending on the orientation), allowing any number of elements, stacked one after the other. WrapPanel, on the other hand, uses the available space and fits elements to it; and when it runs out of room, it wraps to the next line. The classic example of a WrapPanel is a toolbar layout.
By default, WrapPanel simply sizes all the children to fit their content, although we can fix the width and height of the children by using the ItemWidth and ItemHeight properties:
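For example (a sketch; the fixed item sizes are illustrative):

```xml
<!-- Every child gets a 60x30 slot; items wrap when the row is full -->
<WrapPanel ItemWidth="60" ItemHeight="30">
  <Button Content="1" />
  <Button Content="2" />
  <Button Content="3" />
  <Button Content="4" />
  <Button Content="5" />
</WrapPanel>
```

Without ItemWidth and ItemHeight, each child would simply size to its content before wrapping.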

5) UniformGrid
The final basic layout, which is really nothing like StackPanel, is UniformGrid. UniformGrid hides in the System.Windows.Controls.Primitives namespace and provides a very basic grid layout: Each cell is the same size (hence uniform), and the locations of the items are determined simply by their order in the children collection.
To use UniformGrid, we specify the number of columns and rows we want. If we specify only columns, then rows will be calculated as the number of children divided by the number of columns, and vice versa.
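For example (a sketch; the button contents are illustrative):

```xml
<!-- Three columns; rows are computed as children / columns (here, 2) -->
<UniformGrid Columns="3">
  <Button Content="1" />
  <Button Content="2" />
  <Button Content="3" />
  <Button Content="4" />
  <Button Content="5" />
  <Button Content="6" />
</UniformGrid>
```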
6) Grid
UniformGrid offers a basic grid, but for most scenarios it doesn't go far enough. People want grids with items that span cells, nonuniform row and column spacing, and so on. Grid is by far the most powerful, flexible, and complex of the UI layouts. On the surface, Grid is simple: Elements are positioned within grid cells defined by a series of rows and columns. Easy, right?
The simplest use of Grid is to set the RowDefinitions and ColumnDefinitions properties, add some children, and use the Grid.Row and Grid.Column attached properties to specify which child goes in which slot:
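A minimal sketch of such a form (the labels are illustrative):

```xml
<Grid>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="*" />
  </Grid.RowDefinitions>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition Width="*" />
  </Grid.ColumnDefinitions>
  <!-- Grid.Row / Grid.Column pick the cell for each child -->
  <TextBlock Grid.Row="0" Grid.Column="0" Text="Name:" />
  <TextBox Grid.Row="0" Grid.Column="1" />
  <TextBlock Grid.Row="1" Grid.Column="0" Text="Notes:" />
  <TextBox Grid.Row="1" Grid.Column="1" />
</Grid>
```

Here "Auto" sizes to content and "*" takes the remaining space, so the second column stretches with the window.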

We will see different WPF controls in the next article on WPF Controls.

Basic XAML (Extensible Application Markup Language)

In this post we will have a brief overview of XAML and its usage in WPF.
One of the primary objectives of WPF was to bring together the best features of both Windows development and the Web model. Before we look at the features of WPF, it is important to understand the new programming model in the .NET Framework 3.0: XAML.

A Brief Look at the XAML Programming Model

One of the major, and often misunderstood, features of .NET 3.0 is the new XAML programming model. XAML provides a set of semantics on top of raw XML that enables a common interpretation. To oversimplify slightly, XAML is an XML-based instantiation script for CLR objects. There is a mapping from XML tags to CLR types, and from XML attributes to CLR properties and events. The following example shows an object being created and a property being set in both XAML and C#:

// C# version
MyObject obj = new MyObject();
obj.SomeProperty = 1;
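The XAML version might look like this (a sketch; MyObject is the same hypothetical type as in the C# snippet, and the xmlns mapping to its CLR namespace is assumed):

```xml
<!-- XAML version: the tag name maps to the CLR type,
     the attribute to the CLR property -->
<MyObject xmlns="clr-namespace:MyNamespace" SomeProperty="1" />
```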

XAML was created to be a markup language that integrated well with the CLR and provided for rich tool support. A secondary goal was to create a markup format that was easy to read and write. It may seem a little rude to design a feature of the platform that is optimized first for tools, then for humans, but the WPF team felt strongly that WPF applications would typically be authored with the assistance of a visual design tool like Microsoft Visual Studio or Microsoft Expression. To walk the line between tools and humans, WPF allows the type author to define one property to be the content property.

For further readability, XAML has a feature known as markup extensions. This is a general way to extend the markup parser to produce simpler markup. Markup extensions are implemented as CLR types, and they work almost exactly like CLR attribute definitions. Markup extensions are enclosed in curly braces, { }. For example, to set a property value to the special value null, we can use the built-in Null markup extension:
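For example (a sketch; the Button element and the conventional x namespace prefix are assumed to be in scope):

```xml
<!-- x:Null clears a property that would otherwise default or inherit -->
<Button Background="{x:Null}" Content="No background" />
```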

In the next post we will see the layouts in WPF and their usage in detail.

Starting Windows Presentation Foundation (Basics)

Starting with WPF layouts here, we will move on to cover all of WPF in the coming articles. Here I am posting basic WPF information and details of WPF layouts.

WINDOWS PRESENTATION FOUNDATION (WPF) represents a major step forward in user interface technology. This chapter will lay out some of the basic principles of WPF and walk through a quick overview of the entire platform. You can think of this chapter as a preview of the rest of the book.
A primary goal of WPF is to preserve as much developer knowledge as possible. Even though WPF is a new presentation system completely different from Windows Forms, we can write the equivalent program in WPF with very similar code (changes are in boldface):
/* sample code */
using System;
using System.Windows;

class Program {
    [STAThread]
    static void Main() {
        Window f = new Window();
        f.Title = "Hello World";
        new Application().Run(f);
    }
}
In both cases the call to Run on the Application object is the replacement for the message loop, and the standard CLR (Common Language Runtime) type system is used for defining instances and types. Windows Forms is really a managed layer on top of User32, and it is therefore limited to only the fundamental features that User32 provides.
User32 is a great 2D widget platform. It is based on an on-demand, clip-based painting system; that is, when a widget needs to be displayed, the system calls back to the user code (on demand) to paint within a bounding box that it protects (with clipping). The great thing about clip-based painting systems is that they're fast; no memory is wasted on buffering the content of a widget, nor are any cycles wasted on painting anything but the widget that has been changed.
The downsides of on-demand, clip-based painting systems relate mainly to responsiveness and composition. In the first case, because the system has to call back to user code to paint anything, often one component may prevent other components from painting. This problem is evident in Windows when an application hangs and goes white, or stops painting correctly. In the second case, it is extremely difficult to have a single pixel affected by two components, yet that capability is desirable in many scenarios—for example, partial opacity, anti-aliasing, and shadows.
WPF is based on a retained-mode composition system. For each component a list of drawing instructions is maintained, allowing the system to automatically render the contents of any widget without interacting with user code. In addition, the system is implemented with a painter's algorithm, which ensures that overlapping widgets are painted from back to front, allowing them to paint on top of each other. This model lets the system manage the graphics resource, in much the same way that the CLR manages memory, to achieve some great effects. The system can perform high-speed animations, send drawing instructions to another machine, or even project the display onto 3D surfaces—all without the widget being aware of the complexity. In WPF's composition engine, all controls are contained, grouped, and composited. A button in WPF is actually made up of several smaller controls. This move to embrace composition, coupled with a vector-based approach, enables any level of containment.

In addition to addressing the limitations of User32 and GDI32, one of WPF's goals was to bring many of the best features from the Web programming model to Windows developers.
HTML, a.k.a. the Web
One of the biggest assets of Web development is a simple entry to creating content. The most basic HTML “program” is really nothing more than a few HTML tags in a text file:
<html>
  <head><title>Hello World</title></head>
  <body>
    Welcome to my document!
  </body>
</html>
In fact, all of the tags can be omitted, and we can simply create a file with the text “Welcome to my document!”, name it with an .html extension, and view it in a browser. This amazingly low barrier to entry has made developers out of millions of people who never thought they could program anything. In WPF we can accomplish the same thing using a new markup format called XAML (Extensible Application Markup Language), pronounced “zammel.” Because XAML is a dialect of XML, it requires a slightly stricter syntax. Probably the most obvious requirement is that the xmlns directive must be used to associate the namespace with each tag:
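A minimal XAML counterpart might look like this (a sketch; the xmlns value is the standard WPF presentation namespace), saved with a .xaml extension and viewed in the browser:

```xml
<TextBlock xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
  Welcome to my document!
</TextBlock>
```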

Parallel Query Run and Steps to Fix I/O Bottlenecks in SQL Server

In this post let me give you a brief description of the problems of parallel query run, along with the counters and other statistics that help us find performance problems with parallel query execution. Finally, I will give you six one-line steps to fix I/O performance bottlenecks in SQL Server.
Parallel Query Run:
If the query's cost exceeds the value specified in the cost threshold for parallelism option, the optimizer attempts to generate a plan that can be run in parallel. A parallel query plan uses multiple threads to process the query. The maximum degree of parallelism can be limited server-wide using the MAXDOP option.
The decision on the actual degree of parallelism (DOP) used for execution—a measure of how many threads will do a given operation in parallel—is deferred until execution time. A parallel query typically uses a similar but slightly higher amount of CPU time as compared to the corresponding serial execution plan, but it does so in a shorter duration of elapsed time. As long as there are no other bottlenecks, such as waits for physical I/O, parallel plans generally should use 100% of the CPU across all of the processors.
Look at the SQL Server: SQL Statistics – Batch Requests/sec counter. Because a query must have an estimated cost that exceeds the cost threshold for parallelism configuration setting (which defaults to 5) before it is considered for a parallel plan, the more batches a server is processing per second, the less likely it is that the batches are running with parallel plans. The plan can be retrieved using sys.dm_exec_cached_plans. We may also search for plans that are eligible to run in parallel by checking the cached plans to see if a relational operator has its Parallel attribute set to a nonzero value.
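One way to sketch that search (using the showplan XML namespace; the exact filter may need adjusting for your build):

```sql
-- Find cached plans that contain at least one parallel operator
SELECT p.query_plan, cp.usecounts, cp.cacheobjtype
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS p
WHERE cp.cacheobjtype = 'Compiled Plan'
  AND p.query_plan.value('declare namespace
        p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
        max(//p:RelOp/@Parallel)', 'float') > 0;
```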
Steps to fix I/O Bottlenecks
1) Check the memory configuration of SQL Server. If SQL Server has been configured with insufficient memory, it will incur more I/O overhead.
2) Buffer Cache hit ratio
3) Page Life Expectancy
4) Checkpoint pages/sec
5) Lazy writes/sec
6) Increase I/O bandwidth.
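Several of the counters above can be sampled from T-SQL as well as from Perfmon (a sketch; on a named instance the object_name values differ, and Buffer cache hit ratio must be divided by its base counter to get a percentage):

```sql
-- Quick sample of I/O-related counters from inside SQL Server
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'Checkpoint pages/sec',
                       'Lazy writes/sec');
```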

We will discuss these six steps in detail in the next post, once I have researched them further. Bye for now.

Performance Tuning with System Monitor/Perfmon in SQL Server

In this post we will discuss how to detect performance problems in SQL Server using System Monitor (Perfmon). Then we will have a brief view of the features and working knowledge of Perfmon, and finally look at the performance views available.
Detection of performance problems: You can use System Monitor (PerfMon) or SQL Trace (SQL Server Profiler) to detect excessive compiles and recompiles. The SQL Statistics object provides counters to monitor compilation and the type of requests that are sent to an instance of SQL Server. You must monitor the number of query compilations and recompilations in conjunction with the number of batches received to find out if the compiles are contributing to high CPU use. Ideally, the ratio of SQL Recompilations/sec to Batch Requests/sec should be very low unless users are submitting ad hoc queries.
System Monitor (Perfmon) Performance Monitor is a graphical tool supplied as part of the installation of any NT/2000 Server or Workstation that lets you monitor various performance indicators. Hundreds of counters are organized within performance objects. These counters can be monitored on the local machine or over the network and can be set up to monitor any object and counter on multiple systems at once from one session.
The performance handles, or counters, that can be examined are summarized in the following steps:
1) Understand and monitor network request characteristics as they relate to SQL Server and the machine on which SQL Server has been installed. This will mean a complete profile of what is coming into and sent back out over the network from SQL Server.
2) Understand processor utilization. It might be that the processing power is the biggest issue. You need to get a handle on this early.
3) Understand and monitor memory and cache utilization. This is the next detail step into the overall memory usage at the operating system point of view and into the memory that SQL Server is using for such things as data caching, procedure caching, and so on.
4) Understand and monitor disk system utilization. We are often rewarded for a simple disk configuration or data storage approach. We won't know we have a problem unless we look for it. Techniques that are often used include disk striping, isolation of logs from data, and so on.
Performance Monitor Views: We can switch between
1) Chart view
2) Alert view
3) Log view
In the next post we will see the problems from parallel query runs and the steps to fix performance bottlenecks.

Bottlenecks of Performance in SQL Server

Here I provide a complete tutorial on SQL Server performance tuning, starting with an introduction to the problems or causes of performance degradation, and provide solutions to those performance problems along with the tools in SQL Server, like Performance Monitor, DBCC, etc., for improving performance.
It's not uncommon to experience occasional slowdowns in a SQL Server database. A poorly designed database or a system that is improperly configured for the workload are just two of many possible causes of this type of performance problem. We need to proactively prevent or minimize problems and, when they occur, diagnose the cause and take corrective action to fix the problem. Here we discuss step-by-step guidelines for diagnosing and troubleshooting common performance problems like
1) Resource Bottlenecks
2) CPU Bottlenecks
3) Memory Bottlenecks
4) I/O Bottlenecks
5) TempDB problems
6) Slow-running Queries
Using various tools of SQL Server such as
1) System Monitor
2) Database Engine Tuning Advisor
3) Sql Server Profiler and also other techniques of using
1) DBCC
2) Dynamic Management Views
3) Index Tuning
Here we discuss the performance problems as well as monitoring performance using the System Monitor tool.
There can be many reasons for a slowdown in SQL Server. We use the following three key symptoms to start diagnosing problems.
Resource bottlenecks: CPU, memory, and I/O bottlenecks are the main resource problems faced. We do not cover network issues beyond a brief view. For each resource bottleneck, we describe how to identify the problem, then iterate through the possible causes, and then use the tools available. For example, a memory bottleneck can lead to excessive paging that ultimately impacts performance.
1) CPU Bottlenecks:
A CPU bottleneck that happens suddenly and unexpectedly, without additional load on the server, is commonly caused by a nonoptimal query plan, a poor configuration, or design factors, and not insufficient hardware resources. Before rushing out to buy faster and/or more processors, we should first identify the largest consumers of CPU bandwidth and see if they can be tuned. System Monitor is generally the best means to determine if the server is CPU bound. We should look to see if the Processor: % Processor Time counter is high; values in excess of 80% processor time per CPU are generally deemed a bottleneck. We can also monitor the SQL Server schedulers using the sys.dm_os_schedulers view to see if the number of runnable tasks is typically nonzero. A nonzero value indicates that tasks have to wait for their time slice to run; high values for this counter are a symptom of a CPU bottleneck. You can use the query below to list all the schedulers and look at the number of runnable tasks.
Monitoring hidden and nonhidden schedulers: The following query outputs the state of workers and tasks in SQL Server across all schedulers.
SELECT scheduler_id, cpu_id, parent_node_id,
       current_tasks_count, runnable_tasks_count,
       current_workers_count, active_workers_count, work_queue_count
FROM sys.dm_os_schedulers;


Here is the result set.
scheduler_id  cpu_id  parent_node_id  current_tasks_count
---------------------------------------------------------
0             1       0               9
257           255     0               1
1             0       1               10
258           255     1               1
255           255     32              2

runnable_tasks_count  current_workers_count
-------------------------------------------
0                     11
0                     1
0                     18
0                     1
0                     3

active_workers_count  work_queue_count
--------------------------------------
6                     0
1                     0
8                     0
1                     0
Excessive compilation and recompilation
When a batch or remote procedure call (RPC) is submitted to SQL Server, before it begins executing, the server checks for the validity and correctness of the query plan. If one of these checks fails, the batch may have to be compiled again to produce a different query plan. Such compilations are known as recompilations. These recompilations are generally necessary to ensure correctness and are often performed when the server determines that there could be a more optimal query plan due to changes in underlying data. Compilations by nature are CPU intensive and hence excessive recompilations could result in a CPU-bound performance problem on the system.
SQL Server 2005 introduces statement-level recompilation of stored procedures. When SQL Server 2005 recompiles stored procedures, only the statement that caused the recompilation is compiled—not the entire procedure. This uses less CPU bandwidth and results in less contention on lock resources such as COMPILE locks. Recompilation can happen due to various reasons, such as:
• Schema changed
• Statistics changed
• Deferred compile
• Temporary table changed
• SET option changed
• Stored procedure created with the RECOMPILE query hint or which uses OPTION (RECOMPILE)

SQL Server 2008 Best Practices

Here I provide three categories of best practices: Critically Important, Concentration Required, and Best if Followed. The following best practices enhance developer productivity as well as improve the performance of SQL Server in real-time business scenarios.

Critically Important
1) DO NOT use "inline" SQL Statements – use Stored Procedures in all points of contact between the database and application layer(s). Stored Procedures have compiled and cached execution plans.
2) DO NOT use dynamic SQL statements. Dynamic SQL must be compiled every time it is executed, wasting resources (and Stored Procedures containing dynamic SQL will also recompile, which can cause object-level locking issues).
3) DO NOT use "Table Valued" functions. These will always process the entire query contained within, and generate extra overhead through table variable population – often much data is discarded upon joining to the results of a function.
4) DO NOT use Cursors against "permanent" tables. The locking strategies employed by cursors can cause rapid lock escalation and some severe contention issues. Use the "while loop" construct, or cursors based on temporary objects instead.
5) DO consider indexing strategy carefully when creating or amending schema. Poorly indexed tables are inefficient – the query optimizer will not be able to efficiently retrieve the rows that it requires.
6) DO include a "WHERE" clause. This is referred to as "SARGability" – Search ARGuments allow SQL Server to prune the result set at the earliest opportunity.
7) DO use table variables instead of standard temporary tables. Temporary tables can cause recompiles due to statistical updates and context switches, leading to problems with object-level locking. Table variables carry no statistics, so they do not trigger statistics-based recompiles.
8) DO NOT drop temporary tables explicitly. SQL Server will automatically clean these up when they fall "out of scope" (e.g. execution chain has completed). An explicit DROP TABLE operation on a temporary table will cause a recompile event to occur.
9) DO set the "NOCOUNT" option on for all stored procedures which do not need to be aware of the number of records affected. This prevents SQL Server returning extra statistics to the client.
10) DO use disconnected ADO record set objects. Retaining an open record set can cause ADO to run through cursors on the SQL Server when retrieving data.
11) DO close all connection and record set objects as soon as they are no longer required. Retaining an unnecessary open connection can cause errors (only one record set can be open for one connection at a time in some modes of operation), and can also tie up valuable system resources.
12) DO NOT use the OR operator in JOIN predicates and WHERE clauses. This operator can cause poor index use (or table scans), as SQL attempts to match up and merge alternative sets of data. Consider use of UNION [ALL] if this sort of behaviour is definitely required.
13) DO NOT use the IN operator. Make use of "EXISTS" instead, as this allows SQL Server to look up the data in the condition clause, rather than forcing it to spool through a set of values. This is particularly relevant when the clause is a co-related sub-query.
14) DO NOT use distributed transactions unless there is no other option. Locking data in a local (explicit) transaction has serious enough implications, without extending the locking to another database which is physically separated from the current context.
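To illustrate point 13 (a sketch; dbo.Customers and dbo.Orders are hypothetical tables):

```sql
-- IN spools through the full set of values from the subquery:
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE c.CustomerID IN (SELECT o.CustomerID FROM dbo.Orders AS o);

-- EXISTS lets SQL Server probe for a match and stop at the first hit:
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);
```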

Concentration Required
1) DO NOT use scalar functions in queries which process a large number of rows. Due to the way SQL Server processes this sort of operation, long query durations can result (although typically without heavy resource use – a common cause of slow running queries).
2) DO NOT use sub-queries. The entire sub-query result set is often evaluated, even if many of the rows are eliminated immediately by the way the sub-query is joined to the main query.
3) DO prefix the identifiers of all objects with the name of their owner to ensure SQL Server immediately selects the correct context. E.g. Prefix all Stored procedures, Tables, Functions and Views owned by the database owner with "dbo".
4) DO NOT use LEFT JOIN if possible – due to the nature of this operation, SQL Server is often unable to make efficient use of the available indexes. Consider using UNION [ALL] and two INNER JOINed queries instead.
5) DO NOT use the DISTINCT operator. In many cases duplicate rows can be eliminated through restructuring of a query – there are very few circumstances where DISTINCT is actually necessary.
6) DO use UNION ALL when possible, instead of UNION – adding the modifier "ALL" prevents SQL Server from performing a DISTINCT operation on the data sets during merging.
7) DO minimise the length of any explicit transactions. The longer a transaction, and the more objects involved, the worse the impact which it will have on concurrency.
8) DO reference indexed columns in join predicates and where clauses. This will allow SQL Server to leverage indexes and minimise the size of the data set being processed earlier rather than later.
9) DO NOT apply the ORDER BY clause to a set of data unless it is absolutely required (e.g. for front-end presentation). Never sort data into a table where the clustered index sequence differs from the ORDER BY. Sorting data is expensive, and sorting against the clustered index is pointless as SQL Server will automatically re-sequence the data upon insertion.
10) DO NOT use triggers to maintain data in regularly accessed tables. While useful for change-logging, firing triggers at (or from) busy tables can cause serious contention issues.
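Points 5 and 6 above can be sketched as follows (dbo.Customers and dbo.Suppliers are hypothetical tables):

```sql
-- UNION performs an implicit DISTINCT on the merged set:
SELECT City FROM dbo.Customers
UNION
SELECT City FROM dbo.Suppliers;

-- UNION ALL skips the duplicate-elimination step entirely,
-- which is cheaper when duplicates are acceptable or impossible:
SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers;
```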

Best if followed
1) DO use table variables to break up large queries, avoid repetition of work throughout a Stored Procedure and minimise contention through locking. Preparing data ahead of time can allow you to complete ACID safe work in a transaction without taking out locks on source data.
2) DO use locking hints to avoid lock escalation when the affected record count is known to be small, or to enable specific behaviours (dirty reads, retained locks, etc.). Be careful though, as inappropriate use of locking hints can cause excessive contention (or insufficient contention, resulting in an error when data being read with NOLOCK is modified during the read operation).
3) DO NOT join together datasets where the clustered index contains similar columns in a different order; this can cause a bookmark lookup or heavy sort operation to occur. Remember that clustered indexes define the order of the data in a table.
4) DO keep code as simple as possible. Several short, simple SQL statements will almost always operate more efficiently than one huge query. This also improves maintainability.
5) DO NOT use RPC unless there is no other option. While useful, RPC is phenomenally expensive when compared to a normal query, because both the query execution request and result set (even if only return value or output parameters) are crossing a physical network boundary.