.Net / C# Waiting for Multiple Async Responses

While writing programs, we often come across situations where we must wait for the results of several asynchronous calls. This is required when processing depends on responses from multiple external sources, such as web services. Let's look at the code example below:

public async Task ExecuteMultipleRequestsInParallel()
{
    HttpClient client = new HttpClient();

    // Start all three requests without awaiting them individually
    Task<string> google = client.GetStringAsync("http://www.google.com");
    Task<string> bing = client.GetStringAsync("http://www.bing.com");
    Task<string> yahoo = client.GetStringAsync("http://yahoo.com/");

    // Suspend until all three requests have completed
    await Task.WhenAll(google, bing, yahoo);
}

In the above code, we start asynchronous calls to three different servers. If we must wait for output from all of them before we can proceed, we can use "Task.WhenAll". Awaiting the task it returns suspends the method until all three asynchronous calls have completed.
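When the calls return values, Task.WhenAll can also hand the results back in order. A minimal self-contained sketch (the URLs are just the ones from the snippet above):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        using (HttpClient client = new HttpClient())
        {
            // Typing the variables as Task<string> lets WhenAll return the payloads
            Task<string> google = client.GetStringAsync("http://www.google.com");
            Task<string> bing = client.GetStringAsync("http://www.bing.com");

            // Results come back in the same order the tasks were passed in
            string[] pages = await Task.WhenAll(google, bing);

            Console.WriteLine(pages[0].Length); // length of the Google response
            Console.WriteLine(pages[1].Length); // length of the Bing response
        }
    }
}
```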


.Net / Dynamics – Handling Unmanaged Code With “Using” Block

Just wanted to share a coding tip that can help improve the performance of .Net applications. As Dynamics custom code is built on .Net, it is helpful in that space as well.

In .Net we can have both managed and unmanaged code. For the sake of keeping the blog short, we will not go into the differences between them. In short, for managed code the CLR, or "Common Language Runtime", automatically reclaims memory, while for unmanaged resources such as SQL connections and file IO handles it does not.

In Dynamics, the OrganizationServiceProxy object wraps unmanaged resources.

If we are using unmanaged resources, unhandled exceptions can be very harmful. They can lead to issues such as leaked memory, unclosed connections, open file handles, etc.

For example, suppose we have written a "Dispose" method to free up the memory, and the application throws an exception before "Dispose" is called. In that scenario, the application will never have a chance to reclaim the memory occupied by the unmanaged resources.

To avoid such scenarios, C# provides the "using" block. Whatever happens inside a using block, including an exception being thrown, the Dispose method is always called when the block is exited. Let's understand this with the code implementation mentioned below:

using (DisposeImplementation d = new DisposeImplementation())
{
    // Work with d here; Dispose is called automatically on exit,
    // even if an exception is thrown inside this block.
}
Console.ReadLine();
GC.Collect();
Console.ReadLine();

Note that in the above code block we are using a class "DisposeImplementation" inside the using block.

We are not explicitly setting the object d to null to indicate to the garbage collector that it is no longer needed. We are also not explicitly calling the Dispose method to free up the unmanaged resources.

However, as soon as execution leaves the scope of the using block, the Dispose method is called, the unmanaged resources are released, and the memory can then be reclaimed by the runtime.
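For completeness, here is a minimal sketch of what the DisposeImplementation class could look like. The class name comes from the snippet above; its body is an assumption for illustration only:

```csharp
using System;

// Hypothetical class from the snippet above: stands in for a type that
// wraps an unmanaged or scarce resource and releases it in Dispose.
public class DisposeImplementation : IDisposable
{
    public DisposeImplementation()
    {
        Console.WriteLine("Resource acquired");
    }

    public void Dispose()
    {
        // Release unmanaged resources (connections, file handles, ...) here
        Console.WriteLine("Resource released");
    }
}

public class Program
{
    public static void Main()
    {
        using (DisposeImplementation d = new DisposeImplementation())
        {
            // Even if an exception were thrown here, Dispose would still run
        }
        // By this point "Resource released" has already been printed
    }
}
```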

Dynamics – Copy Static Marketing List Members into another Static Marketing List

Just wanted to highlight a feature in Dynamics using which we can copy the members present in one static marketing list to another.

This can come in handy when we want to execute a campaign on the same list of members as a previous marketing list. Mentioned below are the steps we can use.

a) Select the source “Static” marketing list and click on the ribbon button “Copy Marketing List”

[Screenshot: Source Marketing List]

b) A popup window should open up for selecting the target marketing list

[Screenshot: Target Marketing List]

c) Once the user clicks the "Add" button, a progress bar appears, and once processing is over, the members from the source marketing list are copied to the target marketing list.


Dynamics – Tip for Managing Marketing Lists

Just wanted to share a strategy which can be used when we are managing marketing lists in Dynamics.

Problem Statement
In my last engagement, we were extensively using ClickDimensions for sending out emails to marketing lists. The marketing lists were based on contacts and accounts, which often got deactivated.

Even though the contacts were deactivated, the records were still present in various marketing lists, so emails kept going to them, which caused a bad experience.

Approach

We can take a two-fold approach.

  • For Dynamic Marketing List – We should encourage users to always use the filter condition of “Active” records. This will ensure that deactivated contacts are automatically excluded from the marketing list.
  • For Static Marketing List – We can either

1. Ask the system administrator to periodically cleanse the marketing lists by using a view similar to the one below

[Screenshot: Inactive Contacts view]

In the above view, we are filtering for inactive contacts that are present in any marketing list.

2. Or configure some custom logic to remove the deactivated contacts from all the static marketing lists.
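The custom logic in option 2 can be built on the SDK's RemoveMemberList message. A minimal sketch, assuming an IOrganizationService is already available (for example from a plugin context); the method and GUID parameters are hypothetical:

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// Hypothetical helper: removes a deactivated contact from a static
// marketing list using the RemoveMemberList SDK message.
public static class MarketingListCleanup
{
    public static void RemoveDeactivatedMember(
        IOrganizationService service, Guid listId, Guid contactId)
    {
        var request = new RemoveMemberListRequest
        {
            ListId = listId,      // the static marketing list
            EntityId = contactId  // the deactivated contact
        };
        service.Execute(request);
    }
}
```

This could be invoked from a workflow or plugin that fires when a contact is deactivated, looping over the marketing lists the contact belongs to.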


Dynamics – Altering Primary Field During Entity Creation

Just wanted to share a tip in regards to the primary field of “Name” when we create a custom entity in Dynamics.

When an entity is created in Dynamics, a primary attribute named "prefix_name" is added to it by default. However, in some cases we may not want the primary attribute to have the schema name "prefix_name". For example, we may want it to be "id" or "external_identifier" to indicate a match with the legacy source of the data.

In such cases, when creating the entity, we can go to the "Primary Field" tab and modify the schema name of the primary field. In the screenshot below, we change the Name field by removing the word "name" and renaming it as per our preference.

[Screenshot: Altered Name]
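The same can also be done programmatically through the SDK's CreateEntityRequest, which takes the primary attribute definition explicitly. A sketch under assumptions: the entity and attribute schema names are made up for illustration, and 'service' is an IOrganizationService obtained elsewhere:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Hypothetical sketch: create a custom entity whose primary field uses
// a custom schema name instead of the default "prefix_name".
public static class EntityCreation
{
    public static void CreateLegacyEntity(IOrganizationService service)
    {
        var request = new CreateEntityRequest
        {
            Entity = new EntityMetadata
            {
                SchemaName = "new_legacyrecord",
                DisplayName = new Label("Legacy Record", 1033),
                DisplayCollectionName = new Label("Legacy Records", 1033),
                OwnershipType = OwnershipTypes.UserOwned
            },
            // Primary attribute with a custom schema name
            PrimaryAttribute = new StringAttributeMetadata
            {
                SchemaName = "new_externalidentifier",
                DisplayName = new Label("External Identifier", 1033),
                MaxLength = 100,
                FormatName = StringFormatName.Text
            }
        };
        service.Execute(request);
    }
}
```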

Azure DevOps Tip – Workspace Optimization for Gated Check-in and CI Builds

Just wanted to share an optimization tip for when we are setting up a build pipeline in Azure DevOps.

Sometimes the source control branch may consist of several folders, and not all of them require gated check-in and CI builds.

Therefore, to optimize, we should configure workspace mappings such that we do not execute gated check-ins and CI builds for every commit in the main branch.

We can configure the workspace mappings in the "Get Sources" task inside the build pipeline, as shown in the screenshot below.

[Screenshot: Workspace mappings]

With this in place, the gated check-in / CI build will not be executed on every check-in. This is relevant when some folders only hold files, such as master data, that do not need to trigger a build.
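As an illustration, a TFVC mapping set similar to the following (the server paths are hypothetical) maps only the source folder and cloaks the master-data folder, so check-ins to the cloaked path do not trigger builds:

```
Type    Server path                Local path
Map     $/MainBranch/Source        $(build.sourcesDirectory)\Source
Cloak   $/MainBranch/MasterData
```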

Azure Dev Ops – Pull Request For Code Merging in Different Branches

Through this blog, just wanted to share a strategy which can be employed in Azure DevOps to:

  • Segregate code into different branches for development and other environments like production
  • Provide a framework for reviewing code before checked-in dev code is merged into the production branch
  • Configure automated deployments to the production environment when code is updated in the production branch

a) At the start, we will have two branches (for example, a development branch and a release/production branch). To implement further security on them, we can configure branch security.

[Screenshot: Azure Dev Ops 1]

b) When code is checked in to the build branch, we can create "Pull Requests". In this step we can specify the related work items and the commits that were checked in.

[Screenshot: Azure Dev Ops 2]

As shown in the above screenshot, the approver can review the commits, files changed, and related work items in the same window.

c) If the reviewer selects merge, the two branches are merged together.

Once changes are merged into the release branch, we can configure a build pipeline to deploy the merged code to different environments.
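Under the hood, completing a pull request produces a merge commit on the target branch. A minimal local sketch of that flow using plain git (the branch names and file contents are assumptions):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

git checkout -qb release                 # target (production) branch
echo "v1" > app.txt && git add app.txt && git commit -qm "initial release"

git checkout -qb dev                     # development branch
echo "v2" >> app.txt && git commit -qam "dev work"

git checkout -q release
# Completing the PR is effectively a merge commit onto the target branch
git merge -q --no-ff dev -m "Merge dev into release (PR completed)"
```

After the merge, the release branch contains three commits: the initial release, the dev work, and the merge commit itself.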

d) If, in certain circumstances, two pull requests are created simultaneously, the approver will only be able to merge the branches in the same order in which the pull requests were initiated.

[Screenshot: Azure Dev Ops 3]

For more information, please refer to the link

https://docs.microsoft.com/en-us/azure/devops/repos/git/pull-requests?view=azure-devops&tabs=new-nav