Resources to learn the Azure Resource Manager (ARM) template language

Azure Resource Manager (ARM) was announced in spring 2014. It is a completely different way of deploying services on the Azure platform. It matters because before ARM it was only possible to deploy one service at a time: when you deployed applications using PowerShell or the Azure CLI you had to provision every service from a script, and as the number of services increased the scripts got increasingly complex and brittle. Over the past year ARM capabilities have evolved rapidly. All future services will be deployed via ARM cmdlets or templates, and the current Azure Service Management APIs will eventually be deprecated. Even when using ARM you have two choices:

  • Imperative: This is very similar to how you provisioned services with the Service Management APIs.
  • Declarative: Here you define the application configuration in a JSON template, which can be parameterized. Once that is done, a single PowerShell cmdlet, New-AzureResourceGroupDeployment, deploys your entire application; the deployment can span regions as well. You can define dependencies between resources, and the deployment process will deploy them in whatever order makes the deployment successful; where there are no dependencies it parallelizes the work. You can repeatedly deploy the same template, and the deployment process is smart enough to determine what changed and only deploy or update the services that changed. ARM templates can not only provision the infrastructure, they can also execute tasks inside the provisioned VMs to fully configure your application: on Windows VMs you can use DSC or PowerShell scripts, and on Linux you can use bash scripts to customize the VM after it has been created. A minimal deployment call is sketched after this list.
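Here is a minimal sketch of a declarative deployment from PowerShell of that era; the resource group name and template file names are hypothetical:

# Switch the Azure module into Resource Manager mode (needed before Azure PowerShell 1.0)
Switch-AzureMode -Name AzureResourceManager

# Create (or reuse) a resource group, then deploy the parameterized template into it
New-AzureResourceGroup -Name "myapp-rg" -Location "West US"

New-AzureResourceGroupDeployment -ResourceGroupName "myapp-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"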

AWS has had a similar capability, called CloudFormation, for many years. While ARM and CloudFormation pursue similar goals, there are some differences between them as well.

Resources

If you believe in DevOps and work with the Microsoft Azure platform, understanding ARM will be beneficial. Also worth mentioning: ARM templates will allow you to deploy services in your private cloud once Azure Stack is released. I want to share some helpful resources to make it easier for you to learn ARM.

  1. Treat your Azure Infrastructure as code is an excellent overview of ARM and its benefits: https://www.linkedin.com/pulse/treat-your-azure-infrastructure-code-krishna-venkataraman?trk=prof-post
  2. ARM Language Reference: https://msdn.microsoft.com/en-us/library/azure/Dn835138.aspx?f=255&MSPPError=-2147217396
  3. Azure Quick Start Templates on GitHub: If you are like me, you learn from examples. Here is a large repository of ARM templates: https://github.com/Azure/azure-quickstart-templates
  4. Ryan Jones from Microsoft posted many simple ARM samples here: https://github.com/rjmax/ArmExamples
  5. The Full Scale 180 blog is another excellent resource for learning how to write ARM templates: http://blog.fullscale180.com/building-azure-resource-manager-templates/ I especially like the Couchbase sample: https://github.com/Azure/azure-quickstart-templates/tree/master/couchbase-on-ubuntu
  6. If you still want to use the imperative method of deploying Azure resources, check out this sample from Joe Davies that walks you through provisioning a VM: https://azure.microsoft.com/blog/2015/06/11/step-through-creating-resource-manager-virtual-machine-powershell/
  7. Here is a sample showing how to lock down your resources with a Resource Manager lock: http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/06/18/lock-down-your-azure-resources.aspx
  8. Neil Mackenzie posted a sample for creating a VM with an instance IP address here: https://gist.github.com/nmackenzie/db9a4b7abdee2760dba8
  9. Alexandre Brisebois posted a sample showing how to provision a CentOS VM using an ARM template. In this example he shows how to customize the VM after its creation using a bash script: https://alexandrebrisebois.wordpress.com/2015/05/25/create-a-centos-virtual-machine-using-azure-resource-manager-arm/
  10. The Kloud blog has a nice overview of how to get started with ARM and many samples: http://blog.kloud.com.au/tag/azure-resource-manager/
  11. If you want to learn about best practices for writing ARM templates, this is a must-read document: https://azure.microsoft.com/en-us/documentation/articles/best-practices-resource-manager-design-templates/
  12. This blog post shows how you can use the outputs section of a template to publish information about newly created resources: http://blogs.msdn.com/b/girishp/archive/2015/06/16/azure-arm-templates-tips-on-using-outputs.aspx
  13. Check out this very comprehensive list of ARM resources compiled by Hans Vredevoort: https://onedrive.live.com/view.aspx?resid=96BA3346350A5309!318670&app=OneNote&authkey=!APNWE3DZp1C-RjY
  14. This blog post shows how you can use arrays, the length function, resource loops, and outputs to provision multiple storage accounts: http://104.42.190.81/2015/08/14/adventures-with-azure-resource-manager-part-i/

 

Samples

As I work with ARM templates, I am constantly developing or looking for samples that can help me. The sample templates below were created by product teams at Microsoft but have not yet been integrated into the Quick Start templates repository. I will use this section to document some of the helpful samples I have found.

  1. Azure Web Site with a WebJob template: This template was created by David Ebbo, and it is the only ARM template sample I know of that shows how to publish WebJobs with an ARM template: https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/WebAppWithWebJobs.json
  2. Length function: As I began learning the template language, I found it annoying that I had to pass in an array and its length as separate parameters. I just found a sample created by Ryan Jones that shows how to calculate the length of an array (see the sketch after this list): https://github.com/rjmax/ArmExamples/blob/master/copySampleWithLength.json
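As a sketch of the improved pattern, assuming the -TemplateParameterObject parameter of New-AzureResourceGroupDeployment and a template whose copy loop sizes itself with "[length(parameters('storageAccounts'))]" (the resource group and parameter names below are hypothetical):

# Pass the array once; the template derives the count with the length() function
$params = @{ storageAccounts = @("appstor1", "appstor2", "appstor3") }

New-AzureResourceGroupDeployment -ResourceGroupName "myapp-rg" `
    -TemplateFile ".\copySampleWithLength.json" `
    -TemplateParameterObject $params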

Tools

ARM documentation is still evolving, and sometimes it is difficult to find the samples you are looking for. If you are trying to create a new template and cannot find any documentation, here are a few things that may be helpful:

  1. Azure Resource Explorer: This is an essential tool for anybody writing ARM templates. You can deploy a resource using the portal and then use Resource Explorer to see the JSON schema of the resource you just created; you can also make changes to the resources: https://resources.azure.com/
  2. ARM schemas: This is the location where the Microsoft ARM teams are posting their schemas: https://github.com/Azure/azure-resource-manager-schemas

Debugging

You can view the logs using these PowerShell cmdlets.

  1. Get-AzureResourceLog: gets the logs for a specific Azure resource
  2. Get-AzureResourceGroupLog: gets the logs for an Azure resource group
  3. Get-AzureResourceProviderLog: gets the logs for an Azure resource provider
  4. Get-AzureResourceGroupDeploymentOperation: gets the logs for a deployment operation (see the sketch after this list)
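Here is a minimal sketch of calling two of these; the resource group and deployment names are hypothetical, and the parameter names are my recollection of the 0.9.x module, so verify them with Get-Help before relying on them:

# Logs for everything that happened in a resource group
Get-AzureResourceGroupLog -Name "myapp-rg"

# The individual operations performed by one deployment
Get-AzureResourceGroupDeploymentOperation -ResourceGroupName "myapp-rg" -DeploymentName "azuredeploy"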

When your template deployment fails, the error message may not have enough detail to tell you the reason for the failure. You can go to the preview Azure portal and examine the audit logs, filtering by resource group, resource type, and time range. I was able to get the detailed error message from the portal.

Surprises

In addition to running the cmdlet Switch-AzureMode -Name AzureResourceManager, I also had to enable my subscription for specific Azure resource providers; this was not necessary when I was using the Service Management APIs. For example, to be able to provision virtual networks with ARM I had to run the following cmdlet:

Register-AzureProvider -ProviderNamespace Microsoft.Network
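To check which providers your subscription is already registered for, the companion cmdlet from the same module can be used (the output property names here are my assumption; verify with Get-Help Get-AzureProvider):

# List resource providers along with their registration state
Get-AzureProvider | Format-Table ProviderNamespace, RegistrationState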

Update (08/04/2015): I previously noted here that even though the template language can work with JSON arrays, it cannot determine the number of elements in an array, so you have to pass the count separately. I removed that note because the length function is now available.

I hope these resources are helpful. If you are aware of other helpful ARM resources, feel free to mention them in the comments on this blog post and I can add them to my list.

I will be posting ARM samples on my blog as well.


Installing Java Runtime in Azure Cloud Services with Chocolatey

I recently wrote a blog post about installing Splunk on Azure web/worker roles with the help of a startup task; you can see that post here. In this blog post I will show you how to install the Java runtime in web/worker roles. Azure web/worker roles are stateless, so the only way to install third-party software or tweak Windows features on them is via startup tasks.

Linux users have long had tools like apt and yum to download and install software from the command line. Chocolatey provides similar functionality on the Windows platform. If you are into DevOps and automation on Windows, you should check out Chocolatey here. It has nearly 15000 packages already available.

Once you have Chocolatey installed, installing Java is a breeze. It is as simple as:

 choco install javaruntime -y  

The statement above is self-explanatory. The -y option answers yes to all questions, including accepting the license, so you are not prompted for anything.
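If you want to sanity-check the result while experimenting on a machine, Chocolatey can list what it has installed locally (classic choco syntax of that era); javaruntime should appear in the output:

choco list --local-only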

I already provided detailed steps for defining startup tasks in my previous blog post, so here I will just share the startup script, along with the service definition file, that shows how to deploy the Java runtime in an Azure web/worker role with a startup task.

Step 1

Create a startup.cmd file and add it to your worker/web role implementation. It should be saved as “Unicode (UTF-8 without signature) – Codepage 65001”.

Set the “copy to output directory” property of startup.cmd to “Copy if newer”.

Line 9 checks whether the startup task has run successfully before, and exits if it has.

Line 16 installs Chocolatey.

Line 22 installs the Java runtime.

Line 26 executes only if Java was installed successfully; it creates the StartupComplete.txt file in the %RoleRoot% directory.

1:  SET LogPath=%LogFileDirectory%%LogFileName%  
2:     
3:  ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1  
4:  ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1  
5:  ECHO Current Directory: %CD% >> "%LogPath%" 2>&1  
6:     
7:  ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1  
8:     
9:  IF EXIST "%RoleRoot%\StartupComplete.txt" (  
10:    ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1  
11:    EXIT /B 0  
12:  )  
13:    
14:  Echo Installing Chocolatey >> "%LogPath%" 2>&1  
15:    
16:  @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin  >> "%LogPath%" 2>&1  
17:    
18:  IF %ERRORLEVEL% EQU 0 (
19:    
20:       Echo Installing Java runtime >> "%LogPath%" 2>&1  
21:    
22:       %ALLUSERSPROFILE%\chocolatey\bin\choco install javaruntime -y >> "%LogPath%" 2>&1  
23:    
24:       IF %ERRORLEVEL% EQU 0 (
25:                 ECHO Java installed. Startup completed. >> "%LogPath%" 2>&1  
26:                 ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1  
27:                 EXIT /B 0  
28:       ) ELSE (  
29:            ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
30:            EXIT %ERRORLEVEL%  
31:       )  
32:  ) ELSE (  
33:    ECHO An error occurred while installing Chocolatey. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1
34:    EXIT %ERRORLEVEL%  
35:  )  
36:    

 

Step 2

Update the service definition file to define the startup task.

Lines 5 through 19 define the startup task.

Lines 23 to 25 define the local storage where logs will be stored.

1:  <?xml version="1.0" encoding="utf-8"?>  
2:  <ServiceDefinition name="AzureJavaPaaS" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">  
3:   <WorkerRole name="MyWorkerRole" vmsize="Small">  
4:    <Startup>  
5:     <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">  
6:      <Environment>  
7:       <Variable name="LogFileName" value="Startup.log" />  
8:       <Variable name="LogFileDirectory">  
9:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />  
10:       </Variable>  
11:       <Variable name="InstanceId">  
12:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />  
13:       </Variable>  
14:       <Variable name="RoleName">  
15:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />  
16:       </Variable>  
17:      </Environment>  
18:     </Task>  
19:    </Startup>  
20:    <ConfigurationSettings>  
21:     <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />  
22:    </ConfigurationSettings>  
23:    <LocalResources>  
24:     <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />  
25:    </LocalResources>  
26:    <Imports>  
27:     <Import moduleName="RemoteAccess" />  
28:     <Import moduleName="RemoteForwarder" />  
29:    </Imports>  
30:   </WorkerRole>  
31:  </ServiceDefinition>  

 

Step 3

Publish the cloud service to Azure. I enabled Remote Desktop so I could verify that the worker role was configured successfully.

Verification

I used Remote Desktop to log into the worker role and looked in

C:\Resources\Directory\d063631e14c1485cb6c838c8f92cd7c3.MyWorkerRole.LogsPath, where I found Startup.log.

It had the following content. As you can see below, Java was installed successfully.

Current Role: MyWorkerRole
Current Role Instance: MyWorkerRole_IN_0
Current Directory: E:\approot
We will first verify if startup has been executed before by checking E:\StartupComplete.txt.
Installing Chocolatey
Installing Java runtime
Chocolatey v0.9.9.8
Installing the following packages:
javaruntime
By installing you accept licenses for the packages.

jre8 v8.0.45
 Downloading jre8 32 bit
  from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106246'
 Installing jre8...
 jre8 has been installed.
 Downloading jre8 64 bit
  from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106248'
 Installing jre8...
 jre8 has been installed.
 PATH environment variable does not have D:\Program Files\Java\jre1.8.0_45\bin in it. Adding...
 The install of jre8 was successful.

javaruntime v8.0.40
 The install of javaruntime was successful.

Chocolatey installed 2/2 package(s). 0 package(s) failed.
 See the log for details (D:\ProgramData\chocolatey\logs\chocolatey.log).
Java installed. Startup completed.

I also verified that the e:\startupcomplete.txt file was created.

I verified that Java was installed in the D:\Sun\Java directory.

You can get the source code for this entire project from my GitHub Repository https://github.com/rajinders/azure-java-paas.


Installing Splunk Forwarder in Azure Web/Worker Roles with a Startup Task

Overview

I recently had to install the Splunk Universal Forwarder in Azure worker roles. Azure web/worker roles are stateless, so the only way to install any software is via Azure startup tasks. Startup tasks have been around for many years; the MSDN documentation about them can be reviewed here:

Run startup tasks in Azure

https://msdn.microsoft.com/en-us/library/azure/hh180155.aspx?f=255&MSPPError=-2147217396

Best practices for startup tasks

https://msdn.microsoft.com/en-us/library/azure/jj129545.aspx

Two of the best resources about Azure startup tasks are these blog posts by my friend Chris Clayton:

http://blogs.msdn.com/b/cclayton/archive/2012/05/17/windows-azure-start-up-tasks-part-1.aspx

http://blogs.msdn.com/b/cclayton/archive/2012/05/17/windows-azure-start-up-tasks-part-2.aspx

Even though I will show you how to install the Splunk Universal Forwarder, this approach can be used to install any third-party software on a web/worker role.

Details

Requirements

  • We only need to install the Splunk forwarder once.
  • We need to install the forwarder from the command line.
  • We need to leave detailed logs to help us debug any issues that may arise.
  • We need to use a reliable location to download the setup files.

A few more decisions need to be made upfront.

How do we make the installer available in the web/worker role?

Your choices are:

  1. Include the installer with the source code.
  2. Download the installer from an external location.

Having a large deployment package can slow down deployment, so I tend to prefer downloading the installer from either Azure blob storage or an external location such as the website of the vendor that created the installer.

Which scripting language should the startup task use?

Your choices are:

1. A combination of a command batch file and a PowerShell script.

2. Doing the entire installation via a command batch file.

In my case I chose to use just a command batch file to install Splunk. Batch files have been around for 25 years and I wanted to keep things as simple as possible.

Step 1

The Splunk Universal Forwarder MSI can be downloaded from splunk.com. However, we cannot be certain that the download location of the installer will not change in the future, so I downloaded the MSI and uploaded it into a blob container in a storage account. Create an Azure storage account or use an existing one; I used the Azure management portal to create a new storage account. This account should be created in the same region where you will be deploying your cloud service. This is not a requirement, but having the storage account with the Splunk installer in the same region as your cloud service will increase the download speed.

Create a storage container where you will upload the Splunk installer. I created a public container called splunk in the Azure management portal.


Download the Splunk installer and upload it into the storage container. I used Azure Management Studio to upload the MSI file to the newly created container; you can use any tool of your choice to upload the MSI.
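If you prefer to script the upload instead, here is a minimal sketch using the Azure storage cmdlets; the account name matches the one used later in this post, and the key is a placeholder:

# Build a storage context and upload the installer into the public "splunk" container
$ctx = New-AzureStorageContext -StorageAccountName "rajpublic" -StorageAccountKey "<storage-account-key>"

Set-AzureStorageBlobContent -File ".\splunkforwarder-6.2.3-264376-x64-release.msi" `
    -Container "splunk" -Blob "splunkforwarder-6.2.3-264376-x64-release.msi" -Context $ctx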

Step 2

Create a Startup.cmd file and add it to the worker/web role implementation project. This is a standard cmd file; however, it needs to be saved as “Unicode (UTF-8 without signature) – Codepage 65001” or it will not execute as a startup task.


Select the newly added startup.cmd file in Solution Explorer and set the “copy to output directory” property to “Copy if newer”.


 

Startup.cmd needs to download the Splunk installer. I chose to use the AzCopy command-line utility; my other option was to use the PowerShell Azure storage cmdlets to download the file. You can learn more about AzCopy here:

https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/

I downloaded AzCopy and its dependencies and included them in my worker role implementation project.

Set the “copy to output directory” property to “Copy if newer” to make sure AzCopy and its dependencies get copied to the role instance during deployment.
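For reference, the download call my startup script makes has this shape (this is the exact line from the script later in this post; AzCopy 3.x also supports a /SourceKey:<key> flag if your container is not public):

AzCopy\AzCopy.exe /Source:https://rajpublic.blob.core.windows.net/splunk/ /Dest:%TEMP% /Pattern:splunkforwarder-6.2.3-264376-x64-release.msi /Y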


You need to define your startup task in the role's ServiceDefinition.csdef file. Here are a few things worth mentioning:

  1. You define a startup task by adding a <Startup> section to the WorkerRole element.
  2. You can define as many tasks as you want.
  3. The Task element specifies the command that will be executed when the startup task runs.
  4. executionContext defines the level of access the startup task will have.
  5. taskType can be simple, foreground, or background.
  6. You can define environment variables that read from the role environment during startup task execution.
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureStatupTask" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
 <WorkerRole name="MyWorkerRole" vmsize="Small">
  <Startup>
   <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">
    <Environment>
     <Variable name="LogFileName" value="Startup.log" />
     <Variable name="LogFileDirectory">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />
     </Variable>
     <Variable name="InstanceId">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />
     </Variable>
     <Variable name="RoleName">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />
     </Variable>
    </Environment>
   </Task>
  </Startup>
  <ConfigurationSettings>
   <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
  </ConfigurationSettings>
  <LocalResources>
   <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />
  </LocalResources>
  <Imports>
   <Import moduleName="RemoteAccess" />
   <Import moduleName="RemoteForwarder" />
  </Imports>
 </WorkerRole>
</ServiceDefinition>

 

Here is the startup.cmd file I used to deploy the Splunk Universal Forwarder. A summary of the steps:

  1. Check whether StartupComplete.txt exists; if it does, exit the script.
  2. Use AzCopy to download the Splunk installer from blob storage to local storage.
  3. Execute the installer to install the Splunk forwarder.
  4. Update inputs.conf to set up monitoring of the application log files.
  5. Start the Splunk service.
  6. If there are no errors, create StartupComplete.txt.
  7. If the entire script succeeds, exit with a return code of 0.
  8. If there is a failure, exit with the errorlevel.
SET LogPath=%LogFileDirectory%%LogFileName%

ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1
ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1
ECHO Current Directory: %CD% >> "%LogPath%" 2>&1

ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1

IF EXIST "%RoleRoot%\StartupComplete.txt" (
  ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1
  EXIT /B 0
)

AzCopy\AzCopy.exe /Source:https://rajpublic.blob.core.windows.net/splunk/ /Dest:%TEMP% /Pattern:splunkforwarder-6.2.3-264376-x64-release.msi /Y >> "%LogPath%" 2>&1

IF %ERRORLEVEL% EQU 0 (

     ECHO Installing Splunk Forwarder >> "%LogPath%" 2>&1

     msiexec.exe /i %TEMP%\splunkforwarder-6.2.3-264376-x64-release.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="10.0.0.68:9997" LAUNCHSPLUNK=0 SERVICESTARTTYPE=auto WINEVENTLOG_APP_ENABLE=1 SET_ADMIN_USER=1 PERFMON=cpu,memory,network,diskspace /quiet >> "%LogPath%" 2>&1

     IF %ERRORLEVEL% EQU 0 (

          ECHO [monitor://D:\logs] >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"
          ECHO disabled = false >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"
          ECHO followTail = true >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"
          ECHO index = main >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"
          ECHO sourcetype = general >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"

          "D:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start >> "%LogPath%" 2>&1

          IF %ERRORLEVEL% EQU 0 (
               ECHO Splunk installed. Startup completed. >> "%LogPath%" 2>&1
               ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1
               EXIT /B 0
          ) ELSE (
               ECHO An error occurred while starting Splunk. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1
               EXIT %ERRORLEVEL%
          )
     ) ELSE (
          ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1
          EXIT %ERRORLEVEL%
     )
) ELSE (
  ECHO An error occurred while downloading the Splunk forwarder. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1
  EXIT %ERRORLEVEL%
)

 

Verification and Troubleshooting

If the startup task completed successfully, you will see a file called e:\startupcomplete.txt.

This file is used to determine whether the Splunk installer ran successfully.


Verify that the Splunk forwarder was installed successfully.


Verify that the post-install configuration completed successfully.


 

Troubleshooting

If your deployment fails or Splunk is not installed successfully, follow these steps to troubleshoot:

Verify that AzCopy and Startup.cmd were copied to the e:\approot directory.


 

Look in C:\Resources\temp\xxxxxxxxxxxxxxxxxxxxxxxxx.MyWorkerRole\RoleTemp.

You should see the Splunk installer in this directory.


Look for the full log file created by startup.cmd in C:\Resources\Directory\a1be486474e348728ec129e4109d4e13.MyWorkerRole.LogsPath.


Here is a sample Startup.log file that was created when there were no errors.

Current Role: MyWorkerRole
Current Role Instance: MyWorkerRole_IN_0
Current Directory: E:\approot
We will first verify if startup has been executed before by checking E:\StartupComplete.txt.
[2015/07/03 22:42:35] Transfer summary:
-----------------
Total files transferred: 1
Transfer successfully:  1
Transfer skipped:    0
Transfer failed:     0
Elapsed time:      00.00:00:03
Installing Splunk Forwarder

Splunk> CSI: Logfiles.

Checking prerequisites...
     Checking mgmt port [8089]: Loading 'screen' into random state - done
Generating a 1024 bit RSA private key
....++++++
..................................................................++++++
writing new private key to 'privKeySecure.pem'
-----
Loading 'screen' into random state - done
Signature ok
subject=/CN=RD000D3A909BA6/O=SplunkUser
Getting CA Private Key
writing RSA key
open
     Checking conf files for problems...
     Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...

SplunkForwarder: Starting (pid 2436)
Done

Splunk installed. Startup completed.

Code Sample

You can get the source of this sample from my GitHub Repository here: https://github.com/rajinders/azure-startup-task

Summary

Azure startup tasks are the only way to install third-party software on your Azure web/worker roles. Failures in startup tasks can lead to role startup issues, so you need to log extensively to help you troubleshoot errors. Startup tasks also have full access to your role environment.


How to migrate from Standard Azure Virtual Machines to DS Series Storage Optimized VMs

Background

We are implementing Azure solutions for a few clients, most of whom use cloud services and virtual machines to implement their solutions on the Azure platform. For many years the Azure platform offered just one performance tier for storage. You can see the sizes of virtual machines and cloud services, and the disk performance they offer, here:

https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

For standard Azure virtual machines, each disk is limited to 500 IOPS. If you needed better performance, you had to stripe across multiple disks; striping four disks, for example, yields roughly 2,000 IOPS. The number of disks you can add to an Azure virtual machine is constrained by the size of the VM: one core allows you to add two VHDs, and each VHD is a page blob with a maximum size of 1 TB. When we were deploying packaged software or custom applications with high IOPS requirements, it was challenging to meet the needs of our customers. All this changed with the following announcement by Mark Russinovich, where he announced the general availability of Azure Premium Storage:

http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2/

Azure premium storage offers durable SSD storage. Along with premium storage, Microsoft also released storage-optimized virtual machines called DS Series VMs, which are capable of achieving up to 64,000 IOPS and 524 MB/sec. This enables scenarios like NoSQL, or even large SQL databases, that need higher IOPS than standard Azure virtual machines offer. You can read about the specifications for DS Series VMs in the link posted above. If you are using a standard Azure VM, you can easily scale up or down to another standard size using the portal, PowerShell, or the Azure CLI. Unfortunately, it is currently not possible to upgrade/migrate a standard Azure virtual machine to a DS Series virtual machine with premium storage. In this blog post I will show you how to migrate an existing virtual machine to a DS Series virtual machine with premium (durable SSD) storage, and provide a PowerShell script you can leverage to do it.

Details

Creating Premium Storage Account

A premium storage account is different from a standard storage account. If you want to leverage premium storage, you need to create a new storage account in the Azure preview portal and select the account type “Premium Locally Redundant”.

It is not possible to use the existing Azure management portal to provision a premium storage account.

New Storage Account

Here is how you can use PowerShell to create a premium storage account. As you can see, it is similar to how you create a standard storage account. I was unable to find what value I had to specify for Type, and had to read the actual source code to determine that it was ‘Premium_LRS’.

$StorageAccountTypePremium = 'Premium_LRS'

$DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

if (!($?))
{
    throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable"
}
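To confirm what you got, you can read the account type back; this mirrors the check the full migration script below performs:

# Verify the new account really is Premium_LRS before copying any VHDs into it
$acct = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName
Write-Host "Account type is [$($acct.AccountType)]"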

 

Premium storage and DS Series virtual machines are not available in all regions. The complete script I provide validates your location preference and fails if you specify a location where premium storage and DS Series VMs are not available.

Creating a DS Series virtual machine is identical to creating a standard virtual machine.

Here are a few things I learned about DS Series virtual machines and premium storage:

  • Premium storage does not allow disks smaller than 10 GB. If your VM has a disk smaller than 10 GB, the script skips that disk with a warning.
  • The default host caching option for premium storage data disks is “Read Only”, compared with “None” for standard data disks.
  • The default host caching option for a premium storage OS disk is “Read Write”, which is the same as for standard OS disks.
  • Currently this script only migrates virtual machines within the same subscription. It can easily be extended to support migration across subscriptions.
  • It can migrate VMs to a different region as long as premium storage is available in that region.
  • It shuts down the existing source VM before making a copy of the VHDs for the virtual machine.
  • It validates that the virtual network for the destination VM exists, but does not validate that the subnet also exists.
  • It gives new names to the disks in the destination virtual machine.
  • Currently I copy only disks, endpoints, and VM extensions. I do not copy ACLs or other types of extensions, such as the malware extension.
  • I only tested the script with PowerShell SDK version 0.9.2.
  • I tested migrating a standard VM in West US to a DS Series VM in West US only. I logged into the newly created VM and verified that all disks were present; this is the extent of my testing. My VM with three disks copied in 10 minutes.
  • If your destination storage account already exists, it has to be of type “Premium_LRS”; if you have an existing account of a different type, the script will fail. If the storage account does not exist, it will be created.

Sample Script

You can access the entire source code from my public GitHub repository

https://github.com/rajinders/migrate-to-azuredsvm

I have also pasted the entire source code here for your convenience.

<#
Copyright 2015 Rajinder Singh

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#>

<#
.SYNOPSIS
Migrates an existing VM into a DS Series VM which uses premium storage.

.DESCRIPTION
This script migrates an existing VM into a DS Series VM which uses premium storage. At this time DS Series VMs are not available in all regions.
It currently expects the VM to be migrated within the same subscription. It supports migrating the VM to the same region or a different region.
It can be easily extended to support migrating to a different subscription as well.

.PARAMETER SourceVMName
The name of the VM that needs to be migrated

.PARAMETER SourceServiceName
The name of the service for the old VM

.PARAMETER DestVMName
The name of the new DS Series VM that will be created

.PARAMETER DestServiceName
The name of the service for the new VM

.PARAMETER Location
Region where the new VM will be created

.PARAMETER VMSize
Size of the new VM

.PARAMETER DestStorageAccountName
Name of the storage account where the VM will be created. It has to be a premium storage account

.PARAMETER DestStorageAccountContainer
Name of the container to which the VHDs will be copied

.PARAMETER VNetName
Optional name of the virtual network for the destination VM

.PARAMETER SubnetName
Optional name of the subnet for the destination VM

.EXAMPLE

# Migrate a standalone virtual machine to a DS Series virtual machine with premium storage. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm12" -DestServiceName "rajdsvm12svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg18' -DestStorageAccountContainer 'vhds'

# Migrate a standalone virtual machine to a DS Series virtual machine with premium storage, placing the new VM in a virtual network
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm16" -DestServiceName "rajdsvm16svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg19' -DestStorageAccountContainer 'vhds' -VNetName rajvnettest3 -SubnetName FrontEndSubnet
#>

[CmdletBinding(DefaultParameterSetName="Default")]
Param
(
    [Parameter (Mandatory = $true)]
    [string] $SourceVMName,

    [Parameter (Mandatory = $true)]
    [string] $SourceServiceName,

    [Parameter (Mandatory = $true)]
    [string] $DestVMName,

    [Parameter (Mandatory = $true)]
    [string] $DestServiceName,

    [Parameter (Mandatory = $true)]
    [ValidateSet('West US','East US 2','West Europe','East China','Southeast Asia','West Japan', ignorecase=$true)]
    [string] $Location,

    [Parameter (Mandatory = $true)]
    [ValidateSet('Standard_DS1','Standard_DS2','Standard_DS3','Standard_DS4','Standard_DS11','Standard_DS12','Standard_DS13','Standard_DS14', ignorecase=$true)]
    [string] $VMSize,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountName,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountContainer,

    [Parameter (Mandatory = $false)]
    [string] $VNetName,

    [Parameter (Mandatory = $false)]
    [string] $SubnetName
)

#print the version of the Azure PowerShell module we are using
(Get-Module Azure).Version

#$VerbosePreference = "Continue"
$StorageAccountTypePremium = 'Premium_LRS'

#############################################################################################################
#validation section
#Perform as much upfront validation as possible
#############################################################################################################

#validate upfront that the service we are trying to create does not already exist
if((Get-AzureService -ServiceName $DestServiceName -ErrorAction SilentlyContinue) -ne $null)
{
    Write-Error "Service [$DestServiceName] already exists"
    return
}

#Determine whether we are migrating the VM to a virtual network. If we are, verify that the VNET exists
if( !$VNetName -and !$SubnetName )
{
    $DeployToVNet = $false
}
else
{
    $DeployToVNet = $true
    $vnetSite = Get-AzureVNetSite -VNetName $VNetName -ErrorAction SilentlyContinue

    if (!$vnetSite)
    {
        Write-Error "Virtual Network [$VNetName] does not exist"
        return
    }
}

Write-Host "DeployToVNet is set to [$DeployToVnet]"

#TODO: add validation to make sure the destination VM size can accommodate the number of disks in the source VM

$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName -ErrorAction SilentlyContinue

#check to see if the storage account exists and create a premium storage account if it does not exist
if(!$DestStorageAccount)
{
    # Create a new storage account
    Write-Output "";
    Write-Output ("Configuring Destination Storage Account {0} in location {1}" -f $DestStorageAccountName, $Location);

    $DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

    if (!($?))
    {
        throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable"
    }

    Write-Verbose "Created Destination Storage Account [$DestStorageAccountName] with AccountType of [$($DestStorageAccount.AccountType)]"
}
else
{
    Write-Host "Destination Storage account [$DestStorageAccountName] already exists. Storage account type is [$($DestStorageAccount.AccountType)]"

    #make sure if the account already exists it is of type premium storage
    if( $DestStorageAccount.AccountType -ne $StorageAccountTypePremium )
    {
        Write-Error "Storage account [$DestStorageAccountName] account type of [$($DestStorageAccount.AccountType)] is invalid"
        return
    }
}

Write-Host "Source VM Name is [$SourceVMName] and Service Name is [$SourceServiceName]"

#Get VM Details
$SourceVM = Get-AzureVM -Name $SourceVMName -ServiceName $SourceServiceName -ErrorAction SilentlyContinue

if($SourceVM -eq $null)
{
    Write-Error "Unable to find Virtual Machine [$SourceVMName] in Service Name [$SourceServiceName]"
    return
}

Write-Host "vm name is [$($SourceVM.Name)] and vm status is [$($SourceVM.Status)]"

#need to shut down the existing VM before copying its disks
if($SourceVM.Status -eq "ReadyRole")
{
    Write-Host "Shutting down virtual machine [$SourceVMName]"
    #Shutdown the VM
    Stop-AzureVM -ServiceName $SourceServiceName -Name $SourceVMName -Force
}

$osdisk = $SourceVM | Get-AzureOSDisk

Write-Host "OS Disk name is $($osdisk.DiskName) and disk location is $($osdisk.MediaLink)"

$disk_configs = @{}

# Used to track disk copy status
$diskCopyStates = @()

##################################################################################################################
# Kicks off the async copy of VHDs
##################################################################################################################

# Copies to remote storage account
# Returns blob copy state to poll against
function StartCopyVHD($sourceDiskUri, $diskName, $OS, $destStorageAccountName, $destContainer)
{
    Write-Host "Destination Storage Account is [$destStorageAccountName], Destination Container is [$destContainer]"

    #extract the name of the source storage account from the URI of the VHD
    $sourceStorageAccountName = $sourceDiskUri.Host.Replace(".blob.core.windows.net", "")

    $vhdName = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 1].Replace("%20"," ")
    $sourceContainer = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 2].Replace("/", "")

    $sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary
    $sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceStorageAccountKey

    $destStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccountName).Primary
    $destContext = New-AzureStorageContext -StorageAccountName $destStorageAccountName -StorageAccountKey $destStorageAccountKey
    if((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
    {
        New-AzureStorageContainer -Name $destContainer -Context $destContext | Out-Null

        while((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
        {
            Write-Host "Pausing to ensure container $destContainer is created.." -ForegroundColor Green
            Start-Sleep 15
        }
    }

    # Save for later disk registration
    $destinationUri = "https://$destStorageAccountName.blob.core.windows.net/$destContainer/$vhdName"

    if($OS -eq $null)
    {
        $disk_configs.Add($diskName, "$destinationUri")
    }
    else
    {
       $disk_configs.Add($diskName, "$destinationUri;$OS")
    }

    #start async copy of the VHD. It will overwrite any existing VHD
    $copyState = Start-AzureStorageBlobCopy -SrcBlob $vhdName -SrcContainer $sourceContainer -SrcContext $sourceContext -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext -Force

    return $copyState
}

##################################################################################################################
# Tracks status of each blob copy and waits until all the blobs have been copied
##################################################################################################################

function TrackBlobCopyStatus()
{
    param($diskCopyStates)
    do
    {
        $copyComplete = $true
        Write-Host "Checking Disk Copy Status for VM Copy" -ForegroundColor Green
        foreach($diskCopy in $diskCopyStates)
        {
            $state = $diskCopy | Get-AzureStorageBlobCopyState | Format-Table -AutoSize -Property Status,BytesCopied,TotalBytes,Source
            if($state -ne "Success")
            {
                $copyComplete = $true
                Write-Host "Current Status" -ForegroundColor Green
                $hideHeader = $false
                $inprogress = 0
                $complete = 0
                foreach($diskCopyTmp in $diskCopyStates)
                {
                    $stateTmp = $diskCopyTmp | Get-AzureStorageBlobCopyState
                    $source = $stateTmp.Source
                    if($stateTmp.Status -eq "Success")
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor Green
                        $complete++
                    }
                    elseif(($stateTmp.Status -like "*failed*") -or ($stateTmp.Status -like "*aborted*"))
                    {
                        Write-Error ($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)
                        return $false
                    }
                    else
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor DarkYellow
                        $copyComplete = $false
                        $inprogress++
                    }
                    $hideHeader = $true
                }
                if($copyComplete -eq $false)
                {
                    Write-Host "$complete Blob Copies are completed with $inprogress that are still in progress." -ForegroundColor Magenta
                    Write-Host "Pausing 60 seconds before next status check." -ForegroundColor Green
                    Start-Sleep 60
                }
                else
                {
                    Write-Host "Disk Copy Complete" -ForegroundColor Green
                    break
                }
            }
        }
    } while($copyComplete -ne $true)
    Write-Host "Successfully Copied up all Disks" -ForegroundColor Green
}

# Mark the start time of the script execution
$startTime = Get-Date

Write-Host "Destination storage account name is [$DestStorageAccountName]"

# Copy disks using the async API from the source URL to the destination storage account
$diskCopyStates += StartCopyVHD -sourceDiskUri $osdisk.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $osdisk.DiskName -OS $osdisk.OS

# copy all the data disks
$SourceVM | Get-AzureDataDisk | foreach {

    Write-Host "Disk Name [$($_.DiskName)], Size is [$($_.LogicalDiskSizeInGB)]"

    #Premium storage does not allow disks smaller than 10 GB
    if( $_.LogicalDiskSizeInGB -lt 10 )
    {
        Write-Warning "Data Disk [$($_.DiskName)] with size [$($_.LogicalDiskSizeInGB)] is less than 10GB so it cannot be added"
    }
    else
    {
        Write-Host "Destination storage account name is [$DestStorageAccountName]"
        $diskCopyStates += StartCopyVHD -sourceDiskUri $_.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $_.DiskName
    }
}

#check the status of the blob copies. This may take a while if you are doing cross-region copies.
#even in the same region a 127 GB disk takes nearly 10 minutes
TrackBlobCopyStatus -diskCopyStates $diskCopyStates

# Mark the finish time of the script execution
$finishTime = Get-Date

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Host "The disk copies completed in $TotalTime seconds." -ForegroundColor Green

Write-Host "Registering Copied Disk" -ForegroundColor Green

$luncount = 0   # used to generate unique lun value for data disks
$index = 0  # used to generate unique disk names
$OSDisk = $null

$datadisk_details = @{}

foreach($diskName in $disk_configs.Keys)
{
    $index = $index + 1

    $diskConfig = $disk_configs[$diskName].Split(";")

    #since we are using the same subscription we need to update the diskName for it to be unique
    $newDiskName = "$DestVMName" + "-disk-" + $index

    Write-Host "Adding disk [$newDiskName]"

    #check to see if this disk already exists
    $azureDisk = Get-AzureDisk -DiskName $newDiskName -ErrorAction SilentlyContinue

    if(!$azureDisk)
    {

        if($diskConfig.Length -gt 1)
        {
           Write-Host "Adding OS disk [$newDiskName] -OS [$($diskConfig[1])] -MediaLocation [$($diskConfig[0])]"

           #Expect OS Disk to be the first disk in the array
           $OSDisk = Add-AzureDisk -DiskName $newDiskName -OS $diskConfig[1] -MediaLocation $diskConfig[0]

           $vmconfig = New-AzureVMConfig -Name $DestVMName -InstanceSize $VMSize -DiskName $OSDisk.DiskName

        }
        else
        {
            Write-Host "Adding Data disk [$newDiskName] -MediaLocation [$($diskConfig[0])]"

            Add-AzureDisk -DiskName $newDiskName -MediaLocation $diskConfig[0]

            $datadisk_details[$luncount] = $newDiskName

            $luncount = $luncount + 1
        }
    }
    else
    {
        Write-Error "Unable to add Azure Disk [$newDiskName] as it already exists"
        Write-Error "You can use Remove-AzureDisk -DiskName $newDiskName to remove the old disk"
        return
    }
}

#add all the data disks to the VM configuration
foreach($lun in $datadisk_details.Keys)
{
    $datadisk_name = $datadisk_details[$lun]

    Write-Host "Adding data disk [$datadisk_name] to the VM configuration"

    $vmconfig | Add-AzureDataDisk -Import -DiskName $datadisk_name -LUN $lun
}

#read all the endpoints in the source VM and create them in the destination VM
#NOTE: I don't copy ACLs yet. I need to add this.
$SourceVM | get-azureendpoint | foreach {

    if($_.LBSetName -eq $null)
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn
    }
    else
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)], LBSetName is [$($_.LBSetName)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn -LBSetName $_.LBSetName -DefaultProbe
    }
}

#create the VM, either in a virtual network or standalone
if( $DeployToVnet )
{
    Write-Host "Virtual Network Name is [$VNetName] and Subnet Name is [$SubnetName]"

    $vmconfig | Set-AzureSubnet -SubnetNames $SubnetName
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -VNetName $VNetName -Location $Location
}
else
{
    #Creating the virtual machine
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -Location $Location
}

#get any vm extensions
#there may be other types of extensions in the source vm. I don't copy them yet
$SourceVM | get-azurevmextension | foreach {
    Write-Host "ExtensionName [$($_.ExtensionName)] Publisher [$($_.Publisher)] Version [$($_.Version)] ReferenceName [$($_.ReferenceName)] State [$($_.State)] RoleName [$($_.RoleName)]"
    get-azurevm -ServiceName $DestServiceName -Name $DestVMName -Verbose | set-azurevmextension -ExtensionName $_.ExtensionName -Publisher $_.Publisher -Version $_.Version -ReferenceName $_.ReferenceName -Verbose | Update-azurevm -Verbose
}

 

Conclusion

I had to look at many different code samples as well as MSDN documentation to create this script. I am grateful for all the open-source samples folks are contributing, and this is my way of giving back to the Azure community. If you have questions and/or feature requests, drop me a line and I will do what I can to help.


Azure SDK 2.6 Diagnostics Improvements for Cloud Services

I haven’t blogged for a while because I have been very busy at work. Things are slowing down a bit, so I will try to write more frequently.

History

Azure SDK 2.5 made big changes to Azure diagnostics by introducing the Azure PaaS diagnostics extension. Even though this was a good long-term strategy, the implementation was less than perfect. Here are a few issues introduced as a result of Azure SDK 2.5:

  1. The local emulator did not support diagnostics.
  2. There was no support for using a different diagnostics storage account for different environments.
  3. Manual editing was required to create the XML configuration file needed by Set-AzureServiceDiagnosticsConfiguration, the PowerShell cmdlet required to deploy the diagnostics extension.
  4. To make matters worse, there was a bug in the PowerShell cmdlet that surfaced when you had a “.” in the name of a role.

All these factors made it impossible to do continuous integration/deployment for Cloud service projects.

A few days ago Azure SDK 2.6 was released. I went through the release notes, read up on the documentation, and ran tests to see whether sanity had been restored. I am glad to report that all the issues introduced by SDK 2.5 have been fixed. Here is a summary of the improvements:

  1. The local emulator now supports diagnostics.
  2. You can specify a different diagnostics storage account for each service configuration.
  3. To simplify configuration of the PaaS diagnostics extension, the package output from Visual Studio contains the public configuration XML for the diagnostics extension for each role.
  4. PowerShell version 0.9.0, which was released along with Azure SDK 2.6, also fixed the pesky bug that surfaced when you had a “.” in the name of a role.

Here is a document that provides all the gory details for Azure SDK 2.6 diagnostics changes.

https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx

Overview

If you are developing applications and are still not using continuous integration and continuous deployment, you should learn more about them. I will use the rest of this blog post to show how you can use PowerShell cmdlets to automate installing and updating the PaaS diagnostics extension for cloud services built with Azure SDK 2.6.

Details

I installed Azure SDK 2.6 on my development machine, along with the PowerShell cmdlets (version 0.9.0) and the Azure CLI.

I created a simple cloud service project and added a web role and a worker role to it.

I added one more service configuration, called “Test”, to this project.


I examined the properties of WebRole1 to see what has changed with SDK 2.6.

If you select “All Configurations”, you can still enable/disable diagnostics as you did in SDK 2.5.


When I clicked the “Configure” button to configure diagnostics, I found that we no longer have to select the diagnostics storage account in the “General” tab as we used to. The rest of the configuration is the same.


Returning to the configuration of WebRole1, I changed the service configuration to “Cloud”.

In the past there was no way to configure a diagnostics storage account per configuration type, but now we can define a different diagnostics storage account for each configuration type.


A quick examination of ServiceConfiguration.Cloud.cscfg confirmed that the diagnostics connection string was defined in it.

This makes a lot of sense, because the rest of the environment-specific configuration settings are defined in the same file.

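For reference, here is roughly what that setting looks like inside ServiceConfiguration.Cloud.cscfg. This is a minimal sketch: the role shown, the instance count, and the storage credentials are placeholder values, not the exact generated file.

<?xml version="1.0" encoding="utf-8"?>
<!-- Abridged sketch: only the diagnostics setting for one role is shown; credentials are placeholders -->
<ServiceConfiguration serviceName="DiagnosticsSDK26" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=[diagnostics storage account];AccountKey=[key]" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>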

I did not want to deploy this project directly from Visual Studio, because most build servers do not use Visual Studio to deploy applications.

First I created a deployment package by selecting the cloud project and choosing Package.


I selected the “Cloud” Service Configuration and pressed the “Package” button.


The project was built and packaged successfully, and Visual Studio opened the location where the package and related files were created.

It created a directory called app.publish in the bin\Debug directory under the cloud service project.

This is no different from the past. However, there is a new directory called Extensions.


The Extensions directory has a PubConfig.xml file for each role type. In the past you had to create this file manually from diagnostics.wadcfg. These files are needed by the PowerShell cmdlets that deploy the diagnostics extension.

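To give an idea of its shape, here is a heavily abridged sketch of a generated PubConfig.xml. I am assuming the standard public configuration schema of the PaaS diagnostics extension here; the real generated file contains the complete WadCfg built from your diagnostics configuration, so treat this as an illustration rather than exact output.

<!-- Heavily abridged sketch of a generated PubConfig.xml; not the exact generated output -->
<PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <WadCfg>
    <DiagnosticMonitorConfiguration overallQuotaInMB="4096">
      <!-- performance counters, Windows event logs, log directories, etc. -->
    </DiagnosticMonitorConfiguration>
  </WadCfg>
  <StorageAccount />
</PublicConfig>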

We use AppVeyor for continuous integration and deployment. It uses MSBuild to build projects.

I opened the “Developer Command Prompt for Visual Studio 2013” and used the following command to build and package the cloud project:

msbuild <ccproj_file> /t:Publish /p:PublishDir=<temp_path>

I verified that MSBuild also created the package and all the related files.
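For example, on my machine the invocation would look something like this; the project path matches the paths used in the scripts below, and the output directory is a hypothetical choice:

msbuild C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\DiagnosticsSDK26.ccproj /t:Publish /p:PublishDir=C:\Builds\DiagnosticsSDK26\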

PowerShell Cmdlets for Azure Diagnostics


For new Cloud Services there are two ways to apply the diagnostics extension.

  1. You can pass the extension configuration to New-AzureDeployment via the -ExtensionConfiguration parameter.
  2. You can create the Cloud Service first and then use Set-AzureServiceDiagnosticsExtension to apply the PaaS diagnostics extension.

You can learn about it here.

https://msdn.microsoft.com/en-us/library/azure/dn495270.aspx

I chose method one because it was faster than applying the extension in a separate call. For completeness, method two is sketched below.
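This is a minimal sketch of method two. It assumes the cloud service and package are already deployed; the cmdlets come from Azure PowerShell 0.9.0, and the service name, storage credentials, and PubConfig path are placeholders.

# Minimal sketch of method two: apply the PaaS diagnostics extension
# to an existing deployment in a separate call. All values are placeholders.
$storageContext = New-AzureStorageContext -StorageAccountName 'diagnostics storage account name' -StorageAccountKey 'storage account key'
Set-AzureServiceDiagnosticsExtension -ServiceName 'cloud service name' -Slot Production `
    -StorageContext $storageContext `
    -DiagnosticsConfigurationPath 'C:\path\to\PaaSDiagnostics.WebRole1.PubConfig.xml' `
    -Role 'WebRole1'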

Deploying PaaS Diagnostics Extension for the first time

The following script creates a new Cloud Service, creates the diagnostics configuration, and deploys the package, which also deploys the PaaS diagnostics extension.

I am setting the diagnostics extension for each Role separately.

At the end of this script I use Get-AzureServiceDiagnosticsExtension to verify that the diagnostics extension has been installed.

You can also use Visual Studio Server Explorer to view the diagnostics.

<#
.SYNOPSIS
Provisions a new cloud service with web/worker roles built with SDK 2.6 and applies the diagnostics extension

.DESCRIPTION
This script creates a new cloud service, deploys the cloud service, and applies the Azure diagnostics extension to each role type.
This cloud service has a WebRole1 and a WorkerRole1.
#>

$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$SubscriptionName = "Your Subscription Name"
$VMStorageAccount = "storage account used during deployment"
$service_name = 'cloud service name'
$location = "Central US"
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"
$slot = "Production"
#diagnostics storage account
$storage_name = 'diagnostics storage account name'
#diagnostics storage account key
$key = 'storage account key'

# SDK 2.6 tooling generates these pubconfig files for each role type
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"

#Print the version of the PowerShell cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date

#Set the default storage account for the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccountName $VMStorageAccount

if (Test-AzureName -Service $service_name)
{
    Write-Host "Service [$service_name] already exists"
}
else
{
    #Create the new cloud service
    New-AzureService -ServiceName $service_name -Label "Raj SDK 2.6 Diagnostics Demo" -Location $location
}

#Create the storage context for the diagnostics storage account
$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key

#Create the diagnostics extension configuration for each role
$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -Role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -Role "WebRole1"

#Deploy to the new cloud service and apply the diagnostics extension in the same call
New-AzureDeployment -ServiceName $service_name -Package $package -Configuration $configuration -Slot $slot -ExtensionConfiguration @($workerconfig, $webroleconfig)

# Mark the finish time of the script execution
$finishTime = Get-Date

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot $slot

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

Update PaaS Diagnostics Extension

I wanted to see how to update the diagnostics extension, so I made a couple of changes to my project.

I added a new worker role to the same project. I also changed the diagnostics configuration.

Typically an extension is only deployed once. To deploy the extension again you have two options:

  1. You can change the name of the extension
  2. You can remove the extension and install it again

I chose the second option.

Here is what this script does:

  1. It removes the PaaS diagnostics extension from the cloud service.
  2. It creates the PaaS diagnostics configuration for each role.
  3. It updates the Cloud Service and applies the PaaS diagnostics extension to each role, including the new worker role Hard.WorkerRole.

Having a . in the name used to break Set-AzureServiceDiagnosticsExtension; it is nice to see that it works now.

<#
.SYNOPSIS
Updates an existing cloud service and applies the Azure diagnostics extension as well

.DESCRIPTION
This script removes the diagnostics extension, updates the cloud service, and applies the Azure diagnostics extension to each role type.
This cloud service had a WebRole1 and a WorkerRole1 initially. I added a new role called Hard.WorkerRole.
I put a . in the name because the SDK 2.5 Set-AzureServiceDiagnosticsExtension had a bug where a . in the name broke it.
#>

# Set the output level to verbose and make the script stop on error
$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$service_name = 'cloud service name'
$storage_name = 'diagnostics storage account'
$key = 'storage account key'
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"

#Print the version of the PowerShell cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date

#Remove the old diagnostics extension
Remove-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable
if (!($?))
{
    Write-Error "Unable to remove diagnostics extension from Service [$service_name]. Error Detail: $errorVariable"
    Exit
}

#Create the storage context for the diagnostics storage account
$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key

# SDK 2.6 tooling generates these pubconfig files for each role type
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"
$hardwrkdiagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.Hard.WorkerRole.PubConfig.xml"

#Create the diagnostics extension configuration for each role
$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -Role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -Role "WebRole1"
$hardwrkconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $hardwrkdiagconfig -Role "Hard.WorkerRole"

#Upgrade the existing deployment and apply the diagnostics extension at the same time
Set-AzureDeployment -Upgrade -ServiceName $service_name -Mode Auto -Package $package -Configuration $configuration -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable -ExtensionConfiguration @($workerconfig, $webroleconfig, $hardwrkconfig)
if (!($?))
{
    Write-Error "Unable to upgrade Service [$service_name]. Error Detail: $errorVariable"
    Exit
}

# Mark the finish time of the script execution
$finishTime = Get-Date

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

Summary

Azure SDK 2.6 has addressed most of the issues related to deploying diagnostics to Cloud Services that were introduced by SDK 2.5. The cleanest way to update the diagnostics extension is to remove the existing extension and set it again during deployment. When I tested deploying the diagnostics extension individually on each role, it took 3-4 minutes per extension, so if you have a large number of roles your deployment times may increase; in my case, with 3 role types, the script took 12 minutes to run. When I used the -ExtensionConfiguration parameter of New-AzureDeployment and Set-AzureDeployment, the entire script took only 5 minutes.
