Monday, December 25, 2017

Exam Recap: 70-533 Implementing Microsoft Azure Infrastructure Solutions

Disclaimer: This blog is a last-minute recap of content covered in exam 70-533. It was last updated in December 2017. Azure exam material changes rapidly, so over time this material will become obsolete. The author simply listed the content, and little attention was given to formatting. The content is compiled from publicly available online resources, so no references are provided. The blogger does not own any of the content; it is published without seeking any benefit, solely for the general benefit of exam takers.

The classic model is the Azure Service Management (ASM) model.
ARM is the Azure Resource Manager model.


VM Tier Differences
Basic
300 IOPS per disk
Standard
Load balancing
Autoscale
SSD
500 IOPS per disk


WebSite Tier Differences
Shared
Custom domain
Basic
Manual scaling of instance size and count
Custom domain
Standard
Additional deployment slots
SSL support for custom domains
Traffic Manager
Autoscale
Geo-availability
Backup

Key Vault Premium Tier
HSM-protected keys


Connect to your subscriptions
Login-AzureRmAccount


Create a new resource group
New-AzureRmResourceGroup -Name 'ContosoResourceGroup' -Location 'East US'


Key vault
Keys are stored in a vault and invoked by URI when needed.
Key Vault is designed so that Microsoft does not see or extract your keys.
Applications that use a key vault must authenticate by using a token from Azure Active Directory. To do this, the owner of the application must first register the application in their Azure Active Directory. At the end of registration, the application owner gets the following values:
An Application ID, An authentication key (also known as the shared secret).
The application must present both these values to Azure Active Directory, to get a token.
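As a minimal sketch (the tenant ID, application ID, and secret below are placeholder values you would substitute with your own), the client-credentials token request can be issued directly against the v1.0 token endpoint:

```
# Hypothetical values -- substitute your own tenant ID, Application ID, and key
$tenantId = '72f988bf-0000-0000-0000-000000000000'
$body = @{
    grant_type    = 'client_credentials'
    client_id     = '8f8c4bbd-0000-0000-0000-000000000000'   # Application ID from registration
    client_secret = '<authentication key>'                   # the shared secret
    resource      = 'https://vault.azure.net'                # Key Vault resource URI
}
# POST the form-encoded body to the Azure AD token endpoint
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body $body
# $token.access_token is then presented when calling the key vault by URI
```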

Register a resource provider
Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.KeyVault"

providers/Microsoft.KeyVault/vaults
vault.azure.net
Permissions: get, create, delete, list, update, import, backup


Create a key vault
New-AzureRmKeyVault -VaultName 'ContosoKeyVault' -ResourceGroupName 'ContosoResourceGroup' -Location 'East US'


Create or import a key or secret


Azure Key Vault generates a software protected key
$key = Add-AzureKeyVaultKey -VaultName 'ContosoKeyVault' -Name 'ContosoFirstKey' -Destination 'Software'
URI of key: $key.id
To create an HSM-protected key, set the -Destination parameter to 'HSM'


Importing an existing PFX file into Azure Key Vault
$securepfxpwd = ConvertTo-SecureString -String '123' -AsPlainText -Force
$key = Add-AzureKeyVaultKey -VaultName 'ContosoKeyVault' -Name 'ContosoImportedPFX' -KeyFilePath 'c:\softkey.pfx' -KeyFilePassword $securepfxpwd
To import the key into HSMs in the Key Vault service, set the -Destination parameter to 'HSM'


To add a secret to Azure Key Vault
$secretvalue = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
$secret = Set-AzureKeyVaultSecret -VaultName 'ContosoKeyVault' -Name 'SQLPassword' -SecretValue $secretvalue


To view the value contained in the secret as plain text:
(Get-AzureKeyVaultSecret -VaultName "ContosoKeyVault" -Name "SQLPassword").SecretValueText


Revoke or delete a key or secret
Remove-AzureKeyVaultKey -VaultName 'ContosoKeyVault' -Name 'ContosoFirstKey'
Remove-AzureKeyVaultSecret -VaultName 'ContosoKeyVault' -Name 'SQLPassword'
To delete the entire vault:
Remove-AzureRmKeyVault -VaultName 'ContosoKeyVault'
Or, you can delete an entire Azure resource group, which includes the key vault and any other resources that you included in that group:
Remove-AzureRmResourceGroup -ResourceGroupName 'ContosoResourceGroup'


Authorize users or applications to access the key vault, so they can then manage or use its keys and secrets
Configure key usage (for example, sign or encrypt)
Set-AzureRmKeyVaultAccessPolicy -VaultName 'ContosoKeyVault' -ServicePrincipalName 8f8c4bbd-485b-45fd-98f7-ec6300b7b4ed -PermissionsToKeys decrypt,sign


Recovery Services vault
25 Recovery Services vaults per subscription
50 machines per vault
You can run backup jobs on Windows Server or Windows workstations up to three times/day.
You can run backup jobs on System Center DPM up to twice a day.
You can run a backup job for IaaS VMs once a day
Retention policies can only be applied on backup points
Data is encrypted on the on-premises server/client/SCDPM machine using AES256 and the data is sent over a secure HTTPS link
The data sent to Azure remains encrypted (at rest).

Create a recovery services vault

register the Azure Recovery Service provider with your subscription
Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"


create the new vault
New-AzureRmRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "WestUS"


Specify the type of storage redundancy to use
$vault1 = Get-AzureRmRecoveryServicesVault –Name "testVault"
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault1 -BackupStorageRedundancy GeoRedundant


Azure Key Vault Logging
A new container named insights-logs-auditevent is automatically created for your specified storage account, and you can use this same storage account for collecting logs for multiple key vaults.


Set-AzureRmDiagnosticSetting -ResourceId $kv.ResourceId -StorageAccountId $sa.Id -Enabled $true -Categories AuditEvent -RetentionEnabled $true -RetentionInDays 90


Individual blobs are stored as text, formatted as a JSON blob
{
   "records":
   [
       {
           "time": "2016-01-05T01:32:01.2691226Z",
           "resourceId": "/SUBSCRIPTIONS/361DA5D4-A47A-4C79-AFDD-XXXXXXXXXXXX/RESOURCEGROUPS/CONTOSOGROUP/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/CONTOSOKEYVAULT",
           "operationName": "VaultGet",
           "operationVersion": "2015-06-01",
           "category": "AuditEvent",
           "resultType": "Success",
           "resultSignature": "OK",
           "resultDescription": "",
           "durationMs": "78",
           "callerIpAddress": "104.40.82.76",
           "correlationId": "",
           "identity": {"claim":{"http://schemas.microsoft.com/identity/claims/objectidentifier":"d9da5048-2737-4770-bd64-XXXXXXXXXXXX","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"live.com#username@outlook.com","appid":"1950a258-227b-4e31-a9cf-XXXXXXXXXXXX"}},
           "properties": {"clientInfo":"azure-resource-manager/2.0","requestUri":"https://control-prod-wus.vaultcore.azure.net/subscriptions/361da5d4-a47a-4c79-afdd-XXXXXXXXXXXX/resourcegroups/contosoresourcegroup/providers/Microsoft.KeyVault/vaults/contosokeyvault?api-version=2015-06-01","id":"https://contosokeyvault.vault.azure.net/","httpStatusCode":200}
       }
   ]
}


Registering Windows Server or Windows client machine to a Recovery Services Vault
$credspath = "C:\downloads"
$credsfilename = Get-AzureRmRecoveryServicesVaultSettingsFile -Backup -Vault $vault1 -Path  $credspath


On the Windows Server or Windows client machine, run the Start-OBRegistration cmdlet to register the machine with the vault.


Networking settings
Set-OBMachineSetting -NoProxy
Set-OBMachineSetting -NoThrottle


Encryption settings
$PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase


Configuring the backup schedule
$newpolicy = New-OBPolicy
$sched = New-OBSchedule -DaysofWeek Saturday, Sunday -TimesofDay 16:00
Set-OBSchedule -Policy $newpolicy -Schedule $sched


Configuring a retention policy
$retentionpolicy = New-OBRetentionPolicy -RetentionDays 7
Set-OBRetentionPolicy -Policy $newpolicy -RetentionPolicy $retentionpolicy


BackupSchedule  : 4:00 PM
                 Saturday, Sunday,
                 Every 1 week(s)
DsList          :
PolicyName      :
RetentionPolicy : Retention Days : 7


                 WeeklyLTRSchedule :
                 Weekly schedule is not set


                 MonthlyLTRSchedule :
                 Monthly schedule is not set


                 YearlyLTRSchedule :
                 Yearly schedule is not set


State           : New
PolicyState     : Valid


$inclusions = New-OBFileSpec -FileSpec @("C:\", "D:\")
$exclusions = New-OBFileSpec -FileSpec @("C:\windows", "C:\temp") -Exclude
Add-OBFileSpec -Policy $newpolicy -FileSpec $inclusions
Add-OBFileSpec -Policy $newpolicy -FileSpec $exclusions

Azure backup agent
the vault credentials expire after 48 hours
Backup data is sent to the datacenter of the vault to which it is registered

Integrating applications with Azure Active Directory
This registration process involves giving Azure AD details about your application, such as the URL where it's located, the URL to which replies are sent after a user is authenticated, and the URI that identifies the app.
First the application needs to obtain an authorization code from Azure AD’s /authorize endpoint
Azure AD will determine if the user needs to be shown a consent page. This determination is based on whether the user (or their organization’s administrator) has already granted the application consent.
After the user grants consent, an authorization code is returned to your application, which is redeemed to acquire an access token and refresh token.
You can select from two types of permissions for each desired web API:
-->Application Permissions: Your client application needs to access the web API directly as itself (no user context). This type of permission requires administrator consent and is also not available for native client applications.
-->Delegated Permissions: Your client application needs to access the web API as the signed-in user, but with access limited by the selected permission. This type of permission can be granted by a user unless the permission requires administrator consent.


Access scopes and roles are exposed through your application's manifest, which is a JSON file that represents your application’s identity configuration.
adding the following JSON element to the oauth2Permissions collection
{
"adminConsentDescription": "Allow the application to have read-only access to all Employee data.",
"adminConsentDisplayName": "Read-only access to Employee records",
"id": "2b351394-d7a7-4a84-841e-08a6a17e4cb8",
"isEnabled": true,
"type": "User",
"userConsentDescription": "Allow the application to have read-only access to your Employee data.",
"userConsentDisplayName": "Read-only access to your Employee records",
"value": "Employees.Read.All"
}


It is expected that the user is provided a "sign-up" button that will redirect the browser to the Azure AD OAuth2.0 /authorize endpoint or an OpenID Connect /userinfo endpoint. These endpoints allow the application to get information about the new user by inspecting the id_token.


App Proxy apps - When you expose an on-prem application with Azure AD App Proxy, a single-tenant app is registered in your tenant (in addition to the App Proxy service). This app is what represents your on-prem application for all cloud interactions (for example, authentication). (App Proxy requires Azure AD Basic or higher.)


PS C:\> Get-AzureADServicePrincipal -ObjectId 383f7b97-6754-4d3d-9474-3908ebcba1c6 | fl *


DeletionTimeStamp         :
ObjectId                  : 383f7b97-6754-4d3d-9474-3908ebcba1c6
ObjectType                : ServicePrincipal
AccountEnabled            : True
AppDisplayName            : Office 365 Exchange Online
AppId                     : 00000002-0000-0ff1-ce00-000000000000
AppOwnerTenantId          :
AppRoleAssignmentRequired : False
AppRoles                  : {...}
DisplayName               : Microsoft.Exchange
ErrorUrl                  :
Homepage                  :
KeyCredentials            : {}
LogoutUrl                 :
Oauth2Permissions         : {...
                           , class OAuth2Permission {
                             AdminConsentDescription : Allows the app to have the same access to mailboxes as the signed-in user via Exchange Web Services.
                             AdminConsentDisplayName : Access mailboxes as the signed-in user via Exchange Web Services
                             Id                      : 3b5f3d61-589b-4a3c-a359-5dd4b5ee5bd5
                             IsEnabled               : True
                             Type                    : User
                             UserConsentDescription  : Allows the app full access to your mailboxes on your behalf.
                             UserConsentDisplayName  : Access your mailboxes
                             Value                   : full_access_as_user
                           },
                           ...}
PasswordCredentials       : {}
PublisherName             :
ReplyUrl                  :
SamlMetadataUrl           :
ServicePrincipalNames     : {00000002-0000-0ff1-ce00-000000000000/outlook.office365.com, 00000002-0000-0ff1-ce00-000000000000/mail.office365.com, 00000002-0000-0ff1-ce00-000000000000/outlook.com,
                           00000002-0000-0ff1-ce00-000000000000/*.outlook.com...}
Tags                      : {}


PS C:\> Get-AzureADUserOAuth2PermissionGrant -ObjectId ddiggle@aadpremiumlab.onmicrosoft.com | fl *


ClientId    : a8b16333-851d-42e8-acd2-eac155849b37
ConsentType : Principal
ExpiryTime  : 05/15/2017 07:02:39 AM
ObjectId    : M2OxqB2F6EKs0urBVYSbN5d7PzhUZz1NlH25COvLocbJjoxkUFfRQauryBKwBWet
PrincipalId : 648c8ec9-5750-41d1-abab-c812b00567ad
ResourceId  : 383f7b97-6754-4d3d-9474-3908ebcba1c6
Scope       : full_access_as_user
StartTime   : 01/01/0001 12:00:00 AM


Azure recovery services
The Azure Backup service has two types of vaults - the Backup vault and the Recovery Services vault.
Backup vaults cannot protect Resource Manager-deployed solutions. However, you can use a Recovery Services vault to protect classically-deployed servers and VMs.
Backing up VMs is a local process. So, for every Azure location that has VMs to be backed up, at least one Recovery Services vault must exist in that location.
There is no need to specify the storage accounts used to store the backup data; the Recovery Services vault and the Azure Backup service handle the storage automatically.
The storage replication option allows you to choose between geo-redundant storage and locally redundant storage.
A backup policy defines a matrix of when the data snapshots are taken, and how long those snapshots are retained.
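As a hedged sketch using the AzureRM.RecoveryServices.Backup cmdlets (the policy name below is made up), an IaaS VM backup policy pairing a schedule with a retention matrix might be created like this:

```
# Point subsequent backup cmdlets at the vault
Set-AzureRmRecoveryServicesVaultContext -Vault $vault1
# Start from the default schedule and retention objects for IaaS VMs
$schedule  = Get-AzureRmRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM
$retention = Get-AzureRmRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM
# Create the policy tying when snapshots are taken to how long they are retained
New-AzureRmRecoveryServicesBackupProtectionPolicy -Name "ContosoVMPolicy" `
    -WorkloadType AzureVM -SchedulePolicy $schedule -RetentionPolicy $retention
```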


Recovery Services vaults overview
A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more.


Azure site recovery
Azure Site Recovery replicates data to an Azure storage account, over a public endpoint.
Failover isn't automatic. You initiate failovers with a single click in the portal, or you can use Site Recovery PowerShell to trigger a failover. Failing back is a simple action in the Site Recovery portal.
To automate you could use on-premises Orchestrator or Operations Manager to detect a virtual machine failure, and then trigger the failover using the SDK.


OAuth 2.0 and Azure Active Directory
Azure Active Directory (Azure AD) uses OAuth 2.0 to enable you to authorize access to web applications and web APIs in your Azure AD tenant.
First: register your application with your Azure Active Directory (Azure AD) tenant. This will give you an Application ID for your application, as well as enable it to receive tokens
For Web Applications, provide the Sign-On URL, which is the base URL of your app where users can sign in, e.g. http://localhost:12345.
For Native Applications, provide a Redirect URI, which Azure AD will use to return token responses. Enter a value specific to your application, e.g. http://MyFirstAADApp


Request an authorization code
https://login.microsoftonline.com/{tenantId}/oauth2/authorize
https://login.microsoftonline.com/common/oauth2/authorize
Returns authorization_code


https://login.microsoftonline.com/{tenant}/oauth2/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&response_mode=query
&resource=https%3A%2F%2Fservice.contoso.com%2F
&state=12345


Use common for tenant-independent tokens
response_mode can be query or form_post


Use the authorization code to request an access token
https://login.microsoftonline.com/{tenantId}/oauth2/token
https://login.microsoftonline.com/common/oauth2/token
Request OAuth bearer token providing authorization_code
Returns an access_token and refresh_token


POST /{tenant}/oauth2/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code
&client_id=2d4d11a2-f814-46a7-890a-274a72a7309e
&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrqqf_ZT_p5uEAEJJ_nZ3UmphWygRNy2C3jJ239gV_DBnZ2syeg95Ki-374WHUP-i3yIhv5i-7KU2CEoPXwURQp6IVYMw-DjAOzn7C3JCu5wpngXmbZKtJdWmiBzHpcO2aICJPu1KvJrDLDP20chJBXzVYJtkfjviLNNW7l7Y3ydcHDsBRKZc3GuMQanmcghXPyoDg41g8XbwPudVh7uCmUponBQpIhbuffFP_tbV8SNzsPoFz9CLpBCZagJVXeqWoYMPe2dSsPiLO9Alf_YIe5zpi-zY4C3aLw5g9at35eZTfNd0gBRpR5ojkMIcZZ6IgAA
&redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp%2F
&resource=https%3A%2F%2Fservice.contoso.com%2F
&client_secret=p@ssw0rd


//NOTE: client_secret only required for web apps


Successful response:
{
 "access_token": " ey...CUQ",
 "token_type": "Bearer",
 "expires_in": "3600",
 "expires_on": "1388444763",
 "resource": "https://service.contoso.com/",
 "refresh_token": "A...AA",
 "scope": "https%3A%2F%2Fgraph.microsoft.com%2Fmail.read",
 "id_token": "e...Q."
}


expires_in How long the access token is valid (in seconds).
expires_on The time when the access token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. This value is used to determine the lifetime of cached tokens.
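For example, the expires_on value from the response above can be converted to a UTC timestamp like this (requires .NET 4.6 / PowerShell 5 or later):

```
# expires_on is seconds since the Unix epoch (1970-01-01T00:00:00Z)
$expiresOn = 1388444763
[DateTimeOffset]::FromUnixTimeSeconds($expiresOn).UtcDateTime
```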


Call webapi with access_token in authorization header
Can request a new token providing a refresh_token
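A minimal sketch of such a call (the API URL is hypothetical; $accessToken holds the access_token acquired above):

```
# Present the bearer token in the Authorization header of each API request
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod -Uri 'https://service.contoso.com/api/orders' -Headers $headers
```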


JWT Token Claims:


{
"typ": "JWT",
"alg": "none"
}.
{
"aud": "2d4d11a2-f814-46a7-890a-274a72a7309e",
"iss": "https://sts.windows.net/7fe81447-da57-4385-becb-6de57f21477e/",
"iat": 1388440863,
"nbf": 1388440863,
"exp": 1388444763,
"ver": "1.0",
"tid": "7fe81447-da57-4385-becb-6de57f21477e",
"oid": "68389ae2-62fa-4b18-91fe-53dd109d74f5",
"upn": "frank@contoso.com",
"unique_name": "frank@contoso.com",
"sub": "JWvYdCWPhhlpS1Zsf7yYUxShUwtUm5yzPmw_-jX3fHY",
"family_name": "Miller",
"given_name": "Frank"
}.


iat Issued at time
nbf Not before time. The time when the token becomes effective.


Refresh token
// Line breaks for legibility only


POST /{tenant}/oauth2/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded


client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&refresh_token=OAAABAAAAiL9Kn2Z27UubvWFPbm0gLWQJVzCTE9UkP3pSx1aXxUjq...
&grant_type=refresh_token
&resource=https%3A%2F%2Fservice.contoso.com%2F
&client_secret=JqQX2PNo9bpM0uEihUPzyrh    // NOTE: Only required for web apps


Refresh token response
{
 "token_type": "Bearer",
 "expires_in": "3600",
 "expires_on": "1460404526",
 "resource": "https://service.contoso.com/",
 "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1THdqcHdBSk9NOW4tQSJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0MDg2MywibmJmIjoxMzg4NDQwODYzLCJleHAiOjEzODg0NDQ3NjMsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6IjY4Mzg5YWUyLTYyZmEtNGIxOC05MWZlLTUzZGQxMDlkNzRmNSIsInVwbiI6ImZyYW5rbUBjb250b3NvLmNvbSIsInVuaXF1ZV9uYW1lIjoiZnJhbmttQGNvbnRvc28uY29tIiwic3ViIjoiZGVOcUlqOUlPRTlQV0pXYkhzZnRYdDJFYWJQVmwwQ2o4UUFtZWZSTFY5OCIsImZhbWlseV9uYW1lIjoiTWlsbGVyIiwiZ2l2ZW5fbmFtZSI6IkZyYW5rIiwiYXBwaWQiOiIyZDRkMTFhMi1mODE0LTQ2YTctODkwYS0yNzRhNzJhNzMwOWUiLCJhcHBpZGFjciI6IjAiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJhY3IiOiIxIn0.JZw8jC0gptZxVC-7l5sFkdnJgP3_tRjeQEPgUn28XctVe3QqmheLZw7QVZDPCyGycDWBaqy7FLpSekET_BftDkewRhyHk9FW_KeEz0ch2c3i08NGNDbr6XYGVayNuSesYk5Aw_p3ICRlUV1bqEwk-Jkzs9EEkQg4hbefqJS6yS1HoV_2EsEhpd_wCQpxK89WPs3hLYZETRJtG5kvCCEOvSHXmDE6eTHGTnEgsIk--UlPe275Dvou4gEAwLofhLDQbMSjnlV5VLsjimNBVcSRFShoxmQwBJR_b2011Y5IuD6St5zPnzruBbZYkGNurQK63TJPWmRd3mbJsGM0mf3CUQ",
 "refresh_token": "AwABAAAAv YNqmf9SoAylD1PycGCB90xzZeEDg6oBzOIPfYsbDWNf621pKo2Q3GGTHYlmNfwoc-OlrxK69hkha2CF12azM_NYhgO668yfcUl4VBbiSHZyd1NVZG5QTIOcbObu3qnLutbpadZGAxqjIbMkQ2bQS09fTrjMBtDE3D6kSMIodpCecoANon9b0LATkpitimVCrl PM1KaPlrEqdFSBzjqfTGAMxZGUTdM0t4B4rTfgV29ghDOHRc2B-C_hHeJaJICqjZ3mY2b_YNqmf9SoAylD1PycGCB90xzZeEDg6oBzOIPfYsbDWNf621pKo2Q3GGTHYlmNfwoc-OlrxK69hkha2CF12azM_NYhgO668yfmVCrl-NyfN3oyG4ZCWu18M9-vEou4Sq-1oMDzExgAf61noxzkNiaTecM-Ve5cq6wHqYQjfV9DOz4lbceuYCAA"
}


Azure Active Directory application manifest

Windows Azure AD Single Sign-On
Establish federation between Windows Azure AD and Litware


Password Single Sign-On
Windows Azure AD stores account credentials for users to sign on to Litware


Existing Single Sign-On
Configures Windows Azure AD to support single sign-on to Litware using Active Directory Federation Services or another third-party single sign-on provider


SAML Token
<Subject>
  <NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified">chad.smith@example.com</NameID>
  <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer" />
</Subject>
<Conditions NotBefore="2014-12-19T01:03:14.278Z" NotOnOrAfter="2014-12-19T02:03:14.278Z">
  <AudienceRestriction>
    <Audience>https://tenant.example.com</Audience>
  </AudienceRestriction>
</Conditions>


Select this option to configure password-based single sign-on for a web application that has an HTML sign-in page. Password-based SSO, also referred to as password vaulting, enables you to manage user access and passwords to web applications that don't support identity federation. It is also useful for scenarios where several users need to share a single account, such as to your organization's social media app accounts.


Existing Single Sign-On/Linked sign-on
custom web apps that currently use Active Directory Federation Services (or another federation service) instead of Azure AD for authentication


How do I assign a user to an enterprise app using PowerShell
# Assign the values to the variables
$username = "<Your user's UPN>"
$app_name = "<Your App's display name>"
$app_role_name = "<App role display name>"


# Get the user to assign, and the service principal for the app to assign to
$user = Get-AzureADUser -ObjectId "$username"
$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
$appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }


# Assign the user to the app role
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id


Azure Active Directory B2C: Provide sign-up and sign-in to consumers with Facebook accounts
At Facebook-->
To use Facebook as an identity provider in Azure Active Directory (Azure AD) B2C, you need to create a Facebook application and supply it with the right parameters.
Copy the value of App ID and App Secret. You will need both of them to configure Facebook as an identity provider in your tenant.
Enter https://login.microsoftonline.com/te/{tenant}/oauth2/authresp in the Valid OAuth redirect URIs field in the Client OAuth Settings section.
To make your Facebook application usable by Azure AD B2C, you need to make it publicly available
At Azure Portal-->
Set up the Facebook identity provider by entering the app ID and app secret (of the Facebook application that you created earlier) in the Client ID and Client secret fields, respectively.


Azure AD and Azure AD B2C are separate product offerings and cannot coexist in the same tenant. An Azure AD tenant represents an organization. An Azure AD B2C tenant represents a collection of identities to be used with relying party applications.


Azure AD B2C can't be used to authenticate users for Microsoft Office 365


you can host your application anywhere (in the cloud or on-premises). All it needs to interact with Azure AD B2C is the ability to send and receive HTTP requests on publicly accessible endpoints.


Azure WebSite


Creating an Azure website requires a unique DNS name.
To determine the website locations that are available to your Azure subscription:
Get-AzureWebsiteLocation


To determine if an Azure website name already exists:
Test-AzureName -Website "contoso-web"


To create the website
New-AzureWebsite -Location $wsLocation -Name $wsName


All Azure Websites are created in the azurewebsites.net domain


Every Azure website, by default, includes one deployment slot, referred to as the production deployment slot, which is where the production version of your application is deployed.


You have the option of adding up to four additional deployment slots to your website.


Creating a deployment slot
$wsQASlot = "QA"
New-AzureWebsite -Location $wsLocation -Name $wsName -Slot $wsQASlot


Swapping deployment slots
$wsStaging = "Staging"
$wsProduction = "Production"
Switch-AzureWebsiteSlot -Name $wsName -Slot1 $wsStaging -Slot2 $wsProduction


Publishing a web deployment package
$pkgPath = "E:\Contoso-Web.zip"
Publish-AzureWebsiteProject -Name $wsName -Slot $wsStaging -Package $pkgPath


Deploying an Azure WebJob
$wjPath = "E:\Contoso-WebJob.exe"
$wjName = "Contoso-WebJob"
New-AzureWebsiteJob -Name $wsName -JobName $wjName -JobType Triggered -Slot $wsStaging -JobFile $wjPath


Configuring site settings
$settings = New-Object Hashtable
$settings["Contoso_HR_WebService_URL"] = "https://contoso-webservices/hr"
Set-AzureWebsite $wsName -AppSettings $settings -WebSocketsEnabled $true -ConnectionStrings $connStrs


Associating the custom domain with the website
Set-AzureWebsite -Name "contoso-web" -HostNames @("www.contoso.com", "contoso.com")


To leverage the features of Azure Traffic Manager, you should have two or more deployments of your website.


All Azure Traffic Manager profiles use the shared domain *.trafficmanager.net. Therefore, your DNS name must be unique because it will form the Azure Traffic Manager domain name that you will use when updating your DNS records.


create a Traffic Manager profile
New-AzureTrafficManagerProfile -Name ContosoTM `
-DomainName contoso-web-tm.trafficmanager.net -LoadBalancingMethod Failover `
-MonitorPort 80 -MonitorProtocol Http -MonitorRelativePath "/" -Ttl 30


add an endpoint to traffic manager
$tmProfile = Get-AzureTrafficManagerProfile -Name "ContosoTM"
Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile `
-DomainName "contoso-web-west.azurewebsites.net" -Type AzureWebsite -Status Enabled |
Set-AzureTrafficManagerProfile


Disable a Traffic Manager endpoint
$tmProfile = Get-AzureTrafficManagerProfile -Name "ContosoTM"
Set-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile `
-DomainName "contoso-web-west.azurewebsites.net" -Status Disabled |
Set-AzureTrafficManagerProfile


Configuring handler mappings
$wsName = "contoso-web"
$handlerMapping = New-Object Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.HandlerMapping
$handlerMapping.Extension = "*.php"
$handlerMapping.ScriptProcessor = "d:\home\site\wwwroot\bin\php54\php-cgi.exe"
Set-AzureWebsite -Name $wsName -HandlerMappings $handlerMapping


list the sites in your subscription
azure site list
azure site show contoso-web


Application diagnostic logs can be saved to the website's file system, Azure Table Storage, or Azure Blob Storage. The web server logging in site diagnostics can be saved to the website's file system or Azure Blob Storage.


Enabling diagnostics logs
$wsName = "contoso-web"
Set-AzureWebsite -Name $wsName -RequestTracingEnabled $true -HttpLoggingEnabled $true


Application Diagnostics: D:\Home\LogFiles\Application\
Site Diagnostics (Web Server): D:\Home\LogFiles\http\RawLogs\
Site Diagnostics (Detailed Errors): D:\Home\LogFiles\DetailedErrors\
Site Diagnostics (Failed Request Traces): D:\Home\LogFiles\W3SVC<random#>\


To access the Site Control Manager, open your browser and navigate to: https://<your site name>.scm.azurewebsites.net


download the log files
$wsName = "contoso-web"
Save-AzureWebsiteLog -Name $wsName -Output e:\weblogs.zip


The streaming log service is available for application diagnostic logs and web server logs only.


Viewing streaming logs
Get-AzureWebsiteLog -Name "contoso-web-west" -Tail -Path http
Get-AzureWebsiteLog -Name "contoso-web-west" -Tail -Message Error


Azure Websites support up to two endpoints for endpoint monitoring. Each endpoint can be monitored (or tested) from up to three locations.


To configure backup for a website, you must have an Azure Storage account and a container where you want the backups stored.


The storage account used to back up Azure Websites must belong to the same Azure subscription that the Azure website belongs to.


You cannot restore to the same name as the database that was backed up.


You can scale website instances up and down, but it is not possible to start and stop the website on a schedule


It is not possible to autoscale the instance size for running instances of your website.


When moving a website to a new web hosting plan, the new plan must be in the same region and resource group as the plan it is currently in.


Implement virtual machines
Creating VM (image only)
New-AzureQuickVM -Windows `
-ServiceName $serviceName `
-Name $vmName `
-ImageName $imageName `
-AdminUsername $adminUser `
-Password $password `
-Location $location `
-InstanceSize $size


Create VM
New-AzureVMConfig -Name $vmName `
-InstanceSize $size `
-ImageName $imageName |
Add-AzureProvisioningConfig -Windows `
-AdminUsername $adminUser `
-Password $password |
Add-AzureDataDisk -CreateNew `
-DiskSizeInGB 10 `
-LUN 0 `
-DiskLabel "data" |
Add-AzureEndpoint -Name "SQL" `
-Protocol tcp `
-LocalPort 1433 `
-PublicPort 1433 |
New-AzureVM -ServiceName $serviceName `
-Location $location


You can add at most 50 virtual machines into the same domain name/cloud service. This puts an upper limit on how many virtual machines can be load-balanced, Autoscaled, or configured for availability using availability sets.


To create a new cloud service (or domain name) the name must be unique within the cloudapp.net domain.


By default, the operating system virtual hard disks (.vhd files) for the virtual machine will be created in a container called vhds.


Set a default storage account at the subscription level
Set-AzureSubscription -SubscriptionName $subscriptionName `
-CurrentStorageAccountName $storageAccount


New-AzureVMConfig -ImageName $imageName `
-MediaLocation $osDisk `
-InstanceSize $size `
-Name $vmName |
Add-AzureDataDisk -CreateNew `
-DiskSizeInGB 10 `
-MediaLocation $data1 `
-LUN 0 `
-DiskLabel "data 1" |
Add-AzureDataDisk -CreateNew `
-DiskSizeInGB 10 `
-MediaLocation $data2 `
-LUN 1 `
-DiskLabel "data 2" |
New-AzureVM -ServiceName $serviceName `
-Location $location


create an empty cloud service
New-AzureService -ServiceName $serviceName -Location $location


Adding certificate to cloud service
$certPath = "C:\MyCerts\myCert.pem"
$cert = Get-PfxCertificate -FilePath $certPath
Add-AzureCertificate -CertToDeploy $certPath `
-ServiceName $serviceName


Passing certificate configuration information to azure
$sshKey = New-AzureSSHKey -PublicKey -Fingerprint $cert.Thumbprint `
-Path "/home/$linuxUser/.ssh/authorized_keys"


Passing public key configuration to the provisioning configuration
Add-AzureProvisioningConfig -SSHPublicKeys $sshKey (other parameters)


To disable Windows Update
Add-AzureProvisioningConfig -DisableAutomaticUpdates


Setting the time zone
Add-AzureProvisioningConfig -TimeZone "Tokyo Standard Time" (other parameters)


Deploying certificates
$pfxName = Join-Path $PSScriptRoot "ssl-certificate.pfx"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import($pfxName,$certPassword,'Exportable')
Add-AzureProvisioningConfig -X509Certificates $cert


Resetting password at first log on
Add-AzureProvisioningConfig -ResetPasswordOnFirstLogon (other parameters)


If all virtual machines in the same cloud service are in the StoppedDeallocated state, you can lose the virtual IP (VIP) assigned to your cloud service.


Stopping VM
Stop-AzureVM -ServiceName $serviceName -Name $vmName


Stopping all virtual machines
Get-AzureVM -ServiceName $serviceName | Stop-AzureVM -Force


Starting virtual machines
Use the Start-AzureVM or Restart-AzureVM cmdlets
Start-AzureVM -ServiceName $serviceName -Name $vmName


Deleting a virtual machine (without deleting its hard disks)
Remove-AzureVM -ServiceName $serviceName -Name $vmName
--> To delete the underlying .vhd files as well, add the -DeleteVHD switch


You can also delete the virtual machines by deleting the cloud service itself
Remove-AzureService -ServiceName $serviceName -Force -DeleteAll


Windows-based virtual machines by default will have an endpoint for Remote Desktop Protocol (RDP) and Windows PowerShell remoting (WinRM) enabled. Linux-based virtual machines will have SSH enabled by default.


Getting remote desktop file for remote connection
Get-AzureRemoteDesktopFile -ServiceName $serviceName -Name $vmName -Launch
to save .rdp file
Get-AzureRemoteDesktopFile -ServiceName $serviceName -Name $vmName -LocalPath $path


Generate the WinRM connection URI for the virtual machine
$uri = Get-AzureWinRMUri -ServiceName $serviceName -Name $vmName
Start a remote Windows PowerShell session
$credentials = Get-Credential
Enter-PSSession -ConnectionUri $uri -Credential $credentials


upload a virtual hard disk file to an Azure Storage account.
$storage = "[storage account name]"
$storagePath = "https://$storage.blob.core.windows.net/uploads/myosdisk.vhd"
$sourcePath = "C:\mydisks\myosdisk.vhd"
Add-AzureVhd -Destination $storagePath `
-LocalFilePath $sourcePath


Maximum size for an Azure operating system disk: 127 GB
Maximum size for an Azure data disk: 1023 GB (~1 TB)
Azure does not currently support the VHDX file format
Azure currently supports only fixed-format virtual hard disks (.vhd)


Add-AzureVhd will automatically convert a dynamic virtual hard disk (.vhd) to fixed format during upload.
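As a rough illustration of the constraints above (this is a toy checker, not anything Azure itself runs): a VHD's footer is the last 512 bytes of the file, starting with the "conectix" cookie, with the current size at offset 48 (8 bytes, big-endian) and the disk type at offset 60 (4 bytes, big-endian; 2 = fixed, 3 = dynamic), per the VHD footer layout.

```python
import struct

FIXED, DYNAMIC = 2, 3
MAX_OS_DISK_GB = 127

def check_vhd_footer(footer: bytes, os_disk: bool = True) -> str:
    # Validate the 512-byte footer and apply the classic Azure rules above
    if len(footer) != 512 or footer[0:8] != b"conectix":
        return "not a VHD footer"
    size = struct.unpack(">Q", footer[48:56])[0]       # current size in bytes
    disk_type = struct.unpack(">I", footer[60:64])[0]  # 2=fixed, 3=dynamic
    if disk_type != FIXED:
        return "convert to fixed format first (Add-AzureVhd does this on upload)"
    if os_disk and size > MAX_OS_DISK_GB * 1024**3:
        return "OS disk exceeds 127 GB limit"
    return "ok"

# Build a minimal fake footer for a 30 GB fixed disk
footer = bytearray(512)
footer[0:8] = b"conectix"
footer[48:56] = struct.pack(">Q", 30 * 1024**3)
footer[60:64] = struct.pack(">I", FIXED)
print(check_vhd_footer(bytes(footer)))  # ok
```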


Download vhd file
Save-AzureVhd -Source $storagePath -LocalFilePath $localPath


Copying virtual hard disks between storage accounts and subscriptions
Start-AzureStorageBlobCopy


capture a generalized image
Save-AzureVMImage -ServiceName $serviceName `
-Name $vmName `
-ImageName $imageName `
-ImageLabel $imageLabel `
-OSState Generalized
--> To capture a specialized image, shut down the VM and pass the value Specialized to the -OSState parameter
--> Omit the -OSState parameter to capture a legacy OS image


Updating disk configuration of the image
$diskName = "[data disk name]"
$imageName = "[image name]"
$imageLabel = "[new image label]"
$imageCtx = Get-AzureVMImage $imageName
$config = Get-AzureVMImageDiskConfigSet -ImageContext $imageCtx
Set-AzureVMImageDataDiskConfig -DataDiskName $diskName `
-HostCaching ReadWrite `
-DiskConfig $config
Update-AzureVMImage -ImageName $imageName `
-Label $imageLabel `
-DiskConfig $config


A virtual hard disk file must be registered as a disk or an image in Azure before it can be mounted to an Azure virtual machine.


Use the Get-AzureVMImage and Get-AzureDisk cmdlets to view the existing images and disks associated with your subscription


$storage = "[storage account name]"
$storagePath = "https://$storage.blob.core.windows.net/uploads/myosdisk.vhd"
$diskName = "MyOSDisk"
$label = "MyOSDisk"
Add-AzureDisk -DiskName $diskName -Label $label -MediaLocation $storagePath -OS Windows
--> Omit the -OS parameter when registering a data disk


Linux images are required to have the Azure Agent installed prior to associating the virtual hard disk file with an image.


There are two Azure PowerShell cmdlets for creating a virtual machine image. The Save-AzureVMImage cmdlet creates an image from an existing virtual machine and supports both generalized and specialized image types. The Add-AzureVMImage cmdlet creates an image from an existing virtual hard disk file (operating system disk only) and supports only generalized images.


Add-AzureDataDisk has three separate parameter sets:
-CreateNew for creating a new disk
-Import for referencing an existing disk by name (assumes the disk is already associated)
-ImportFrom for referencing a virtual hard disk file directly in storage (This has the effect of registering the disk and attaching it to the virtual machine at the same time.)
$storagePath = "https://$storage.blob.core.windows.net/uploads/mydatadisk.vhd"
Add-AzureDataDisk -ImportFrom `
-DiskLabel "Data 2" `
-MediaLocation $storagePath `
-LUN 1


Attach a second data disk on the virtual machine as part of an update.
$serviceName = "contoso-vms"
$vmName = "vm1"
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -LUN 1 -DiskLabel "data 2" |
Update-AzureVM


you can delete images and disks using the Remove-AzureVMImage and Remove-AzureDisk cmdlets. Both cmdlets support the DeleteVHD parameter, which will delete the associated virtual hard disk files (.vhd)
Remove-AzureVMImage -ImageName "MyGeneralizedImage" -DeleteVHD
Remove-AzureDisk -DiskName "mydatadisk" -DeleteVHD


Use the Set-AzureVMCustomScriptExtension cmdlet to run a custom script
The script must reside in an Azure Storage account.
Upload the file to storage using the Set-AzureStorageBlobContent cmdlet
To run the script at provisioning time, modify the virtual machine configuration with the Set-AzureVMCustomScriptExtension cmdlet to specify the scripts to run.


$scriptName = "install-active-directory.ps1"
$scriptUri = "http://$storageAccount.blob.core.windows.net/scripts/$scriptName"
...| Set-AzureVMCustomScriptExtension -FileUri $scriptUri -Run $scriptName -Argument "$domain $password" |...


VM update pattern: Get-AzureVM, modify, Update-AzureVM.
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureVMCustomScriptExtension -FileUri $scriptUri -Run $scriptName -Argument "$domain $password" |
Update-AzureVM


To run multiple scripts, make multiple calls to the Set-AzureVMCustomScriptExtension cmdlet


Sample DSC:
configuration ContosoAdvanced
{
    # Import the module that defines custom resources
    Import-DscResource -Module xWebAdministration

    Node "localhost"
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Install the ASP.NET 4.5 role
        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        # Stop an existing website
        xWebsite DefaultSite
        {
            Ensure       = "Present"
            Name         = "Default Web Site"
            State        = "Stopped"
            PhysicalPath = "C:\Inetpub\wwwroot"
            DependsOn    = "[WindowsFeature]IIS"
        }

        # Copy the website content
        File WebContent
        {
            Ensure          = "Present"
            SourcePath      = "\\vmconfig\share\app"
            DestinationPath = "C:\inetpub\contoso"
            Recurse         = $true
            Type            = "Directory"
            DependsOn       = "[WindowsFeature]AspNet45"
        }

        # Create a new website
        xWebsite Fabrikam
        {
            Ensure       = "Present"
            Name         = "Contoso Advanced"
            State        = "Started"
            PhysicalPath = "C:\inetpub\contoso"
            DependsOn    = "[File]WebContent"
        }
    }
}


publish the DSC configuration (including the resources) to an Azure Storage account
Publish-AzureVMDscConfiguration .\ContosoAdvanced.ps1


The .zip file’s name is in the format of scriptname.ps1.zip and by default will be uploaded to the windows-powershell-dsc container in the storage account.
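As a small illustrative helper (the function name and storage account here are hypothetical), the default archive location described above can be expressed as:

```python
# Hypothetical helper mirroring Publish-AzureVMDscConfiguration's default
# naming: <scriptname>.ps1 becomes <scriptname>.ps1.zip, uploaded to the
# windows-powershell-dsc container of the storage account.
def dsc_archive_url(storage_account: str, script_file: str) -> str:
    container = "windows-powershell-dsc"  # default container name
    return (f"https://{storage_account}.blob.core.windows.net/"
            f"{container}/{script_file}.zip")

print(dsc_archive_url("contosostorage", "ContosoAdvanced.ps1"))
# https://contosostorage.blob.core.windows.net/windows-powershell-dsc/ContosoAdvanced.ps1.zip
```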


$configArchive = "ContosoAdvanced.ps1.zip"
$configName = "ContosoAdvanced"
...Set-AzureVMDscExtension -ConfigurationArchive $configArchive -ConfigurationName $configName |...


To update an existing virtual machine, pipe it through Update-AzureVM, as with the custom script extension
$configArchive = "Contoso.ps1.zip"
$configName = "ContosoAdvanced"
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureVMDscExtension -ConfigurationArchive $configArchive -ConfigurationName $configName |
Update-AzureVM


Specifying the parameters in PowerShell data file with .psd1 extension
@{
AllNodes = @(
@{
NodeName = "localhost"
WebsiteName = "ContosoWebApp"
SourcePath = "\\vmconfig\share\app"
DestinationPath = "C:\inetpub\contoso"
}
);
}


...
Node "localhost"
{
...
File WebContent
{
Ensure = "Present"
SourcePath = $Node.SourcePath
DestinationPath = $Node.DestinationPath
Recurse = $true
Type = "Directory"
DependsOn = "[WindowsFeature]AspNet45"
}
# Create a new website
xWebsite WebSite
{
Ensure = "Present"
Name = $Node.WebsiteName
State = "Started"
PhysicalPath = $Node.DestinationPath
DependsOn = "[File]WebContent"
}
...


If the DSC configuration already exists in the Azure Storage account, you can use the Force parameter to overwrite it.


Use the ConfigurationDataPath parameter to specify the data file
-ConfigurationDataPath .\ContosoConfig.psd1 |


view the current DSC extension configuration
Get-AzureVM -ServiceName $serviceName -Name $vmName | Get-AzureVMDscExtension


remove the DSC extension
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Remove-AzureVMDscExtension |
Update-AzureVM


Reset the local administrator name and password, and re-enable Remote Desktop access if it is accidentally disabled
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureVMAccessExtension -UserName $userName -Password $password |
Update-AzureVM


Enable the Puppet extension (can also be enabled on an already provisioned virtual machine)
...| Set-AzureVMPuppetExtension -PuppetMasterServer $puppetServer |...


To use virtual machine extensions like DSC, Puppet, and Chef on Windows, the Azure virtual machine agent must be installed on the virtual machine.


Each cloud service has a unique name that is part of the cloudapp.net domain.


The Azure DNS server does not support the advanced records needed for workloads like Active Directory. In those workloads, deploying your own DNS server is a requirement.


Deploying multiple cloud services on the same virtual network allows virtual machines, even in separate cloud services, to have direct connectivity.


The built-in DNS server does not support name resolution across cloud services, even if the virtual machines are deployed in a virtual network.


Endpoints can allow simple port forwarding from one external port to an internal port on a single virtual machine, or they can be configured to allow load balanced traffic to multiple virtual machines.


create a port forwarded endpoint
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Add-AzureEndpoint -Name "SQL" -Protocol tcp -LocalPort 1433 -PublicPort 1433 |
Update-AzureVM


Modifying endpoint
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureEndpoint -Name "SQL" -Protocol tcp -LocalPort 1433 -PublicPort 2000 |
Update-AzureVM


remove an endpoint
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Remove-AzureEndpoint -Name "SQL" |
Update-AzureVM


You can load balance up to 50 virtual machines in the same cloud service


The Azure load balancer uses the load balanced set name of the endpoint to know which virtual machines to forward the load balanced traffic to. To add additional virtual machines to the load balanced set, create an endpoint on each virtual machine with the same load balanced set name.


To create a load balanced endpoint
$config | Add-AzureEndpoint -Name "WEB" `
-Protocol tcp -LocalPort 80 -PublicPort 80 -LBSetName "LBWEB" -ProbeProtocol tcp -ProbePort 80


To modify a load balanced endpoint on all virtual machines in the set, use the Set-AzureLoadBalancedEndpoint cmdlet and specify the load balanced set name with the LBSetName parameter
Set-AzureLoadBalancedEndpoint -ServiceName $serviceName `
-LBSetName "LBWEB" -ProbeProtocolHTTP -ProbePort 80 -ProbePath "/healthcheck.aspx"


A load balanced endpoint cannot be modified using the Set-AzureEndpoint cmdlet. Use the Set-AzureLoadBalancedEndpoint cmdlet instead.


If the probe does not receive a response (two failures, 15 seconds each by default), the load balancer will take the virtual machine in question out of the load balancer rotation.
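As a rough illustration of that default policy (an assumed simulation for exam arithmetic, not the actual load balancer implementation): probes fire every 15 seconds, and two consecutive failures remove the VM from rotation, i.e. about 30 seconds after responses stop.

```python
# Hypothetical simulation of the default probe policy described above:
# a probe every 15 seconds, removal after two consecutive failures.
PROBE_INTERVAL_S = 15
FAILURES_TO_REMOVE = 2

def seconds_until_removal(probe_results):
    """probe_results: iterable of True (response) / False (no response).
    Returns elapsed seconds when the VM leaves rotation, or None."""
    consecutive = 0
    for i, ok in enumerate(probe_results, start=1):
        consecutive = 0 if ok else consecutive + 1
        if consecutive == FAILURES_TO_REMOVE:
            return i * PROBE_INTERVAL_S
    return None

print(seconds_until_removal([True, True, False, False]))  # 60
```

So a healthy VM that stops responding is pulled from rotation two probe intervals (30 seconds) after its last successful probe.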


Access control lists are ordered lists of rules that either permit or deny a remote network range access to an endpoint.
There is a limit of 50 access control list rules per endpoint.


Setting up an ACL for an endpoint
$permitSubnet1 = "[remote admin IP 1]/32"
$permitSubnet2 = "[remote admin IP 2]/32"
$acl = New-AzureAclConfig
Set-AzureAclConfig -ACL $acl -AddRule Permit -RemoteSubnet $permitSubnet1 -Order 1 -Description "remote admin 1"
Set-AzureAclConfig -ACL $acl -AddRule Permit -RemoteSubnet $permitSubnet2 -Order 2 -Description "remote admin 2"
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureEndpoint -Name "PowerShell" -ACL $acl |
Update-AzureVM


Both the Add-AzureEndpoint and Set-AzureLoadBalancedEndpoint cmdlets support the ACL parameter.
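To make the rule semantics concrete, here is a toy first-match evaluator (an assumed model for illustration only; the real evaluation happens inside the Azure fabric): rules are checked in Order, and when only Permit rules exist, any unmatched source is denied.

```python
import ipaddress

def evaluate_acl(rules, source_ip, default="Deny"):
    """Toy endpoint-ACL evaluation: rules are (order, action, cidr)
    tuples; the lowest-order rule whose subnet contains the source wins."""
    for _, action, cidr in sorted(rules):
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr):
            return action
    return default

# Mirrors the two Permit rules in the example above (addresses are
# made-up documentation IPs standing in for the placeholders)
acl = [(1, "Permit", "203.0.113.10/32"),
       (2, "Permit", "203.0.113.11/32")]
print(evaluate_acl(acl, "203.0.113.10"))  # Permit
print(evaluate_acl(acl, "198.51.100.7"))  # Deny (no rule matches)
```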


create a new reserved IP
$reservedIPName = "WebFarm"
$label = "IP for WebFarm"
$location = "West US"
New-AzureReservedIP -ReservedIPName $reservedIPName -Label $label -Location $location


After the reserved IP has been created, you can only associate it with the cloud service hosting your virtual machines at creation time.


New-AzureQuickVM -ReservedIPName $reservedIPName (other parameters)
New-AzureVM -ReservedIPName $reservedIPName (other parameters)


Delete a reserved IP
Remove-AzureReservedIP


An instance level (or public IP) address is an IP address that is assigned directly to the virtual machine instead of the cloud service. This IP address does not replace the cloud service VIP, but is instead an additional IP that you can use to connect to your virtual machine directly.


Connecting to a virtual machine through its public IP address bypasses the cloud service so there is no need to open endpoints separately.


add a new public IP address
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzurePublicIP -PublicIPName "PassiveFTP" |
Update-AzureVM


extract the new public IP address
Get-AzureVM -ServiceName $serviceName -Name $vmName | Get-AzurePublicIP


remove the public IP address
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Remove-AzurePublicIP -PublicIPName "PassiveFTP" |
Update-AzureVM


Each availability set has five non-configurable update domains and is spread across two fault domains.
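An assumed round-robin placement model helps picture why this protects availability (the real placement is managed by the platform; this sketch just uses the classic defaults of 5 update domains and 2 fault domains):

```python
UPDATE_DOMAINS = 5  # classic default, non-configurable
FAULT_DOMAINS = 2   # classic default

def place(vm_count):
    """Assign each VM an (update domain, fault domain) pair round-robin."""
    return [(i % UPDATE_DOMAINS, i % FAULT_DOMAINS) for i in range(vm_count)]

for i, (ud, fd) in enumerate(place(6)):
    print(f"vm{i}: UD={ud} FD={fd}")
# vm5 wraps around to UD=0, FD=1, so neither a host update walking the
# update domains nor a single rack failure takes down all six instances.
```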


Adding an existing virtual machine to an availability set may cause the virtual machine to restart if it has to move to another physical server.


add an existing virtual machine to an availability set
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureAvailabilitySet -AvailabilitySetName "WebACVSet" |
Update-AzureVM
at provisioning time, use AvailabilitySetName parameter.


To change the size of a virtual machine
$newSize = "A9"
Get-AzureVm -ServiceName $serviceName -Name $vmName |
Set-AzureVMSize -InstanceSize $newSize |
Update-AzureVM


You can cache reads or writes (depending on the configuration) for up to four data disks plus the operating system disk.


The most data disks you can attach to a single virtual machine is 16


An Azure Storage account supports a maximum of 20,000 IOPS.


Storage accounts also have a maximum storage capacity of 500 TB per Azure Storage account.
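A quick worked example combining this limit with the 500 IOPS per standard-tier disk noted earlier (a common exam calculation):

```python
# Capacity planning from the limits above: a standard-tier disk delivers
# up to 500 IOPS and a storage account is capped at 20,000 IOPS, so at
# most 40 heavily used disks should share one storage account.
ACCOUNT_IOPS_LIMIT = 20_000
STANDARD_DISK_IOPS = 500

disks_per_account = ACCOUNT_IOPS_LIMIT // STANDARD_DISK_IOPS
print(disks_per_account)  # 40
```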


Virtual disk using storage pool
New-StoragePool -FriendlyName "VMStoragePool" `
-StorageSubsystemFriendlyName "Storage Spaces*" `
-PhysicalDisks (Get-PhysicalDisk -CanPool $True)
$disks = Get-StoragePool -FriendlyName "VMStoragePool" `
-IsPrimordial $false |
Get-PhysicalDisk
New-VirtualDisk -FriendlyName "VirtualDisk1" `
-ResiliencySettingName Simple `
-NumberOfColumns $disks.Count `
-UseMaximumSize -Interleave 256KB `
-StoragePoolFriendlyName "VMStoragePool"


create a share
$storage = "[storage account name]"
$accountKey = "[storage account key]"
$shareName = "sharedStorage"
Import-Module .\AzureStorageFile.psd1
$ctx = New-AzureStorageContext $storage $accountKey
$s = New-AzureStorageShare $shareName -Context $ctx


The diagnostics extension allows you to configure event logs, a rich set of performance counters, IIS logs, and even application trace logs to be captured and automatically stored in an Azure Storage account.


Storage diagnostic
$configPath="c:\Diagnostics\diagnosticsConfig.xml"
$storageContext = New-AzureStorageContext -StorageAccountName $storage -StorageAccountKey $accountKey
Get-AzureVM -ServiceName $serviceName -Name $vmName |
Set-AzureVMDiagnosticsExtension -DiagnosticsConfigurationPath $configPath `
-Version "1.*" `
-StorageContext $storageContext |
Update-AzureVM -ServiceName $serviceName -Name $vmName


Implement Cloud Services
Setting the role instance count for an existing cloud service role
Set-AzureRole -ServiceName $csName -RoleName $csRole -Slot Staging -Count $roleCount


https://dagsle.gitbooks.io/study-guide-70-533-implementing-microsoft-azure/22-perform-configuration-management.html
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/

Subscription-level activity log
Resource-level diagnostic log
Guest OS-level diagnostic log


Metrics data is captured in several tables in the storage account being monitored.
Logging data is stored in blob storage within the storage account in a container named $logs.


To determine the website locations that are available to your Azure subscription
Get-AzureWebsiteLocation


Detailed help on a cmdlet using the PowerShell cmdlet
Get-Help


To determine if an Azure website name already exists
Test-AzureName -Website "contoso-web"


To create the website
$wsLocation = "West US"
$wsName = "contoso-web"
New-AzureWebsite -Location $wsLocation -Name $wsName


All Azure Websites are created in the azurewebsites.net domain. If you name your website contoso-web, it will be reachable at contoso-web.azurewebsites.net.


Adding additional deployment slots to an Azure website requires that the website be configured for Standard mode.


Every Azure website, by default, includes one deployment slot, referred to as the production deployment slot


To create a deployment slot
$wsQASlot = "QA"
New-AzureWebsite -Location $wsLocation -Name $wsName -Slot $wsQASlot


App Service plan defines:
Region (West US, East US, etc.)
Number of VM instances
Size of VM instances (Small, Medium, Large)
Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, Isolated, Consumption)


You scale up by changing the pricing tier of the App Service plan that your app belongs to.


If your app depends on other services, such as Azure SQL Database or Azure Storage, you can scale up these resources separately. These resources are not managed by the App Service plan.


Create subscription --> Create resource group --> Create App Service plan --> Create and publish app


SQL Database is an additional Azure service
Create an Azure SQL Database logical server --> Create the SQL database


To allow client connections from your computer (for example, from Visual Studio), you need to create a server firewall rule


automate management of custom domains
az webapp config hostname add \
   --webapp-name <app_name> \
   --resource-group <resource_group_name> \
   --hostname <fully_qualified_domain_name>


Set-AzureRmWebApp `
   -Name <app_name> `
   -ResourceGroupName <resource_group_name> `
   -HostNames @("<fully_qualified_domain_name>","<app_name>.azurewebsites.net")
automate SSL bindings for your web app
upload an exported PFX file
thumbprint=$(az webapp config ssl upload \
   --name <app_name> \
   --resource-group <resource_group_name> \
   --certificate-file <path_to_PFX_file> \
   --certificate-password <PFX_password> \
   --query thumbprint \
   --output tsv)


add an SNI-based SSL binding
az webapp config ssl bind \
   --name <app_name> \
   --resource-group <resource_group_name> \
   --certificate-thumbprint $thumbprint \
   --ssl-type SNI
automate SSL bindings for your web app
New-AzureRmWebAppSSLBinding `
   -WebAppName <app_name> `
   -ResourceGroupName <resource_group_name> `
   -Name <dns_name> `
   -CertificateFilePath <path_to_PFX_file> `
   -CertificatePassword <PFX_password> `
   -SslState SniEnabled
Create an API App
az login
az account set --subscription <name or id>
az group create --name myResourceGroup --location westeurope
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku FREE
{
 "adminSiteName": null,
 "appServicePlanName": "myAppServicePlan",
 "geoRegion": "West Europe",
 "hostingEnvironmentProfile": null,
 "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
 "kind": "app",
 "location": "West Europe",
 "maximumNumberOfWorkers": 1,
 "name": "myAppServicePlan",
 < JSON data removed for brevity. >
 "targetWorkerSizeId": 0,
 "type": "Microsoft.Web/serverfarms",
 "workerTierName": null
}
az webapp create --name <app_name> --resource-group myResourceGroup --plan myAppServicePlan
{
 "availabilityState": "Normal",
 "clientAffinityEnabled": true,
 "clientCertEnabled": false,
 "cloningInfo": null,
 "containerSize": 0,
 "dailyMemoryTimeQuota": 0,
 "defaultHostName": "<app_name>.azurewebsites.net",
 "enabled": true,
 "enabledHostNames": [
   "<app_name>.azurewebsites.net",
   "<app_name>.scm.azurewebsites.net"
 ],
 "gatewaySiteName": null,
 "hostNameSslStates": [
   {
     "hostType": "Standard",
     "name": "<app_name>.azurewebsites.net",
     "sslState": "Disabled",
     "thumbprint": null,
     "toUpdate": null,
     "virtualIp": null
   }
   < JSON data removed for brevity. >
}
az webapp deployment user set --user-name <username> --password <password>
az webapp deployment source config-local-git --name <app_name> --resource-group myResourceGroup --query url --output tsv
az group delete --name myResourceGroup


An App Service Environment is a Premium service plan option of Azure App Service that provides a fully isolated and dedicated environment for securely running Azure App Service apps at high scale, including Web Apps, Mobile Apps, and API Apps.
App Service Environments are ideal for application workloads requiring:
Very high scale
Isolation and secure network access


App Service Environments are isolated to running only a single customer's applications, and are always deployed into a virtual network.


Creating the Base ILB ASE
$templatePath="PATH\azuredeploy.json"
$parameterPath="PATH\azuredeploy.parameters.json"
New-AzureRmResourceGroupDeployment -Name "CHANGEME" -ResourceGroupName "YOUR-RG-NAME-HERE" -TemplateFile $templatePath -TemplateParameterFile $parameterPath


Custom APIs and custom connectors are web APIs that use REST for pluggable interfaces, Swagger metadata format for documentation, and JSON as their data exchange format.


Custom APIs let you call APIs that aren't connectors, and provide endpoints that you can call with HTTP + Swagger, Azure API Management, or App Services.


Custom connectors work like custom APIs but also have these attributes:
Registered as Logic Apps Connector resources in Azure.
Appear with icons alongside Microsoft-managed connectors in the Logic Apps Designer.
Available only to the connectors' authors and logic app users who have the same Azure Active Directory tenant and Azure subscription in the region where the logic apps are deployed.


Create Azure load balancer
Create resource group
New-AzureRmResourceGroup `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Location EastUS
Create a public IP address
$publicIP = New-AzureRmPublicIpAddress `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Location EastUS `
 -AllocationMethod Static `
 -Name myPublicIP
Create a load balancer
Create a frontend IP pool
$frontendIP = New-AzureRmLoadBalancerFrontendIpConfig `
 -Name myFrontEndPool `
 -PublicIpAddress $publicIP
Create a backend address pool
$backendPool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name myBackEndPool
$lb = New-AzureRmLoadBalancer `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Name myLoadBalancer `
 -Location EastUS `
 -FrontendIpConfiguration $frontendIP `
 -BackendAddressPool $backendPool
Create a health probe
Add-AzureRmLoadBalancerProbeConfig `
 -Name myHealthProbe `
 -LoadBalancer $lb `
 -Protocol tcp `
 -Port 80 `
 -IntervalInSeconds 15 `
 -ProbeCount 2
Set-AzureRmLoadBalancer -LoadBalancer $lb
Create a load balancer rule
$probe = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name myHealthProbe


Add-AzureRmLoadBalancerRuleConfig `
 -Name myLoadBalancerRule `
 -LoadBalancer $lb `
 -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
 -BackendAddressPool $lb.BackendAddressPools[0] `
 -Protocol Tcp `
 -FrontendPort 80 `
 -BackendPort 80 `
 -Probe $probe
Set-AzureRmLoadBalancer -LoadBalancer $lb
Create network resources
# Create subnet config
$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig `
 -Name mySubnet `
 -AddressPrefix 192.168.1.0/24


# Create the virtual network
$vnet = New-AzureRmVirtualNetwork `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Location EastUS `
 -Name myVnet `
 -AddressPrefix 192.168.0.0/16 `
 -Subnet $subnetConfig
Create a network security group rule
# Create security rule config
$nsgRule = New-AzureRmNetworkSecurityRuleConfig `
 -Name myNetworkSecurityGroupRule `
 -Protocol Tcp `
 -Direction Inbound `
 -Priority 1001 `
 -SourceAddressPrefix * `
 -SourcePortRange * `
 -DestinationAddressPrefix * `
 -DestinationPortRange 80 `
 -Access Allow


# Create the network security group
$nsg = New-AzureRmNetworkSecurityGroup `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Location EastUS `
 -Name myNetworkSecurityGroup `
 -SecurityRules $nsgRule


# Apply the network security group to a subnet
Set-AzureRmVirtualNetworkSubnetConfig `
 -VirtualNetwork $vnet `
 -Name mySubnet `
 -NetworkSecurityGroup $nsg `
 -AddressPrefix 192.168.1.0/24


# Update the virtual network
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet


Create an availability set
$availabilitySet = New-AzureRmAvailabilitySet `
 -ResourceGroupName myResourceGroupLoadBalancer `
 -Name myAvailabilitySet `
 -Location EastUS `
 -Managed `
 -PlatformFaultDomainCount 3 `
 -PlatformUpdateDomainCount 2


Create multiple VMs
$cred = Get-Credential   # local administrator credentials used below
for ($i=1; $i -le 3; $i++)
{
 $vm = New-AzureRmVMConfig `
   -VMName myVM$i `
   -VMSize Standard_D1 `
   -AvailabilitySetId $availabilitySet.Id
 $vm = Set-AzureRmVMOperatingSystem `
   -VM $vm `
   -Windows `
   -ComputerName myVM$i `
   -Credential $cred `
   -ProvisionVMAgent `
   -EnableAutoUpdate
 $vm = Set-AzureRmVMSourceImage `
   -VM $vm `
   -PublisherName MicrosoftWindowsServer `
   -Offer WindowsServer `
   -Skus 2016-Datacenter `
   -Version latest
 $vm = Set-AzureRmVMOSDisk `
   -VM $vm `
   -Name myOsDisk$i `
   -DiskSizeInGB 128 `
   -CreateOption FromImage `
   -Caching ReadWrite
 $nic = Get-AzureRmNetworkInterface `
   -ResourceGroupName myResourceGroupLoadBalancer `
   -Name myNic$i
 $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
 New-AzureRmVM `
   -ResourceGroupName myResourceGroupLoadBalancer `
   -Location EastUS `
   -VM $vm
}


Install IIS with Custom Script Extension
for ($i=1; $i -le 3; $i++)
{
  Set-AzureRmVMExtension `
    -ResourceGroupName myResourceGroupLoadBalancer `
    -ExtensionName IIS `
    -VMName myVM$i `
    -Publisher Microsoft.Compute `
    -ExtensionType CustomScriptExtension `
    -TypeHandlerVersion 1.4 `
    -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
    -Location EastUS
}


Create Kubernetes cluster
az acs create --orchestrator-type kubernetes --resource-group myResourceGroup --name myK8sCluster --generate-ssh-keys
Connect to the cluster
az acs kubernetes get-credentials --resource-group=myResourceGroup --name=myK8sCluster
To verify the connection to your cluster
kubectl get nodes
Run the application on the cluster
kubectl create -f azure-vote.yml
To monitor progress
kubectl get service azure-vote-front --watch
Delete cluster
az group delete --name myResourceGroup --yes --no-wait
List images in registry
az acr repository list --name <acrName> --output table
Scale an Azure Container Service Cluster
az acs scale --resource-group myResourceGroup --name myK8sCluster --new-agent-count 5


Azure Storage Service Encryption (SSE)
Azure Storage automatically encrypts your data prior to persisting to storage and decrypts prior to retrieval


Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB 3.0


All data is encrypted using 256-bit AES encryption


SSE can be used for Azure Blob Storage and File Storage


For Azure services, Azure Key Vault is the recommended key storage solution


Permissions to use the keys stored in Azure Key Vault, either to manage or to access them for Encryption at Rest encryption and decryption, can be given to Azure Active Directory accounts.


Azure encryptions at rest models use a key hierarchy made up of the following types of keys:
-->Data Encryption Key (DEK) – A symmetric AES-256 key used to encrypt a partition or block of data. A single resource may have many partitions and many data encryption keys. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key.
-->Key Encryption Key (KEK) – An asymmetric encryption key used to encrypt the data encryption keys. Using a key encryption key allows the data encryption keys themselves to be encrypted and controlled. The entity that has access to the KEK may be different from the entity that requires the DEK, which lets an entity broker access to the DEKs and limit each DEK to a specific partition. Since the KEK is required to decrypt the DEKs, deleting the KEK effectively deletes all of the DEKs at once.


To provide the ability to use your own encryption keys, SSE for Blob storage is integrated with Azure Key Vault (AKV). You can create your own encryption keys and store them in AKV, or you can use AKV’s APIs to generate encryption keys. Not only does AKV allow you to manage and control your keys, it also enables you to audit your key usage.


Azure Storage provides a comprehensive set of security capabilities:
--> The storage account can be secured using Role-Based Access Control and Azure Active Directory.
--> Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB 3.0.
--> Data can be set to be automatically encrypted when written to Azure Storage using Storage Service Encryption.
--> OS and Data disks used by virtual machines can be set to be encrypted using Azure Disk Encryption.
--> Delegated access to the data objects in Azure Storage can be granted using Shared Access Signatures.
--> The authentication method used by someone when they access storage can be tracked using Storage analytics.


For identity management and authentication, Data Lake Store uses Azure Active Directory


Data Lake Store separates authorization for account-related and data-related activities in the following manner:
-->Role-based access control (RBAC) provided by Azure for account management
-->POSIX ACL for accessing data in the store


Data Lake Store supports POSIX-style read (r), write (w), and execute (x) permissions on resources for the Owner role, for the Owners group, and for other users and groups. In the Data Lake Store Public Preview (the current release), ACLs can be enabled on the root folder, on subfolders, and on individual files.
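A minimal illustration of how POSIX-style rwx bits are evaluated (an assumed simplification covering only owner/group/other bits, not extended ACL entries):

```python
def can(mode: int, who: str, perm: str) -> bool:
    """Check an rwx permission bit in a POSIX mode like 0o750."""
    shift = {"owner": 6, "group": 3, "other": 0}[who]
    bit = {"r": 4, "w": 2, "x": 1}[perm]
    return bool((mode >> shift) & bit)

mode = 0o750  # rwxr-x---
print(can(mode, "owner", "w"))  # True
print(can(mode, "group", "w"))  # False
print(can(mode, "other", "r"))  # False
```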


In Data Lake Store, you can establish firewalls and define an IP address range for your trusted clients.


Azure Data Lake Store protects your data throughout its life cycle. For data in transit, Data Lake Store uses the industry-standard Transport Layer Security (TLS) protocol to secure data over the network.


Data Lake Store also provides encryption for data that is stored in the account. You can choose to have your data encrypted or opt for no encryption.


You can use auditing or diagnostic logs, depending on whether you are looking for logs for management-related activities or data-related activities.
--> Management-related activities use Azure Resource Manager APIs and are surfaced in the Azure portal via audit logs.
--> Data-related activities use WebHDFS REST APIs and are surfaced in the Azure portal via diagnostic logs.


Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks.
To use Storage Analytics, you must enable it individually for each service you want to monitor
Storage Analytics has a 20 TB limit on the amount of stored data that is independent of the total limit for your storage account.
Storage Analytics Logging is not available for Azure Files.
All logs are stored in block blobs in a container named $logs, which is automatically created when Storage Analytics is enabled for a storage account. The $logs container is located in the blob namespace of the storage account, for example: http://<accountname>.blob.core.windows.net/$logs.
This container cannot be deleted once Storage Analytics has been enabled, though its contents can be deleted.
Each log will be written in the following format:
<service-name>/YYYY/MM/DD/hhmm/<counter>.log
Ex: blob/2011/07/31/1800/000001.log
To Access: https://<accountname>.blob.core.windows.net/$logs/blob/2011/07/31/1800/000001.log
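The log path format above can be rebuilt with a small helper (the function name is made up for illustration); note that the hhmm bucket always uses minutes 00:

```python
from datetime import datetime

def analytics_log_url(account: str, service: str, when: datetime, counter: int) -> str:
    # Logs land in the $logs container under
    # <service-name>/YYYY/MM/DD/hhmm/<counter>.log, where the time
    # bucket is the start of the hour (minutes are always 00).
    path = "{svc}/{t:%Y/%m/%d/%H}00/{c:06d}.log".format(svc=service, t=when, c=counter)
    return "https://{acct}.blob.core.windows.net/$logs/{p}".format(acct=account, p=path)

print(analytics_log_url("myaccount", "blob", datetime(2011, 7, 31, 18, 22), 1))
# https://myaccount.blob.core.windows.net/$logs/blob/2011/07/31/1800/000001.log
```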


Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests to a storage service.


All metrics data is stored in three tables, one reserved for each storage service:
$MetricsTransactionsBlob
$MetricsTransactionsTable
$MetricsTransactionsQueue


Load Balancer differences
There are different options to distribute network traffic using Microsoft Azure. These options work differently from each other: each has a different feature set and supports different scenarios. They can be used in isolation or combined.
-->Azure Load Balancer works at the transport layer (Layer 4 in the OSI network reference stack). It provides network-level distribution of traffic across instances of an application running in the same Azure data center.
-->Application Gateway works at the application layer (Layer 7 in the OSI network reference stack). It acts as a reverse-proxy service, terminating the client connection and forwarding requests to back-end endpoints.
-->Traffic Manager works at the DNS level. It uses DNS responses to direct end-user traffic to globally distributed endpoints. Clients then connect to those endpoints directly.
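At Layer 4, Azure Load Balancer maps flows to backend instances using a 5-tuple hash (source IP, source port, destination IP, destination port, protocol), so all packets of one flow reach the same instance. The sketch below illustrates the idea with SHA-256; Azure's actual hash function is internal and differs from this:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    # Hash the 5-tuple and map it onto the backend pool. The same
    # flow always hashes to the same backend instance.
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

pool = ["10.0.2.6", "10.0.2.7"]
print(pick_backend("203.0.113.9", 50123, "10.0.2.5", 80, "tcp", pool))
```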


To deploy a load balancer, the following objects must be created:
-->Front-end IP pool: The private IP address for all incoming network traffic.
--> Back-end address pool: The network interfaces to receive the load-balanced traffic from the front-end IP address.
--> Load balancing rules: The port (source and local) configuration for the load balancer.
--> Probe configuration: The health status probes for virtual machines.
--> Inbound NAT rules: The port rules for direct access to virtual machines.


Configuring load balancer
Select the subscription to use
Choose the resource group for the load balancer
Create the virtual network and IP address for the front-end IP pool
Create the front-end IP pool and back-end address pool
Create the configuration rules, probe, and load balancer
Create the network interfaces


Select the subscription to use
Select-AzureRmSubscription -Subscriptionid "GUID of subscription"
Choose the resource group for the load balancer
New-AzureRmResourceGroup -Name NRP-RG -location "West US"
Create a subnet for the virtual network
$backendSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name LB-Subnet-BE -AddressPrefix 10.0.2.0/24
Create a virtual network.
$vnet= New-AzureRmVirtualNetwork -Name NRPVNet -ResourceGroupName NRP-RG -Location "West US" -AddressPrefix 10.0.0.0/16 -Subnet $backendSubnet
Create a front-end IP pool
$frontendIP = New-AzureRmLoadBalancerFrontendIpConfig -Name LB-Frontend -PrivateIpAddress 10.0.2.5 -SubnetId $vnet.subnets[0].Id
Create a back-end address pool
$beaddresspool= New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LB-backend"
Create the configuration rules
$inboundNATRule1= New-AzureRmLoadBalancerInboundNatRuleConfig -Name "RDP1" -FrontendIpConfiguration $frontendIP -Protocol TCP -FrontendPort 3441 -BackendPort 3389
$inboundNATRule2= New-AzureRmLoadBalancerInboundNatRuleConfig -Name "RDP2" -FrontendIpConfiguration $frontendIP -Protocol TCP -FrontendPort 3442 -BackendPort 3389
$healthProbe = New-AzureRmLoadBalancerProbeConfig -Name "HealthProbe" -RequestPath "HealthProbe.aspx" -Protocol http -Port 80 -IntervalInSeconds 15 -ProbeCount 2
$lbrule = New-AzureRmLoadBalancerRuleConfig -Name "HTTP" -FrontendIpConfiguration $frontendIP -BackendAddressPool $beAddressPool -Probe $healthProbe -Protocol Tcp -FrontendPort 80 -BackendPort 80
Create the load balancer
$NRPLB = New-AzureRmLoadBalancer -ResourceGroupName "NRP-RG" -Name "NRP-LB" -Location "West US" -FrontendIpConfiguration $frontendIP -InboundNatRule $inboundNATRule1,$inboundNatRule2 -LoadBalancingRule $lbrule -BackendAddressPool $beAddressPool -Probe $healthProbe
Create the first network interface
$vnet = Get-AzureRmVirtualNetwork -Name NRPVNet -ResourceGroupName NRP-RG
$backendSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name LB-Subnet-BE -VirtualNetwork $vnet
$backendnic1= New-AzureRmNetworkInterface -ResourceGroupName "NRP-RG" -Name lb-nic1-be -Location "West US" -PrivateIpAddress 10.0.2.6 -Subnet $backendSubnet -LoadBalancerBackendAddressPool $nrplb.BackendAddressPools[0] -LoadBalancerInboundNatRule $nrplb.InboundNatRules[0]
Create the second network interface
$backendnic2= New-AzureRmNetworkInterface -ResourceGroupName "NRP-RG" -Name lb-nic2-be -Location "West US" -PrivateIpAddress 10.0.2.7 -Subnet $backendSubnet -LoadBalancerBackendAddressPool $nrplb.BackendAddressPools[0] -LoadBalancerInboundNatRule $nrplb.InboundNatRules[1]


Create the application gateway configuration objects
# Create a subnet and assign the address space of 10.0.0.0/24
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name subnet01 -AddressPrefix 10.0.0.0/24


# Create a virtual network with the address space of 10.0.0.0/16 and add the subnet
$vnet = New-AzureRmVirtualNetwork -Name ContosoVNET -ResourceGroupName ContosoRG -Location "East US" -AddressPrefix 10.0.0.0/16 -Subnet $subnet


# Retrieve the newly created subnet
$subnet=$vnet.Subnets[0]


# Create a public IP address that is used to connect to the application gateway. Application Gateway does not support custom DNS names on public IP addresses.  If a custom name is required for the public endpoint, a CNAME record should be created to point to the automatically generated DNS name for the public IP address.
$publicip = New-AzureRmPublicIpAddress -ResourceGroupName ContosoRG -name publicIP01 -location "East US" -AllocationMethod Dynamic


# Create a gateway IP configuration. The gateway picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the backend IP pool. Keep in mind that each instance takes one IP address.
$gipconfig = New-AzureRmApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet


# Configure a backend pool with the addresses of your web servers. These backend pool members are all validated to be healthy by probes, whether they are basic probes or custom probes.  Traffic is then routed to them when requests come into the application gateway. Backend pools can be used by multiple rules within the application gateway, which means one backend pool could be used for multiple web applications that reside on the same host.
$pool = New-AzureRmApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 134.170.185.46, 134.170.188.221, 134.170.185.50


# Configure backend http settings to determine the protocol and port that is used when sending traffic to the backend servers. Cookie-based sessions are also determined by the backend HTTP settings.  If enabled, cookie-based session affinity sends traffic to the same backend as previous requests for each packet.
$poolSetting = New-AzureRmApplicationGatewayBackendHttpSettings -Name "besetting01" -Port 80 -Protocol Http -CookieBasedAffinity Disabled -RequestTimeout 120


# Configure a frontend port that is used to connect to the application gateway through the public IP address
$fp = New-AzureRmApplicationGatewayFrontendPort -Name frontendport01  -Port 80


# Configure the frontend IP configuration with the public IP address created earlier.
$fipconfig = New-AzureRmApplicationGatewayFrontendIPConfig -Name fipconfig01 -PublicIPAddress $publicip


# Configure the listener.  The listener is a combination of the front end IP configuration, protocol, and port and is used to receive incoming network traffic.
$listener = New-AzureRmApplicationGatewayHttpListener -Name listener01 -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp


# Configure a basic rule that is used to route traffic to the backend servers. The backend pool settings, listener, and backend pool created in the previous steps make up the rule. Based on the criteria defined traffic is routed to the appropriate backend.
$rule = New-AzureRmApplicationGatewayRequestRoutingRule -Name rule01 -RuleType Basic -BackendHttpSettings $poolSetting -HttpListener $listener -BackendAddressPool $pool


# Configure the SKU for the application gateway; this determines the size and whether or not WAF is used.
$sku = New-AzureRmApplicationGatewaySku -Name Standard_Small -Tier Standard -Capacity 2


# Create the application gateway
$appgw = New-AzureRmApplicationGateway -Name ContosoAppGateway -ResourceGroupName ContosoRG -Location "East US" -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting -FrontendIpConfigurations $fipconfig  -GatewayIpConfigurations $gipconfig -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku


Delete the application gateway
# Retrieve the application gateway
$gw = Get-AzureRmApplicationGateway -Name ContosoAppGateway -ResourceGroupName ContosoRG


# Stops the application gateway
Stop-AzureRmApplicationGateway -ApplicationGateway $gw


# Once the application gateway is in a stopped state, use the `Remove-AzureRmApplicationGateway` cmdlet to remove the service.
Remove-AzureRmApplicationGateway -Name ContosoAppGateway -ResourceGroupName ContosoRG -Force


Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table.


Create a route table for the Public subnet.
$routeTablePublic = New-AzureRmRouteTable `
 -Name 'myRouteTable-Public' `
 -ResourceGroupName $rgName `
 -location $location `
 -Route $routePrivate
Associate the route table to the Public subnet.
Set-AzureRmVirtualNetworkSubnetConfig `
 -VirtualNetwork $virtualNetwork `
 -Name 'Public' `
 -AddressPrefix 10.0.0.0/24 `
 -RouteTable $routeTablePublic | `
Set-AzureRmVirtualNetwork
Create a route for traffic from the Private subnet
$routePublic = New-AzureRmRouteConfig `
 -Name 'ToPublicSubnet' `
 -AddressPrefix 10.0.0.0/24 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress $nic.IpConfigurations[0].PrivateIpAddress
Create the route table for the Private subnet.
$routeTablePrivate = New-AzureRmRouteTable `
 -Name 'myRouteTable-Private' `
 -ResourceGroupName $rgName `
 -location $location `
 -Route $routePublic
Associate the route table to the Private subnet.
Set-AzureRmVirtualNetworkSubnetConfig `
 -VirtualNetwork $virtualNetwork `
 -Name 'Private' `
 -AddressPrefix 10.0.1.0/24 `
 -RouteTable $routeTablePrivate | `
Set-AzureRmVirtualNetwork
Authentication against Azure AD is done by calling out to Azure AD, located at login.microsoftonline.com. To authenticate, you need to have the following information:


Azure AD Tenant ID (the name of the Azure AD you are using to log in, often the same as your company domain, but not necessarily)
Application ID (taken during the Azure AD application creation step)
Password (that you selected while creating the Azure AD Application)


Generic HTTP Request for Azure AD authentication:
POST /<Azure AD Tenant ID>/oauth2/token?api-version=1.0 HTTP/1.1
Host: login.microsoftonline.com
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded


grant_type=client_credentials&resource=https%3A%2F%2Fmanagement.core.windows.net%2F&client_id=<Application ID>&client_secret=<Password>
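The form-encoded body above can be built programmatically; `urlencode` takes care of the percent-encoding of the resource URI (the tenant, application ID, and secret are placeholders):

```python
from urllib.parse import urlencode

# Placeholder credential values for illustration.
body = urlencode({
    "grant_type": "client_credentials",
    "resource": "https://management.core.windows.net/",
    "client_id": "<Application ID>",
    "client_secret": "<Password>",
})
url = "https://login.microsoftonline.com/{tenant}/oauth2/token?api-version=1.0".format(
    tenant="<Azure AD Tenant ID>")
print(body)
```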


Response:
{
 "token_type": "Bearer",
 "expires_in": "3600",
 "expires_on": "1448199959",
 "not_before": "1448196059",
 "resource": "https://management.core.windows.net/",
 "access_token": "eyJ0eXAiOiJKV1QiLCJhb...86U3JI_0InPUk_lZqWvKiEWsayA"
}


Generating access token using PowerShell:
Invoke-RestMethod -Uri https://login.microsoftonline.com/<Azure AD Tenant ID>/oauth2/token?api-version=1.0 -Method Post `
 -Body @{"grant_type" = "client_credentials"; "resource" = "https://management.core.windows.net/"; "client_id" = "<application id>"; "client_secret" = "<password you selected for authentication>" }
List all subscriptions
GET /subscriptions?api-version=2015-01-01 HTTP/1.1
Host: management.azure.com
Authorization: Bearer YOUR_ACCESS_TOKEN
Content-Type: application/json
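The same request can be prepared with only the Python standard library (the token value is a placeholder taken from the earlier token call):

```python
import urllib.request

def list_subscriptions_request(access_token: str) -> urllib.request.Request:
    # GET against the ARM endpoint; the bearer token obtained from
    # Azure AD goes in the Authorization header.
    return urllib.request.Request(
        "https://management.azure.com/subscriptions?api-version=2015-01-01",
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="GET",
    )

req = list_subscriptions_request("YOUR_ACCESS_TOKEN")
print(req.full_url)
```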


Create a resource group
PUT /subscriptions/SUBSCRIPTION_ID/resourcegroups/RESOURCE_GROUP_NAME?api-version=2015-01-01 HTTP/1.1
Host: management.azure.com
Authorization: Bearer YOUR_ACCESS_TOKEN
Content-Type: application/json


{
 "location": "northeurope",
 "tags": {
   "tagname1": "test-tag"
 }
}


Deploy resources to a resource group using a Resource Manager Template
PUT /subscriptions/SUBSCRIPTION_ID/resourcegroups/RESOURCE_GROUP_NAME/providers/microsoft.resources/deployments/DEPLOYMENT_NAME?api-version=2015-01-01 HTTP/1.1
Host: management.azure.com
Authorization: Bearer YOUR_ACCESS_TOKEN
Content-Type: application/json


{
 "properties": {
   "templateLink": {
     "uri": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-simple-linux-vm/azuredeploy.json",
     "contentVersion": "1.0.0.0"
   },
   "mode": "Incremental",
   "parameters": {
       "newStorageAccountName": {
         "value": "GLOBALLY_UNIQUE_STORAGE_ACCOUNT_NAME"
       },
       "adminUsername": {
         "value": "ADMIN_USER_NAME"
       },
       "adminPassword": {
         "value": "ADMIN_PASSWORD"
       },
       "dnsNameForPublicIP": {
         "value": "DNS_NAME_FOR_PUBLIC_IP"
       },
       "ubuntuOSVersion": {
         "value": "15.04"
       }
   }
 }
}
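The deployment body can also be assembled programmatically, which guarantees valid JSON; the helper below is a sketch with made-up names:

```python
import json

def deployment_body(template_uri: str, parameters: dict) -> str:
    # Wrap a template link and parameter values in the shape the
    # Microsoft.Resources/deployments API expects.
    return json.dumps({
        "properties": {
            "templateLink": {"uri": template_uri, "contentVersion": "1.0.0.0"},
            "mode": "Incremental",
            "parameters": {name: {"value": value} for name, value in parameters.items()},
        }
    }, indent=2)

body = deployment_body(
    "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-simple-linux-vm/azuredeploy.json",
    {"adminUsername": "ADMIN_USER_NAME", "ubuntuOSVersion": "15.04"},
)
print(body)
```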


Start a virtual machine
POST
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Compute/virtualMachines/{vm-name}/start?api-version=2016-03-30


Create storage account
PUT
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Storage/storageAccounts/{storage-name}?api-version=2016-01-01
The request body contains properties for the storage account:
{
 "location": "South Central US",
 "properties": {},
 "sku": { "name": "Standard_LRS" },
 "kind": "Storage"
}
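The resource URL follows the standard ARM pattern (/subscriptions/{id}/resourceGroups/{rg}/providers/{provider}/{type}/{name}); a small helper (hypothetical name) to build it:

```python
def storage_account_put_url(subscription_id: str, resource_group: str, name: str) -> str:
    # Assemble the ARM resource URL for a Microsoft.Storage account,
    # including the api-version query parameter used in this section.
    return ("https://management.azure.com/subscriptions/{sub}"
            "/resourceGroups/{rg}/providers/Microsoft.Storage"
            "/storageAccounts/{name}?api-version=2016-01-01").format(
                sub=subscription_id, rg=resource_group, name=name)

print(storage_account_put_url("SUBSCRIPTION_ID", "ContosoRG", "contosostorage01"))
```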