Some notes/resources I am using for the upcoming 70-532 exam... Share or use at your own risk :) Remember to check back frequently on the
exam website to make sure it is current for you.
Note that this blog post is currently a work in progress...
1. Design and Implement Azure Compute, Web, and Mobile Services (35-40%)
1.1 Design Azure App Service Web Apps
1.1.1 Define and manage App Service plans;
Notes:
Free, Shared: 60/240 CPU minutes/day quota, 1 GB storage
Basic: Custom Domain, Up to 3 instances, 10 GB Storage
Standard: Custom Domain + SSL, Up to 10 instances, 50 GB storage, 5 slots, traffic manager
Premium: Includes IP SSL, Up to 20 instances, 250 GB storage, 20 slots
PremiumV2: Double memory, SSD storage
Isolated: Single Tenant, VNet, Up to 100 instances
99.95% SLA
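For example, the tiers above map to SKU names when creating a plan from the CLI — a minimal sketch (the plan and resource group names are placeholders):
az appservice plan create \
    --name MyPlan \
    --resource-group MyRG \
    --sku S1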
Resources:
1.1.2 configure Web Apps settings, certificates, and custom domains;
Notes:
Settings that are swapped:
General settings - such as framework version, 32/64-bit, Web sockets
App settings (can be configured to stick to a slot)
Connection strings (can be configured to stick to a slot)
Handler mappings
Monitoring and diagnostic settings
WebJobs content
Settings that are not swapped:
Publishing endpoints
Custom Domain Names
SSL certificates and bindings
Scale settings
WebJobs schedulers
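Swapping slots (which carries over the swappable settings above but not the sticky ones) can be scripted — a sketch with placeholder names:
az webapp deployment slot swap \
    --resource-group MyRG \
    --name mywebapp \
    --slot staging \
    --target-slot production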
Resources:
1.1.3 manage Web Apps by using the API, Azure PowerShell, Azure Cloud Shell, and Xplat-CLI;
Notes:
Azure Traffic Manager: Priority, Weighted, Performance, Geographic
$profile = New-AzureRmTrafficManagerProfile `
    -Name MyProfile `
    -ResourceGroupName MyRG `
    -TrafficRoutingMethod Performance `
    -RelativeDnsName contoso `
    -Ttl 30 `
    -MonitorProtocol HTTP `
    -MonitorPort 80 `
    -MonitorPath "/"
Assign endpoint
$webapp1 = Get-AzureRmWebApp -Name webapp1
Add-AzureRmTrafficManagerEndpointConfig `
    -EndpointName webapp1ep `
    -TrafficManagerProfile $profile `
    -Type AzureEndpoints `
    -TargetResourceId $webapp1.Id `
    -EndpointStatus Enabled
$webapp2 = Get-AzureRmWebApp -Name webapp2
Add-AzureRmTrafficManagerEndpointConfig `
    -EndpointName webapp2ep `
    -TrafficManagerProfile $profile `
    -Type AzureEndpoints `
    -TargetResourceId $webapp2.Id `
    -EndpointStatus Enabled
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile
Resources:
1.1.4 implement diagnostics, monitoring, and analytics;
Notes:
Enable application logging to collect diagnostic traces from your web app code. You'll need to turn this on to enable the streaming log feature. This setting turns itself off after 12 hours.
using System.Diagnostics;
...
Trace.TraceInformation("Info");
Trace.TraceError("Error");
Trace.TraceWarning("Warn");
...
Download Logs (Powershell)
Save-AzureWebSiteLog -Name webappname
Download Logs (CLI)
az webapp log download \
    --resource-group resourcegroupname \
    --name webappname
Streaming Log (Powershell)
Get-AzureWebSiteLog -Name webappname -Tail
Resources:
1.1.5 design and configure Web Apps for scale and resilience;
Notes:
Auto Scale By (based on metric)
Auto Scale (to specific instance count)
Cool down (in minutes): The amount of time to wait after a scale operation before scaling again. For example, if cooldown is 10 minutes and a scale operation just occurred, Autoscale will not attempt to scale again until after 10 minutes. This is to allow the metrics to stabilize first.
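A hedged PowerShell sketch of a metric-based scale-out rule with a cooldown, using the AzureRM.Insights cmdlets (the subscription, resource group, and plan names are placeholders):
$rule = New-AzureRmAutoscaleRule -MetricName "CpuPercentage" `
    -MetricResourceId "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/serverFarms/<plan>" `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
    -ScaleActionCooldown 00:10:00 -ScaleActionDirection Increase -ScaleActionValue 1
$profile = New-AzureRmAutoscaleProfile -Name "AutoProfile" `
    -DefaultCapacity 2 -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule
Add-AzureRmAutoscaleSetting -Name "MySetting" -ResourceGroup <rg> `
    -Location "West US" `
    -TargetResourceId "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/serverFarms/<plan>" `
    -AutoscaleProfile $profile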
Resources:
1.1.6 use Azure Managed Service Identity to access other Azure AD-protected resources including Azure Key Vault
Notes:
public async Task<ActionResult> Index()
{
    var provider = new AzureServiceTokenProvider();
    var kvc = new KeyVaultClient(
        new KeyVaultClient.AuthenticationCallback(provider.KeyVaultTokenCallback));
    var url = "https://keyvaultname.vault.azure.net/secrets/secret";
    var secret = await kvc.GetSecretAsync(url).ConfigureAwait(false);
    ViewBag.Secret = secret.Value;
    ViewBag.Principal = provider.PrincipalUsed;
    return View();
}
Resources:
1.2 Implement Azure Functions and WebJobs
1.2.1 Create Azure Functions;
Notes:
az functionapp create \
    --deployment-source-url https://foo \
    --resource-group myResourceGroup \
    --consumption-plan-location westeurope \
    --name <app_name> \
    --storage-account <storage_name>
Resources:
1.2.2 implement a webhook Function;
Notes:
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;
namespace Train.Funcs
{
public static class WebhookTrigger
{
[FunctionName("WebhookTrigger")]
public static async Task<object> Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
log.Info("Webhook was triggered!");
string jsonContent = await req.Content.ReadAsStringAsync();
dynamic data = JsonConvert.DeserializeObject(jsonContent);
if (data.first == null || data.last == null)
{
return req.CreateResponse(HttpStatusCode.BadRequest, new
{
error = "Please pass first/last properties in the input object"
});
}
return req.CreateResponse(HttpStatusCode.OK, new
{
greeting = $"Hello {data.first} {data.last}!"
});
}
}
}
Resources:
1.2.3 create an event processing Function;
Notes:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
namespace Train.Funcs
{
public static class TimerTrigger
{
[FunctionName("TimerTrigger")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log)
{
log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}
}
}
Resources:
1.2.4 implement an Azure-connected Function;
Notes:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
namespace Train.Funcs
{
public static class QueueTrigger
{
[FunctionName("QueueTrigger")]
public static void Run([QueueTrigger("myqueue-items", Connection = "")]string myQueueItem, TraceWriter log)
{
log.Info($"C# Queue trigger function processed: {myQueueItem}");
}
}
}
Resources:
1.2.5 design and implement a custom binding;
Notes:
Bindings must be authored in .NET, but can be consumed from any supported language.
To author an extension, you must perform the following tasks:
Declare an attribute, such as [Blob]. Attributes are how customers consume the binding.
Choose one or more binding rules to support.
Add some converters to make the rules more expressive.
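As a sketch of the first step, a binding attribute declaration might look like this (the SampleAttribute name is hypothetical; [Binding] and [AutoResolve] come from Microsoft.Azure.WebJobs.Description):
using System;
using Microsoft.Azure.WebJobs.Description;

[AttributeUsage(AttributeTargets.Parameter)]
[Binding]
public sealed class SampleAttribute : Attribute
{
    // [AutoResolve] lets the runtime resolve {binding-expressions} in the value
    [AutoResolve]
    public string Path { get; set; }
}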
Resources:
1.2.6 debug a Function;
Notes:
Install:
npm install -g azure-functions-core-tools@core
Create a local Functions project
func init MyFunctionProj
Local settings file
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<connection string>",
    "AzureWebJobsDashboard": "<connection string>"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "*"
  },
  "ConnectionStrings": {
    "SQLConnectionString": "Value"
  }
}
Create project and start debugging locally
func new --language JavaScript --template HttpTrigger --name MyHttpTrigger
func host start
# or, to attach the VS Code debugger:
func host start --debug vscode
Resources:
1.2.7 integrate a Function with storage;
Notes:
Resources:
1.2.8 implement and configure proxies;
Notes:
proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "proxy1": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/{test}"
      },
      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
    }
  }
}
Resources:
1.2.9 integrate with App Service plan;
Notes:
Consumption Plan: On demand based on load
App Service Plan: Dedicated VM instances allocated, function host always running. You can run longer than the maximum execution time allowed on the Consumption plan (of 10 minutes). Support for App Service Environment, VNET/VPN connectivity, and larger VM sizes.
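To attach a function app to a dedicated App Service plan rather than the Consumption plan, pass --plan — a sketch with placeholder names:
az functionapp create \
    --name <app_name> \
    --resource-group <resource-group> \
    --plan <app-service-plan> \
    --storage-account <storage_name>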
Resources:
1.2.10 implement Azure Event Grid-based serverless applications
Notes:
Using a single service, Azure Event Grid manages all routing of events from any source, to any destination, for any application.
Resources:
1.3 Implement API Management
1.3.1 Create managed APIs;
Notes:
Resources:
1.3.2 configure API Management policies;
Notes:
policies - inbound, backend, outbound, on-error
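The four sections apply in that order; a bare policy document looks like:
<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>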
Resources:
1.3.3 protect APIs with rate limits;
Notes:
Policy statement
<rate-limit calls="number" renewal-period="seconds">
    <api name="API name" id="API id" calls="number" renewal-period="seconds">
        <operation name="operation name" id="operation id" calls="number" renewal-period="seconds" />
    </api>
</rate-limit>
Example
<policies>
    <inbound>
        <base />
        <rate-limit calls="20" renewal-period="90" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
Resources:
1.3.4 add caching to improve performance;
Notes:
Policy statement
<cache-lookup vary-by-developer="true | false" vary-by-developer-groups="true | false" downstream-caching-type="none | private | public" must-revalidate="true | false" allow-private-response-caching="@(expression to evaluate)">
    <vary-by-header>Accept</vary-by-header> <!-- should be present in most cases -->
    <vary-by-header>Accept-Charset</vary-by-header> <!-- should be present in most cases -->
    <vary-by-header>Authorization</vary-by-header> <!-- should be present when allow-private-response-caching is "true" -->
    <vary-by-header>header name</vary-by-header> <!-- optional, can be repeated several times -->
    <vary-by-query-parameter>parameter name</vary-by-query-parameter> <!-- optional, can be repeated several times -->
</cache-lookup>
Example
<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true">
            <vary-by-query-parameter>version</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <outbound>
        <cache-store duration="seconds" />
        <base />
    </outbound>
</policies>
Resources:
1.3.5 monitor APIs;
Notes:
Resources:
1.3.6 customize the Developer portal;
Notes:
Resources:
1.3.7 add authentication and authorization to applications by using API Management;
Notes:
<authentication-basic username="username" password="password" />
<authentication-certificate thumbprint="thumbprint" />
Resources:
1.3.8 configure API versions by using API Management;
Notes:
Resources:
1.3.9 implement git-based configuration using API Management
Notes:
git clone https://[name].scm.azure-api.net/
Resources:
1.4 Design Azure App Service API Apps
1.4.1 Create and deploy API Apps;
Notes:
Resources:
1.4.2 automate API discovery by using Swagger and Swashbuckle;
Notes:
Resources:
1.4.3 use Swagger API metadata to generate client code for an API app;
Notes:
Resources:
1.4.4 monitor API Apps
Notes:
Resources:
1.5 Develop Azure Logic Apps
1.5.1 Create a Logic App connecting SaaS services;
Notes:
Resources:
1.5.2 create a Logic App with B2B capabilities;
Notes:
Resources:
1.5.3 create a Logic App with XML capabilities;
Notes:
Resources:
1.5.4 trigger a Logic App from another app;
Notes:
Resources:
1.5.5 create custom and long-running actions;
Notes:
Resources:
1.5.6 monitor Logic Apps;
Notes:
Resources:
1.5.7 integrate a logic app with a function;
Notes:
Resources:
1.5.8 access on-premises data;
Notes:
Supported:
BizTalk Server 2016
DB2
File System
Informix
MQ
MySQL
Oracle Database
PostgreSQL
SAP Application Server
SAP Message Server
SharePoint
SQL Server
Teradata
Resources:
1.5.9 implement Logic Apps with Event Grid
Notes:
Resources:
1.6 Develop Azure App Service Mobile Apps
1.6.1 Create a Mobile App;
Notes:
Resources:
1.6.2 add offline sync to a Mobile App;
Notes:
Resources:
1.6.3 add authentication to a Mobile App;
Notes:
Resources:
1.6.4 add push notifications to a Mobile App
Notes:
Resources:
1.7 Design and implement Azure Service Fabric Applications
1.7.1 Create a Service Fabric application;
Notes:
Resources:
1.7.2 build an Actors-based service;
Notes:
Resources:
1.7.3 add a web front-end to a Service Fabric application;
Notes:
Resources:
1.7.4 monitor and diagnose services;
Notes:
Resources:
1.7.5 migrate apps from cloud services;
Notes:
Resources:
1.7.6 create, secure, upgrade, and scale Service Fabric Cluster in Azure;
Notes:
Resources:
1.7.7 scale a Service Fabric app;
Notes:
Resources:
1.7.8 deploy an application to a Container
Notes:
Resources:
1.8 Design and implement Third Party Platform as a Service (PaaS)
1.8.1 Implement Cloud Foundry;
Notes:
Resources:
1.8.2 implement OpenShift;
Notes:
Resources:
1.8.3 provision applications by using Azure Quickstart Templates;
Notes:
Resources:
1.8.4 build applications that leverage Azure Marketplace solutions and services;
Notes:
Resources:
1.8.5 implement solutions that use Azure Bot Service;
Notes:
Resources:
1.8.6 create Azure Managed Applications;
Notes:
Resources:
1.8.7 implement Docker Swarm applications;
Notes:
Resources:
1.8.8 implement Kubernetes applications
Notes:
Resources:
1.9 Design and Implement DevOps
1.9.1 Instrument an application with telemetry;
Notes:
Resources:
1.9.2 discover application performance issues by using Application Insights;
Notes:
Resources:
1.9.3 deploy Visual Studio Team Services with Continuous integration (CI) and Continuous development (CD);
Notes:
Resources:
1.9.4 deploy CI/CD with third party platform tools (Jenkins, GitHub, Chef, Puppet; TeamCity);
Notes:
Resources:
1.9.5 implement mobile DevOps by using HockeyApp;
Notes:
Resources:
1.9.6 perform root cause analysis using Azure Time Series Insights
Notes:
Resources:
2. Design and Implement a Storage and Data Strategy (25-30%)
2.1 Implement Azure Storage blobs and Azure Files
Data redundancy options
locally redundant storage: multiple copies of your data in one datacenter
zone redundant storage: multiple copies of your data across multiple datacenters or across regions
geographically redundant storage: multiple copies of the data in one region, asynchronously replicated to a second region
read-access geographically redundant storage: allows read access from the second region used for GRS
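The redundancy option is chosen via the SKU when creating an account — a CLI sketch with placeholder names:
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location westus \
    --sku Standard_GRS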
2.1.1 Read data; change data;
Notes:
https://[name].blob.core.windows.net/[container name]
https://[name].table.core.windows.net/[table name]
https://[name].queue.core.windows.net/[queue name]
Query table storage
CloudStorageAccount account =
    CloudStorageAccount.Parse("<ConnectionString>");
CloudTableClient tableClient =
    account.CreateCloudTableClient();
CloudTable table =
    tableClient.GetTableReference("<Table Name>");
var queryText = TableQuery.GenerateFilterCondition(
    "PartitionKey", QueryComparisons.Equal,
    "<Partition>");
TableQuery<Foo> query =
    new TableQuery<Foo>().Where(queryText);
Resources:
2.1.2 set metadata on a storage container;
Notes:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Shouldly;
using Xunit;
namespace Train.AzureStorage.Tests
{
public class BlobStorageTests
{
private readonly CloudStorageAccount _cloudStorageAccount = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
[Fact]
public async Task SetGetContainerMetaData()
{
var client = _cloudStorageAccount.CreateCloudBlobClient();
var foo = client.GetContainerReference("foo");
await foo.CreateIfNotExistsAsync();
foo.Metadata.Add("dataId", "foo12345");
foo.Metadata["expires"] = "5/1/2018";
await foo.SetMetadataAsync();
var fooRead = client.GetContainerReference("foo");
await fooRead.FetchAttributesAsync();
fooRead.Metadata["dataId"].ShouldBe("foo12345");
fooRead.Metadata["expires"].ShouldBe("5/1/2018");
}
[Fact]
public async Task UploadAndDownloadBlobAndSetGetMeta()
{
var client = _cloudStorageAccount.CreateCloudBlobClient();
var foo = client.GetContainerReference("foo");
await foo.CreateIfNotExistsAsync();
var id = Guid.NewGuid().ToString("N");
var refId = $"{id}data1.json";
var data1Ref = foo.GetBlockBlobReference(refId);
await data1Ref.UploadFromFileAsync("data1.json");
data1Ref.Metadata["fileId"] = "f38543";
data1Ref.Metadata.Add("fileExpires", "6/1/2019");
await data1Ref.SetMetadataAsync();
var data1RefRead = foo.GetBlockBlobReference(refId);
await data1RefRead.DownloadToFileAsync("test1.json", FileMode.Create);
File.Exists("test1.json").ShouldBe(true);
await data1RefRead.FetchAttributesAsync();
data1RefRead.Metadata["fileId"].ShouldBe("f38543");
data1RefRead.Metadata["fileExpires"].ShouldBe("6/1/2019");
}
}
}
Resources:
2.1.3 store data using block and page blobs;
Notes:
Each block can be a different size, up to a maximum of 100 MB
A block blob can include up to 50,000 blocks; the maximum size of a block blob is therefore slightly more than 4.75 TB (100 MB x 50,000 blocks)
Page blobs are a collection of 512-byte pages optimized for random read and write operations
An append blob is comprised of blocks and is optimized for append operations
An append blob can include up to 50,000 blocks; the maximum size of an append blob is therefore slightly more than 195 GB (4 MB x 50,000 blocks)
Resources:
2.1.4 stream data using blobs;
Notes:
MemoryStream ms = new MemoryStream();
CloudBlockBlob blockBlob = container.GetBlockBlobReference("<name>");
await blockBlob.DownloadToStreamAsync(ms);
Resources:
2.1.5 access blobs securely;
Notes:
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy();
policy.SharedAccessExpiryTime =
    DateTimeOffset.UtcNow.AddHours(24);
policy.Permissions =
    SharedAccessBlobPermissions.List |
    SharedAccessBlobPermissions.Write;
var sasContainerToken = container.GetSharedAccessSignature(policy);
return container.Uri + sasContainerToken;
Resources:
2.1.6 implement async blob copy;
Notes:
destBlob.StartCopyFromBlob(
    new Uri(srcBlob.Uri.AbsoluteUri + blobToken));
Resources:
2.1.7 configure Content Delivery Network (CDN);
Notes:
Resources:
2.1.8 design blob hierarchies;
Notes:
Resources:
2.1.9 configure custom domains;
Notes:
Resources:
2.1.10 scale blob storage and implement blob tiering;
Notes:
Powershell
Set-AzureRmStorageAccount `
    -ResourceGroupName <resource-group> `
    -AccountName <storage-account> `
    -UpgradeToStorageV2
CLI
az storage account update \
    -g <resource-group> \
    -n <storage-account> \
    --set kind=StorageV2
Resources:
2.1.11 create connections to files from on-premises or cloud-based Windows or Linux machines;
Notes:
$storageContext = New-AzureStorageContext <storage-account-name> <storage-account-key>
$share = New-AzureStorageShare logs `
    -Context $storageContext
Resources:
2.1.12 shard large datasets;
Notes:
Resources:
2.1.13 implement blob leasing;
Notes:
A lease is a lock on a blob for write and delete operations. The lock duration can be 15 to 60 seconds, or can be infinite.
blob.AcquireLease(TimeSpan.FromSeconds(30), leaseId);
Resources:
2.1.14 implement Storage Events;
Notes:
Resources:
2.1.15 implement Azure File Sync
Notes:
CloudFileClient fileClient =
    storageAccount.CreateCloudFileClient();
CloudFileShare share =
    fileClient.GetShareReference("logs");
if (share.Exists())
{
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();
    CloudFileDirectory sampleDir =
        rootDir.GetDirectoryReference("CustomLogs");
    if (sampleDir.Exists())
    {
        CloudFile file = sampleDir.GetFileReference("Log1.txt");
        if (file.Exists())
        {
            Console.WriteLine(file.DownloadTextAsync().Result);
        }
    }
}
Resources:
2.2 Implement Azure storage tables, queues, and Azure Cosmos DB Table API
https://[name].table.cosmosdb.azure.com/[table name]
2.2.1 Implement CRUD with and without transactions;
Notes:
You can perform updates, deletes, and inserts in the same single batch operation.
A single batch operation can include up to 100 entities.
All entities in a single batch operation must have the same partition key.
While it is possible to perform a query as a batch operation, it must be the only operation in the batch.
TableBatchOperation batchOperation = new TableBatchOperation();
...
batchOperation.Insert(entity1);
batchOperation.Insert(entity2);
...
table.ExecuteBatch(batchOperation);
Resources:
2.2.2 design and manage partitions;
Notes:
Resources:
2.2.3 query using OData;
Notes:
// return a single named table
https://myaccount.table.core.windows.net/Tables('MyTable')
// return all entities in the table
https://myaccount.table.core.windows.net/MyTable()
$filter, $top, $select
eq, gt, ge, lt, le, ne
and, not, or
https://myaccount.table.core.windows.net/Customers(PartitionKey='<Part>',RowKey='<Key>')
Resources:
2.2.4 scale tables and partitions;
Notes:
Resources:
2.2.5 add and process queue messages;
Notes:
Create Queue
CloudQueueClient queueClient =
    storageAccount.CreateCloudQueueClient();
CloudQueue queue =
    queueClient.GetQueueReference("myqueue");
queue.CreateIfNotExists();
Add Queue Message
CloudQueueMessage message = new CloudQueueMessage("<MSG>");
queue.AddMessage(message);
Resources:
2.2.6 retrieve a batch of messages;
Notes:
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");
foreach (CloudQueueMessage message in queue.GetMessages(20, TimeSpan.FromMinutes(5)))
{
    // Process all messages in less than 5 minutes, deleting each message after processing.
    queue.DeleteMessage(message);
}
Resources:
2.2.7 scale queues;
Notes:
http://<storage account>.queue.core.windows.net/<queue>
Azure Queue storage scale targets:
Max size of a single queue: 500 TiB
Max size of a message in a queue: 64 KiB
Max number of stored access policies per queue: 5
Maximum request rate per storage account: 20,000 messages per second (assuming 1 KiB message size)
Target throughput for a single queue (1 KiB messages): up to 2,000 messages per second
Resources:
2.2.8 choose between Azure Storage Tables and Azure Cosmos DB Table API
Notes:
Resources:
2.3 Manage access and monitor storage
2.3.1 Generate shared access signatures, including client renewal and data validation;
Notes:
Resources:
2.3.2 create stored access policies;
Notes:
Resources:
2.3.3 regenerate storage account keys;
Notes:
process for rotating your storage access keys:
Update the connection strings in your application code to reference the secondary access key of the storage account.
Regenerate the primary access key for your storage account. On the Access Keys blade, click Regenerate Key1, and then click Yes to confirm that you want to generate a new key.
Update the connection strings in your code to reference the new primary access key.
Regenerate the secondary access key in the same manner.
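The regeneration steps can be scripted — a PowerShell sketch with placeholder names (-KeyName takes key1 or key2):
New-AzureRmStorageAccountKey -ResourceGroupName <resource-group> `
    -Name <storage-account> -KeyName key1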
Resources:
2.3.4 configure and use Cross-Origin Resource Sharing (CORS);
Notes:
<Cors>
    <CorsRule>
        <AllowedOrigins>http://www.contoso.com,http://www.fabrikam.com</AllowedOrigins>
        <AllowedMethods>PUT,GET</AllowedMethods>
        <AllowedHeaders>x-ms-meta-data*,x-ms-meta-target*,x-ms-meta-abc</AllowedHeaders>
        <ExposedHeaders>x-ms-meta-*</ExposedHeaders>
        <MaxAgeInSeconds>200</MaxAgeInSeconds>
    </CorsRule>
</Cors>
Resources:
2.3.5 set retention policies and logging levels;
Notes:
Set-AzureStorageServiceLoggingProperty -ServiceType Queue `
    -LoggingOperations read,write,delete -RetentionDays 5
Switch off logging for the table service:
Set-AzureStorageServiceLoggingProperty -ServiceType Table `
    -LoggingOperations none
Resources:
2.3.6 analyze logs;
Notes:
Get-AzureStorageBlob -Container '$logs' |
where {
    $_.Name -match 'table/2014/05/21/05' -and
    $_.ICloudBlob.Metadata.LogType -match 'write'
} |
foreach {
    "{0} {1} {2} {3}" -f $_.Name,
        $_.ICloudBlob.Metadata.StartTime,
        $_.ICloudBlob.Metadata.EndTime,
        $_.ICloudBlob.Metadata.LogType
}
Resources:
2.3.7 monitor Cosmos DB storage
Notes:
5 ways:
View performance metrics on the Metrics page
View performance metrics by using Azure Monitoring
View performance metrics on the account page
Set up alerts in the portal
Monitor Azure Cosmos DB programmatically
https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/metricDefinitions?api-version=2015-04-08
Query with parameters:
https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/metrics?api-version=2015-04-08&$filter=%28name.value%20eq%20%27Total%20Requests%27%29%20and%20timeGrain%20eq%20duration%27PT5M%27%20and%20startTime%20eq%202016-06-03T03%3A26%3A00.0000000Z%20and%20endTime%20eq%202016-06-10T03%3A26%3A00.0000000Z
Resources:
2.4 Implement Azure SQL Databases
2.4.1 Choose the appropriate database tier and performance level;
Notes:
All 99.99% Uptime SLA
Basic: 7 days backup, CPU: Low, 2.5 IOPS/DTU, 5/10 r/w, 2 GB max, 5 DTU
Standard: 35 days backup, CPU: Low, Medium, High, 2.5 IOPS/DTU, 5/10 r/w, 1 TB max, 3000 DTU
Premium: 35 days backup, CPU: High, 48 IOPS/DTU, 2 r/w, 4 TB max, 4000 DTU
Resources:
2.4.2 configure and perform point in time recovery;
Notes:
Full backups are taken every week
differential backups every day
log backups every 5 minutes
3 ways:
Azure portal
PowerShell
REST API
Powershell:
Get-AzureRmSqlDatabase Gets one or more databases.
Get-AzureRMSqlDeletedDatabaseBackup Gets a deleted database that you can restore.
Get-AzureRmSqlDatabaseGeoBackup Gets a geo-redundant backup of a database.
Restore-AzureRmSqlDatabase Restores a SQL database.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}?api-version=2014-04-01
REST Body:
CreateMode: PointInTimeRestore: Creates a database by restoring a point in time backup of an existing database. sourceDatabaseId must be specified as the resource ID of the existing database, and restorePointInTime must be specified.
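A hedged PowerShell sketch of a point-in-time restore (names and the one-hour offset are placeholders):
$db = Get-AzureRmSqlDatabase -ResourceGroupName <resource-group> `
    -ServerName <server> -DatabaseName <database>
Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-1) `
    -ResourceGroupName <resource-group> -ServerName <server> `
    -TargetDatabaseName <database>-restored `
    -ResourceId $db.ResourceId `
    -Edition Standard -ServiceObjectiveName S2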
Resources:
2.4.3 enable geo-replication;
Notes:
Resources:
2.4.4 import and export data and schema;
Notes:
Resources:
2.4.5 scale Azure SQL databases;
Notes:
Horizontal scaling refers to adding or removing databases in order to adjust capacity or overall performance. This is also called “scaling out”. Sharding, in which data is partitioned across a collection of identically structured databases, is a common way to implement horizontal scaling.
Vertical scaling refers to increasing or decreasing the performance level of an individual database—this is also known as “scaling up.”
Horizontal scaling is managed using the Elastic Database client library.
Vertical scaling is accomplished using Azure PowerShell cmdlets to change the service tier, or by placing databases in an elastic pool
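For example, scaling a database up to Standard S3 from PowerShell — a sketch with placeholder names:
Set-AzureRmSqlDatabase -ResourceGroupName <resource-group> `
    -ServerName <server> -DatabaseName <database> `
    -Edition Standard -RequestedServiceObjectiveName S3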
Resources:
2.4.6 manage elastic pools, including DTUs and eDTUs;
Notes:
The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price.
New-AzureRmSqlElasticPool Creates an elastic database pool on a logical SQL server.
Get-AzureRmSqlElasticPool Gets elastic pools and their property values on a logical SQL server.
Set-AzureRmSqlElasticPool Modifies properties of an elastic database pool on a logical SQL server. For example, use the StorageMB property to modify the max storage of an elastic pool.
Remove-AzureRmSqlElasticPool Deletes an elastic database pool on a logical SQL server.
Get-AzureRmSqlElasticPoolActivity Gets the status of operations on an elastic pool on a logical SQL server.
New-AzureRmSqlDatabase Creates a new database in an existing pool or as a single database.
Get-AzureRmSqlDatabase Gets one or more databases.
Set-AzureRmSqlDatabase Sets properties for a database, or moves an existing database into, out of, or between elastic pools.
Remove-AzureRmSqlDatabase
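Putting two of those cmdlets together — creating a pool and moving an existing database into it (a sketch; names and DTU numbers are placeholders):
New-AzureRmSqlElasticPool -ResourceGroupName <resource-group> `
    -ServerName <server> -ElasticPoolName MyPool `
    -Edition Standard -Dtu 400 -DatabaseDtuMin 10 -DatabaseDtuMax 100
Set-AzureRmSqlDatabase -ResourceGroupName <resource-group> `
    -ServerName <server> -DatabaseName <database> `
    -ElasticPoolName MyPool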
Resources:
2.4.7 manage limits and resource governor;
Notes:
enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use
Resource pools. A resource pool represents the physical resources of the server.
Workload groups. A workload group serves as a container for session requests that have similar classification criteria.
Classification. The classification process assigns incoming sessions to a workload group based on the characteristics of the session.
Create a Resource Pool
CREATE RESOURCE POOL poolAdhoc
WITH (MAX_CPU_PERCENT = 20);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
Create workload group
CREATE WORKLOAD GROUP groupAdhoc
USING poolAdhoc;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
Resources:
2.4.8 implement Azure SQL Data Sync;
Notes:
Resources:
2.4.9 implement graph database functionality in Azure SQL;
Notes:
SELECT Person2.Name
FROM Person Person1, Friends, Person Person2
WHERE MATCH(Person1-(Friends)->Person2)
AND Person1.Name = 'John'
Resources:
2.4.10 design multi-tenant applications;
Notes:
Standalone single-tenant app with single-tenant database
Multi-tenant app with database-per-tenant
Multi-tenant app with multi-tenant databases
Multi-tenant app with sharded multi-tenant databases
Hybrid sharded multi-tenant database model
Resources:
2.4.11 secure and encrypt data;
Notes:
connection string parameters are Encrypt=True and TrustServerCertificate=False
encryption for data in motion with Transport Layer Security
data at rest with Transparent Data Encryption
data in use with Always Encrypted
firewalls prevent all access to your database server by IP
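A typical ADO.NET connection string with those parameters set (server, database, and credentials are placeholders):
Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>;Password=<password>;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;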
Resources:
2.4.12 manage data integrity;
Notes:
Extensive data integrity error alert monitoring
Backup and restore integrity checks
I/O system “lost write” detection
Automatic Page Repair
Data integrity at rest and in transit
Resources:
2.4.13 enable metrics and diagnostics logs for monitoring;
Notes:
Resources:
2.4.14 use adaptive query processing to improve query performance;
Notes:
make workloads automatically eligible for adaptive query processing by enabling compatibility level 140
ALTER DATABASE [Foo] SET COMPATIBILITY_LEVEL = 140;
Resources:
2.4.15 implement sharding and elastic tools;
Notes:
adding a shard and its range to an existing shard map
// sm is a RangeShardMap object.
// Add a new shard to hold the range being added.
Shard shard2 = null;
if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"), out shard2))
{
    shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2"));
}
// Create the mapping and associate it with the new shard
sm.CreateRangeMapping(new RangeMappingCreationInfo<long>
    (new Range<long>(300, 400), shard2, MappingStatus.Online));
splitting a range and assigning the empty portion to a newly added shard
// sm is a RangeShardMap object.
// Add a new shard to hold the range we will move
Shard shard2 = null;
if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"), out shard2))
{
    shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2"));
}
// Split the Range holding Key 25
sm.SplitMapping(sm.GetMappingForKey(25), 25);
// Map new range holding [25-50) to different shard:
// first take existing mapping offline
sm.MarkMappingOffline(sm.GetMappingForKey(25));
// now map while offline to a different shard and take online
RangeMappingUpdate upd = new RangeMappingUpdate();
upd.Shard = shard2;
sm.MarkMappingOnline(sm.UpdateMapping(sm.GetMappingForKey(25), upd));
Nuget: https://www.nuget.org/packages/Microsoft.Azure.SqlDatabase.ElasticScale.Client/
Resources:
2.4.16 implement SQL Server Stretch Database
Notes:
Resources:
2.5 Implement Azure Cosmos DB
2.5.1 Choose a Cosmos DB API surface;
Notes:
SQL API: A schema-less JSON database engine with rich SQL querying capabilities.
MongoDB API
Cassandra API
Graph (Gremlin) API
Table API: A key-value database service built to provide premium capabilities (for example, automatic indexing, guaranteed low latency, global distribution) to existing Azure Table storage applications without making any app changes.
Resources:
2.5.3 create Cosmos DB API databases and collections;
Notes:
POST https://{databaseaccount}.documents.azure.com/dbs
POST https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls
Resources:
2.5.3 query documents;
Notes:
Resources:
2.5.4 run Cosmos DB queries;
Notes:
Resources:
2.5.5 create Graph API databases;
Notes:
DocumentClient client = new DocumentClient(new Uri(endpoint), authKey);
Database database = await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "graphdb" });
DocumentCollection graph = await client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri("graphdb"),
    new DocumentCollection { Id = "graphcollz" },
    new RequestOptions { OfferThroughput = 1000 });
Resources:
2.5.6 execute GraphDB queries;
Notes:
Filters: g.V().hasLabel('person').has('age', gt(40))
Projection: g.V().hasLabel('person').values('firstName')
Find related edges and vertices g.V('thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
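A toy, in-memory sketch of what those Gremlin steps do (this is not the Gremlin client, and the sample vertices are invented): `hasLabel('person').has('age', gt(40))` keeps person vertices older than 40, and `values('firstName')` projects a single property.

```python
# Invented sample vertices standing in for a graph collection.
vertices = [
    {"label": "person", "firstName": "Thomas", "age": 44},
    {"label": "person", "firstName": "Mary", "age": 39},
    {"label": "place", "firstName": None, "age": None},
]

# g.V().hasLabel('person').has('age', gt(40))
over_40 = [v for v in vertices if v["label"] == "person" and v["age"] > 40]

# g.V().hasLabel('person').values('firstName')
first_names = [v["firstName"] for v in vertices if v["label"] == "person"]

print(over_40)
print(first_names)
```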
Resources:
2.5.7 implement MongoDB database;
Notes:
Resources:
2.5.8 manage scaling of Cosmos DB, including managing partitioning, consistency, and RU/m;
Notes:
Resources:
2.5.9 manage multiple regions;
Notes:
Resources:
2.5.10 implement stored procedures;
Notes:
Resources:
2.5.11 implement JavaScript within Cosmos DB;
Notes:
Resources:
2.5.12 access Cosmos DB from REST interface;
Notes:
POST https://contosomarketing.documents.azure.com/dbs/XP0mAA==/colls/XP0mAJ3H-AA=/docs HTTP/1.1
x-ms-documentdb-isquery: True
x-ms-date: Mon, 18 Apr 2015 13:05:49 GMT
authorization: type%3dmaster%26ver%3d1.0%26sig%3dkOU%2bBn2vkvIlHypfE8AA5fulpn8zKjLwdrxBqyg0YGQ%3d
x-ms-version: 2015-12-16
x-ms-query-enable-crosspartition: true
Accept: application/json
Content-Type: application/query+json
Host: contosomarketing.documents.azure.com
Content-Length: 50
{
query: "SELECT * FROM root WHERE (root.Author.id = 'Don')",
parameters: []
}
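The query body above can also be written in parameterized form, which avoids string concatenation of user input. A sketch; the `@author` parameter name is illustrative, but the `parameters` name/value shape is the one the `application/query+json` body uses:

```python
import json

# Same query as above, with the literal replaced by a parameter.
payload = {
    "query": "SELECT * FROM root WHERE (root.Author.id = @author)",
    "parameters": [{"name": "@author", "value": "Don"}],
}
body = json.dumps(payload)
print(body)
```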
Resources:
2.5.13 manage Cosmos DB security
Notes:
Resources:
2.6 Implement Redis caching
2.6.1 Choose a cache tier;
Notes:
Basic: No SLA, 250 MB - 53 GB, Max Clients 256–20,000
Standard: SLA 99.9%
Premium: Data Persistence, Cluster, Scale out to multiple scale units, Azure VNet, 6 GB - 530 GB, Max Clients 7,500–40,000, Geo-replication
Resources:
2.6.2 implement data persistence;
Notes:
RDB (Redis database) persistence: when configured, Azure Redis Cache persists a snapshot of the cache in the Redis binary format to disk, based on a configurable backup frequency
AOF (append-only file) persistence: when configured, Azure Redis Cache saves every write operation to a log that is saved at least once per second into an Azure Storage account
Resources:
2.6.3 implement security and network isolation;
Notes:
Azure Virtual Network (VNet) deployment provides enhanced security and isolation for your Azure Redis Cache, as well as subnets, access control policies, and other features to further restrict access.
Resources:
2.6.4 tune cluster performance;
Notes:
The ability to automatically split your dataset among multiple nodes.
The ability to continue operations when a subset of the nodes is experiencing failures or is unable to communicate with the rest of the cluster.
More throughput: Throughput increases linearly as you increase the number of shards.
More memory size: Increases linearly as you increase the number of shards.
Resources:
2.6.5 integrate Redis caching with ASP.NET session and cache providers;
Notes:
Install-Package Microsoft.Web.RedisSessionStateProvider
<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <!--
      <add name="MySessionStateStore"
        host = "127.0.0.1" [String]
        port = "" [number]
        accessKey = "" [String]
        ssl = "false" [true|false]
        throwOnError = "true" [true|false]
        retryTimeoutInMilliseconds = "0" [number]
        databaseId = "0" [number]
        applicationName = "" [String]
        connectionTimeoutInMilliseconds = "5000" [number]
        operationTimeoutInMilliseconds = "5000" [number]
      />
    -->
    <add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="127.0.0.1" accessKey="" ssl="false" />
  </providers>
</sessionState>
Resources:
2.6.6 implement Redis data types and operations
Notes:
Resources:
2.7 Implement Azure Search
2.7.1 Create a service index; add data;
Notes:
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Newtonsoft.Json;
namespace Train.AzureSearch.Tests
{
public static class BuildSearchFromFileExtensions
{
public static async Task<DocumentIndexResult> BuildSearchDataFromFile<T>(this ISearchServiceClient searchServiceClient, string jsonFileName) where T : class
{
string indexName = GetIndexName<T>();
if (await searchServiceClient.Indexes.ExistsAsync(indexName))
await searchServiceClient.Indexes.DeleteAsync(indexName);
await searchServiceClient.Indexes.CreateAsync(new Index
{
Name = indexName,
Fields = FieldBuilder.BuildForType<T>()
});
var searchIndexClient = searchServiceClient.Indexes.GetClient(indexName);
using (var reader = new StreamReader(jsonFileName))
{
var list = JsonConvert.DeserializeObject<List<T>>(await reader.ReadToEndAsync());
var batch = IndexBatch.Upload(list);
return await searchIndexClient.Documents.IndexAsync(batch);
}
}
public static IDocumentsOperations GetSearchClient<T>(this ISearchServiceClient searchServiceClient)
{
return searchServiceClient.Indexes.GetClient(GetIndexName<T>()).Documents;
}
private static string GetIndexName<T>()
{
return typeof(T).Name.ToLower();
}
}
}
Resources:
2.7.2 search an index; handle search results
Notes:
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using Shouldly;
using Xunit;
namespace Train.AzureSearch.Tests
{
public class AzureSearchTests
{
private readonly IConfigurationRoot _configurationRoot;
public AzureSearchTests()
{
IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
_configurationRoot = builder.Build();
}
private ISearchServiceClient GetSearchServiceClient()
{
return new SearchServiceClient(_configurationRoot["AzureSearchName"], new SearchCredentials(_configurationRoot["AzureSearchApiKey"]));
}
[Fact]
public async Task SearchTextFoundInThreeFieldsIncludingFreeFormInMultipleRows()
{
// Arrange
var searchServiceClient = GetSearchServiceClient();
(await searchServiceClient.BuildSearchDataFromFile<CustomerOrder>("foobar.json")).Results.Count.ShouldBe(5);
// Act
var response = await GetSearchServiceClient().GetSearchClient<CustomerOrder>().SearchAsync("Foo");
// Assert
response.Results.Count.ShouldBe(5);
}
}
}
Resources:
3. Create and Manage Azure Resource Manager Virtual Machines (20-25%)
3.1 Deploy workloads on Azure Resource Manager (ARM) virtual machines (VMs)
3.1.1 Identify workloads that can and cannot be deployed;
Notes:
Resources:
3.1.2 run workloads including Microsoft and Linux;
Notes:
Resources:
3.1.3 create and provision VMs, including custom VM images;
Notes:
Generalize the Windows VM using Sysprep
Deallocate and mark the VM as generalized
Stop-AzureRmVM `
    -ResourceGroupName myResourceGroup `
    -Name myVM -Force
Set-AzureRmVM `
    -ResourceGroupName myResourceGroup `
    -Name myVM -Generalized
Create the image
$vm = Get-AzureRmVM `
    -Name myVM `
    -ResourceGroupName myResourceGroup
$image = New-AzureRmImageConfig `
    -Location EastUS `
    -SourceVirtualMachineId $vm.ID
New-AzureRmImage `
    -Image $image `
    -ImageName myImage `
    -ResourceGroupName myResourceGroup
Create VMs from the image
New-AzureRmVm `
-ResourceGroupName "myResourceGroup" `
-Name "myVMfromImage" `
-ImageName "myImage" `
-Location "East US" `
-VirtualNetworkName "myImageVnet" `
-SubnetName "myImageSubnet" `
-SecurityGroupName "myImageNSG" `
-PublicIpAddressName "myImagePIP" `
-OpenPorts 3389
Resources:
3.1.4 deploy workloads using Terraform
Notes:
Resources:
3.2 Perform configuration management
3.2.1 Automate configuration management by using PowerShell Desired State Configuration (DSC) or VM Agent (custom script extensions);
Notes:
Azure VM Agent is installed by default on any Windows virtual machine deployed from an Azure Gallery image
IIS_Install.ps1
configuration IISInstall
{
node "localhost"
{
WindowsFeature IIS
{
Ensure = "Present"
Name = "Web-Server"
}
}
}
Install
$resourceGroup = "dscVmDemo"
$location = "westus"
$vmName = "myVM"
$storageName = "demostorage"
Publish-AzureRmVMDscConfiguration `
    -ConfigurationPath .\IIS_Install.ps1 `
    -ResourceGroupName $resourceGroup `
    -StorageAccountName $storageName `
    -Force
Set-AzureRmVmDscExtension `
    -Version 2.72 `
    -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -ArchiveStorageAccountName $storageName `
    -ArchiveBlobName IIS_Install.ps1.zip `
    -AutoUpdate:$true `
    -ConfigurationName "IISInstall"
Custom Script Extension downloads and executes scripts on Azure virtual machines.
Set-AzureRmVMCustomScriptExtension `
-ResourceGroupName myResourceGroup `
-VMName myVM `
-Location myLocation `
-FileUri myURL `
-Run 'myScript.ps1' `
-Name DemoScriptExtension
Resources:
3.2.2 enable remote debugging;
Notes:
open port 4022 for the Remote Debugger
Download and Install the remote tools on Windows Server
right-click the Remote Debugger app and choose Run as administrator
remote debugger is now waiting for a connection
In Visual Studio, click Debug | Attach to Process
Set [remote computer name]:4022
Resources:
3.2.3 implement VM template variables to configure VMs
Notes:
Define variable:
"variables": {
    "storageName": "[concat(toLower(parameters('storageNamePrefix')), uniqueString(resourceGroup().id))]"
}
Usage:
"resources": [
    {
        "name": "[variables('storageName')]",
        "type": "Microsoft.Storage/storageAccounts",
Define Complex Variable:
"variables": {
    "environmentSettings": {
        "test": {
            "instanceSize": "Small",
            "instanceCount": 1
        },
        "prod": {
            "instanceSize": "Large",
            "instanceCount": 4
        }
    }
},
Resources:
3.3 Scale ARM VMs
3.3.1 Scale up and scale down VM sizes;
Notes:
Resources:
3.3.2 deploy ARM VM Scale Sets (VMSS);
Notes:
load balancer distributes traffic on TCP port 80, remote desktop traffic on TCP port 3389, and PowerShell remoting on TCP port 5985:
New-AzureRmVmss `
-ResourceGroupName "myResourceGroup" `
-VMScaleSetName "myScaleSet" `
-Location "EastUS" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-Credential $cred
list VM instances:
Get-AzureRmVmssVM `
    -ResourceGroupName "myResourceGroup" `
    -VMScaleSetName "myScaleSet"
List connection info:
$lb = Get-AzureRmLoadBalancer `
    -ResourceGroupName "myResourceGroup" `
    -Name "myLoadBalancer"
Get-AzureRmLoadBalancerInboundNatRuleConfig `
    -LoadBalancer $lb |
    Select-Object Name,Protocol,FrontEndPort,BackEndPort
Resources:
3.3.3 configure ARM VMSS auto-scale
Notes:
Change capacity of scaleset
$vmss = Get-AzureRmVmss `
    -ResourceGroupName "myResourceGroup" `
    -VMScaleSetName "myScaleSet"
$vmss.sku.capacity = 3
Update-AzureRmVmss `
    -ResourceGroupName "myResourceGroup" `
    -Name "myScaleSet" `
    -VirtualMachineScaleSet $vmss
Resources:
3.4 Design and implement ARM VM storage
3.4.1 Configure disk caching;
Notes:
set the disk caching policy on premium storage disks to ReadOnly, ReadWrite, or None
default disk caching policy is ReadOnly for all premium data disks, and ReadWrite for operating system disks
Resources:
3.4.2 plan for storage capacity;
Notes:
Resources:
3.4.3 configure shared storage;
Notes:
Resources:
3.4.4 configure geo-replication;
Notes:
Resources:
3.4.5 implement ARM VMs with Standard and Premium Storage;
Notes:
Resources:
3.4.6 implement Azure Disk Encryption for Windows and Linux ARM VMs;
Notes:
Resources:
3.4.7 implement Azure Disk Storage;
Notes:
You can add data disks to a virtual machine at any time, by attaching the disk to the virtual machine. You can use a VHD that you've uploaded or copied to your storage account, or use an empty VHD that Azure creates for you. Attaching a data disk associates the VHD file with the VM by placing a 'lease' on the VHD so it can't be deleted from storage while it's still attached.
Resources:
3.4.8 implement StorSimple
Notes:
manages storage tasks between an on-premises virtual array running in a hypervisor and Microsoft Azure cloud storage
Resources:
3.5 Monitor ARM VMs
3.5.1 Configure ARM VM monitoring;
Notes:
Add the Azure Diagnostics extension to the VM resource definition
Specify the diagnostics storage account as a parameter
Enable extension:
$vm_resourcegroup = "myvmresourcegroup"
$vm_name = "myvm"
$diagnosticsconfig_path = "DiagnosticsPubConfig.xml"
Set-AzureRmVMDiagnosticsExtension `
    -ResourceGroupName $vm_resourcegroup `
    -VMName $vm_name `
    -DiagnosticsConfigurationPath $diagnosticsconfig_path
Log to Azure Storage:
Set-AzureRmVMDiagnosticsExtension `
    -ResourceGroupName $vm_resourcegroup `
    -VMName $vm_name `
    -DiagnosticsConfigurationPath $diagnosticsconfig_path `
    -StorageAccountName $diagnosticsstorage_name `
    -StorageAccountKey $diagnosticsstorage_key
Resources:
3.5.2 configure alerts;
Notes:
Resources:
3.5.3 configure diagnostic and monitoring storage location;
Notes:
Resources:
3.5.4 enable Application Insights at runtime;
Notes:
Resources:
3.5.5 monitor VM workloads by using Azure Application Insights;
Notes:
Resources:
3.5.6 monitor VMs using Azure Log Analytics and OMS;
Notes:
Resources:
3.5.7 monitor Linux and Windows VMs by using the Azure Diagnostics Extension;
Notes:
Resources:
3.5.8 monitor VMs by using Azure Monitor
Notes:
collect metrics, activity logs, and diagnostic logs
Example of event-types REST API: GET https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$filter={$filter}&$select={$select}
Example of metric definition REST API: GET https://management.azure.com/{resourceUri}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01&metricnamespace={metricnamespace}
Resources:
3.6 Manage ARM VM availability
3.6.1 Configure multiple ARM VMs in an availability set for redundancy;
Notes:
an existing VM can't be added to an availability set; the availability set must be assigned when the VM is created
Create
New-AzureRmAvailabilitySet `
-Location "EastUS" `
-Name "myAvailabilitySet" `
-ResourceGroupName "myResourceGroupAvailability" `
-Sku aligned `
-PlatformFaultDomainCount 2 `
-PlatformUpdateDomainCount 2
Add VMs
for ($i = 1; $i -le 2; $i++)
{
    New-AzureRmVm `
        -ResourceGroupName "myResourceGroupAvailability" `
        -Name "myVM$i" `
        -Location "East US" `
        -VirtualNetworkName "myVnet" `
        -SubnetName "mySubnet" `
        -SecurityGroupName "myNetworkSecurityGroup" `
        -PublicIpAddressName "myPublicIpAddress$i" `
        -AvailabilitySetName "myAvailabilitySet" `
        -Credential $cred
}
Resources:
3.6.2 configure each application tier into separate availability sets;
Notes:
ensures that the VMs you create within the availability set are isolated across multiple physical hardware resources
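As a rough sketch of that isolation idea, instances can be modeled as being spread round-robin across fault domains and update domains. The real placement is decided by the Azure fabric; the domain counts below simply mirror the PlatformFaultDomainCount/PlatformUpdateDomainCount of 2 used in the 3.6.1 example, and the VM names are invented:

```python
# Simplified round-robin model of availability-set placement: consecutive
# instances land in different fault domains (racks/power) and update domains
# (reboot groups), so a single failure or planned update can't take them all.
def place(vm_count, fault_domains=2, update_domains=2):
    return [
        {"vm": f"myVM{i + 1}", "fd": i % fault_domains, "ud": i % update_domains}
        for i in range(vm_count)
    ]

layout = place(4)
print(layout)
```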
Resources:
3.6.3 combine the Load Balancer with availability sets;
Notes:
SKUs: Basic and Standard
timeout and frequency values set in SuccessFailCount
uses health probes to determine which backend pool instance should receive new flows
tcp/udp: source IP address, source port, destination IP address, destination port, and IP protocol number to map flows to available servers
Public Load Balancer
distributes network traffic equally OR
configure session affinity
Internal Load Balancer
only directs traffic to resources that are inside a virtual network
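The 5-tuple mapping above can be sketched as follows: hashing the same flow tuple always selects the same healthy backend. The hash function below is an arbitrary stand-in, not Azure Load Balancer's actual algorithm, and the backend names are invented:

```python
import hashlib

# Hash source IP/port, destination IP/port, and protocol, then map the hash
# onto the backend pool. Illustrates only that an identical 5-tuple is
# consistently routed to the same backend instance.
def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, backends):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["vm1", "vm2", "vm3"]
first = pick_backend("10.0.0.4", 50000, "10.0.0.10", 80, "tcp", backends)
second = pick_backend("10.0.0.4", 50000, "10.0.0.10", 80, "tcp", backends)
print(first, second)
```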
Resources:
3.6.4 perform automated VM maintenance
Notes:
Resources:
3.7 Design and Implement DevTest Labs
3.7.1 Create and manage custom images and formulas;
Notes:
Resources:
3.7.2 configure a lab to include policies and procedures;
Notes:
Resources:
3.7.3 configure cost management;
Notes:
Resources:
3.7.4 secure access to labs;
Notes:
Resources:
3.7.5 use environments in a lab;
Notes:
Resources:
4. Manage Identity, Application, and Network Services (10-15%)
4.1 Integrate an app with Azure Active Directory (AAD)
4.1.1 Develop apps that use WS-federation, OAuth, and SAML-P endpoints;
Notes:
Resources:
4.1.2 query the directory by using Microsoft Graph API, MFA and MFA API
Notes:
Example of checking member's group: POST https://graph.microsoft.com/v1.0/groups/{id}/checkMemberGroups
Example of listing all users: GET https://graph.microsoft.com/v1.0/users
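The two requests above can be assembled like this before sending. Nothing is sent over the network here; the bearer token and group id are placeholders, and the `{id}` in the path is left as in the notes:

```python
# Builds (but does not send) Microsoft Graph requests from the notes.
def graph_request(method, path, token, body=None):
    return {
        "method": method,
        "url": f"https://graph.microsoft.com/v1.0{path}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": body,
    }

list_users = graph_request("GET", "/users", "<access-token>")
check_groups = graph_request(
    "POST", "/groups/{id}/checkMemberGroups", "<access-token>",
    body={"groupIds": ["<group-id>"]})
print(list_users["url"])
```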
Resources:
4.2 Design and implement a messaging strategy
4.2.1 Develop and scale messaging solutions using service bus queues, topics, relays, event hubs, Event Grid, and notification hubs;
Notes:
Resources:
4.2.2 monitor service bus queues, topics, relays, event hubs and notification hubs;
Notes:
Resources:
4.2.3 determine when to use Event Hubs, Service Bus, IoT Hub, Stream Analytics, and Notification Hubs;
Notes:
send messages to the event hub
namespace SampleSender
{
    using System;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs;

    public class Program
    {
        private static EventHubClient eventHubClient;
        private const string EhConnectionString = "{Event Hubs connection string}";
        private const string EhEntityPath = "{Event Hub path/name}";

        public static void Main(string[] args)
        {
            MainAsync(args).GetAwaiter().GetResult();
        }

        private static async Task MainAsync(string[] args)
        {
            // Creates an EventHubsConnectionStringBuilder object from the connection string, and sets the EntityPath.
            // Typically, the connection string should have the entity path in it, but this simple scenario
            // uses the connection string from the namespace.
            var connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
            {
                EntityPath = EhEntityPath
            };
            eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
            await SendMessagesToEventHub(100);
            await eventHubClient.CloseAsync();
            Console.WriteLine("Press ENTER to exit.");
            Console.ReadLine();
        }

        // Creates an event hub client and sends 100 messages to the event hub.
        private static async Task SendMessagesToEventHub(int numMessagesToSend)
        {
            for (var i = 0; i < numMessagesToSend; i++)
            {
                try
                {
                    var message = $"Message {i}";
                    Console.WriteLine($"Sending message: {message}");
                    await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
                }
                catch (Exception exception)
                {
                    Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
                }
                await Task.Delay(10);
            }
            Console.WriteLine($"{numMessagesToSend} messages sent.");
        }
    }
}
Resources:
4.2.4 implement Azure Event Grid
Notes:
event sources that support sending events to Event Grid:
Azure Subscriptions (management operations)
Custom Topics
Event Hubs
IoT Hub
Resource Groups (management operations)
Service Bus
Storage Blob
Storage General-purpose v2 (GPv2)
event handlers that support handling events from Event Grid:
Azure Automation
Azure Functions
Event Hubs
Logic Apps
Microsoft Flow
WebHooks
Resources:
4.3 Develop apps that use AAD B2C and AAD B2B
4.3.1 Design and implement .NET MVC, Web API, and Windows Desktop apps that leverage social identity provider authentication, including Microsoft account, Facebook, Google+, Amazon, and LinkedIn;
Notes:
Premium + Basic Features
Group-based access management/provisioning
Self-Service Password Reset for cloud users
Company Branding (Logon Pages/Access Panel customization)
Application Proxy
SLA
Premium Features
Advanced group features
Self-Service Password Reset/Change/Unlock with on-premises writeback
Device objects two-way synchronization between on-premises directories and Azure AD
Multi-Factor Authentication (Cloud and On-premises (MFA Server))
Resources:
4.3.2 leverage Azure AD B2B to design and implement applications that support partner-managed identities, enforce multi-factor authentication
Notes:
Invitation manager (create an invite to add an external user to the organization)
the invitation is created
the invitation is sent to the invited user (containing an invitation link)
the invited user clicks the invitation link, signs in, and redeems the invitation; the user entity representing the invited user is created
the user is redirected to a specific page after redemption completes
POST https://graph.microsoft.com/v1.0/invitations
Content-type: application/json
Content-length: 551
{
    "invitedUserEmailAddress": "yyy@test.com",
    "inviteRedirectUrl": "https://myapp.com"
}
an admin can require B2B collaboration users to proof up (re-register their MFA methods) again only by using the following PowerShell
$cred = Get-Credential
Connect-MsolService -Credential $cred
Get-MsolUser | where { $_.StrongAuthenticationMethods } |
    select UserPrincipalName, @{n="Methods"; e={($_.StrongAuthenticationMethods).MethodType}}
Reset-MsolStrongAuthenticationMethodByUpn `
    -UserPrincipalName gsamoogle_gmail.com#EXT#@WoodGroveAzureAD.onmicrosoft.com
Resources:
4.4 Manage secrets using Azure Key Vault
4.4.1 Configure Azure Key Vault;
Notes:
New-AzureRmKeyVault `
    -VaultName 'ContosoKeyVault' `
    -ResourceGroupName 'ContosoResourceGroup' `
    -Location 'East US'
Resources:
4.4.2 manage access, including tenants;
Notes:
Change a key vault tenant ID after a subscription move
Select-AzureRmSubscription -SubscriptionId YourSubscriptionID
$vaultResourceId = (Get-AzureRmKeyVault -VaultName myvault).ResourceId
$vault = Get-AzureRmResource -ResourceId $vaultResourceId -ExpandProperties
$vault.Properties.TenantId = (Get-AzureRmContext).Tenant.TenantId
$vault.Properties.AccessPolicies = @()
Set-AzureRmResource -ResourceId $vaultResourceId -Properties $vault.Properties
Resources:
4.4.3 implement HSM protected keys;
Notes:
Step 1: Prepare your Internet-connected workstation
Step 2: Prepare your disconnected workstation
Step 3: Generate your key
Step 4: Prepare your key for transfer
Step 5: Transfer your key to Azure Key Vault
Resources:
4.4.4 manage service limits;
Notes:
When a service threshold is exceeded, Key Vault limits any further requests from that client for a period of time. When this happens, Key Vault returns HTTP status code 429 (Too many requests), and the requests fail. Also, failed requests that return a 429 count towards the throttle limits tracked by Key Vault.
Recommended client-side throttling method
On HTTP error code 429, begin throttling your client using an exponential backoff approach:
Wait 1 second, retry request
If still throttled wait 2 seconds, retry request
If still throttled wait 4 seconds, retry request
If still throttled wait 8 seconds, retry request
If still throttled wait 16 seconds, retry request
At this point, you should not be getting HTTP 429 response codes.
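The backoff sequence above can be sketched as follows. `send` is a stand-in for the real Key Vault call, and the waits are recorded rather than slept so the sketch runs instantly:

```python
# Client-side exponential backoff on HTTP 429 (Too many requests):
# wait 1, 2, 4, 8, 16 seconds between retries, then give up.
def call_with_backoff(send, max_retries=5):
    waits = []
    for attempt in range(max_retries + 1):
        status = send()
        if status != 429:
            return status, waits
        if attempt < max_retries:
            waits.append(2 ** attempt)  # 1, 2, 4, 8, 16
    return 429, waits

# Simulated service that throttles the first three calls, then succeeds.
responses = iter([429, 429, 429, 200])
status, waits = call_with_backoff(lambda: next(responses))
print(status, waits)  # 200 [1, 2, 4]
```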
Resources:
4.4.5 implement logging;
Notes:
an insights-logs-auditevent container is automatically created in your specified storage account, and you can use that same storage account to collect logs for multiple key vaults
you can access your logging information within, at most, 10 minutes of the key vault operation
$kv = Get-AzureRmKeyVault -VaultName 'ContosoKeyVault'
Set-AzureRmDiagnosticSetting `
    -ResourceId $kv.ResourceId `
    -StorageAccountId $sa.Id `
    -Enabled $true `
    -Categories AuditEvent
Resources:
4.4.6 implement key rotation;
Notes:
tell your key vault that this application has access to update secrets in your vault
Set-AzureRmKeyVaultAccessPolicy `
    -VaultName <vaultName> `
    -ServicePrincipalName <applicationIDfromAzureAutomation> `
    -PermissionsToSecrets Set
Add a Runbook. Select Quick Create. Name your runbook and select PowerShell as the runbook type
$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
    "Logging in to Azure..."
    Connect-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
    "Login complete."
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}
# Optionally you may set the following as parameters
$StorageAccountName = <storageAccountName>
$RGName = <storageAccountResourceGroupName>
$VaultName = <keyVaultName>
$SecretName = <keyVaultSecretName>
# Key name. For example key1 or key2 for the storage account
New-AzureRmStorageAccountKey -ResourceGroupName $RGName -Name $StorageAccountName -KeyName "key2" -Verbose
$SAKeys = Get-AzureRmStorageAccountKey -ResourceGroupName $RGName -Name $StorageAccountName
$secretvalue = ConvertTo-SecureString $SAKeys[1].Value -AsPlainText -Force
$secret = Set-AzureKeyVaultSecret -VaultName $VaultName -Name $SecretName -SecretValue $secretvalue
Resources:
4.4.7 store and retrieve app secrets including connection strings, passwords, and cryptographic keys;
Notes:
PowerShell
$Secret = ConvertTo-SecureString -String 'Password' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'Contoso' -Name 'ITSecret' -SecretValue $Secret
Set attributes
$Expires = (Get-Date).AddYears(2).ToUniversalTime()
$Nbf = (Get-Date).ToUniversalTime()
$Tags = @{ 'Severity' = 'medium'; 'HR' = $null }
$ContentType = 'xml'
Set-AzureKeyVaultSecretAttribute -VaultName 'ContosoVault' -Name 'HR' -Expires $Expires -NotBefore $Nbf -ContentType $ContentType -Enable $True -Tag $Tags -PassThru
Nuget:
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Version 3.10.305231913
Install-Package Microsoft.Azure.KeyVault
using Microsoft.Azure.KeyVault;
...
var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(Utils.GetToken));
var sec = kv.GetSecretAsync(<SecretID>).Result.Value;
Resources:
4.4.8 implement Azure Managed Service Identity
Notes:
Azure CLI
az webapp identity assign --name myApp --resource-group myResourceGroup
Azure Resource Manager template
"identity": {
    "type": "SystemAssigned"
}
Resources: