Azure API Mgmt, Websites, Cloud Services, and Batch – Flashcards

question
The information for your cloud service is stored in what 2 configuration files?
answer
ServiceDefinition.csdef - The service definition file defines the runtime settings for your cloud service, including what *roles* are required, *endpoints*, and virtual machine *size*. None of the data stored in this file can be changed while your role is running.

ServiceConfiguration.cscfg - The service configuration file configures how many instances of a role are run and the values of the settings defined for a role. The data stored in this file can be changed while your role is running.

NOTE: To store different values for these settings for how your role runs, you can have multiple service configurations, one for each deployment environment. For example, you can set your storage account connection string to use the local Azure storage emulator in a local service configuration and create another service configuration that uses Azure storage in the cloud.
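As a rough sketch, the pair of files might look like this (role, endpoint, and setting names are hypothetical; the real schemas allow many more elements):

ServiceDefinition.csdef:

  <ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
    <WebRole name="WebRole1" vmsize="Small">
      <Endpoints>
        <!-- Endpoints and VM size are fixed while the role is running -->
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
      </Endpoints>
      <ConfigurationSettings>
        <!-- Declared here; the value lives in the .cscfg -->
        <Setting name="StorageConnectionString" />
      </ConfigurationSettings>
    </WebRole>
  </ServiceDefinition>

ServiceConfiguration.cscfg:

  <ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
    <Role name="WebRole1">
      <!-- Instance count and setting values can change while running -->
      <Instances count="2" />
      <ConfigurationSettings>
        <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
      </ConfigurationSettings>
    </Role>
  </ServiceConfiguration>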
question
What are the 6 key features of API Management?
answer
It provides end-to-end management:
1) Provisioning user roles
2) Creating usage plans and quotas
3) Applying policies for transforming payloads and access restrictions
4) Throttling
5) Analytics
6) Monitoring and alerts
question
Azure offers what two tiers in which you can run your API Management service?
answer
Developer and Standard

Developer - For development, testing, and pilot API programs where high availability is not a concern.
Standard - Can scale your reserved unit count to handle more traffic and provides your API Management service with the most processing power and performance.
question
How are APIs created and configured in Azure?
answer
The API Management console, which is accessed through the Azure management portal. Creating and configuring the API sets up the proxy to your existing web service. When creating the API, you specify a subdomain for the URL ({name}.azure-api.net) and then provide the following information about the existing web service:

API Name - shown in the management portals.
Web Service URL - points to your existing web service.
Web API URL suffix - becomes the last part of the API's public URL. This URL is used by API consumers to send requests to the web service.
Web API URL scheme - determines whether to use http or https (default).

Note: To reach the API Management console, click Management Console in the Azure Portal for your API Management service.
question
Each API Management service instance comes pre-configured with what?
answer
A sample Echo API, which returns the input that was sent to it. You can invoke it with any HTTP verb, and the return value will equal the headers and body that you sent.
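For example, a request along these lines (the service name and key are hypothetical; API Management expects the subscription key in the Ocp-Apim-Subscription-Key header):

  GET https://contoso.azure-api.net/echo/resource?param1=sample HTTP/1.1
  Ocp-Apim-Subscription-Key: {your-subscription-key}

The response echoes back the headers and query parameters that were sent.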
question
An API consists of a set of operations that can be invoked from a client application. How are the API operations connected to your existing web services?
answer
API operations are "proxied" to existing web services.
question
The API Management console has several menu options to choose from; the APIs menu and Security menu options are often used. What are the 5 tabs under the APIs menu used for in the API Management console?
answer
Summary - basic metrics and information about the API.
Settings - view/edit the configuration for an API, including authentication credentials for the back-end service.
Operations - add/manage the API's operations. This validates incoming requests and automatically generates documentation.
Security - configures proxy authentication for the web service implementing the API. Can use either Basic authentication or Mutual certificates authentication. OAuth 2.0 can be used to authorize developer accounts that need access to the APIs.
Issues - view issues reported by the developers using your APIs.
question
How does Mutual certificates authentication work with the API Management service?
answer
The Security menu (not the tab) in API Management allows a certificate, called a client certificate, to be uploaded. The uploaded certificate shows its thumbprint, which is needed to identify which certificate the API should use when you select a certificate on the Security tab (under the APIs menu).
question
How can OAuth 2.0 be configured in API Management?
answer
On the Security menu, the OAuth 2.0 tab allows the configuration of an Identity Provider (IdP).

The Authorization endpoint URL is set up; for example, using Azure AD as the IdP: https://login.windows.net/{tenant}/oauth2/authorize

The Token endpoint URL is set up; for example, using Azure AD as the IdP: https://login.windows.net/{tenant}/oauth2/token

After the authorization server is set up, the Security tab under APIs allows the selection of the authorization server.
question
What allows you to programmatically perform any operation you can manually perform on the developer and publisher API Management portals (e.g. configure your APIs, access analytics data, etc.)?
answer
API Management REST API
question
What does the developer portal allow you to do in the API Management service?
answer
It allows you to see and test your APIs.
question
In API Management, what reduces latency perceived by the API consumers, lowers bandwidth consumption and decreases the load on the HTTP web service implementing the API?
answer
Response Caching
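Caching is configured with policy statements. A minimal sketch with illustrative attribute values (cache-lookup goes in the inbound policy section, cache-store in the outbound section):

  <inbound>
    <!-- Serve a cached response when one exists for this request -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <outbound>
    <!-- Cache the backend response for one hour -->
    <cache-store duration="3600" />
  </outbound>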
question
In Azure API Management, what does a product contain?
answer
One or more APIs
A usage quota
Terms of use
question
What must be done with the "product" before developers can subscribe to the product and begin to use the product's APIs?
answer
The product must be published. A product must be created and have APIs associated with it before publishing. After publishing, the product must be made visible to developers so they can view and subscribe to it.
question
What are the key concepts for API Management?
answer
To use API Management, administrators create APIs. Each API consists of one or more operations, and each API can be added to one or more products. To use an API, developers subscribe to a product that contains that API, and then they can call the API's operations, subject to any usage policies that may be in effect.
question
What access restriction policies can be implemented at the API or individual operation level?
answer
Rate limits
Quotas
IP restrictions

Other categories: Cache Policies, Transformation Policies, and Other Policies (URL rewrite, etc.). See the sketch below for how the access restrictions are expressed.
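As a sketch, such restrictions are declared as policy statements in the inbound section (the numbers and address range here are illustrative):

  <inbound>
    <!-- At most 10 calls per 60 seconds per subscription -->
    <rate-limit calls="10" renewal-period="60" />
    <!-- At most 10,000 calls per week per subscription -->
    <quota calls="10000" renewal-period="604800" />
    <!-- Only accept callers from this address range -->
    <ip-filter action="allow">
      <address-range from="10.1.0.1" to="10.1.0.255" />
    </ip-filter>
  </inbound>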
question
What groups are used to manage the visibility/access of products to developers?
answer
Administrators - Manage API Management service instances, creating the APIs, operations, and products that are used by developers.

Developers - The customers that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API.

Guests - Unauthenticated users, such as prospective customers, visiting the developer portal of an API Management instance fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them.
question
Who represents the user accounts in an API Management service instance?
answer
Developers. Developers can be created or invited to join by administrators, or they can sign up from the developer portal. Each developer is a member of one or more groups and can subscribe to the products that grant visibility to those groups.
question
What are a collection of statements that are executed sequentially on the request or response of an API?
answer
Policies. Policies are a powerful capability of API Management that allows the publisher to change the behavior of the API through configuration. Popular statements include format conversion from XML to JSON, call rate limiting to restrict the number of incoming calls from a developer, and IP restrictions; many other policies are available.
question
What portal is where developers can learn about your APIs, view and call operations, and subscribe to products?
answer
Developer portal Prospective customers can visit the developer portal, view APIs and operations, and sign up.
question
In API Management, new APIs can be created and the operations added manually, or the API can be imported along with the operations in one step. What 2 formats can be used for the import?
answer
WADL
Swagger
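For instance, a minimal Swagger document for import might look like the following (names and paths are hypothetical; WADL is the equivalent XML-based description format):

  {
    "swagger": "2.0",
    "info": { "title": "Echo API", "version": "1.0" },
    "host": "echo.contoso.com",
    "basePath": "/api",
    "paths": {
      "/resource": {
        "get": {
          "operationId": "GetResource",
          "responses": { "200": { "description": "OK" } }
        }
      }
    }
  }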
question
What are the steps in publishing an ASP.NET REST service API to Azure?
answer
1) Create a website in Azure and add a SQL Database to it.
2) Use VS to create an MVC Web Application. During creation, specify the type of authentication and select the option to host in the Azure cloud.
3) Test locally, then right-click the solution and choose Publish.
4) In the publishing wizard, import the Azure Website profile and use Validate Connection to ensure the profile settings are correct.
5) Choose whether to deploy a Release or Debug version to the cloud, then click Publish.
question
What are the 3 ways a webjob can be run in Azure Websites?
answer
Non-continuously: on demand or on a schedule
Continuously

There is no additional cost to use Microsoft Azure WebJobs with an Azure Website. If you configure a recurring job and set the recurrence frequency to a number of minutes, the Azure Scheduler service is not free; other frequencies (hours, days, and so forth) are free.

If you deploy a WebJob and later change the run mode from continuous to non-continuous or vice versa, Visual Studio creates a new WebJob in Azure when you redeploy. If you change other scheduling settings but leave the run mode the same, or switch between Scheduled and On Demand, Visual Studio updates the existing job rather than creating a new one.
question
What can be used to simplify the task of writing code that runs as a WebJob and works with Azure Storage (queues, blobs, and tables)?
answer
WebJobs SDK

Here are some typical scenarios you can handle more easily with the Azure WebJobs SDK:

Image processing or other CPU-intensive work - A common feature of websites is the ability to upload images or videos. Often you want to manipulate the content after it's uploaded, but you don't want to make the user wait while you do that.

Queue processing - A common way for a web frontend to communicate with a backend service is to use queues. When the website needs to get work done, it pushes a message onto a queue. A backend service pulls messages from the queue and does the work. You could use queues for image processing: for example, after the user uploads a number of files, put the file names in a queue message to be picked up by the backend for processing. Or you could use queues to improve site responsiveness: for example, instead of writing directly to a SQL database, write to a queue, tell the user you're done, and let the backend service handle high-latency relational database work.

RSS aggregation - If you have a site that maintains a list of RSS feeds, you could pull in all of the articles from the feeds in a background process.

File maintenance - You might have log files being created by several sites or for separate time spans which you want to combine in order to run analysis jobs on them. Or you might want to schedule a task to run weekly to clean up old log files.

Ingress into Azure Tables - You might have files stored that you want to parse and store in tables. The ingress function could be writing lots of rows (millions in some cases), and the WebJobs SDK makes it possible to implement this functionality easily. The SDK also provides real-time monitoring of progress indicators such as the number of rows written in the table.

Other long-running tasks that you want to run in a background thread, such as sending emails.
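A minimal continuous queue-processing WebJob using the SDK might look like this in C# (the queue name and message handling are hypothetical; the storage connection strings come from the AzureWebJobsStorage and AzureWebJobsDashboard settings):

  using System.IO;
  using Microsoft.Azure.WebJobs;

  public class Program
  {
      public static void Main()
      {
          // Runs continuously, letting the SDK poll for new queue messages.
          var host = new JobHost();
          host.RunAndBlock();
      }

      // Invoked automatically whenever a message lands on the "filenames" queue.
      public static void ProcessQueueMessage(
          [QueueTrigger("filenames")] string fileName,
          TextWriter log)
      {
          log.WriteLine("Processing file: " + fileName);
      }
  }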
question
What are the acceptable file formats for running in Webjobs?
answer
The following file types are accepted:
.cmd, .bat, .exe (using Windows cmd)
.ps1 (using PowerShell)
.sh (using Bash)
.php (using PHP)
.py (using Python)
.js (using Node.js)
question
What must the file format be for the file that is uploaded to create a webjob?
answer
A .zip file (100 MB max)
question
If your website runs on more than one instance, a continuously running task will run on how many instances?
answer
All instances.

Always On - Enable Always On on the Configure page for your website. The Always On feature, available in Basic and Standard mode, prevents websites from being unloaded, even if they have been idle for some time. If your website is always loaded, your continuously running task may run more reliably.

Endless loop - Code for a continuous job needs to be written to run in an endless loop.

Only runs when the site is up - Continuous jobs run continuously only when the site is up.
question
If your website runs on more than one instance, on-demand and scheduled webjob tasks run on how many instances?
answer
A single instance selected for load balancing by Microsoft Azure.
question
Can a webjob be deployed along with an existing VS web project, by itself, or either one?
answer
Either one. You can create an Azure WebJobs project in VS. You might want to deploy it by itself to an empty Azure Website when it runs continuously and you want it to scale separately from the website.
question
When creating a new web project, can Visual Studio automatically create the Azure Website that you'll deploy your project to later?
answer
Yes When Visual Studio creates the site in Azure, you won't have to import the Azure website profile for publishing. The connection to the Azure website will already exist when publishing your site. The publishing wizard also allows you to choose Release or Debug for the site.
question
What are some deployment options for Websites?
answer
Each of these options has various strengths.

Use an FTP client - Publishing from an FTP client is a simple and straightforward way to push new files up to your site. It also means that any existing publishing tools or processes that rely on FTP can continue to work with Azure Websites.

Deploy from source control (GitHub or TFS Online) - Source control provides the best control over site content releases, because changes can be tracked, published, and rolled back to earlier versions if necessary.

Publish from Visual Studio or WebMatrix - Publishing directly from Visual Studio or WebMatrix is a convenience for developers who use either tool. One useful scenario for this feature is during the early stages of a project or for prototyping; in both cases, frequent publishing and testing is often more convenient from within the development environment.
question
What 2 SSL options are available for websites?
answer
IP Based SSL and SNI SSL

IP Based SSL - The traditional way to map a public dedicated IP address to the domain name. This works with all browsers.

SNI SSL (Server Name Indication SSL) - Allows multiple domains to share the same IP address and yet have different associated SSL certificates for each domain. SNI SSL does not work with some older browsers (for more information on compatibility, see the Wikipedia entry for SNI).

Note: There is a monthly charge (prorated hourly) associated with each SSL certificate, and the pricing varies depending on the choice of IP based or SNI SSL.
question
What are the 2 ways to scale websites?
answer
Scale-up (a.k.a. vertical scaling) - larger machines
Scale-out (a.k.a. horizontal scaling) - more instances

Note: The Autoscale preview only supports scale-out.
question
What 2 modes can an Azure Website run in?
answer
Shared - Each Azure subscription has access to a pool of resources provided for the purpose of running up to 100 websites per region in Shared website mode. The pool of resources available to each Website subscription for this purpose is shared by other websites in the same geo-region that are configured to run in Shared mode. Because these resources are shared with other websites, all subscriptions are limited in their use of these resources. Limits applied to a subscription's use of these resources are expressed as usage quotas listed under the usage overview section of each website's Dashboard management page.

Standard - When a website is configured to run in Standard mode, it is allocated dedicated resources equivalent to the Small (default), Medium, or Large virtual machine sizes in the table at Virtual Machine and Cloud Service Sizes for Azure. There are no limits on the resources a subscription can use for running websites in Standard mode; however, the number of Standard mode websites that can be created per region is 500.
question
How can you avoid exceeding quotas?
answer
Quotas are not a matter of performance or cost; they are how Azure governs resource usage in a multi-tenant environment, preventing tenants from overusing shared resources. Since exceeding your quotas means downtime or reduced functionality for your website, consider the following if you want to keep your site running when quotas are about to be reached:

1) Move your website(s) to a higher-tier Web hosting plan to take advantage of a larger quota. For example, the only quota for Basic and Standard plans is File System Storage.

2) As the number of instances of a website increases, so does the likelihood of exceeding shared resource quotas. If appropriate, consider scaling back additional instances of a website when shared resource quotas are being exceeded.
question
What are some of the metrics that should be monitored for a Website?
answer
Endpoint monitoring
CPU
Memory
Bandwidth
Data and Log IO
DTU %
Successful and failed connections
Throttled connections
Deadlocks

Note: Storage should also be monitored outside of websites. For a complete list of limitations, see here: http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#websiteslimits
question
What options are available with auto scaling of websites?
answer
The concept of autoscaling is having the number of instances increase or decrease automatically based on average CPU usage and the date or time of day.

You can set a target range (min/max) for average CPU usage (e.g., 65%-80%). Falling below the range will decrease instances, and going above the range will increase instances. Instances also have a range to prevent adding too many or too few; 1 is an acceptable minimum value, and the maximum will depend on cost or other factors. An example range might be 1-3 instances.

First, you determine the schedule, if any. Setting the schedule allows you to have a different range for instances and CPU usage for day versus night, weekdays versus weekends, time of day, and specific days. After defining the schedule, you set the target ranges for instances and average CPU usage for each schedule that was defined.
question
How often does Azure check the CPU for auto scaling of websites when Scale by Metric is enabled?
answer
Every 5 minutes.

Every five minutes, instances are added if needed at that point in time. If CPU usage is low, Microsoft Azure will remove instances once every 2 hours to ensure that your website remains performant.

Generally, a minimum instance count of 1 is appropriate. However, if you have sudden usage spikes on your website, be sure that you have a sufficient minimum number of instances to handle the load. For example, if you have a sudden spike of traffic during the 5 minute interval before Microsoft Azure checks your CPU usage, your site might not be responsive during that time. If you expect sudden, large amounts of traffic, set the minimum instance count higher to anticipate these bursts.
question
What is the max number of instances that a website can scale to?
answer
Basic - 3 dedicated instances
Standard - 10 dedicated instances

Note: VMs can scale to 50 VMs per cloud service. Cloud services have a default limit of 20 instances, but this can be increased by contacting support.
question
In the Azure Preview Portal, what autoscaling options are available?
answer
*Example rules that can be created:*
1) Scale up by 1 instance if CPU percentage is above 60% for the past 5 minutes.
2) Scale up by 3 instances if CPU percentage is above 85%, once 10 minutes have passed since the previous scaling rule fired.

*More detail...*
In the Azure Preview Portal, you can scale not only on CPU percentage, but also on the additional metrics of Memory Percentage, Disk Queue Length, HTTP Queue Length, Data In, and Data Out. The Azure Preview Portal also shows the history of autoscaling, so you can see when the system increased or decreased the number of instances and by how many.

You can also create one or more scale-up and scale-down rules that give you even more custom control over scaling. Your service will scale up if ANY of the scale-up rules are met. Conversely, your service will scale down only if ALL of the scale-down rules are met.

For each rule you choose:
Metric - CPU Usage, Memory Percentage, Disk Queue Length, HTTP Queue Length, Data In, or Data Out. Example: CPU Usage.
Condition - either Greater than or Less than. Example: Greater than.
Threshold - the number that this metric has to pass to trigger the action. Example: 85%.
Over Past - the number of minutes that this metric is averaged over. Example: 5 min.
Scale up or down by - the size of the scale action. Example: 2 instances at a time.
Cool down - how long this rule should wait after the previous scale action to scale again. Example: 10 min.
question
Diagnostics are enabled on the Configure management page for the website. What are the two types of diagnostics?
answer
Application Diagnostics - The application diagnostics section of the Configure management page controls the logging of information produced by the application, which is useful when logging events that occur within an application.

Site Diagnostics - The site diagnostics section of the Configure management page controls the logging performed by the web server, such as the logging of web requests, failures to serve pages, or how long it took to serve a page.
question
What storage/format types of Application Diagnostics are available?
answer
Application Logging (Website's File System) - Turns on logging of information produced by the application. The Logging Level field determines whether Error, Warning, or Information level information is logged. You may also select Verbose, which will log all information produced by the application. Logs produced by this setting are stored on the file system of your website and can be downloaded using the steps in the "Downloading log files for a website" section. File system logging lasts for a period of 12 hours. You can access the logs from the FTP share for the website.

Application Logging (Table Storage) - Turns on logging of information produced by the application, similar to the file system option. However, the log information is stored in a table in an Azure Storage Account. To specify the Azure Storage Account and table, choose On, select the Logging Level, and then choose Manage Table Storage. Specify the storage account and table to use, or create a new table. The log information stored in the table can be accessed using an Azure Storage client.

Application Logging (Blob Storage) - Turns on logging of information produced by the application, similar to the Table Storage option. However, the log information is stored in a blob in an Azure Storage Account. To specify the Azure Storage Account and blob, choose On, select the Logging Level, and then choose Manage Blob Storage. Specify the storage account, blob container, and blob name to use, or create a new container and blob.
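For a .NET application, these logs capture standard System.Diagnostics trace output, so emitting log entries is just (a minimal sketch):

  using System.Diagnostics;

  Trace.TraceError("An error occurred");        // Error level
  Trace.TraceWarning("Something looks off");    // Warning level
  Trace.TraceInformation("Request handled");    // Information level
  Trace.WriteLine("Extra detail");              // captured only when the level is Verbose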
question
What type of Site Diagnostics are available?
answer
Note: These logs can be turned on from Azure Websites or from VS Server Explorer. You can even get the Azure Website logs to view in the VS Output window, and any viewable logs in the Output window can also be downloaded to a .zip from there. Here are the different types of site diagnostics.

Web Server Logging - Turn on web server logging to save website logs using the W3C extended log file format. Web server logging produces a record of all incoming requests to your website, which contains information such as the client IP address, requested URI, HTTP status code of the response, and the user agent string of the client. You can save the logs to an Azure Storage Account or to the website's file system. To save web server logs to an Azure Storage Account, choose Storage, and then choose Manage Storage to specify a storage account and an Azure blob container where the logs will be kept. For more information about Azure Storage Accounts, see How to Manage Storage Accounts. To save web server logs to the file system, choose File System. This enables the Quota box, where you can set the maximum amount of disk space for the log files. The minimum size is 25 MB and the maximum is 100 MB; the default size is 35 MB. When the quota is reached, the oldest files are successively overwritten by the newest ones. If you need to retain more than 100 MB of history, use Azure Storage, which has a much greater storage capacity. By default, web server logs are never deleted. To specify a period of time after which the logs will be automatically deleted, select Set Retention and enter the number of days to keep the logs in the Retention Period box. This setting is available for both the Azure Storage and File System options.

Detailed Error Messages - Turn on detailed error logging to log additional information about HTTP errors (status codes greater than 400).

Failed Request Tracing - Turn on failed request tracing to capture information for failed client requests, such as a 400 series HTTP status code. Failed request tracing produces an XML document that contains a trace of which modules the request passed through in IIS, details returned by the module, and the time the module was invoked. This information can be used to isolate which component the failure occurred in.
question
What are some ways to build an architecture that is resilient to failures?
answer
Backup/Restore - Have an automated backup-and-restore strategy for your content by building your own tools with the Windows Azure SDK or using third-party services like Cloud Cellar.

Redundant Websites - Set up redundant copies of your website in at least 2 datacenters and load balance incoming traffic between these datacenters.

Automatic Failover - Set up automatic failover for when a service goes down in a datacenter, using a global traffic manager.

CDN - Set up a Content Delivery Network (CDN) service along with your website to boost performance by caching content and to provide high availability for your website.

Loosely Coupled - Remove dependencies on any tightly coupled components/services your WAWS website uses, if possible. For example, suppose your website uses a database and the database service is down at a given time, causing a single point of failure in your architecture. The database here is a tightly coupled component that cannot be removed from your architecture. In such scenarios:
- Replicate your database across multiple datacenters and set up automated data sync across these databases to mitigate during a failover.
- Design your application to be resilient in these situations.
question
What represents a set of features and capacity that you can share across your websites?
answer
Web hosting plans (WHPs)

Web hosting plans support the 4 Azure Websites pricing tiers (Free, Shared, Basic, and Standard), where each tier has its own capabilities and capacity. Sites in the same subscription, resource group, and geographic location can share a web hosting plan. All the sites sharing a web hosting plan can leverage all the capabilities and features defined by the web hosting plan's tier. All websites associated with a given web hosting plan run on the resources defined by that plan. For example, if your web hosting plan is configured to use two "small" virtual machines, all sites associated with that web hosting plan will run on both virtual machines. As always with Azure Websites, the virtual machines your sites are running on are fully managed and highly available.
question
What enables users to deploy an entire application into the Azure Preview Portal from the Azure Gallery, so that each application component doesn't have to be deployed individually within the Portal?
answer
Azure Templates

The template wizard walks the user through setup of the necessary components in the user's account. For example, if an application uses a website and a SQL database, the wizard walks the user through setup of the website and then the SQL database. The dependencies between the application components are established automatically.

Templates allow a user to easily set up a 3rd-party application and host it themselves. They can also be used to quickly deploy an entire application in different regions. This could replace the PowerShell scripting that would normally create the same application. The key difference is that PowerShell must stay open as a foreground program on your computer while it runs, whereas a template is instantiated in the background of Azure, so you could turn off your computer while the entire application is built within your Azure account. The template has lots of PowerShell scripting behind it that makes this all happen.
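Structurally, a template is a JSON document of parameters and resources. A heavily abbreviated sketch (the schema URL, API version, and property set vary by release, and a real website resource needs more properties):

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "siteName": { "type": "string" }
    },
    "resources": [
      {
        "type": "Microsoft.Web/sites",
        "apiVersion": "2014-06-01",
        "name": "[parameters('siteName')]",
        "location": "West US",
        "properties": { }
      }
    ]
  }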
question
What is a new concept in Azure that serves as the lifecycle boundary for every resource contained within it?
answer
Resource group

Resource groups provide summarized billing and usage across Web Hosting Plans (WHPs) and websites. Here are the relationships between resource groups, WHPs, and websites: a resource group has one to many WHPs, which in turn have one to many websites.

Resource groups enable you to manage all the resources in an application together. A resource (e.g., a database or website) must exist in one and only one resource group. Following the relationships above, a website can also only exist in one WHP.

Resource groups are enabled by the new management functionality called *Azure Resource Manager*. Resource Manager allows you to group multiple resources as a logical group, which serves as the lifecycle boundary for every resource contained within it. Typically a group will contain resources related to a specific application. For example, a group may contain a Website resource that hosts your public website, a SQL Database that stores relational data used by the site, and a Storage Account that stores non-relational assets. However, a large company may choose to group by department, so that all databases are under the DBA department and websites are under the Web Services department.

Since resource groups allow you to manage the lifecycle of all the contained resources, deleting a resource group will delete all the resources contained within it. You can also delete individual resources within a resource group.
question
What should you use to programmatically add/delete/update resource groups?
answer
Resource Manager REST API
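For example, creating a resource group boils down to a single authenticated PUT against the management endpoint. A C# sketch (the group name is hypothetical, the bearer token must be acquired separately, e.g. via Azure AD, and the api-version value depends on the release):

  using System.Net.Http;
  using System.Net.Http.Headers;
  using System.Text;
  using System.Threading.Tasks;

  public static class ResourceGroups
  {
      public static async Task CreateAsync(string subscriptionId, string token)
      {
          using (var client = new HttpClient())
          {
              client.DefaultRequestHeaders.Authorization =
                  new AuthenticationHeaderValue("Bearer", token);

              string url = "https://management.azure.com/subscriptions/" + subscriptionId +
                           "/resourcegroups/MyGroup?api-version=2015-01-01";

              // The minimal body is just the region where the group's metadata lives.
              var body = new StringContent("{ \"location\": \"West US\" }",
                                           Encoding.UTF8, "application/json");

              HttpResponseMessage response = await client.PutAsync(url, body);
              response.EnsureSuccessStatusCode();
          }
      }
  }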
question
What features are summarized in a WHP that are applied across all the websites in the WHP?
answer
Everything in a WHP is a summarization of the website features:
Quotas/Usage (CPU, file system, memory)
Pricing tier - determines features like custom domains, SSL, storage amount, RAM, cores, backups, slots, and max instances
Scaling
Monitoring
Performance metrics for instances (Standard only)
Connecting to VNets (Standard only)
Operational events history
Alerts
Access (roles and users)
question
What features in Websites do not roll up to WHPs?
answer
Application Insights
Analytics
Backups
Hybrid Connections
Extensions
Deployment Slots
Deployment credentials for Git and FTP
WebJobs
Streaming logs

Note: Think of a WHP as a reporting group that just summarizes information at a higher level. Azure could have summarized the above items at the WHP level but chose not to at the current time; nothing says they won't some day show all the WebJobs across all websites in a WHP. Remembering the list above may not be that important; just know that not everything is summarized at the WHP level.
question
If you change Pricing Tier in a website that belongs to a WHP with many websites, what happens to the Pricing Tier at the WHP level and all other websites?
answer
Changing the pricing tier at the website level is the same as changing it at the WHP level: all websites in the WHP get the new pricing tier, as does the WHP itself.
question
What are the advantages of having multiple hosting plans in a single resource group for production, test, and dev?
answer
It allows separation of resources between dev and test and production sites, where you might want to allocate one web hosting plan with its own dedicated set of resources for your production sites, and a second web hosting plan for your dev and test sites.
question
When should I create a new resource group and when should I create a new web hosting plan?
answer
When creating a new website, you should consider creating a new resource group when the website you are about to create represents a new web application. In this case, creating a new resource group, an associated web hosting plan, and a website is the right choice.

Creating a new hosting plan allows you to allocate a new set of resources for your websites and provides you with greater control over resource allocation, as each web hosting plan gets its own set of virtual machines. Since you can move websites between web hosting plans (assuming the web hosting plans are in the same region), the decision of whether to create a new web hosting plan is less critical. If a given website starts consuming too many resources, or you just need to separate a few websites, you can create a new web hosting plan and move your websites to it.

In the example of putting Prod in one WHP and Dev/Test in a different WHP, you might someday need to separate Dev and Test into their own WHPs. When this happens, it's easy to move Dev to its own WHP, as long as the new WHP is in the same region as the Test WHP.
question
How can you set up WHPs for an application that spans regions, with the same website in multiple regions?
answer
During the website creation process, choose a Location in a different region, create the new WHP, and use an existing Resource Group. For example, a highly available website running in two regions will include two web hosting plans, one for each region, and one website associated with each web hosting plan. In such a situation, all the sites will be associated with a single resource group, which defines one application. Having a single view of a resource group with multiple web hosting plans and multiple sites makes it easy to manage, control and view the health of the websites.
question
Can you move websites between web hosting plans, assuming the web hosting plans are in the same regions?
answer
Yes, only if the web hosting plans are in the same region.
question
How do I create a Web Hosting Plan?
answer
A Web Hosting Plan is a container and as such, you can't create an empty Web Hosting Plan. However, a new Web Hosting Plan is explicitly created during website creation.
question
How do I assign a site to a Web Hosting Plan?
answer
Sites are assigned to a Web Hosting Plan as part of the site creation process.
question
How can I move a site to a different web hosting plan?
answer
You can move a site to a different web hosting plan using the Azure Preview Portal. Websites can be moved between web hosting plans in the same geographical region that belong to the same resource group.
question
How can I Scale a Web Hosting Plan?
answer
One way is to scale up your web hosting plan and all sites associated with that web hosting plan. By changing the pricing tier of a web hosting plan, all sites in that web hosting plan will be subject to the features and resources defined by that pricing tier. The second way to scale a plan is to scale it out by increasing the number of instances in your Web Hosting Plan.
question
What happens when a Web Hosting Plan is deleted?
answer
You can't directly delete a WHP. It is automatically added and deleted with the creation/deletion of websites. If you try to delete a WHP, it will give an error because websites exist under it. All websites in a WHP must first be deleted and this will automatically delete the WHP.
question
What happens to the WHP when the last website in the WHP is deleted?
answer
The WHP is also removed. Note: WHP is really just a logical view that summarizes websites within it.
question
What happens when I Delete a Resource Group?
answer
All WHP, websites, and other resources below it are also deleted.
question
What can I monitor in a web hosting plan?
answer
Web Hosting Plans can be monitored with the following metrics:
CPU Percentage
Memory Percentage
Disk Queue Length
HTTP Queue Length

These metrics represent the average usage across instances belonging to a Web Hosting Plan. All of these metrics can be used to set up alerts as well as Auto Scale rules.
question
Can you move web hosting plans or websites between resource groups?
answer
No. The only move supported is moving websites to a new WHP in the same region.
question
Can you move a website between two web hosting plans that are in two different regions?
answer
No
question
When adding a new website to an existing resource group, can you add the site to an existing web hosting plan or create a new web hosting plan to add the site to?
answer
Yes
question
Can you add a new website, or any other resources, to an existing resource group?
answer
Yes
question
What are deployment slots in Azure Websites?
answer
Code can be deployed to a staging area (a.k.a. a deployment slot) and tested before it is swapped with the production site. To deploy to a particular slot, a different deployment profile is chosen when publishing. You can create multiple deployment slots (e.g., dev, test, uat). You must use the Standard pricing tier, which gives up to 5 slots per WHP.

When swapping, the load balancer handles redirecting the traffic, so there's no need to redeploy. All content and configuration stays put; the traffic is just redirected to the deployment slot that becomes production.

By default, your website deployment slots (sites) share the same resources as your production slots (sites) and run on the same VMs. If you run stress testing on a staging slot, your production environment will experience a comparable stress load. *Note: This is not the case with Cloud Services, which use Production and Staging with separate resources.*
question
What will change and remain unchanged when a deployment slot is swapped in Azure Websites?
answer
When you clone configuration from another deployment slot, the cloned configuration is editable. The following lists show which configuration resides with the deployment slot and is affected when you swap slots.

Configuration that resides with the slot (changes with a swap):
General settings
Connection strings
Handler mappings
Monitoring and diagnostic settings

Configuration that does not reside with the slot (will not change on slot swap):
Published endpoints
Custom domain names
SSL certificates and bindings
Scale settings (you can only change scaling for the production slot)

Notes: Multiple deployment slots are only available for sites in the Standard web hosting plan. When your site has multiple slots, you cannot change the hosting plan. A slot that you intend to swap into production needs to be configured exactly as you want to have it in production. By default, a deployment slot will point to the same database as the production site. However, you can configure the deployment slot to point to an alternate database by changing the database connection string(s) for the deployment slot. You can then restore the original database connection string(s) on the deployment slot right before you swap it into production.
question
If you swap a deployment slot to become production and it doesn't work, how do you restore the deployment that was in the production slot?
answer
Just swap back. The swap put the previous production deployment into the other slot, so swapping again restores it.
question
Deploying your application to a deployment slot has what benefits?
answer
You can validate website changes in a staging deployment slot before swapping it with the production slot.

After a swap, the slot that previously held the staged site now holds the previous production site. If the changes swapped into the production slot are not as you expected, you can perform the same swap immediately to get your "last known good site" back.

Deploying a site to a slot first and swapping it into production ensures that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your site. The traffic redirection is seamless, and no requests are dropped as a result of swap operations.
question
What are the 2 ways to connect to the Remote Debugger session on Azure Web Sites?
answer
Manually - Works with Visual Studio 2012 and 2013 out of the box (it is not available in Express versions).

Automatically - Much simpler than manually, but requires that the client machine has the Azure SDK installed and the subscription profile downloaded.

You can connect Visual Studio to your own Azure Website and gain full control. You can set breakpoints, manipulate memory directly, step through code, and even change the code path.
question
What are the tier limits for remote debugging?
answer
Remote debugging is available to all tiers; however, there are some limitations depending on the tier. The breakdown is as follows: Free and Shared (Basic) tiers are allowed one connection at any given time. The Standard tier is allowed five simultaneous connections.
question
Is it a good idea to debug a production website?
answer
As of May 2014, absolutely not because debugging a site causes the site to effectively be down. It will stop serving requests to other users while you debug.
question
Can debugging be MANUALLY done on the website process (W3WP)?
answer
Yes. Being able to manually attach a remote debugger to an arbitrary process has many benefits. Not only can you debug the website process (W3WP), but you can also debug processes for WebJobs or any other kind of process run in Azure Web Sites.

On the other hand, the Azure SDK brings a more cohesive website development and maintenance story and provides the ability to debug AUTOMATICALLY. From a single tool (Visual Studio) you can author, deploy, and remote debug a website with minimal effort. This is very useful when doing rapid development and/or when you are managing a large number of websites. The SDK is always evolving to bring a simple and cohesive experience.
question
What are the steps in getting remote debug to work?
answer
Note: Turning on verbose logging on the web server may help you debug remotely, but it is not required to attach the VS Remote Debugger.

1) In the Management Portal, turn on 2 settings in Azure (both settings are set automatically if you use the Azure SDK for automatic debugging):

Remote Debugging - This opens the TCP/IP ports required for the connection. Once the feature is turned on, a timer starts, and after 48 hours the feature is automatically turned off. This 48-hour limit exists for security and performance reasons.

Specify version of Visual Studio - In Visual Studio, remote debugging is done with the help of an application called MSVSMON that ships with Visual Studio and also runs on the server side in Azure. One challenge: the version of MSVSMON on the server must match the version of Visual Studio on the client side, which is why you have to specify the version in Azure (a workaround for now).

2) Import the publishing profile into Visual Studio and then publish the site with the Debug setting.

3) In VS, choose Debug > Attach to Process. In the Qualifier field, specify the remote site, and enter your Azure credentials to log in. This then displays a list of running processes. Pick one of the W3WP processes. If there is more than one, you will need to experiment to figure out which is your site.
question
What are constructed at compile time and are specifically matched to the libraries or executables created at that time?
answer
Debugging symbols

Visual Studio is set up by default to generate the debugging symbols (.pdb) when you compile a debug build. In Azure Web Sites, the symbols (.pdb) file used by Visual Studio can be on either the local machine or the server (Git requires server side). There is a special version of MSVSMON that can utilize the symbols on the server side.
question
Frequently the easiest way to find the cause of a generic ASP.Net error is to enable detailed error messages. That requires a change in the deployed Web.config file. You could edit the Web.config file in the project and redeploy the project, or create a Web.config transform and deploy a debug build. What is another way that is much quicker in Visual Studio?
answer
You can directly view and edit files on the remote site by using the remote view feature. In Server Explorer, expand Azure, expand Websites, and expand the node for the website you're deploying to. You'll see nodes that give you access to the website's content files and log files. Expand the Files node, and double-click the Web.config file to edit it directly on the Azure Website. Add the following line to the system.web element:

  <customErrors mode="Off" />

Refresh the browser that is showing the unhelpful error message, and now you get a detailed error message.
question
What does Azure Websites Backup backup?
answer
Website configuration
Website file content
Any SQL Server or MySQL databases connected to your site (you can choose which ones to include in the backup)

The Backup and Restore feature requires an Azure storage account that must belong to the same subscription as the website that you are going to back up. Backups are visible on the Containers tab of your storage account, in a container called websitebackups. Each backup consists of a .zip file that contains the backed-up data and an .xml file that contains a manifest of the .zip file contents.

Make sure that you set up the connection strings for each of your databases properly on the Configure tab of the website so that the Backup and Restore feature can include your databases. If you delete a backup from your storage account and have not made a copy elsewhere, you will not be able to restore the backup later. Although you can back up more than one website to the same storage account, for ease of maintenance, consider creating a separate storage account for each website.

On the Backups tab, you can also restore from a backup, with the option to create a new site or restore over an existing site. You also have the option to restore the database from the backup.
question
What 2 publishing files contain Azure authentication information that needs to be secured or deleted after imported into VS?
answer
The publish settings file contains:
- Your Azure subscription ID
- A management certificate that allows you to perform management tasks for your subscription without having to provide an account name or password

The publish profile file contains:
- Information for publishing to your Azure Website

If you use a utility that uses publish settings or a publish profile, import the file into the utility and then delete the file. If you must keep the file (to share with others working on the project, for example), store it in a secure location such as an encrypted directory with restricted permissions.

Additionally, you should make sure the imported credentials are secured. For example, Azure PowerShell and the Azure Cross-Platform Command-Line Interface both store imported information in your home directory (~ on Linux or OS X systems and C:\Users\yourusername on Windows systems). For extra security, you may wish to encrypt these locations using encryption tools available for your operating system.
question
Instead of storing authentication credentials and sensitive information in config files that could be exposed by the website or put into source control for others to see, what 2 other options are available?
answer
App Settings and Connection Strings Azure Websites allows you to store configuration information as part of the Websites runtime environment as app settings and connection strings. The values are exposed to your application at runtime through environment variables for most programming languages. For .NET applications, these values are injected into your .NET configuration at runtime. App settings and connection strings are configurable using the Azure management portal or utilities such as PowerShell or the Azure Cross-Platform Command-Line Interface.
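For a .NET site, the portal-defined values override matching entries in Web.config at runtime, so code keeps using the normal configuration APIs; any stack can also read the prefixed environment variables. A minimal sketch (the setting and connection string names are hypothetical):

  using System;
  using System.Configuration;

  public class ConfigDemo
  {
      public static void Show()
      {
          // .NET: portal values are injected into the standard configuration APIs.
          string setting = ConfigurationManager.AppSettings["MySetting"];
          string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

          // Any language: the same values appear as environment variables with
          // prefixes such as APPSETTING_, SQLCONNSTR_, SQLAZURECONNSTR_,
          // MYSQLCONNSTR_, and CUSTOMCONNSTR_.
          string fromEnv = Environment.GetEnvironmentVariable("APPSETTING_MySetting");
      }
  }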
question
How do you use FTP to publish website files?
answer
Go to the Windows Azure Management Portal and click Web Sites. Go to the site's DASHBOARD page and click "Download the publish profile". Save the file and open it in notepad.exe.

The file contains 2 sections: one for Web Deploy and another for FTP. Under the section for FTP, make a note of the following values:
publishUrl (hostname only)
userName
userPWD

Once logged into the FTP site, you will see two folders under the root: LogFiles and site. The LogFiles folder, as the name indicates, provides storage for the various logging options you see under the CONFIGURE management page on the Azure Portal. The site folder is where the application resides; to be more specific, the code resides in /site/wwwroot.
question
What is the root file of the configuration system when you are using IIS 7 and above? It includes definitions of all sites, applications, virtual directories and application pools, as well as global defaults for the web server settings (similar to machine.config and the root web.config for .NET Framework settings).
answer
ApplicationHost.config

It is the only IIS configuration file available when the web server is installed (however, users can still add web.config files if they want to). It includes a special section (called configSections) for registering all IIS and Windows Activation System (WAS) sections (machine.config has the same concept for .NET Framework sections). It has definitions for locking down most IIS sections to the global level, so that by default they cannot be overridden by lower-level web.config files in the hierarchy.
question
How can the ApplicationHost.config be transformed?
answer
The Azure Websites platform provides flexibility and control for site configuration. Although the standard IIS ApplicationHost.config configuration file is not available for direct editing in Windows Azure Websites, the platform supports a declarative ApplicationHost.config transform model based on XML Document Transformation (XDT). By using XDT declarations, you can transform the ApplicationHost.config file in your Windows Azure websites. You can also use XDT declarations to add private site extensions to enable custom site administration actions.

To leverage this transform functionality, you create an ApplicationHost.xdt file with XDT content and place it under the site root. Then, on the Configure page in the Windows Azure Portal, you set the WEBSITE_PRIVATE_EXTENSIONS app setting to 1 (you may need to restart the site).
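A minimal ApplicationHost.xdt sketch, based on the common site-extension example (the /myextension path is hypothetical; the %XDT_SCMSITENAME% and %XDT_EXTENSIONPATH% variables are resolved by the platform):

  <?xml version="1.0"?>
  <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <system.applicationHost>
      <sites>
        <!-- Locate the site's SCM site by name and insert an application under it -->
        <site name="%XDT_SCMSITENAME%" xdt:Locator="Match(name)">
          <application path="/myextension" xdt:Transform="Insert">
            <virtualDirectory path="/" physicalPath="%XDT_EXTENSIONPATH%" />
          </application>
        </site>
      </sites>
    </system.applicationHost>
  </configuration>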
question
What are site extensions for Azure Websites?
answer
The Azure Websites site extension feature allows developers to essentially write "apps" that can be run on an Azure Website to add administrative functionality to it. These extensions can also be published to the Site Extensions Gallery, which allows others to install and use the extension as well. Writing a site extension isn't much different from writing a regular web site; the only real difference is how your code gets installed to your site. Site extensions are web apps with simple metadata for extension registration, and they can be authored for any development stack supported by the Azure Websites platform. You can create an applicationHost.xdt file for specific configuration; in the absence of such a file, a simple template is generated automatically to indicate the application and virtual directory paths for the site extension.
question
Azure Websites supports site extensions as an extensibility point for site administrative actions. In fact, some Azure Websites platform features are implemented as pre-installed site extensions. While the pre-installed platform extensions cannot be modified, you can create and configure private extensions for your own sites. This functionality also relies on XDT declarations. What are the key steps for creating a private site extension?
answer
Create Website - Create any web application supported by Azure Websites.

Declare Site Extension - Create an ApplicationHost.xdt file. Any change to the ApplicationHost.xdt file requires a site recycle.

Deploy Site Extension - FTP all the files of your web app to the SiteExtensions/<your-extension-name> folder of the site on which you want to install the extension. Be sure to copy the ApplicationHost.xdt file to this location as well. We'll call this the extension root.

Enable Site Extension - Set the WEBSITE_PRIVATE_EXTENSIONS variable in app settings to 1.
question
How are site extensions packaged?
answer
Site extensions are packaged in NuGet format. For example, the NuGet.exe command line utility can be downloaded to package a simple sample extension.
question
Why are site extensions packaged?
answer
To become publicly available to other developers in the Site Extension Gallery.
question
What are the steps to create a package for a site extension?
answer
1) Create a folder with the name of your site extension.
2) Create the web app and place all relevant content in a Content folder under the site extension folder.
3) Create a NuSpec file for your extension and make sure to include a link to license terms.
4) Make sure the NuGet.exe utility is in your path and run the following from your site extension folder to create the NuGet package: nuget pack example.nuspec
5) To submit the NuPkg site extension for availability across the Azure Websites platform, access the submission portal at http://www.siteextensions.net and upload the package.
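A minimal .nuspec for step 3 might look like this (all values are placeholders):

  <?xml version="1.0"?>
  <package>
    <metadata>
      <id>MySiteExtension</id>
      <version>1.0.0</version>
      <authors>Contoso</authors>
      <description>A sample site extension.</description>
      <projectUrl>http://example.com/project</projectUrl>
      <licenseUrl>http://example.com/license</licenseUrl>
      <requireLicenseAcceptance>true</requireLicenseAcceptance>
    </metadata>
  </package>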
question
Whenever you create an Azure Website, you also get a corresponding "SCM" (site control manager) site which is used for what?
answer
Administration/debugging. It always runs over SSL. The URL for your SCM site is always the same as your default hostname, plus "scm". For example, if my site is "demo.azurewebsites.net", then my SCM site is "demo.scm.azurewebsites.net".
question
Site extensions installed for your site can be accessed through what?
answer
"SCM" (site control manager)
question
Where do site extensions need to be uploaded to allow other Azure Websites users to easily install your extension on their site?
answer
Site Extension Gallery (http://www.siteextensions.net)
question
When your website is configured as a Traffic Manager endpoint, what endpoint address will be used when creating DNS records for your custom domain?
answer
*.trafficmanager.net

Notes:
You must add your website as a Traffic Manager endpoint. The custom domain name routes to Traffic Manager; Traffic Manager then routes to your website.
The website must be using Standard mode or it won't be listed as a possible endpoint within Traffic Manager.
You can only use CNAME records when setting up DNS to point a custom domain to the above Traffic Manager address.
It can take some time for your CNAME to propagate through the DNS system. You can use a service such as http://www.digwebinterface.com/ to verify that the CNAME is available.
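For example, a zone-file entry along these lines (the contoso names are hypothetical):

  www.contoso.com.  IN  CNAME  contoso-tm.trafficmanager.net.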
question
Database geo-replication for SQL Databases is offered in which 2 modes?
answer
*Offline (not readable)* - The DB becomes available after replication stops and Azure determines the primary location has failed. Used for the Standard pricing tier. RTO/RPO: <2hr/<30min.

*Active (readable)* - Allows reading the database from up to 4 other regions while replication is running. Premium pricing tier (up to 4 other regions). RTO/RPO: <1hr/<5min.
question
What are the 3 places for storing uploaded files from Azure Websites?
answer
Database - This is an option but does not scale as well as storage or a CDN.
Blob Storage - This provides better scalability and performance than the database.
CDN - This gives the best response times for the website.
question
What are some features of CDN?
answer
*Origin Domain* The "origin domain" is where the file content is published to by the developer which is nothing more than an Azure Storage Blob. *CDN URL* Developer replaces all the local content references in code to the CDN URL provided by the CDN service. *HTTPS Enabled* https can also be enforced on all CDN content. If site is accessed with both http and https, remove http: from all content references (i.e. //mysite.com/image.jpg) so it uses whatever the site is currently using. *Edge Servers* The files are distributed to "edge" servers by the CDN service. These "edge" servers exist around the globe closer to your users. When a CDN request comes in, the ip address of a local CDN edge server closest to user is returned so content can be pulled from a local edge server. *Query Strings* Query string parameters can be used to force edge servers to get new version of content (i.e. id=1 versus id=2).
question
What are some options available to scale a website globally when it is attached to a SQL Database?
answer
Scaling the Website
Use Traffic Manager to route traffic to websites in different regions.

Scaling the Database
1) Have all websites write to the same database located in a single region. This option does not scale well, but it is possible.
2) Replace the database with a NoSQL option.
3) Decouple the website from the database by using a Service Bus Queue that inserts/updates data in the single database in the master region. Then turn on database geo-replication for the master database to get read-only copies of the database in multiple regions. The websites read from a database in their own region, while all updates are propagated through the Service Bus and database replication. A sketch of this write path follows below.

There are other options as well; these three illustrate some of the possibilities in Azure.
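A minimal sketch of option 3's write path, assuming the Microsoft.ServiceBus.Messaging .NET client (the connection string, queue name, and orderDto payload are illustrative):

    using Microsoft.ServiceBus.Messaging;

    // Web role in any region: enqueue the update instead of writing to the remote database directly.
    var client = QueueClient.CreateFromConnectionString(connectionString, "order-updates");
    client.Send(new BrokeredMessage(orderDto));   // orderDto: a serializable DTO (illustrative)
    // A worker in the master region drains the queue and performs the actual insert/update.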
question
What are the 3 characteristics that a workload should have to be an ideal fit for Azure Cloud Services?
answer
Elastic demand
One of the key value propositions of moving to Azure is elastic scale: the ability to add or remove capacity from the application (scaling out and scaling in) to more closely match dynamic user demand. If your workload has a static, steady demand (for example, a static number of users, transactions, and so on), this advantage of Azure Cloud Services is not maximized.

Distributed users and devices
Running on Azure gives you instant access to global deployment of applications. If your workload has a captive user base running in a single location (such as a single office), cloud deployment may not provide an optimal return on investment.

Partitionable workload (can scale out; not up)
Cloud applications scale by scaling out - adding more capacity in smaller chunks. If your application depends on scaling up (for example, large databases and data warehouses) or is a specialized, dedicated workload (for example, large, unified high-speed storage), it must be decomposed (partitioned) to run on scale-out services to be feasible in the cloud. Depending on the workload, this can be a non-trivial exercise.

When evaluating your application, you may still achieve a high return on the investment of moving to or building on Azure Cloud Services if your workload has only one of the preceding three characteristics that shine in a platform-as-a-service environment like Azure Cloud Services. Applications that have all three characteristics are likely to see a strong return on the investment. For a complete read on this topic with detailed customer experience, visit http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx.
question
What is typically one of the biggest hurdles in designing for the cloud?
answer
Changing from using well-known scale-up techniques to scale-out techniques for data and state management.
question
What resources in Azure have a limit?
answer
Everything. Be it an individual role instance, a storage account, a cloud service, or even a data center - every available resource in Azure has some finite limit. These may be very large limits, such as the amount of storage available in a data center (much as the largest cargo ships can carry over 10,000 containers, but no more), but they are finite. With this in mind, the approach to scale is to partition the load and compose it across multiple scale units - be that multiple VMs, databases, storage accounts, cloud services, or data centers.
question
Do resiliency solutions typically cost more for tightly coupled or loosely coupled "add more cloned stuff" approaches?
answer
Tightly coupled solutions typically cost more because they require highly trained personnel and specialized hardware, along with careful configuration and testing. Not only is it hard to get right, but it costs money to do it correctly.
question
Deploying applications in multiple data centers requires what 3 infrastructure and application capabilities?
answer
Application logic to route users of the services to the appropriate data center (based on geography, user partitioning, or other affinity logic). Synchronization and replication of application state between data centers, with appropriate latency and consistency levels. Autonomous deployment of applications, such that dependencies between data centers are minimized (that is, avoid the situation wherein a failure in data center A triggers a failure in data center B).
question
Each cloud service may have up to how many roles?
answer
25
question
Within a cloud service, all instances are assigned what range of private IP addresses?
answer
10.0.0.0 - 10.255.255.255
question
Within a cloud service, all outbound connections appear to come from what IP address?
answer
A single virtual IP address, or VIP (which is the VIP of the cloud service deployment), through Network Address Translation.
question
What is one of the reasons that batched or "chunky" cross-service connections are encouraged for scalability?
answer
The cross-service latency (that is, traversing the NAT out of one cloud service and through the load balancer into another) is far more variable than the on-premises equivalent.
question
Applications that leverage a distributed cache platform should consider what guidelines?
answer
* Host the Cache Close to Its Clients * Leverage a distributed caching platform as a worker role within your hosted service. This close proximity to the clients of the cache reduces the latency and throughput barriers presented by load balancer traversal. In-Role Cache on Azure Cache hosts caching on worker roles within your cloud service.
* Use the Cache as the Primary Repository * Use the distributed caching platform as the primary repository for accessing common application data and objects (for example, user profile and session state), backed by SQL Database or another durable store in a read-through or cache-aside approach (sketched below).
* Tune Time-to-Live * Cache objects have a time-to-live that affects how long they remain active in the distributed cache. Applications either explicitly set a time-to-live on cached objects or configure a default time-to-live for the cache container. Balance the choice of time-to-live between availability (cache hits) versus memory pressure and staleness of data.
* Beware of Overlapping Writes * Caches present a key->byte[] semantic; be aware of the potential for overlapping writes to create inconsistent data in the cache. Distributed caches do not generally provide an API for atomic updates to stored data, as they are not aware of the structure of the stored data.
* Use an Efficient Serializer * Cache performance is bounded on the application tier by the time required to serialize and deserialize objects. To optimize this process, leverage a relatively symmetrical (same time required to encode/decode data), highly efficient binary serializer such as protobuf.
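A minimal cache-aside sketch, assuming the In-Role Cache DataCache API (Microsoft.ApplicationServer.Caching); the UserProfile type, LoadProfileFromSql helper, key format, and TTL value are all illustrative:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class UserProfile { /* illustrative application type */ }

    class ProfileRepository
    {
        private readonly DataCache cache = new DataCacheFactory().GetDefaultCache();

        public UserProfile GetProfile(string userId)
        {
            // Cache-aside: try the cache first, fall back to the durable store on a miss.
            var profile = (UserProfile)cache.Get("profile:" + userId);
            if (profile == null)
            {
                profile = LoadProfileFromSql(userId);
                cache.Put("profile:" + userId, profile,
                          TimeSpan.FromMinutes(30));   // time-to-live: balance hit rate vs. staleness
            }
            return profile;
        }

        private UserProfile LoadProfileFromSql(string userId)
        {
            // Illustrative: read the profile from SQL Database (the durable backing store).
            throw new NotImplementedException();
        }
    }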
question
High amounts of traffic may require many web servers to handle this traffic. Instead of a 2 tier approach in having all the web servers connect directly to all the sharded SQL Databases, how can connection affinity be accomplished with a 3 tier approach to lessen the connections to the SQL database?
answer
A 3 tier approach implements connection affinity between the web and application layers to pin calls from specific application instances to specific databases. For example, to request data from DB1, web instances must request the data via application instances App1 and App2, which go specifically to DB1, while App3 and App4 go directly to DB2. Because the Azure load balancer currently distributes traffic randomly using a hashing algorithm, delivering affinity in your application does require careful design and implementation.
question
What's the cookie called that the load balancer uses to keep the user coming back to the same server?
answer
*ARR (Application Request Routing) Affinity Cookie* This keeps a user going back to the same instance until the user closes their browser. This can be helpful for stateful websites, though stateful designs are not recommended for scaling reasons. Note: Websites should have a session strategy such that it does not matter if users land on different servers; then disabling ARR affinity does not matter. There are situations where keeping affinity is not desired. For example, some users don't close their browser and remain connected for extended periods of time. When this happens, the affinity cookie remains in the browser, and this keeps the user attached to the same server for a period that could last hours, days, or even longer (in theory, indefinitely!). This can put the balance between servers out of balance. For this reason, ARR affinity can be disabled using code or the web.config, as shown below.
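One way to disable ARR affinity from web.config is to emit the Arr-Disable-Session-Affinity response header, for example:

    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <!-- Tells the ARR front end not to issue the affinity cookie for this site -->
          <add name="Arr-Disable-Session-Affinity" value="true" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>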
question
What ASP.Net framework makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices and is an ideal platform for building RESTful applications on the .NET Framework?
answer
ASP.NET Web API
question
What are some techniques developers can use when coding Cloud Services to improve scalability, resiliency, and throughput?
answer
* Implement Retry Logic * Assume that all services, network calls, and dependent resources are potentially unreliable and susceptible to transient and ongoing failure modes. Implement retry policies and backoff policies, and log all error/failure events. (A minimal retry sketch follows below.)
* Don't Use Direct Threads * Do not directly create threads for scheduling work; instead leverage a scheduling and concurrency framework such as the .NET Task Parallel Library. Threads are relatively heavyweight objects and are nontrivial to create and dispose. Schedulers that work against a shared thread pool can schedule and execute work more efficiently.
* Optimize DTOs * Optimize data transfer objects (DTOs) for serialization and network transmission. Given the highly distributed nature of Azure applications, scalability is bounded by how efficiently individual components of the system can communicate over the network. Any data passed over the network for communication or storage should use JSON text serialization or a more efficient binary format, with appropriate hints to minimize the amount of metadata transferred over the network.
* Use Lightweight Frameworks (ASP.NET Web API) * Where practical, leverage lightweight frameworks for communicating between components and services. Many traditional technologies in the .NET stack provide a rich feature set that might not be aligned with the distributed nature of Azure. Components that provide a high degree of abstraction between intent and execution often carry a high performance cost. For example, try using ASP.NET Web API instead of WCF for implementing web services.
* Use Compression * Reduce the amount of data delivered out of the data center by enabling HTTP compression in IIS for outbound data.
* Affinitize Connections * Affinitize connections between tiers to reduce the chattiness and context switching of connections.
* Use Blob Storage or CDN * To reduce load on the application, use blob storage to serve larger static content (> 100 kB), and use the Content Delivery Network (CDN) via blob storage to serve static content such as images or CSS.
* Avoid DB for Session Data * Avoid using SQL Database for session data. Instead, use a distributed cache or cookies.
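A minimal retry-with-backoff sketch using the Task Parallel Library. The helper name, attempt count, and delay formula are illustrative; production code would more typically use a library such as the Transient Fault Handling Application Block:

    using System;
    using System.Threading.Tasks;

    static class RetryHelper   // illustrative helper, not a library type
    {
        public static async Task<T> ExecuteWithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 4)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return await operation();
                }
                catch (Exception ex)
                {
                    if (attempt >= maxAttempts) throw;
                    // Log the failure, then back off exponentially before the next attempt.
                    Console.WriteLine("Attempt {0} failed: {1}", attempt, ex.Message);
                    await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                }
            }
        }
    }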
question
What guidelines can be used with Azure Storage to improve scalability, resiliency, and throughput?
answer
* Use Multiple Storage Accounts * Leverage multiple storage accounts for greater scalability, either for increased size (> 500 TB) or for more throughput (> 20,000 operations per second). Ensure that your application code can be configured to use multiple storage accounts, with appropriate partitioning functions to route work to the storage accounts.
* Optimize Partitions * Carefully select partitioning functions for table storage to enable the desired scale in terms of insert and query performance. Look to a time-based partitioning approach for telemetry data, with composite keys based on row data for non-temporal data. Keep partitions in an appropriate range for optimal performance; very small partitions limit the ability to perform batch operations (including querying), while very large partitions are expensive to query (and can bottleneck on high-volume concurrent inserts). Partitions can be as small as a single entity; this provides highly optimized performance for pure lookup workloads such as shopping cart management.
* Batch Operations * When possible, batch operations into storage. Table writes should be batched, typically through use of the SaveChanges method in the .NET client API: insert a series of rows into a table, and then commit the changes in a single batch with the SaveChanges method. Updates to blob storage should also be committed in batch, using the PutBlockList method. (A batched table insert is sketched below.)
* Short Column Names * Choose short column names for table properties, as the metadata (property names) is stored in-band. The column names also count toward the maximum row size of 1 MB. Excessively long property names waste system resources.
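A sketch of a batched table insert, here using the TableBatchOperation type from the WindowsAzure.Storage SDK (the SaveChanges approach above belongs to the older TableServiceContext client); the connection string, table name, and entities collection are illustrative:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    var table = CloudStorageAccount.Parse(connectionString)
                                   .CreateCloudTableClient()
                                   .GetTableReference("telemetry");   // illustrative table name

    var batch = new TableBatchOperation();
    foreach (var entity in entities)   // entities: ITableEntity rows sharing one PartitionKey
        batch.Insert(entity);          // a single batch supports up to 100 operations
    table.ExecuteBatch(batch);         // one round-trip instead of one per row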
question
Use of a central sequence generation facility should be avoided for any nontrivial aspect of the application, due to availability and scalability constraints. Many applications leverage sequences to provide globally unique identifiers, using a central tracking mechanism to increment the sequence on demand. This architecture creates a global contention point and bottleneck that every component of the system would need to interact with. This bottleneck is especially problematic for potentially disconnected mobile applications. What could be used instead of a central sequence generation facility?
answer
Applications should leverage functions which can generate globally unique identifiers, such as GUIDs, in a distributed system. By design, GUIDs are not sequential, so they can cause fragmentation when used as a CLUSTERED INDEX in a large table. To reduce the fragmentation impact of GUIDs in a large data model, shard the database, keeping individual shards relatively small. This allows SQL Database to defragment your databases automatically during replica failover.
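For example, generating the identifier locally removes the central bottleneck entirely:

    // Generated on any node with no coordination required. GUIDs are not sequential,
    // so avoid using one as the sole CLUSTERED INDEX key on a large unsharded table
    // (see the fragmentation note above).
    Guid orderId = Guid.NewGuid();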
question
To make more efficient use of a SQL Database, what should be done with queries and data inserts against a SQL Database?
answer
Batched inserts AND avoiding chatty interfaces - reduce the number of round-trips to the database required to perform a query or set of operations.
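One way to batch inserts from .NET is SqlBulkCopy (the connection string, destination table, and ordersTable DataTable are illustrative):

    using System.Data;
    using System.Data.SqlClient;

    // One round-trip moves the whole DataTable instead of issuing one INSERT per row.
    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.Orders";
        bulk.WriteToServer(ordersTable);   // ordersTable: a populated DataTable (illustrative)
    }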
question
How does Big Compute differ from Big Data?
answer
Big Compute typically involves applications that rely on CPU power and memory, such as engineering simulations, financial risk modeling, and digital rendering. The clusters that power a Big Compute solution might include computers with specialized multicore processors to perform raw computation, and specialized, high-speed networking hardware to connect the computers.

Typical Big Compute examples:
Financial risk modeling
Image rendering and image processing
Media encoding and transcoding - Azure Media uses Azure Batch to encode hundreds of videos
Monte Carlo simulations
Software testing
Oil and gas reservoir modeling
Engineering design and analysis, such as computational fluid dynamics
Physical simulations such as car crashes and nuclear reactions
Weather forecasting

In contrast, Big Data solves data analysis problems that involve an amount of data that cannot be managed by a single computer or database management system, such as large volumes of web logs or other business intelligence data. Big Data tends to rely more on disk capacity and I/O performance than on CPU power, and a Big Data solution often includes specialized tools such as Apache Hadoop to manage the cluster and partition the data.
question
What Azure compute services are used for Big Compute?
answer
* Batch *
Batch is discussed in greater detail following this question.
* Cloud Services *
- Can run Big Compute applications in worker role instances
- Enable scalable, reliable applications with low administrative overhead, running in a platform as a service (PaaS) model
- May require additional tools to integrate with existing on-premises cluster solutions
- Continuously monitor virtual machine health, and move a virtual machine to a new host in case of failures
* Virtual Machines *
- Provide compute infrastructure as a service (IaaS) using Microsoft Hyper-V technology
- Enable you to flexibly provision and manage persistent virtual machines from standard Windows Server or Linux images, or images and data disks you supply
- Extend on-premises compute cluster tools and applications readily to the cloud
question
Why not use my own scripting to create VMs instead of using the Batch compute service?
answer
Creating your own solution would mean also dealing with all of the following administrative work:
Create and manage VMs
Install task applications on each VM
Manage and authenticate users
Start the tasks
Move task input/output
Deal with failed/frozen tasks
Queue tasks
Scale up/down as needed
Clean up after tasks are complete
question
What MS pack enables you to deploy an on-premises Windows compute cluster and dynamically extend to Azure when you need additional capacity for Big Compute?
answer
Microsoft HPC Pack You can also use HPC Pack to deploy a cluster entirely on Azure and connect to it over a VPN or the Internet. The Windows HPC solution combines a comprehensive set of *deployment, administration, job scheduling, and monitoring* tools for your Windows HPC cluster environment, and a flexible platform for developing and running HPC applications.
question
What compute instances have been designed for compute intensive workloads such as high-performance compute (HPC) and parallel Message Passing Interface (MPI) applications because they have high speed, multicore CPUs and large amounts of memory?
answer
Compute sizes A8 and A9 What's unique about A8 and A9 instances is the backend network that supports Remote Direct Memory Access (RDMA) communication between compute nodes. We have virtualized RDMA through Hyper-V with near bare metal performance of less than 3 microsecond latency and greater than 3.5 gigabytes per second bandwidth. RDMA is accomplished on a second network adapter to connect to a low-latency and high-throughput application network in Azure. This network is used exclusively for MPI process communication. For some workloads, these capabilities enable MPI application performance in the cloud that is comparable to performance in on-premises clusters with dedicated application networks. MPI, or Message Passing Interface, is a standard programming model used for developing parallel applications. MPI is language-independent, and our customers run applications written in Fortran, C, and .NET. MPI is used in engineering applications to model stress in building or part designs, simulate impact and falls, and other processes to build and manufacture better products. MPI is also at the heart of sophisticated weather modeling.
question
What provides job scheduling and auto-scaling of compute resources as a platform service, making it easy to run large-scale parallel and high-performance computing (HPC) applications in the cloud?
answer
Azure Batch Here's why it is needed as a compute: Batch processing began with mainframe computers and punch cards. Today it still plays a central role in business, engineering, science, and other pursuits requiring running lots of automated tasks—processing bills and payroll, calculating portfolio risk, designing new products, rendering animated films, testing software, searching for energy, predicting the weather, and finding new cures for disease. Previously only a few had access to the computing power for these scenarios. With Azure, that power is available to you when you need it, without a massive capital investment.
question
For HPC, Azure Batch can scale to how many virtual machines?
answer
1000's of VMs Batch works well with intrinsically parallel (sometimes called "embarrassingly parallel") applications or workloads, which lend themselves to running as parallel tasks on multiple computers, such as the compute VMs managed by the Batch service.
question
What are the different performance levels for Batch?
answer
Note: These performance levels are also the pricing tiers.
General Purpose Instances (A0 to A4) - $0.0075/hr
Memory Intensive Instances (A5 to A7) - $0.0175/hr
Compute Intensive Instances (A8 to A9) - $0.03/hr
SSD Based Instances (SSD storage, 60% faster CPU) (D1 to D4, D11 to D14) - $0.0175/hr
question
For the SSD storage on D Series of VM's, does the storage persist between VM rebuilds/moves/failures?
answer
No, it should be used for temporary storage. Use Data Disks attached to storage for persistent storage.
question
The REST-based Batch APIs support what 2 developer scenarios to help you configure and run your batch workloads with the Batch service?
answer
Distribute computations as work items (Batch API) AND publish and run applications with the Batch service (Batch Apps API).

Distribute computations as work items (Batch API) - Use the Batch APIs to create and manage a flexible pool of compute VMs and specify work items that run on them.

Publish and run applications with the Batch service (Batch Apps API) - The Batch Apps APIs provide a higher level of abstraction and a job execution pipeline hosted by the Batch service. With Batch Apps you can create a batch workload as a service in the cloud from an application that runs today on client workstations or a compute cluster. Batch Apps helps you wrap existing binaries and executables and publish them to run on pooled VMs that the Batch service creates and manages in the background. The Batch Apps framework handles the movement of input and output files, job execution, job management, and data persistence. Batch Apps also allows you to model tasks for how data is partitioned and for multiple steps in a job.
question
What is an Azure VM that the Batch service dedicates to running a specific workload (task) for your application - such as an executable file (.exe), or in the case of Batch Apps, one or more programs from an application image?
answer
Task Virtual Machine (TVM) Unlike a typical Azure VM, you don't provision or manage a TVM directly; instead, the Batch service creates and manages TVMs as a "pool" of similarly configured compute resources. If you use the Batch APIs, you can create a "pool" directly, or configure the Batch service to create one automatically when you specify the work to be done. If you use the Batch Apps APIs, a pool gets created automatically when you run your cloud-enabled Batch application.
question
What are the attributes of a TVM pool for Batch?
answer
A size for the TVMs
The operating system that runs on the TVMs (currently only W2012 or W2008 SP1)
The maximum number of TVMs
A scaling policy for the pool - a formula based on current workload and resource usage that dynamically adjusts the number of TVMs that process tasks
Whether firewall ports are enabled on the TVMs to allow intra-pool communication
A certificate that is installed on the TVMs - for example, to authenticate access to a storage account
A start task for TVMs, for one-time operations like installing applications or downloading data used by tasks (see the sketch below)

Additional Notes:
A pool can only be used by the Batch account in which it was created. A Batch account can have more than one pool.
Every TVM that is added to a pool is assigned a unique name and an associated IP address.
When a TVM is removed from a pool, it loses the changes that were made to the operating system, its local files, its name, and its IP address. When a TVM leaves a pool, its lifetime is over.
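A minimal pool-creation sketch, assuming the Microsoft.Azure.Batch .NET client (parameter names vary across SDK versions; the account values, pool ID, size, OS family, target count, and start-task command are all illustrative):

    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Auth;

    // batchUrl, accountName, and accountKey come from your Batch account settings (illustrative).
    var credentials = new BatchSharedKeyCredentials(batchUrl, accountName, accountKey);
    using (BatchClient client = BatchClient.Open(credentials))
    {
        CloudPool pool = client.PoolOperations.CreatePool(
            poolId: "render-pool",
            virtualMachineSize: "small",
            cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4"),
            targetDedicated: 10);                                            // target TVM count
        pool.StartTask = new StartTask { CommandLine = "cmd /c setup.cmd" }; // one-time per-TVM setup
        pool.Commit();
    }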
question
You can assign a priority using the Batch API. Is priority assigned to the work item, job, or task?
answer
Work Item
Each job under the work item is created with this priority, which determines the order of job scheduling within an account. The priority values range from -1000 to 1000, with -1000 being the lowest priority and 1000 being the highest. You can update the priority of a job by using the UpdateJob operation.
Within the same Batch account, higher priority jobs have scheduling precedence over lower priority jobs. A job with a higher priority value in one account does not have scheduling precedence over a job with a lower priority value in a different account.
Job scheduling on different pools is independent. Across different pools, it is not guaranteed that a higher priority job is scheduled first if its associated pool is short of idle Task Virtual Machines (TVMs). On the same pool, jobs with the same priority level have an equal chance of being scheduled.
question
In Batch, what are work items, jobs, and tasks?
answer
A work item is a template that specifies how an application will run on Task Virtual Machines (TVMs) in a pool. A job is a scheduled instance of a work item and might run once or recur. A job consists of a collection of tasks.

Here's the hierarchy:
1 Work Item
  1 Job (possibly recurring)
    1 to many Tasks

Example:
Work Item A
  Job 1 (Daily)
    Job Manager Task
    Task 1
    Task 2
question
In addition to tasks that you can define to perform computation on a TVM, what are 2 special tasks provided by the Batch service?
answer
* Start task * * Job manager task *
question
In Batch for each TVM in a pool, you can configure the operating system, install software, and start background processes with what type of task?
answer
Start Task
The start task runs every time a Task Virtual Machine (TVM) starts, for as long as the TVM remains in the pool. Start tasks are defined when a pool (collection) of TVMs is defined.
question
What type of task is started before all other Job tasks (not including the Start Task)?
answer
Job manager task It has the following characteristics: It is automatically created by the Batch service when the job is created. Its associated TVM is the last to be removed from a pool when the pool is being downsized. It is given the highest priority when it needs to be restarted. If an idle TVM is not available, the Batch service may terminate one of the running tasks in the pool to make room for it to run. Its termination can be tied to the termination of all tasks in the job. Note: A job manager task in a job does not have priority over tasks in other jobs. Across jobs, only job level priorities are observed.
question
The scaling policy for the pool uses a formula based on what 3 types of metrics?
answer
Task metrics - based on the status of tasks, such as Active, Pending, and Completed.
Time metrics - based on statistics collected every five minutes over the specified number of hours.
Resource metrics - based on CPU usage, bandwidth usage, memory usage, and the number of TVMs.
question
The Batch service exposes a portion of the file system on a TVM as the root directory. The root directory of the TVM is available to a task through the WATASK_TVM_ROOT_DIR environment variable. What are the 3 sub-directories in the root directory?
answer
Tasks This location is where all of the files are stored that belong to tasks that run on the TVM. Shared This location is a shared directory for all of the tasks under the account. Start This location is used by a start task as its working directory. All of the files that are downloaded by the Batch service to launch the start task are also stored under this directory. Note: When a TVM is removed from the pool, all of the files that are stored on the TVM are removed.
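For example, a task can locate its root directory at run time like this (the sub-directory name casing is assumed from the layout above):

    // The Batch service sets this variable on the TVM before the task runs.
    string rootDir = Environment.GetEnvironmentVariable("WATASK_TVM_ROOT_DIR");
    string sharedDir = System.IO.Path.Combine(rootDir, "shared");   // per-account shared directory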
question
What are the 2 API's available in Batch?
answer
Batch API - If you develop an application with the lower level Batch APIs, you need to programmatically define all the work items, jobs, and tasks that the Batch service runs and configure the TVM pools that run the tasks.
Batch Apps API - If you integrate a client application by using the Batch Apps APIs and tools, you can use components that automatically split a job into tasks, process the tasks, and merge the results of individual tasks into the final job results. When you submit the workload to the Batch service, the Batch Apps framework manages the jobs and executes the tasks on the underlying compute resources.
Note: The Azure Batch REST APIs can be accessed from within a service running in Azure, or directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response.
question
What is a typical workflow to distribute work items to pool VM's for Batch API?
answer
1. Upload input files (such as source data or images) required for a job to an Azure storage account. These files must be in the storage account so that the Batch service can access them. The Batch service loads them onto a TVM when the task runs. 2. Upload the dependent binary files to the storage account. The binary files include the program that is run by the task and the dependent assemblies. These files must also be accessed from storage and are loaded onto the TVM. 3. Create a pool of TVMs, specifying the size of the TVMs in the pool, the OS they run, and other properties. When a task runs, it is assigned a TVM from this pool. 4. Create a work item. A job will be automatically created when you create a work item. A work item enables you to manage a job of tasks. 5. Add tasks to the job. Each task uses the program that you uploaded to process information from a file you uploaded. 6. Run the application and monitor the results of the output.
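Steps 4-6 might look like the following with the Microsoft.Azure.Batch .NET client (in the GA SDK the work-item layer is folded into jobs; "client" is the BatchClient from the pool sketch earlier, and the IDs and command lines are illustrative):

    using Microsoft.Azure.Batch;

    // Step 4: create the job, bound to an existing pool.
    CloudJob job = client.JobOperations.CreateJob(
        "process-images", new PoolInformation { PoolId = "render-pool" });
    job.Commit();

    // Step 5: add tasks; each runs the uploaded program against one uploaded input file.
    for (int i = 0; i < 10; i++)
    {
        var task = new CloudTask("task" + i, "cmd /c myapp.exe input" + i + ".dat");
        client.JobOperations.AddTask("process-images", task);
    }

    // Step 6: monitor task status, e.g. via client.JobOperations.ListTasks("process-images").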
question
What is a basic workflow to publish an application by using the Batch Apps API and then submit jobs to the application enabled by Batch?
answer
1. Prepare an application image - a zip file of your existing application executables and any support files they need. These might be the same executables you run in a traditional server farm or cluster. 2. Create a zip file of the cloud assembly that will invoke and dispatch your workloads to the Batch service. This contains two components that are available via the SDK: a. Job splitter - breaks a job down into tasks that can be processed independently. For example, in an animation scenario, the job splitter would take a movie rendering job and divide it into individual frames. b. Task processor - invokes the application executable for a given task. For example, in an animation scenario, the task processor would invoke a rendering program to render the single frame specified by the task. 3. Use the Batch Apps APIs or developer tools to upload the zip files prepared in the previous two steps to an Azure storage account. These files must be in the storage account so that the Batch service can access them. This is typically done once per application, by a service administrator. 4. Provide a way to submit jobs to the enabled application service in Azure. This might be a plugin in your application UI, a web portal, or an unattended service as part of your backend system. There are samples available with the SDK to demonstrate various options. To run a job: a. Upload the input files (such as source data or images) specific to the user's job. These files must be in the storage account so that the Batch service can access them. b. Submit a job with the required parameters and list of files. c. Monitor job progress by using the APIs or the Batch Apps portal. d. Download outputs.
question
What can be used in Windows Azure to offload resource-hungry operations from the web roles that handle user interaction?
answer
Worker role These worker roles can perform tasks asynchronously when the web roles do not require the output from the worker role operations to be immediately available. By using worker roles to handle storage interactions in your application, and queues to deliver storage insert, update, and delete requests to the worker role, you can implement load leveling. This is particularly important in the Windows Azure environment because both Windows Azure storage and SQL Database can throttle requests when the volume of requests gets too high. If concurrency is an issue, such that exclusive access is required for a storage resource, you may be required to manage concurrent access in the design or just use a single worker role.
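A minimal load-leveling sketch with the WindowsAzure.Storage queue client (the connection string, queue name, serializedRequest payload, and ProcessRequest helper are illustrative):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    var queue = CloudStorageAccount.Parse(connectionString)
                                   .CreateCloudQueueClient()
                                   .GetQueueReference("storage-requests");
    queue.CreateIfNotExists();

    // Web role: enqueue the request and return to the user immediately.
    queue.AddMessage(new CloudQueueMessage(serializedRequest));   // serializedRequest: string payload

    // Worker role: drain the queue at its own pace, leveling the load on storage/SQL Database.
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
        ProcessRequest(msg.AsString);   // illustrative helper that performs the storage work
        queue.DeleteMessage(msg);
    }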
question
What algorithm may provide a way to parallelize the calculations across multiple worker role instances?
answer
MapReduce MapReduce is a programming model that enables you to parallelize operations on a large dataset.
question
The delayed write pattern is particularly useful when the tasks that must be carried out can run as background processes, and you want to free the application's UI for other tasks as quickly as possible. However, it does mean that you cannot return the result of the background process to the user within the current request. For example, if you use the delayed write pattern to queue an order placed by a user, you will not be able to include the order number generated by the background process in the page you send back. Because queues are the natural way to communicate between the web and worker roles in a Windows Azure application, it's tempting to always consider using them for an operation such as saving data collected in the UI. The UI code can write the data to a queue and then continue to serve other users without needing to wait for operations on the data to be completed. What is an alternative to writing to a queue that can be just as fast?
answer
Writing directly to blob storage. Writing to a queue takes approximately the same time as writing to blob storage, so there is no additional overhead for the web role to save directly to blob storage when using the delayed write pattern. In the order example, this approach also allows the order number to be returned to the user, so it is better to write the order directly to blob storage and let the worker role complete the order by reading from blob storage. The other factor to consider is cost: the key difference between the two approaches is in the number of storage transactions that may be required, and the Service Bus has additional costs. Read the complete scenario here: http://msdn.microsoft.com/en-us/library/hh534484.aspx
question
How can 3rd party software be installed to persist in Cloud Services?
answer
1) Create a command script (e.g., startup.cmd or startup.ps1) that is included with the Visual Studio project.
2) In Visual Studio, set the script's "Copy to Output Directory" property to "Copy always" to ensure the script is included inside your package when it is built, even though it is not used within the project.
3) Add a Startup Task to the ServiceDefinition.csdef file that will be called when the service starts up (see the sketch below).

Here are some situations where a startup task cannot be used:
1. Installation that cannot be scripted
2. Installation that requires user interaction
3. Installation that takes a very long time to complete

For more details: https://www.simple-talk.com/cloud/platform-as-a-service/installing-third-party-software-on-windows-azure-%E2%80%93-what-are-the-options/
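The ServiceDefinition.csdef entry for step 3 might look like this (role and script names are illustrative):

    <WebRole name="WebRole1">
      <Startup>
        <!-- taskType="simple" runs to completion before the role starts;
             executionContext="elevated" grants the admin rights needed for installs -->
        <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
    </WebRole>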
question
What are the 3 different ways for web roles to communicate with worker roles?
answer
* Direct Communication *
Input Endpoints - A public endpoint that goes through the load balancer and is accessible to the public.
Internal Endpoints - Allow role instances to communicate with a particular role instance in the same cloud service (example below).
InstanceInput Endpoints - A public input endpoint with a range of ports that tie to a specific instance of a role.
* Shared Storage *
Communicate by storing data in storage, cache, or a database.
* Messaging *
Communicate using the Service Bus.
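For direct communication, an internal endpoint is declared in ServiceDefinition.csdef, for example (role name, endpoint name, and port are illustrative):

    <WorkerRole name="WorkerRole1" vmsize="Small">
      <Endpoints>
        <!-- Reachable only by other role instances within the same cloud service -->
        <InternalEndpoint name="NotificationService" protocol="tcp" port="12000" />
      </Endpoints>
    </WorkerRole>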
question
How can developers test the performance of their websites?
answer
Use http://www.webpagetest.org/ to determine areas that can be improved.
question
When should I use websites versus web roles in Cloud Services?
answer
Advantages of Web roles:
- RDP and install any MSI, allowing more flexibility
- Created within VMs, with access to IIS via RDP
- Can attach to file storage (however, websites can still upload files to blob storage)
- Can be WITHIN a Virtual Network so web and worker role traffic doesn't go through firewalls (note that both can connect to a VNet)
- Scales to more instances than Websites
- Can install 3rd party software that persists
- Can use non-standard ports
- Can use Report Viewer
- Offers a better approach for multi-tier applications, with direct communication between web roles and worker roles
- Because deployment slots do not share resources (unlike Websites), testing in the Staging slot will not affect performance in the Production slot

Advantages of Websites:
- Scale up machine size without redeploying.
- Web server instances share content and configuration, which means you don't have to redeploy or reconfigure as you scale.
- Near-instant deployment, because new Websites instances are created in VMs that already exist.
- WebJobs allow .cmd, .bat, .exe, .ps1, .sh, .php, .py, and .js files to be executed continuously or on a schedule.
- You don't have to set up a separate project in your Visual Studio solution to deploy to Azure - you can simply deploy a normal web app solution as-is.
- Can deploy with Git and FTP (Cloud Services can't, but VMs can).
- You only need one instance to get the SLA, rather than 2 instances for Web Roles.
- You get free, valid HTTPS out of the box with the azurewebsites.net domain.
- Easily create a website from the Gallery to use apps like Joomla, WordPress, Drupal, nopCommerce, and many more.
- Because deployment slots share resources (unlike Web Roles), having additional slots does not cost more.

CONCLUSIONS: Websites is adding new features rapidly. If in doubt about which one to use, choose Websites, because it's easy to change over later if you hit a roadblock.

For a complete comparison, see these articles:
http://azure.microsoft.com/en-us/documentation/articles/choose-web-site-cloud-service-vm/
http://robdmoore.id.au/blog/2012/06/09/windows-azure-web-sites-vs-web-roles/
question
Do websites or cloud services or both use the same resources for deployment slots?
answer
Websites deployment slots use the same resources, because a deployment slot must be in the same Web Hosting Plan (WHP) as the production slot. Cloud Services have only Production and Staging deployment slots, which use different resources. This is why it's important to delete the Staging deployment after swapping with Production, to lower cost.