
Public Folders and Exchange Online


Update 6/5/2013: We have updated the blog post to add the link to the first TechNet document on public folder Hybrid scenarios.

“You mean… this is really happening?”

Last November we gave you a teaser about public folders in the new Exchange. We explained how public folders were given a lot of attention to bring their architecture up-to-date, and as a result of this work they would take advantage of the other excellent engineering work put into Exchange mailbox databases over the years. Many of you have given the new public folders a try in Exchange Online and Exchange Server 2013 in your on-premises environments. At this time we would like to give you a bit more detail surrounding the Exchange Online public folder feature set so you can start planning what makes sense for your environment. So, yes, we really meant our beloved public folders were coming to Exchange Online!

How do we move our public folders to Exchange Online?

We are still putting the finishing touches on some of our documentation for migrating from on-premises Exchange Server environments to Exchange Online. We know there is a lot of interest in this documentation, and we are making sure it is as easy to follow as possible. We will update this article with links to the content as more documentation becomes available on TechNet. The following two articles are available now.

Important

Before we cover the migration process at a high level (and very deeply in those TechNet articles!), we want to make sure everyone understands the following important points.

  • Public Folder migrations to Exchange Online should not be performed unless all of your users are located in Exchange Online, and/or all of your on-premises users are on Exchange Server 2013.

  • Public folder migrations are a cutover migration. You cannot have some public folders on-premises and some public folders in Exchange Online. There will be a small window of public folder access downtime required when the migration is completed and all public folder connections are moved from on-premises to Exchange Online.

  • Public folder migrations are entirely PowerShell based at this time. Once the migration has completed you can then perform your public folder management in the tool of your choice, EAC or PowerShell.

So what are the steps I can expect to go through?

In the TechNet content we walk you through exactly how to use PowerShell and some scripts provided by the product group to help automate the analysis and content location mapping in Exchange 2013 or Exchange Online. The migration process is similar whether you are doing an on-premises to on-premises migration, or an on-premises to Exchange Online migration with the latter having a couple more twists. Both scenarios will include a few major steps you will go through to migrate your legacy public folder infrastructure. Again, the following section is meant to be an overview and not a complete rendering of what the more detailed step-by-step TechNet documentation contains. Consider this section an appetizer to get you thinking about your migration and what potential caveats may or may not affect you. The information below is tailored more to an Exchange Online migration, but our on-premises customers will also be facing many of the same steps and considerations.

Prepare Your Environment

  • Are my on-premises servers at the necessary patch levels?
    • Exchange 2007 SP3 RU10 or later
    • Exchange 2010 SP3 or later
    • Exchange 2013 RTM CU1 or later
      • The CU1 released on April 2nd, 2013 is required; because no Service Pack has been released for Exchange 2013 at this time, it is referred to as RTM CU1.
  • Are my Windows Outlook users using client versions at the necessary patch levels?
    • Outlook 2007, 12.0.6665.5000 or later
    • Outlook 2010, 14.0.6126.5000 or later
    • Outlook 2013, 15.0.4420.1017 or later
  • Are all on-premises users on Exchange Server 2013, or have they been moved to Exchange Online?

Analyze Your Current Public Folders and Content

(Size limits pertain to Exchange Online)

  • What does my current public folder infrastructure look like?
    • Who has access to what?
    • What is my total content size?
      • Is the total public folder content on Exchange 2007/2010 over 950 GB when Get-PublicFolderStatistics is run? (“Why” is discussed later)
      • Is the total public folder content on Exchange 2013 over 1.25 TB when Get-PublicFolderStatistics is run?
    • Is any single public folder over 15GB that we should trim down first? (“Why” is discussed later; a sample sizing script follows this list)
  • What will my public folder mailbox layout be?
    • Can my content fit within the allowed public folder mailboxes and their quotas?
    • What public folders will go into what public folder mailboxes?
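
To make the size questions above concrete, here is a minimal sketch of how you might sum up legacy public folder content and flag oversized folders with Get-PublicFolderStatistics. This is not one of the official migration scripts, and the exact type of TotalItemSize (and how it is best converted to bytes) can differ between Exchange versions and remote/local shells, so treat the parsing as an assumption to verify in your environment.

```powershell
# Rough pre-migration sizing sketch. Assumption: run from the Exchange Management
# Shell against the legacy public folder server; TotalItemSize handling may need
# adjusting depending on Exchange version.
$stats = Get-PublicFolderStatistics -ResultSize Unlimited

# Sum all public folder content and report it in GB
$totalBytes = ($stats | ForEach-Object { $_.TotalItemSize.Value.ToBytes() } |
    Measure-Object -Sum).Sum
"Total public folder content: {0:N2} GB" -f ($totalBytes / 1GB)

# Flag any single folder (child folders are counted separately) over the
# suggested 15 GB pre-migration limit
$stats | Where-Object { $_.TotalItemSize.Value.ToBytes() -gt 15GB } |
    Select-Object FolderPath, TotalItemSize
```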

Create the Initial Public Folder Mailboxes

  • Public folder mailboxes are created by the admin so your content has a place to live in Exchange Online. Customers with less than 25GB of content may only need a single public folder mailbox to start, but our scripts will help you determine your starting layout while backend automation will determine if you need more public folder mailboxes down the road. On-premises customers will utilize quota values that make sense for their own deployments.

Begin the Migration Request and Initial Data Sync

  • The initial copy of public folder content from on-premises to Exchange Online is performed. This may take a long time depending on how much content you have. There is no easy way to predict how long it will take as there are many variables to consider, but you can monitor the progress via PowerShell. Users will continue using the on-premises public folder infrastructure during this time, so there is no impact to the on-premises environment.

Perform Delta Syncs of Changed Content

  • These content delta syncs run by the admin help shorten the window of downtime for the finalization process by copying only data changed after the initial migration request copy was performed. Numerous delta syncs may be required in large environments with many public folder servers.

Lock On-premises Public Folders and Finalize the Migration Request

  • Access to the on-premises public folder environment is blocked and a final delta sync of changed data is performed. When this stage is completed your Exchange Online public folders will be ready for user access. The access block is required to prevent any content changes taking place on-premises just before your users' connections are transitioned to the Exchange Online public folder environment.

Validate the Exchange Online Public Folder Environment

  • Create new content and permission reports, and compare them to the reports created prior to the migration.
    • If the administrator is happy, the new Exchange Online public folders will then be unlocked for user access.
    • If the administrator feels the migration was not successful, a roll back to the on-premises public folder infrastructure is initiated. However, if any changes were made to Exchange Online public folders such as content, permissions, or folders created/deleted before the rollback is initiated, then those changes will not be replicated to the on-premises infrastructure.

Removal of legacy public folder content

  • The administrator will remove the public folder databases from the on-premises infrastructure.

Microsoft, what can I do/not do with these things in Exchange Online?

Now that we have given you an idea of what the migration process will be, let us talk about the feature itself. Starting with the new Office 365, customers of Exchange Online will be able to store, free of charge, approximately 1.25 terabytes of public folder data in the cloud. Yes, you read that right… over a terabyte. The way this works is your tenant will be allowed to create up to fifty (50) public folder mailboxes, each yielding a 25 GB quota. However, when operating in a hybrid environment, public folders can exist only on-premises or in Exchange Online.

Once you complete the migration process of public folders to Exchange Online, the on-premises public folder infrastructure will have its hierarchy locked to prevent user connections and its content frozen at that point in time. By locking the on-premises content we provide you with a way to rollback a migration from Exchange Online, if you deem it necessary. However, as mentioned before, a rollback can result in data loss as no changes made while using the Exchange Online public folder infrastructure are copied back on-premises.

We will support on-premises Exchange Server 2013 users accessing Exchange Online public folders. We will also support Exchange Online users accessing on-premises public folders if you choose to keep your public folder infrastructure local. The table below shows which users can access which public folder infrastructure. Please note that for a hybrid deployment, on-premises users must be on Exchange 2013 if you wish for them to access Exchange Online public folders. Also, it bears repeating that public folders can only exist in one location, on-premises or in Exchange Online. You cannot have two different public folder infrastructures in use at once.

Mailbox version       | 2007 On-Premises PF | 2010 On-Premises PF | 2013 On-Premises PF | Exchange Online PF
Exchange 2007         | Yes                 | Yes                 | No                  | No
Exchange 2010         | Yes                 | Yes                 | No                  | No
Exchange 2013         | Yes                 | Yes                 | Yes                 | Yes
New Exchange Online   | Yes                 | Yes                 | Yes                 | Yes

How is public folder management in Exchange Online performed?

When your public folder content migration is complete or you create public folders for the very first time, you will not have to worry about managing many aspects of public folders in Exchange Online. As you previously read, public folders in Exchange Server 2013 and Exchange Online are now stored within a new mailbox type in the mailbox database. Our on-premises customers will have to create public folder mailboxes, monitor their usage, create new public folder mailboxes when necessary, and split content to different public folder mailboxes as their content grows over time. In Exchange Online we will automatically perform the public folder mailbox management so you may focus your time managing the actual public folders and their content. If we were to peek behind the Exchange Online curtain, we would see two automated processes running at all times to make everything happen:

  1. Automatic public folder moves based on public folder mailbox quota usage
  2. Automatic public folder mailbox creation based on active hierarchy connection count

Let’s go through each one of them, shall we?

1. Automatic public folder moves based on public folder mailbox quota usage

This process actively monitors your public folder mailbox quota usage. Its goal is to ensure you do not inadvertently fill a public folder mailbox, which would stop it from being able to accept new content for any public folder within it.

When a public folder mailbox reaches the Issue Warning Quota value of 24.5 GB, this process is automatically triggered to redistribute where your public folders currently reside. This may result in Exchange Online simply moving some public folders from the nearly-filled public folder mailbox to another pre-existing public folder mailbox holding less content. However, if there are no public folder mailboxes with enough free space to move public folders into, Exchange Online will automatically create a new public folder mailbox and move some of your public folders into the newly created public folder mailbox. The end result will be all public folder mailboxes being below the Issue Warning Quota.

Public folder moves from one public folder mailbox to another are an online move process similar to normal mailbox moves. Because the move is performed online, your users may experience only a slight disruption in accessing one or more public folders during the completion phase of the move. Any mail destined for mail-enabled public folders being moved is temporarily queued and then delivered once the move request completes.

In case the curious amongst you are wondering, we do not currently prevent customers from lowering the public folder mailbox quota values, even though there is no reason you should do that. However, you are prevented from configuring quota values larger than 25 GB.

Let us take a moment to visualize this process, as a picture is worth a thousand words. In the first scenario below a customer currently has two public folder mailboxes, PFMBX-001 and PFMBX-002. PFMBX-001 contains three public folders while PFMBX-002 contains only one public folder. PFMBX-001 has gone over the IssueWarningQuota value of 24.5 GB and currently contains 24.6 GB of content. When the automatic split process runs in this environment it sees there is plenty of space available in PFMBX-002, and moves a public folder from PFMBX-001 into PFMBX-002. In this example, the final result is two public folder mailboxes with a similar amount of data in each of them. Depending on the size of your folders this process may move a single large public folder, or numerous small public folders. The example shows a single folder being moved.

Scenario 1: Auto split process shuffles public folders from one public folder mailbox to another one.

In a second scenario below, a customer has a single public folder mailbox, PFMBX-001 containing three public folders. PFMBX-001 has gone over the IssueWarningQuota value of 24.5 GB and contains 24.6 GB of content. When the split process runs in this environment it sees there are no other public folder mailboxes available to move public folders into. As a result, the process creates a new empty public folder mailbox, PFMBX-002, and moves some public folders into the new public folder mailbox; the final result is two public folder mailboxes with a similar amount of data in each of them. Again in this example we are showing a single public folder being moved, but the process may determine it has to move many smaller public folders.

Scenario 2: Auto split process must create a new empty public folder mailbox before moving a public folder.

One noteworthy limit in Exchange Online is that no single public folder can be over 25 GB in size, because the underlying public folder mailbox has a 25 GB quota. To give you an idea of how much data that is: 25 GB is roughly 350,000 items of 75 KB each, or 525,000 items of 50 KB each. In most cases this volume of data can easily be split amongst multiple public folders to avoid a single folder coming anywhere near the 25 GB limit.

Our migration documentation will also suggest that if you currently have a single public folder over 15 GB, you try to reduce that public folder's size to under 15 GB prior to the migration by deleting old content or splitting it into multiple smaller public folders. When we say a single public folder over 15 GB we mean exactly that, and it excludes any child folders. Any child folder of a parent folder is not considered part of the 15 GB content limit suggestion for these purposes because the child public folder may reside in a different public folder mailbox if necessary. The reason for this suggestion is two-fold. First, it helps prevent you from triggering the automated split process as soon as your migration takes place if you were to migrate very large public folders from on-premises. Second, content moved from Exchange 2007/2010 to Exchange Online may result in the reported space utilized by a single public folder increasing by 30%. The increase is due to a more accurate method used by Exchange Server 2013 to calculate space used within a mailbox database compared to earlier versions of Exchange Server. If you were to migrate a single massive public folder residing in on-premises Exchange Server 2007/2010 to Exchange Online, this space recalculation may push the single public folder over the 25 GB quota. We want to help you avoid this situation, as it would only be noticed once you were well into the data copy portion of the migration and would cost you time redoing the process all over again.

If you have a particular business requirement which does not allow you to reduce the size of this single massive public folder in one of the ways previously suggested, then we will recommend you retain your entire public folder infrastructure on-premises instead of moving it to Exchange Online as we cannot increase the public folder mailbox quota beyond 25 GB.

2. Automatic public folder mailbox creation based on active hierarchy connection count

The second automated process helps maintain an optimal user experience when accessing public folders in Exchange Online. Exchange Online will actively monitor how many hierarchy connections are being spread across all of your public folder mailboxes. If this value goes over a pre-determined number we will automatically create a new public folder mailbox. Creating the additional public folder mailbox will reduce the number of hierarchy connections accessing each public folder mailbox by scaling the user connections out across a larger number of public folder mailboxes. If you are a customer who has a small amount of public folder content in Exchange Online, yet an extremely large number of active users, then you may see the system create additional public folder mailboxes regardless of your content size.

Ready for another example? In this example we will use low values for explanatory purposes. Let us pretend in Exchange Online we did not want more than two hundred active hierarchy connections per public folder mailbox. The diagram below shows nine hundred users making nine hundred active hierarchy connections across four public folder mailboxes. This scenario will work out to approximately 225 active hierarchy connections per public folder mailbox as the Client Access Servers spread the hierarchy connections across all available public folder mailboxes in the customer’s environment. When Exchange Online monitoring determines the desired number of two hundred active hierarchy connections per public folder mailbox has been exceeded, PFMBX-005 is automatically created. Immediately after creating PFMBX-005, Exchange Online will force a hierarchy sync to PFMBX-005 ensuring it has the most up to date information available regarding public folder structure and permissions before allowing it to accept client hierarchy connections. The end result in this example is we now have five public folder mailboxes accepting nine hundred active hierarchy connections for an average of 180 connections per public folder mailbox, thus assuring all active users have the best interactive experience possible.

Scenario 3: Auto split process creates a new public folder mailbox to scale out active hierarchy connections.

Once you begin utilizing the Exchange Online public folder infrastructure we are confident this built-in automation will help our customers focus on doing what they do best, which is running their business. Let us take care of the infrastructure for you so you have more time to spend on your other projects.

Summary

In summary we are extremely excited to deliver public folders in the new Exchange Online to you, our customers. We believe you will find the migration process from on-premises to Exchange Online fairly straightforward and our backend automation will alleviate you from having to manage many aspects of the feature. We really hope you enjoy using the public folders with Exchange Online as much as we enjoyed creating them for you.

Special thanks to the entire Public Folder Feature Crew, Nino Bilic, Tim Heeney, Ross Smith IV and Andrea Fowler for contributing to and validating this data.

Brian Day
Senior Program Manager
Exchange Customer Experience


Ask the Perf Guy: Sizing Exchange 2013 Deployments


Since the release to manufacturing (RTM) of Exchange 2013, you have been waiting for our sizing and capacity planning guidance. This is the first official release of our guidance in this area, and updates to our TechNet content will follow in a future milestone.

As we continue to learn more from our own internal deployments of Exchange 2013, as well as from customer feedback, you will see further updates to our sizing and capacity planning guidance in two forms: changes to the numbers mentioned in this document, as well as further guidance on specific areas not covered here. Let us know what you think we are missing and we will do our best to respond with better information over time.

First, some context

Historically, the Exchange Server product group has used various sources of data to produce sizing guidance. Typically, this data would come from scale tests run early in the product development cycle, and we would then fine-tune that guidance with observations from production deployments closer to final release. Production deployments have included Exchange Dogfood (our internal pre-release deployment that hosts the Exchange team and various other groups at Microsoft), Microsoft IT’s corporate Exchange deployment, and various early adopter programs.

For Exchange 2013, our guidance is primarily based on observations from the Exchange Dogfood deployment. Dogfood hosts some of the most demanding Exchange users at Microsoft, with extreme messaging profiles and many client sessions per user across multiple client types. Many users in the Dogfood deployment send and receive more than 500 messages per day, and typically have multiple Outlook clients and multiple mobile devices simultaneously connected and active. This allows our guidance to be somewhat conservative, taking into account additional overhead from client types that we don’t regularly see in our internal deployments as well as client mixes that might be different from what's considered “normal” at Microsoft.

Does this mean that you should take this conservative guidance and adjust the recommendations such that you deploy less hardware? Absolutely not. One of the many things we have learned from operating our own very high-scale service is that availability and reliability are very dependent on having capacity available to deal with those unexpected peaks.

Sizing is both a science and an art form. Attempting to apply too much science to the process (trying to get too accurate) usually results in not having enough extra capacity available to deal with peaks, and in the end, results in a poor user experience and decreased system availability. On the other hand, there does need to be some science involved in the process, otherwise it’s very challenging to have a predictable and repeatable methodology for sizing deployments. We strive to achieve the right balance here.

Impact of the new architecture

From a sizing and performance perspective, there are a number of advantages with the new Exchange 2013 architecture. As many of you are aware, a couple of years ago we began recommending multi-role deployment for Exchange 2010 (combining the Mailbox, Hub Transport, and Client Access Server (CAS) roles on a single server) as a great way to take advantage of hardware resources on modern servers, as well as a way to simplify capacity planning and deployment. These same advantages apply to the Exchange 2013 Mailbox role as well. We like to think of the services running on the Mailbox role as providing a balanced utilization of resources rather than having a set of services on a role that are very disk intensive, and a set of services on another role that are very CPU intensive.

Another example to consider for the Mailbox role is cache effectiveness. Software developers use in-memory caching to prevent having to use higher-latency methods to retrieve data (like LDAP queries, RPCs, or disk reads). In the Exchange 2007/2010 architecture, processing for operations related to a particular user could occur on many servers throughout the topology. One CAS might be handling Outlook Web App for that user, while another (or more than one) CAS might be handling Exchange ActiveSync connections, and even more CAS might be processing Outlook Anywhere RPC proxy load for that same user. It’s even possible that the set of servers handling that load could be changing on a regular basis. Any data associated with that user stored in a cache would become useless (effectively a waste of memory) as soon as those connections moved to other servers. In the Exchange 2013 architecture, all workload processing for a given user occurs on the Mailbox server hosting the active copy of that user’s mailbox. Therefore, cache utilization is much more effective.

The new CAS role has some nice benefits as well. Given that the role is totally stateless from a user perspective, it becomes very easy to scale up and down as demands change by simply adding or removing servers from the topology. Compared to the CAS role in prior releases, hardware utilization is dramatically reduced meaning that fewer CAS role machines will be required. Additionally, it may make sense for many customers to consider a multi-role deployment in which CAS and Mailbox are co-located – this allows further simplification of capacity planning and deployment, and also increases the number of available CAS which has a positive effect on service availability. Look for a follow up post on the benefits of a multi-role deployment soon.

Start to finish, what’s the process?

Sizing an Exchange deployment has six major phases, and I will go through each of them in this post in some detail.

  1. You begin the process by making sure you fully understand the available guidance on this topic. If you are reading this post, that’s a great start. There may have been updates posted either here on the Exchange team blog, or over on TechNet. Make sure you take a look before proceeding.
  2. The second step is to gather any available data on the existing messaging deployment (if there is one) or estimate user profile requirements if this is a totally new solution.
  3. The third step is perhaps the most difficult. At this point, you need to figure out all of the requirements for the Exchange solution that might impact the sizing process. This can include decisions like the desired mailbox size (mailbox quota), service level objectives, number of sites, number of mailbox database copies, storage architecture, growth plans, deployment of 3rd party products or line-of-business applications, etc. Essentially, you need to understand any aspect of the design that could impact the number of servers, user count, and utilization of servers.
  4. Once you have collected all of the requirements, constraints, and user profile data, it’s time to calculate Exchange requirements. The easiest way to do this is with the calculator tool, but it can also be done manually as I will describe in this post. Clearly the calculator makes the process much easier, so if the calculator is available, use it!
  5. Once the Exchange requirements have been calculated, it’s time to consider various options that are available. For example, there may be a choice between scaling up (deploying fewer larger servers) and scaling out (deploying a larger number of smaller servers), and the options could have various implications on high availability, as well as the total number of hardware or software failures that the solution can sustain while remaining available to users. Another typical decision is around storage architecture, and this often comes down to cost. There are a range of costs and benefits to different storage choices, and the Exchange requirements can often be met by more than one of these options.
  6. The last step is to finalize the design. At this point, it’s time to document all of the decisions that were made, order some hardware, use Jetstress to validate that the storage requirements can be met, and perform any other necessary pre-production lab testing to ensure that the production rollout and implementation will go smoothly.

Gather requirements and user data

The primary input to all of the calculations that you will perform later is the average user profile of the deployment, where the user profile is defined as the sum of total messages sent and total messages received per-user, per-workday (on average). Many organizations have quite a bit of variability in user profiles. For example, a segment of users might be considered “Information Workers” and spend a good part of their day in their mailbox sending and reading mail, while another segment of users might be more focused on other tasks and use email infrequently. Sizing for these segments of users can be accomplished by either looking at the entire system using weighted averages, or by breaking up the sizing process to align with the various segments of users. In general it’s certainly easier to size the whole system as a unit, but there may be specific requirements (like the use of certain 3rd party tools or devices) which will significantly impact the sizing calculation for one or more of the user segments, and it can be very difficult to apply sizing factors to a user segment while attempting to size the entire solution as a unit.

The obvious question in your mind is how to go get this user profile information. If you are starting with an existing Exchange deployment, there are a number of options that can be used, assuming that you aren’t the elusive Exchange admin who actually tracks statistics like this on an ongoing basis. If you are using Exchange 2007 or earlier, you can utilize the Exchange Profile Analyzer (EPA) tool, which will provide overall user profile statistics for your Exchange organization as well as detailed per-user statistics if required. If you are on Exchange 2010, the EPA tool is not an option for you. One potential option is to evaluate message traffic using performance counters to come up with user profile averages on a per-server basis. This can be done by monitoring the MSExchangeIS\Messages Submitted/sec and MSExchangeIS\Messages Delivered/sec counters during peak average periods and extrapolating the recorded data to represent daily per-user averages. I will cover this methodology in a future blog post, as it will take a fair amount of explanation. Another option is to use message tracking logs to generate these statistics. This could be done via some crafty custom PowerShell scripting, or you could look for scripts that attempt to do this work for you already. One of our own consultants points to an example on his blog.
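
As a rough illustration of the performance counter approach (the detailed methodology is promised for a later post), the sketch below samples the two Information Store counters on a mailbox server and extrapolates a per-user daily average. The server name, mailbox count, and working-day length are placeholder assumptions, and this simple extrapolation is far cruder than the full method.

```powershell
# Hypothetical example: sample submission/delivery rates during a peak average
# period on server MBX01 and extrapolate to a per-user daily message profile.
$counters = '\MSExchangeIS\Messages Submitted/sec',
            '\MSExchangeIS\Messages Delivered/sec'

# 60 samples, one per minute
$samples = Get-Counter -ComputerName 'MBX01' -Counter $counters `
    -SampleInterval 60 -MaxSamples 60

# Average of each counter, summed (averaging the mixed samples and doubling
# yields submitted/sec + delivered/sec)
$avgPerSec = ($samples.CounterSamples |
    Measure-Object -Property CookedValue -Average).Average * 2

$usersOnServer = 5000    # assumption: mailboxes hosted on MBX01
$workdayHours  = 10      # assumption: hours of activity per workday

$dailyPerUser = ($avgPerSec * 3600 * $workdayHours) / $usersOnServer
"Approximate messages sent + received per user per day: {0:N0}" -f $dailyPerUser
```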

Typical user profiles range from 50-500 messages per-user/per-day, and we provide guidance for those profiles. When in doubt, round up.


The other important piece of profile information for sizing is the average message size seen in the deployment. This can be obtained from EPA, or from the other mentioned methods (via transport performance counters, or via message tracking logs). Within Microsoft, we typically see average message sizes of around 75KB, but we certainly have worked with customers that have much higher average message sizes. This can vary greatly by industry, and by region.

Start with the Mailbox servers

Just as we recommended for Exchange 2010, the right way to start with sizing calculations for Exchange 2013 is with the Mailbox role. In fact, those of you who have sized deployments for Exchange 2010 will find many similarities with the methodology discussed here.

Example scenario

Throughout this article, we will be referring to an example deployment. The deployment is for a relatively large organization with the following attributes:

  • 100,000 mailboxes
  • 200 message/day profile, with 75KB average message size
  • 10GB mailbox quota
  • Single site
  • 4 mailbox database copies, no lagged copies
  • 2U commodity server hardware platform with internal drive bays and an external storage chassis will be used (total of 24 available large form-factor drive bays)
  • 7200 RPM 4TB midline SAS disks are used
  • Mailbox databases are stored on JBOD direct attached storage, utilizing no RAID
  • Solution must survive double failure events

High availability model

The first thing you need to determine is your high availability model, e.g., how you will meet the availability requirements that you determined earlier. This likely includes multiple database copies in one or more Database Availability Groups, which will have an impact on storage capacity and IOPS requirements. The TechNet documentation on this topic provides some background on the capabilities of Exchange 2013 and should be reviewed as part of the sizing process.

At a minimum, you need to be able to answer the following questions:

  • Will you deploy multiple database copies?
  • How many database copies will you deploy?
  • Will you have an architecture that provides site resilience?
  • What kind of resiliency model will you deploy?
  • How will you distribute database copies?
  • What storage architecture will you use?

Capacity requirements

Once you have an understanding of how you will meet your high availability requirements, you should know the number of database copies and sites that will be deployed. Given this, you can begin to evaluate capacity requirements. At a basic level, you can think of capacity requirements as consisting of storage for mailbox data (primarily based on mailbox storage quotas), storage for database log files, storage for content indexing files, and overhead for growth. Every copy of a mailbox database is a multiplier on top of these basic storage requirements. As a simplistic example, if I was planning for 500 mailboxes of 1GB each, the storage for mailbox data would be 500GB, and then I would need to apply various factors to that value to determine the per-copy storage requirement. From there, if I needed 3 copies of the data for high availability, I would then need to multiply by 3 to obtain the overall capacity requirement for the solution (all servers). In reality, the storage requirements for Exchange are far more complex, as you will see below.

Mailbox size

To determine the actual size of a mailbox on disk, we must consider 3 factors: the mailbox storage quota, database white space, and recoverable items.

The mailbox storage quota is what most people think of as the “size of the mailbox” – it’s the user-perceived size of their mailbox and represents the maximum amount of data that the user can store in their mailbox on the server. While this certainly represents the majority of space utilization for Exchange databases, it’s not the only element for which we have to size.

Database whitespace is the amount of space in the mailbox database file that has been allocated on disk but doesn’t contain any in-use database pages. Think of it as available space to grow into. As content is deleted out of mailbox databases and eventually removed from the mailbox recoverable items, the database pages that contained that content become whitespace. We recommend planning for whitespace size equal to 1 day worth of messaging content.

Estimated Database Whitespace per Mailbox = per-user daily message profile x average message size

This means that a user with the 200 message/day profile and an average message size of 75KB would be expected to consume the following whitespace:

200 messages/day x 75KB = 14.65MB

When items are deleted from a mailbox, they are really “soft-deleted” and moved temporarily to the recoverable items folder for the duration of the deleted item retention period. Like Exchange 2010, Exchange 2013 has a feature known as single item recovery which will prevent purging data from the recoverable items folder prior to reaching the deleted item retention window. When this is enabled, we expect to see a 1.2 percent increase in mailbox size for a 14 day deleted item retention window. Additionally, we expect to see a 3 percent increase in the size of the mailbox for calendar item version logging which is enabled by default. Given that a mailbox will eventually reach a steady state where the amount of new content will be approximately equal to the amount of deleted content in order to remain under quota, we would expect the size of the items in the recoverable items folder to eventually equal the size of new content sent & received during the retention window. This means that the overall size of the recoverable items folder can be calculated as follows:

Recoverable Items Folder Size = (per-user daily message profile x average message size x deleted item retention window) + (mailbox quota size x 0.012) + (mailbox quota size x 0.03)

If we carry our example forward with the 200 message/day profile, a 75KB average message size, a deleted item retention window of 14 days, and a mailbox quota of 10GB, the expected recoverable items folder size would be:

(200 messages/day x 75KB x 14 days) + (10GB x 0.012) + (10GB x 0.03)
= 210,000KB + 125,829.12KB + 314,572.8KB = 635.16MB

Given the results from these calculations, we can sum up the mailbox capacity factors to get our estimated mailbox size on disk:

Mailbox Size on disk = 10GB mailbox quota + 14.65MB database whitespace + 635.16MB Recoverable Items Folder = 10.63GB
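
The per-mailbox arithmetic above is easy to script. The following sketch is just an illustration of the formulas in this section (not an official calculator), with the example scenario's profile plugged in:

```powershell
# Illustrative only: mailbox size on disk from the formulas above.
$messagesPerDay   = 200
$avgMessageSizeKB = 75
$mailboxQuotaGB   = 10
$retentionDays    = 14

$quotaKB = $mailboxQuotaGB * 1GB / 1KB                    # quota expressed in KB

# Whitespace: one day of messaging content
$whitespaceKB = $messagesPerDay * $avgMessageSizeKB

# Recoverable items: retention window of content + 1.2% (single item recovery)
# + 3% (calendar version logging) of the quota
$recoverableKB = ($messagesPerDay * $avgMessageSizeKB * $retentionDays) +
                 ($quotaKB * 0.012) + ($quotaKB * 0.03)

$mailboxOnDiskGB = $mailboxQuotaGB + (($whitespaceKB + $recoverableKB) / 1MB)   # KB -> GB

"Database whitespace:      {0:N2} MB" -f ($whitespaceKB / 1KB)
"Recoverable items folder: {0:N2} MB" -f ($recoverableKB / 1KB)
"Mailbox size on disk:     {0:N2} GB" -f $mailboxOnDiskGB
```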

Content indexing

The space required for files related to the content indexing process can be estimated as 20% of the database size.

Per-Database Content Indexing Space = database size x 0.20

In addition, you must size for one additional content index (e.g. an additional 20% of one of the mailbox databases on the volume) in order to allow content indexing maintenance tasks (specifically the master merge process) to complete. The best way to express the master merge space requirement is to look at the average database file size across all databases on a volume and add one database's worth of disk consumption to the calculation when determining the per-volume content indexing space requirement:

Per-Volume Content Indexing Space = (average database size x (databases on the volume + 1) x 0.20)

As a simple example, if we had 2 mailbox databases on a single volume and each database consumed 100GB of space, we would compute the per-volume content indexing space requirement like this:

100GB database size x (2 databases + 1) x 0.20 = 60GB

Log space

The amount of space required for ESE transaction log files can be computed using the same method as Exchange 2010. You can find details on the process in the Exchange 2010 TechNet guidance. To summarize the process, you must first determine the base guideline for number of transaction logs generated per-user, per-day, using the following table. As in Exchange 2010, log files are 1MB in size, making the math for log capacity quite straightforward.

Message profile (75 KB average message size) | Number of transaction logs generated per day
50                                           | 10
100                                          | 20
150                                          | 30
200                                          | 40
250                                          | 50
300                                          | 60
350                                          | 70
400                                          | 80
450                                          | 90
500                                          | 100

Once you have the appropriate value from the table which represents guidance for a 75KB average message size, you may need to adjust the value based on differences in the target average message size. Every time you double the average message size, you must increase the logs generated per day by an additional factor of 1.9. For example:

Transaction logs at 200 messages/day with 150KB average message size = 40 logs/day (at 75KB average message size) x 1.9 = 76

Transaction logs at 200 messages/day with 300KB average message size = 40 logs/day (at 75KB average message size) x (1.9 x 2) = 152

While daily log volume is interesting, it doesn’t represent the entire requirement for log capacity. If traditional backups are being used, logs will remain on disk for the interval between full backups. When mailboxes are moved, that volume of change to the target database will result in a significant increase in the amount of logs generated during the day. In a solution where Exchange native data protection is in use (e.g., you aren’t using traditional backups), logs will not be truncated if a mailbox database copy is failed or if an entire server is unreachable, unless an administrator intervenes. There are many factors to consider when sizing for required log capacity, and it is certainly worth spending some time in the Exchange 2010 TechNet guidance mentioned earlier to fully understand these factors before proceeding. Thinking about our example scenario, we could consider log space required per database if we estimate the number of users per database at 65. We will also assume that 1% of our users are moved each week (with all of those moves occurring on a single day), and that we will allocate enough space to support 3 days of logs in the case of failed copies or servers.

Log Capacity to Support 3 Days of Truncation Failure = (65 mailboxes/database x 40 logs/day x 1MB log size) x 3 days = 7.62GB

Log Capacity to Support 1% mailbox moves per week = 65 mailboxes/database x 0.01 x 10.63GB mailbox size = 6.91GB

Total Log Capacity Required per Database = 7.62GB + 6.91GB = 14.53GB
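
The same log capacity math as a quick sketch, so you can substitute your own user counts; the 3-day truncation buffer and 1% weekly move rate are the assumptions used in the example above:

```powershell
# Illustrative per-database log capacity estimate (example scenario assumptions).
$usersPerDatabase  = 65
$logsPerUserPerDay = 40      # from the transaction log table, 200 message/day profile
$logSizeMB         = 1
$truncationDays    = 3       # days of logs to retain if truncation fails
$weeklyMoveRate    = 0.01    # 1% of mailboxes moved per week, all in one day
$mailboxSizeGB     = 10.63

$truncationGB = ($usersPerDatabase * $logsPerUserPerDay * $logSizeMB * $truncationDays) / 1024
$moveGB       = $usersPerDatabase * $weeklyMoveRate * $mailboxSizeGB

"Truncation failure buffer: {0:N2} GB" -f $truncationGB
"Mailbox move buffer:       {0:N2} GB" -f $moveGB
"Total log capacity:        {0:N2} GB" -f ($truncationGB + $moveGB)
```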

Putting all of the capacity requirements together

The easiest way to think about sizing for storage capacity without having a calculator tool available is to make some assumptions up front about the servers and storage that will be used. Within the product group, we are big fans of 2U commodity server platforms with ~12 large form-factor drive bays in the chassis. This allows for a 2 drive RAID array for the operating system, Exchange install path, transport queue database, and other ancillary files, and ~10 remaining drives to use as mailbox database storage in a JBOD direct attached storage configuration with no RAID. Fill this server up with 4TB SATA or midline SAS drives, and you have a fantastic Exchange 2013 server. If you need even more storage, it’s quite easy to add an additional shelf of drives to the solution.

Using the large deployment example and thinking about how we might size this on the commodity server platform, we can consider a server scaling unit that has a total of 24 large form-factor drive bays containing 4TB midline SAS drives. We will use 2 of those drives for the OS & Exchange, and the remaining drive bays will be used for Exchange mailbox database capacity. Let’s use 12 of those drive bays for databases – that leaves 10 remaining drive bays that could contain spares or remain empty. For this sizing exercise, let’s also plan for 4 databases per drive. Each of those drives has a formatted capacity of ~3725GB. The first step in figuring out the number of mailboxes per database is to look at overall capacity requirements for the mailboxes, content indexes, and required free space (which we will set to 5%).

To calculate the maximum amount of space available for mailboxes, let’s apply a formula (note that this doesn’t consider space for logs – we will make sure that the volume will have enough space for logs later in the process). First, we can remove our required free space from the available storage on the drive:

Available Space (excluding required free space) = Formatted capacity of the drive x (1 – free space)

Then we can remove the space required for content indexing. As discussed above, the space required for content indexing will be 20% of the database size, with an additional 20% of one database for content indexing maintenance tasks. Given the additional 20% requirement, we can’t model the overall space requirement as a simple 20% of the remaining space on the volume. Instead we need to compute a new percentage that takes the number of databases per-volume into consideration.

Per-Volume Content Indexing Space Percentage = 0.20 x (databases per volume + 1) / databases per volume

Now we can remove the space for content indexing from our available space on the volume:

Space Available for Mailbox Databases = Available Space / (1 + Per-Volume Content Indexing Space Percentage)

And we can then divide by the number of databases per-volume to get our maximum database size:

Maximum Database Size = Space Available for Mailbox Databases / databases per volume

In our example scenario, we would obtain the following result:

Maximum Database Size = (3725GB x 0.95) / (1 + 0.25) / 4 databases = 707.75GB

Given this value, we can then calculate our maximum users per database (from a capacity perspective, as this may change when we evaluate the IO requirements):

Maximum Users per Database = 707.75GB / 10.63GB mailbox size on disk = ~66 users

Let’s see if that number is actually reasonable given our 4 copy configuration. We are going to use 16-node DAGs for this deployment to take full advantage of the scalability and high-availability benefits of large DAGs. While we have many drives available on our selected hardware platform, we will be limited by the maximum of 50 database copies per-server in Exchange 2013. Considering this maximum and our desire to have 4 databases per volume, we can calculate the maximum number of drives for mailbox database usage as:

Maximum Database Volumes per Server = 50 database copies per server / 4 databases per volume = 12.5, rounded down to 12 volumes

With 12 database volumes and 4 database copies per-volume, we will have 48 total database copies per server.

Databases per DAG = (48 database copies per server x 16 servers per DAG) / 4 copies per database = 192 databases

With 66 users per database and 100,000 total users, we end up with the following required DAG count for the user population:

Required DAGs = 100,000 users / (66 users per database x 192 databases per DAG) = 7.9, rounded up to 8 DAGs

In this very large deployment, we are using a DAG as a unit of scale or “building block” (e.g. we perform capacity planning based on the number of DAGs required to meet demand, and we deploy an entire DAG when we need additional capacity), so we don’t intend to deploy a partial DAG. If we round up to 8 DAGs we can compute our final users per database count:

Users per Database = 100,000 users / (8 DAGs x 192 databases per DAG) = ~65 users

With 65 users per-database, that means we will expect to consume the following space for mailbox databases:

Estimated Database Size = 65 users x 10.63GB = 690.95GB
Database Consumption / Volume = 690.95GB x 4 databases = 2763.8GB

Using the formula mentioned earlier, we can compute our estimated content index consumption as well:

690.95GB database size x (4 databases + 1) x 0.20 = 690.95GB

You’ll recall that we computed transaction log space requirements earlier, and it turns out that we magically computed those values with the assumption that we would have 65 users per-database. What a pleasant coincidence! So we will need 14.53GB of space for transaction logs per-database, or to get a more useful result:

Log Space Required / Volume = 14.53GB x 4 databases = 58.12GB

To sum it up, we can estimate our total per-volume space utilization and make sure that we have plenty of room on our target 4TB drives:

Total Space Required per Volume = 2763.8GB databases + 690.95GB content indexing + 58.12GB transaction logs = 3512.87GB (versus ~3538.75GB usable on the 3725GB volume after 5% free space)

Looks like our database volumes are sized perfectly!
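
For readers who like to follow along in code, here is an illustrative sketch of the whole per-volume capacity check above. The drive size, database count, quota, and log figures are the example scenario's assumptions, not general recommendations.

```powershell
# Illustrative per-volume capacity check for the example scenario.
$formattedDriveGB   = 3725
$freeSpaceFraction  = 0.05
$databasesPerVolume = 4
$usersPerDatabase   = 65
$mailboxSizeGB      = 10.63
$logSpacePerDbGB    = 14.53

# Content indexing consumes 20% of each database plus one extra database's worth
# of index space per volume for master merge maintenance
$ciFraction = 0.20 * ($databasesPerVolume + 1) / $databasesPerVolume     # 0.25

$usableGB   = $formattedDriveGB * (1 - $freeSpaceFraction)
$databaseGB = $usersPerDatabase * $mailboxSizeGB * $databasesPerVolume
$ciGB       = $databaseGB * $ciFraction
$logGB      = $logSpacePerDbGB * $databasesPerVolume
$totalGB    = $databaseGB + $ciGB + $logGB

"Databases {0:N2} GB + content indexes {1:N2} GB + logs {2:N2} GB = {3:N2} GB" -f `
    $databaseGB, $ciGB, $logGB, $totalGB
"Usable space on the volume: {0:N2} GB" -f $usableGB
```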

IOPS requirements

To determine the IOPS requirements for a database, we look at the number of users hosted on the database and consider the guidance provided in the following table to compute total required IOPS when the database is active or passive.

Messages sent or received per mailbox per day | Estimated IOPS per mailbox (active or passive)
50                                            | 0.034
100                                           | 0.067
150                                           | 0.101
200                                           | 0.134
250                                           | 0.168
300                                           | 0.201
350                                           | 0.235
400                                           | 0.268
450                                           | 0.302
500                                           | 0.335

For example, with 50 users in a database, with an average message profile of 200, we would expect that database to require 50 x 0.134 = 6.7 transactional IOPS when the database is active, and 50 x 0.134 = 6.7 transactional IOPS when the database is passive. Don’t forget to consider database placement which will impact the number of databases with IOPS requirements on a given storage volume (which could be a single JBOD drive or might be a more complex storage configuration).

Going back to our example scenario, we can evaluate the IOPS requirement of the solution, recalling that the average user profile in that deployment is the 200 message/day profile. We have 65 users per database and 4 databases per JBOD drive, so we can estimate our IOPS requirement in worst-case (all databases active) as:

65 mailboxes x 4 databases per-drive x 0.134 IOPS/mailbox at 200 messages/day profile = ~34.84 IOPS per drive

Midline SAS drives typically provide ~57.5 random IOPS (based on our own internal observations and benchmark tests), so we are well within design constraints when thinking about IOPS requirements.
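
If you want to parameterize that worst-case check, a small lookup based on the IOPS table above works; the ~57.5 random IOPS figure is the midline SAS observation quoted in this section.

```powershell
# Illustrative worst-case transactional IOPS check per JBOD drive.
$iopsPerMailbox = @{ 50 = 0.034; 100 = 0.067; 150 = 0.101; 200 = 0.134; 250 = 0.168
                     300 = 0.201; 350 = 0.235; 400 = 0.268; 450 = 0.302; 500 = 0.335 }

$messageProfile    = 200     # messages sent + received per mailbox per day
$usersPerDatabase  = 65
$databasesPerDrive = 4
$driveRandomIops   = 57.5    # typical midline SAS observation

$requiredIops = $usersPerDatabase * $databasesPerDrive * $iopsPerMailbox[$messageProfile]
"Worst-case IOPS per drive: {0:N2} (drive provides ~{1} random IOPS)" -f $requiredIops, $driveRandomIops
```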

Storage bandwidth requirements

While IOPS requirements are usually the primary storage throughput concern when designing an Exchange solution, it is possible to run up against bandwidth limitations with various types of storage subsystems. The IOPS sizing guidance above is looking specifically at transactional (somewhat random) IOPS and is ignoring the sequential IO portion of the workload. One place that sequential IO becomes a concern is with storage solutions that are running a large amount of sequential IO through a common channel. A common example of this type of load is the ongoing background database maintenance (BDM) which runs continuously on Exchange mailbox databases. While this BDM workload might not be significant for a few databases stored on a JBOD drive, it may become a concern if all of the mailbox database volumes are presented through a common iSCSI or Fibre Channel interface. In that case, the bandwidth of that common channel must be considered to ensure that the solution doesn’t bottleneck due to these IO patterns.

In Exchange 2013, we expect to consume approximately 1MB/sec/database copy for BDM which is a significant reduction from Exchange 2010. This helps to enable the ability to store multiple mailbox databases on the same JBOD drive spindle, and will also help to avoid bottlenecks on networked storage deployments such as iSCSI. This bandwidth utilization is in addition to bandwidth consumed by the transactional IO activity associated with user and system workload processes, as well as storage bandwidth consumed by the log replication and replay process in a DAG.

Transport storage requirements

Since transport components (with the exception of the front-end transport component on the CAS role) are now part of the Mailbox role, we have included CPU and memory requirements for transport with the general Mailbox role requirements described later. Transport also has storage requirements associated with the queue database. These requirements, much like I described earlier for mailbox storage, consist of capacity factors and IO throughput factors.

Transport storage capacity is driven by two needs: queuing (including shadow queuing) and Safety Net (which is the replacement for transport dumpster in this release). You can think of the transport storage capacity requirement as the sum of message content on disk in a worst-case scenario, consisting of three elements:

  • The current day’s message traffic, along with messages which exist on disk longer than normal expiration settings (like poison queue messages)
  • Queued messages waiting for delivery
  • Messages persisted in Safety Net in case they are required for redelivery

Of course, all three of these factors are also impacted by shadow queuing in which a redundant copy of all messages is stored on another server. At this point, it would be a good idea to review the TechNet documentation on Transport High Availability if you aren’t familiar with the mechanics of shadow queuing and Safety Net.

In order to figure out the messages per day that you expect to run through the system, you can look at the user count and messaging profile. Simply multiplying these together will give you a total daily mail volume, but it will be a bit higher than necessary since it is double counting messages that are sent within the organization (i.e. a message sent to a coworker will count towards the profile of the sending user as well as the profile of the receiving user, but it’s really just one message traversing the system). The simplest way to deal with that would be to ignore this fact and oversize transport, which will provide additional capacity for unexpected peaks in message traffic. An alternative way to determine daily message flow would be to evaluate performance counters within your existing messaging system.

To determine the maximum size of the transport database, we can look at the entire system as a unit and then come up with a per-server value.

Overall Daily Messages Traffic = number of users x message profile

Overall Transport DB Size = average message size x overall daily message traffic x (1 + (percentage of messages queued x maximum queue days) + Safety Net hold days) x 2 copies for high availability

Let’s use the 100,000 user sizing example again and size the transport database using the simple method.

Overall Transport DB Size = 75KB x (100,000 users x 200 messages/day) x (1 + (50% x 2 maximum queue days) + 2 Safety Net hold days) x 2 copies = 11,444GB
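
Here is the same transport database estimate as a short script; the queue percentage, queue days, and Safety Net hold days are the assumptions stated in the formula above:

```powershell
# Illustrative overall transport database sizing for the 100,000 user example.
$users             = 100000
$messagesPerDay    = 200
$avgMessageSizeKB  = 75
$pctMessagesQueued = 0.50
$maxQueueDays      = 2
$safetyNetHoldDays = 2
$haCopies          = 2       # shadow queuing keeps a redundant copy of every message

$dailyMessages = $users * $messagesPerDay
$multiplier    = 1 + ($pctMessagesQueued * $maxQueueDays) + $safetyNetHoldDays

$transportDbGB = ($avgMessageSizeKB * $dailyMessages * $multiplier * $haCopies) / 1MB   # KB -> GB
"Overall transport DB size: {0:N0} GB" -f $transportDbGB
```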

In our example scenario, we have 8 DAGs, each containing 16 nodes, and we are designing to handle double node failures in each DAG. This means that in a worst-case failure event we would have 112 servers online with 2 failed servers in each DAG. We can use this value to determine a per-server transport DB size:

Transport DB Size per Server = 11,444GB / 112 available servers = ~102GB

Sizing for transport IO throughput requirements is actually quite simple. Transport has taken advantage of many of the IO reduction changes to the ESE database that have been made in recent Exchange releases. As a result, the number of IOPS required to support transport is significantly lower. In the internal deployment we used to produce this sizing guidance, we see approximately 1 DB write IO per message and virtually no DB read IO, with an average message size of ~75KB. We expect that as average message size increases, the amount of transport IO required to support delivery and queuing would increase. We do not currently have specific guidance on what that curve looks like, but it is an area of active investigation. In the meantime, our best practices guidance for the transport database is to leave it in the Exchange install path (likely on the OS drive) and ensure that the drive supporting that directory path is using a protected write cache disk controller, set to 100% write cache if the controller allows optimization of read/write cache settings. The write cache allows transport database log IO to become effectively “free” and allows transport to handle a much higher level of throughput.

Processor requirements

Once we have our storage requirements figured out, we can move on to thinking about CPU. CPU sizing for the Mailbox role is done in terms of megacycles. A megacycle is a unit of processing work equal to one million CPU cycles. In very simplistic terms, you could think of a 1 MHz CPU performing a megacycle of work every second. Given the guidance provided below for megacycles required for active and passive users at peak, you can estimate the required processor configuration to meet the demands of an Exchange workload. Following are our recommendations on the estimated required megacycles for the various user profiles.

Messages sent or received per mailbox per day | Mcycles per user, active DB copy or standalone (MBX only) | Mcycles per user, active DB copy or standalone (multi-role) | Mcycles per user, passive DB copy
50  | 2.13  | 2.66  | 0.69
100 | 4.25  | 5.31  | 1.37
150 | 6.38  | 7.97  | 2.06
200 | 8.50  | 10.63 | 2.74
250 | 10.63 | 13.28 | 3.43
300 | 12.75 | 15.94 | 4.11
350 | 14.88 | 18.59 | 4.80
400 | 17.00 | 21.25 | 5.48
450 | 19.13 | 23.91 | 6.17
500 | 21.25 | 26.56 | 6.85

The second column represents the estimated megacycles required on the Mailbox role server hosting the active copy of a user’s mailbox database. In a DAG configuration, the required megacycles for the user on each server hosting passive copies of that database can be found in the fourth column. If the solution is going to include multi-role (Mailbox+CAS) servers, use the value in the third column rather than the second, as it includes the additional CPU requirements for the CAS role.

It is important to note that while many years ago you could make an assumption that a 500 MHz processor could perform roughly double the work per unit of time as a 250 MHz processor, clock speeds are no longer a reliable indicator of performance. The internal architecture of modern processors is different enough between manufacturers as well as within product lines of a single manufacturer that it requires an additional normalization step to determine the available processing power for a particular CPU. We recommend using the SPECint_rate2006 benchmark from the Standard Performance Evaluation Corporation.

The baseline system used to generate this guidance was a Hewlett-Packard DL380p Gen8 server containing Intel Xeon E5-2650 2 GHz processors. The baseline system SPECint_rate2006 score is 540, or 33.75 per-core, given that the benchmarked server was configured with a total of 16 physical processor cores. Please note that this is a different baseline system than what was used to generate our Exchange 2010 guidance, so any tools or calculators that make assumptions based on the 2010 baseline system would not provide accurate results for sizing an Exchange 2013 solution.

Using the same general methodology we have recommended in prior releases, you can determine the estimated available Exchange workload megacycles available on a different processor through the following process:

  1. Find the SPECint_rate2006 score for the processor that you intend to use for your Exchange solution. You can do this the hard way (described below) or use Scott Alexander’s fantastic Processor Query Tool to get the per-server score and processor core count for your hardware platform.
    1. On the website of the Standard Performance Evaluation Corporation, select Results, highlight CPU2006, and select Search all SPECint_rate2006 results.
    2. Under Simple Request, enter the search criteria for your target processor, for example Processor Matches E5-2630.
    3. Find the server and processor configuration you are interested in using (or if the exact combination is not available, find something as close as possible) and note the value in the Result column and the value in the # Cores column.
  2. Obtain the per-core SPECint_rate2006 score by dividing the value in the Result column by the value in the # Cores column. For example, in the case of the Hewlett-Packard DL380p Gen8 server with Intel Xeon E5-2630 processors (2.30GHz), the Result is 430 and the # Cores is 12, so the per-core value would be 430 / 12 = 35.83.
  3. To determine the estimated available Exchange workload megacycles on the target platform, use the following formula:

    Available megacycles per core = 2,000 (baseline megacycles per core) x (target per-core SPECint_rate2006 score) / 33.75 (baseline per-core SPECint_rate2006 score)

    Using the example HP platform with E5-2630 processors mentioned previously, we would calculate the following result:

    2,000 x (35.83 / 33.75) = 2,123 available megacycles per core
    2,123 megacycles per core x 12 physical processor cores ≈ 25,479 available megacycles per server
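
If you would rather script this normalization step, the following PowerShell sketch reproduces the arithmetic above. The function name and parameter names are made up for illustration; the baseline values (2,000 megacycles and 33.75 per core) come from the baseline system described earlier.

# Illustrative helper, not part of any Exchange tool
function Get-AvailableMegacycles {
    param(
        [double]$SpecIntRate2006Result,  # published SPECint_rate2006 result for the target server
        [int]$PhysicalCoreCount          # physical cores in the benchmarked configuration
    )
    $baselinePerCoreScore      = 33.75   # HP DL380p Gen8 with E5-2650 2 GHz processors
    $baselinePerCoreMegacycles = 2000
    $perCore = $baselinePerCoreMegacycles * (($SpecIntRate2006Result / $PhysicalCoreCount) / $baselinePerCoreScore)
    [pscustomobject]@{
        MegacyclesPerCore   = [math]::Round($perCore)
        MegacyclesPerServer = [math]::Round($perCore * $PhysicalCoreCount)
    }
}

# Example: E5-2630 system with a result of 430 across 12 cores (~2,123 per core, ~25,479 per server)
Get-AvailableMegacycles -SpecIntRate2006Result 430 -PhysicalCoreCount 12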

Keep in mind that a good Exchange design should never plan to run servers at 100% of CPU capacity. In general, 80% CPU utilization in a failure scenario is a reasonable target for most customers. Because that high CPU utilization target applies to a failure scenario, servers in a highly available Exchange solution will often run with relatively low CPU utilization during normal operation. Additionally, there may be very good reasons to target a lower maximum CPU utilization, particularly in cases where unanticipated spikes in load may result in acute capacity issues.

Going back to the example I used previously of 100,000 users with the 200 message/day profile, we can estimate the total required megacycles for the deployment. We know that there will be 4 database copies in the deployment, and that will help to calculate the passive megacycles required. We also know that this deployment will be using multi-role (Mailbox+CAS) servers. Given this information, we can calculate megacycle requirements as follows:

100,000 users x ((10.63 mcycles per active mailbox) + (3 passive copies x 2.74 mcycles per passive mailbox)) = 1,885,000 total mcycles required

You could then take that number and attempt to come up with a required server count. I would argue that it’s actually a much better practice to come up with a server count based on high availability requirements (taking into account how many component failures your design can handle in order to meet business requirements) and then ensure that those servers can meet CPU requirements in a worst-case failure scenario. You will either meet CPU requirements without any additional changes (if your server count is bound on another aspect of the sizing process), or you will adjust the server count (scale out), or you will adjust the server specification (scale up).

Continuing with our hypothetical example, if we knew that the high availability requirements for the design of the 100,000 user example resulted in a maximum of 16 databases being active at any time out of 48 total database copies per server, and we know that there are 65 users per database, we can determine the per-server CPU requirements for the deployment.

(16 databases x 65 mailboxes x 10.63 mcycles per active mailbox) + (32 databases x 65 mailboxes x 2.74 mcycles per passive mailbox) = 11055.2 + 5699.2 = 16,754.4 mcycles per server

Using the processor configuration mentioned in the megacycle normalization section (E5-2630 2.3 GHz processors on an HP DL380p Gen8), we know that we have 25,479 available mcycles on the server, so we would estimate a peak average CPU in worst-case failure of:

16,754.4 mcycles required / 25,479 mcycles available = approximately 66% peak average CPU utilization in worst-case failure

That is below our guidance of 80% maximum CPU utilization (in a worst-case failure scenario), so we would not consider the servers to be CPU bound in the design. In fact, we could consider adjusting the CPU selection to a cheaper, lower-performing option that gets us closer to a peak average CPU of 80% in worst-case failure, reducing the cost of the overall solution.
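
For those who like to sanity-check the math in a shell, here is a small PowerShell sketch of the same worst-case CPU check, using only the numbers from the example above:

# Worst-case failure CPU check for the 100,000 user example (illustrative)
$activeDBs = 16; $passiveDBs = 32; $usersPerDB = 65
$activeMcyclesPerUser  = 10.63   # 200 message/day profile, multi-role active copy
$passiveMcyclesPerUser = 2.74    # 200 message/day profile, passive copy
$requiredMcycles  = ($activeDBs * $usersPerDB * $activeMcyclesPerUser) + ($passiveDBs * $usersPerDB * $passiveMcyclesPerUser)
$availableMcycles = 25479
"Peak CPU in worst-case failure: {0:P1}" -f ($requiredMcycles / $availableMcycles)   # roughly 66%, under the 80% target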

Memory requirements

To calculate memory per server, you will need to know the per-server user count (both active and passive users) as well as determine whether you will run the Mailbox role in isolation or deploy multi-role servers (Mailbox+CAS). Keep in mind that regardless of whether you deploy roles in isolation or deploy multi-role servers, the minimum amount of RAM on any Exchange 2013 server is 8GB.

Memory on the Mailbox role is used for many purposes. As in prior releases, a significant amount of memory is used for ESE database cache and plays a large part in the reduction of disk IO in Exchange 2013. The new content indexing technology in Exchange 2013 also uses a large amount of memory. The remaining large consumers of memory are the various Exchange services that provide either transactional services to end-users or handle background processing of data. While each of these individual services may not use a significant amount of memory, the combined footprint of all Exchange services can be quite large.

Following is our recommended amount of memory for the Mailbox role on a per mailbox basis that we expect to be used at peak.

Messages sent or received per mailbox per day | Mailbox role memory per active mailbox (MB)
50  | 12
100 | 24
150 | 36
200 | 48
250 | 60
300 | 72
350 | 84
400 | 96
450 | 108
500 | 120

To determine the amount of memory that should be provisioned on a server, take the number of active mailboxes per-server in a worst-case failure and multiply by the value associated with the expected user profile. From there, round up to a value that makes sense from a purchasing perspective (i.e. it may be cheaper to configure 128GB of RAM compared to a smaller amount of RAM depending on slot options and memory module costs).

Mailbox Memory per-server = (worst-case active database copies per-server x users per-database x memory per-active mailbox)

For example, on a server with 48 database copies (16 active in worst-case failure), 65 users per-database, expecting the 200 profile, we would recommend:

16 x 65 x 48MB = 48.75GB, round up to 64GB
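
The same calculation in PowerShell, using the example’s numbers, looks like this (purely illustrative):

# Per-server Mailbox role memory for the example above
$activeDBs = 16; $usersPerDB = 65; $memoryPerActiveMailboxMB = 48   # 200 message/day profile
$mailboxMemoryGB = ($activeDBs * $usersPerDB * $memoryPerActiveMailboxMB) / 1024
"{0:N2} GB calculated; round up to 64GB" -f $mailboxMemoryGB   # 48.75 GB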

It’s important to note that the content indexing technology included with Exchange 2013 uses a relatively large amount of memory to allow both indexing and query processing to occur very quickly. This memory usage scales with the number of items indexed, meaning that as the number of total items stored on a Mailbox role server increases (for both active and passive copies), memory requirements for the content indexing processes will increase as well. In general, the guidance on memory sizing presented here assumes approximately 15% of the memory on the system will be available for the content indexing processes which means that with a 75KB average message size, we can accommodate mailbox sizes of 3GB at 50 message profile up to 32GB at the 500 message profile without adjusting the memory sizing. If your deployment will have an extremely small average message size or an extremely large average mailbox size, you may need to add additional memory to accommodate the content indexing processes.

Multi-role server deployments will have an additional memory requirement beyond the amounts specified above. CAS memory is computed as a base memory requirement for the CAS components (2GB) plus additional memory that scales based on the expected workload. This overall CAS memory requirement on a multi-role server can be computed using the following formula:

Per-server CAS memory (multi-role) = 2GB + (2GB x processor cores, including fractional cores, serving active CAS load at peak in a worst-case failure)

Essentially this is 2GB of memory for the base requirement, plus 2GB of memory for each processor core (or fractional processor core) serving active load at peak in a worst-case failure scenario. Reusing the example scenario, if I have 16 active databases per-server in a worst-case failure and my processor is providing 2123 mcycles per-core, I would need:

2GB + (2GB x ((16 databases x 65 mailboxes x 8.5 mcycles x 0.25) / 2,123 mcycles per core)) = 2GB + (2GB x 1.04) = 4.08GB

If we add that to the memory requirement for the Mailbox role calculated above, our total memory requirement for the multi-role server would be:

48.75GB for Mailbox + 4.08GB for CAS = 52.83GB, round up to 64GB
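
If you want to reproduce that 4.08GB figure, the following PowerShell sketch shows one way to derive it, assuming the CAS portion is 25% of the Mailbox-only active megacycles (8.5 per user at the 200 profile) and 2,123 available megacycles per core:

# Multi-role CAS memory add-on for the example (illustrative derivation)
$activeMailboxes = 16 * 65
$casCoresAtPeak  = ($activeMailboxes * 8.5 * 0.25) / 2123   # ~1.04 cores of CAS work
$casMemoryGB     = 2 + (2 * $casCoresAtPeak)
"{0:N2} GB of additional memory for the CAS components" -f $casMemoryGB   # ~4.08 GB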

Regardless of whether you are considering a multi-role or a split-role deployment, it is important to ensure that each server has a minimum amount of memory for efficient use of the database cache. There are some scenarios that will produce a relatively small memory requirement from the memory calculations described above. We recommend comparing the per-server memory requirement you have calculated with the following table to ensure you meet the minimum database cache requirements. The guidance is based on total database copies per-server (both active and passive). If the value shown in this table is higher than your calculated per-server memory requirement, adjust your per-server memory requirement to meet the minimum listed in the table.

Per-Server DB Copies | Minimum Physical Memory (GB)
1-10  | 8
11-20 | 10
21-30 | 12
31-40 | 14
41-50 | 16

In our example scenario, we are deploying 48 database copies per-server, so the minimum physical memory to provide necessary database cache would be 16GB. Since our computed memory requirement based on per-user guidance including memory for the CAS role (52.83GB) was higher than the minimum of 16GB, we don’t need to make any further adjustments to accommodate database cache needs.

Unified messaging

With the new architecture of Exchange, Unified Messaging is now installed and ready to be used on every Mailbox and CAS. The CPU and memory guidance provided here assumes some moderate UM utilization. In a deployment with significant UM utilization with very high call concurrency, additional sizing may need to be performed to provide the best possible user experience. As in Exchange 2010, we recommend using a 100 concurrent call per-server limit as the maximum possible UM concurrency, and scale out the deployment if the sizing of your deployment becomes bound on this limit. Additionally, voicemail transcription is a very CPU-intensive operation, and by design will only transcribe messages when there is enough available CPU on the machine. Each voicemail message requires 1 CPU core for the duration of the transcription operation, and if that amount of CPU cannot be obtained, transcription will be skipped. In deployments that anticipate a high amount of voicemail transcription concurrency, server configurations may need to be adjusted to increase CPU resources, or the number of users per server may need to be scaled back to allow for more available CPU for voicemail transcription operations.

Sizing and scaling the Client Access Server role

In the case where you are going to place the Mailbox and CAS roles on separate servers, the process of sizing CAS is relatively straightforward. CAS sizing is primarily focused on CPU and memory requirements. There is some disk IO for logging purposes, but it is not significant enough to warrant specific sizing guidance.

CAS CPU is sized as a ratio from Mailbox role CPU. Specifically, we need to get 25% of the megacycles used to support active users on the Mailbox role. You could think of this as a 1:4 ratio (CAS CPU to Mailbox CPU) compared to the 3:4 ratio we recommended in Exchange 2010. One way to compute this would be to look at the total active user megacycles required for the solution, take 25% of that, and then determine the required CAS server count based on high availability requirements and multi-site design constraints. For example, consider the 100,000 user example using the 200 message/day profile:

Total CAS Required Mcycles = 100,000 users x 8.5 mcycles x 0.25 = 212,500 mcycles

Assuming that we want to target a maximum CPU utilization of 80% and the servers we plan to deploy have 25,479 available megacycles, we can compute the required number of servers quite easily:

212,500 mcycles / (25,479 mcycles per server x 0.80) = 10.4, round up to 11 CAS servers

Obviously we would need to then consider whether the 11 required servers meet our high availability requirements considering the maximum CAS server failures that we must design for given business requirements, as well as the site configuration where some of the CAS servers may be in different sites handling different portions of the workload. Since we specified in our example scenario that we want to survive a double failure in the single site, we would increase our 11 CAS servers to 13 such that we could sustain 2 CAS server failures and still handle the workload.
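
Here is the same server count math as a small PowerShell sketch, using the example’s numbers (illustrative only):

# Dedicated CAS server count for the 100,000 user example
$totalCasMcycles  = 100000 * 8.5 * 0.25     # 212,500 mcycles
$mcyclesPerServer = 25479
$casServers = [math]::Ceiling($totalCasMcycles / ($mcyclesPerServer * 0.80))   # 11 servers at 80% max CPU
$casServers + 2                             # 13 servers to survive a double failure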

To size memory, we will use the same formula that was used for Exchange 2010:

Per-Server CAS Memory = 2GB + 2GB per physical processor core

Per-server CAS memory = 2GB + (2GB x (per-server CAS mcycles at peak / available mcycles per core))

Using the example scenario we have been using, we can calculate the per-server CAS memory requirement as:

2GB + (2GB x ((212,500 mcycles / 11 surviving servers) / 2,123 mcycles per core)) = 2GB + (2GB x 9.10) = 20.20GB

In this example, 20.20GB would be the guidance for required CAS memory, but obviously you would need to round-up to the next highest possible (or highest performing) memory configuration for the server platform: perhaps 24GB.

Active Directory capacity for Exchange 2013

Active Directory sizing remains the same as it was for Exchange 2010. As we gain more experience with production deployments we may adjust this in the future. For Exchange 2013, we recommend deploying a ratio of 1 Active Directory global catalog processor core for every 8 Mailbox role processor cores handling active load, assuming 64-bit global catalog servers:

Required GC cores = (total active Mailbox role mcycles / 8) / available mcycles per GC core

If we revisit our example scenario, we can easily calculate the number of GC cores required.

(100,000 users x 8.5 mcycles per active mailbox / 8) / 2,123 mcycles per core = 50 GC cores

Assuming that my Active Directory GCs are also deployed on the same server hardware configuration as my CAS & Mailbox role servers in the example scenario with 12 processor cores, then my GC server count would be:

50 GC cores / 12 cores per server = 4.2, round up to 5 GC servers

In order to sustain double failures, we would need to add 2 more GCs to this calculation, which would take us to 7 GC servers for the deployment.
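
As a quick illustration, the GC sizing for the example can be reproduced in PowerShell as follows, assuming the 1:8 ratio above and the same 12-core, 2,123 megacycles-per-core hardware:

# Global catalog sizing for the 100,000 user example (illustrative)
$activeMailboxMcycles = 100000 * 8.5              # Mailbox role active load, 200 message/day profile
$gcCores   = ($activeMailboxMcycles / 8) / 2123   # ~50 GC cores
$gcServers = [math]::Ceiling($gcCores / 12)       # 5 servers
$gcServers + 2                                    # 7 servers to survive a double failure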

As a best practice, we recommend sizing memory on the global catalog servers such that the entire NTDS.DIT database file can be contained in RAM. This will provide optimal query performance and a much better end-user experience for Exchange workloads.

Hyperthreading: Wow, free processors!

Turn it off. While modern implementations of simultaneous multithreading (SMT), also known as hyperthreading, can absolutely improve CPU throughput for most applications, the benefits to Exchange 2013 do not outweigh the negative impacts. It turns out that there can be a significant impact to memory utilization on Exchange servers when hyperthreading is enabled due to the way the .NET server garbage collector allocates heaps. The server garbage collector looks at the total number of logical processors when an application starts up and allocates a heap per logical processor. This means that the memory usage at startup for one of our services using the server garbage collector will be close to double with hyperthreading turned on vs. when it is turned off. This significant increase in memory, along with an analysis of the actual CPU throughput increase for Exchange 2013 workloads in internal lab tests has led us to a best practice recommendation that hyperthreading should be disabled for all Exchange 2013 servers. The benefits don’t outweigh the negative impact.

You are going to give me a calculator, right?

Now that you have digested all of this guidance, you are probably thinking about how much more of a pain it will be to size a deployment compared to using the Mailbox Role Requirements Calculator for Exchange 2010. You would be right, and we fully understand that. In fact, we are hard at work on a new calculator for Exchange 2013 and we plan to deliver it later this quarter. Stay tuned to the Exchange team blog for an announcement.

Hopefully that leaves you with enough information to begin to properly size your Exchange 2013 deployments. If you have further questions, you can obviously post comments here, but I’d also encourage you to consider attending one of the upcoming TechEd events. I’ll be at TechEd North America as well as TechEd Europe with a session specifically on this topic, and would be happy to answer your questions in person, either in the session or at the “Ask the Experts” event. Recordings of those sessions will also be posted to MSDN Channel9 after the events have concluded.

Jeff Mealiffe
Senior Program Manager Lead
Exchange Customer Experience

Use Exchange Web Services and PowerShell to Discover and Remove Direct Booking Settings


Prior to Exchange 2007, there were two primary methods of implementing automated resource scheduling – Direct Booking and the AutoAccept Agent (a store event sink released as a web download for Exchange 2003). In Exchange 2007, we changed how automated resource scheduling is implemented. The AutoAccept Agent is no longer supported, and the Direct Booking method, technically an Outlook function, has been replaced with a server-side calendar booking function called the Resource Booking Attendant.

Note There are various terms associated with this new Resource Booking function, such as: Calendar Processing, Automatic Resource Booking, Calendar Attendant Processing, Automated Processing and Resource Booking Assistant. We will be using the “Resource Booking Attendant” nomenclature for this article.

While the Direct Booking method for resource scheduling can indeed work on Exchange Server 2007/2010/2013, we strongly recommend that you disable Direct Booking for resource mailboxes and use the Resource Booking Attendant instead. Specifically, we are referring to the “AutoAccept” Automated Processing feature of the Resource Booking Attendant, which can be enabled for a mailbox after it has been migrated to Exchange 2007 or later and upgraded to a Resource Mailbox.

Note The published resource mailbox upgrade guidance on TechNet specifies to disable Direct Booking in the resource mailbox while still on Exchange 2003, move the mailbox, and then enable the AutoAccept functionality via the Resource Booking Attendant. This order of steps can introduce an unnecessary amount of time where the resource mailbox may be without automated scheduling capabilities.

We are currently working to update that guidance to reflect moving the mailbox first and only then disabling the Direct Booking functionality, after which the AutoAccept functionality can be immediately enabled via the Resource Booking Attendant. This will shorten the duration where the mailbox is without automated resource scheduling capabilities.
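
For reference, once the mailbox has been moved and Direct Booking has been disabled, the conversion and AutoAccept steps on Exchange 2010/2013 look something like the following sketch (the mailbox name is hypothetical; Exchange 2007 uses Set-MailboxCalendarSettings instead of Set-CalendarProcessing):

Set-Mailbox -Identity "ConfRoom1" -Type Room
Set-CalendarProcessing -Identity "ConfRoom1" -AutomateProcessing AutoAccept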

This conversion to resource mailboxes that use the Resource Booking Attendant is sometimes an honest oversight, or is even deliberately skipped, when migrating away from Exchange 2003 because Direct Booking continues to work with newer versions of Exchange, even Exchange Online. This often results in resource mailboxes (or even user mailboxes!) with Direct Booking functionality remaining in place long after Exchange 2003 is ancient history in the environment.

Why not just leave Direct Booking enabled?

There are issues that can arise from leaving Direct Booking enabled, from simple administrative burden scenarios all the way to major calendaring issues. Additionally, Resource Booking Attendant offers advantages over Direct Booking functionality:

  1. The Direct Booking capability, technically an Outlook function, has been deprecated from the product as of Outlook 2013. It was already on the deprecation list in Outlook 2010 and required a registry modification to reintroduce the functionality.
  2. Direct Booking and Resource Booking Attendant are conflicting technologies, and if simultaneously enabled, unexpected behavior in calendar processing and item consistency can occur.
  3. Outlook Web App (as well as any non-MAPI clients, like Exchange ActiveSync (EAS) devices) cannot use Direct Booking for automated resource scheduling. This is especially relevant for Outlook Web App-only environments where the users do not have Microsoft Outlook as a mail client.
  4. The Resource Booking Attendant AutoAccept functionality is a server-side solution, eliminating the need for client-side logic in order to automatically process meeting requests.

How do I check which mailboxes have Direct Booking Enabled?

How does one validate if Direct Booking settings are enabled on mailboxes in the organization, especially if mailboxes had previously been hosted on Exchange 2003?

Screenshot: Resource Scheduling properties
Figure 1: Checking Direct Booking settings in Microsoft Outlook 2010

Unfortunately, the manual steps involve assigning permissions to all mailboxes, creating a MAPI profile for each mailbox, logging into each mailbox, checking Tools > Options > Calendar > Resource Scheduling, noting which of the three Direct Booking checkboxes are checked, clicking OK/Cancel a few times, and logging out of the mailbox. Whew! That can be a major undertaking even for a small to midsize company that has more than a handful of mailboxes! Having staff perform this type of activity manually can be a costly and tedious endeavor. Once you have discovered which mailboxes have the Direct Booking settings enabled, you would then have to repeat this entire process to disable these settings unless you removed them at the time of discovery.

Having an automated method to discover, track, and even disable Direct Booking settings would be nice, right?

Look no further, we have the solution for you!

Using Exchange Web Services (EWS) and PowerShell, we can automate the discovery of Direct Booking settings that are enabled, track the results, and even disable them! We wrote Remove-DirectBooking.ps1, a sample script, to do exactly that and even more to aid in automating this manual effort.

After you've downloaded it, rename the file and remove the .txt extension.

IMPORTANT  The previously uploaded script had the last line truncated to Stop-Tran (instead of Stop-Transcript). We've uploaded an updated version to TechNet Gallery. If you downloaded the previous version of the script, please download the updated version. Alternatively, you can open the previously downloaded version in Notepad or other text editor and correct the last line to Stop-Transcript.

Let’s break down the major tasks the PowerShell script does:

  1. Uses EWS Application Impersonation to tap into a mailbox (or set of mailboxes) and read the three MAPI properties where the Direct Booking settings are stored. It does this by accessing the localfreebusy item sitting in the NON_IPM_SUBTREE\FreeBusy Data folder, which resides in the root of the Information Store in the mailbox. The three MAPI properties and their equivalent Outlook settings the script looks at are:

    • 0x686d Automatically accept meeting requests and remove canceled meetings
    • 0x686f Automatically decline meeting requests that conflict with an existing appointment or meeting
    • 0x686e Automatically decline recurring meeting requests

    These three properties contain Boolean values mirroring the Resource Scheduling checkboxes found in Outlook (see Figure 1 above).

  2. For mailboxes where Direct Booking settings were detected, it checks for conflicts by determining if the mailbox also has Resource Booking Attendant enabled with AutomateProcessing set to AutoAccept.
  3. Optionally, disables any enabled Direct Booking settings encountered.

    Note It is important to understand that by default the script runs in a read-only mode. Additional command line switches are available to run the script to disable Direct Booking settings.

  4. Writes a detailed runtime processing log to console and log file.
  5. Creates a simple output text file containing a list of mailboxes that can be later leveraged as an input file to feed the script for disabling the Direct Booking functionality.
  6. Creates a CSV file containing statistics of the list of mailboxes processed with detailed information, such as what was discovered, any errors encountered, and optionally what was disabled. This is useful for performing analysis in the discovery phase and can also be used as another source to create an input file to feed into the script for disabling the Direct Booking functionality.

Example Scenarios

Here are a couple of example scenarios that illustrate how to use the script to discover and remove enabled Direct Booking settings.

Scenario 1

You've recently migrated from Exchange 2003 to Exchange 2010 and would like to disable Direct Booking for your company’s conference room mailboxes as well as any user mailboxes that may have Direct Booking settings enabled. The administrator’s logged-in account has Application Impersonation rights and the View-Only Recipients RBAC role assigned.

  1. On a machine that has the Exchange management tools & the Exchange Web Services API 1.2 or greater installed, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

    .\Remove-DirectBooking.ps1 –identity * -UseDefaultCredentials

  2. The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note that depending on the number of mailboxes in the org, this may take some time to complete.
  3. When the script completes, open the Remove-DirectBooking_<timestamp>.txt file in Notepad, which will contain the list of mailboxes that have Direct Booking enabled:

    Screenshot: The Remove-Directbooking log generated by the script
    Figure 2: Output file containing list of mailboxes with Direct Booking enabled

  4. After reviewing the list, rerun the script with the InputFile parameter and the RemoveDirectBooking switch:

    .\Remove-DirectBooking.ps1 –InputFile ‘.\Remove-DirectBooking_<timestamp>.txt’ –UseDefaultCredentials -RemoveDirectBooking

  5. The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. Because you specified the RemoveDirectBooking switch, it does not run in read-only mode and disables all currently enabled Direct Booking settings encountered.
  6. When the script completes, you can check the status of the removal operation by checking the Remove-DirectBooking_<timestamp>.csv file. A column called Direct Booking Removed? will record if the removal was successful. You can also check the runtime processing log file RemoveDirectBooking_<timestamp>.log as well.

    Log file results in Excel
    Figure 3: Reviewing runtime log file in Excel (see larger screenshot)

Note The Direct Booking Removed? column now shows Yes where applicable, but the three Direct Booking settings columns still show their various values as “Yes”; this is because we record those three values pre-removal. If you were to run the script again in read-only mode against the same input file, those columns would reflect a value of N/A since there would no longer be any Direct Booking settings enabled. The Resource Room?, AutoAccept Enabled?, and Conflict Detected? columns all have a value of N/A regardless because they are not relevant when disabling the Direct Booking settings.

Scenario 2

You're an administrator who's new to an organization. You know that they migrated from Exchange 2003 to Exchange 2007 in the distant past and are currently in the process of implementing Exchange 2013, having already migrated some users to Exchange 2013. You have no idea which resource mailboxes or even user mailboxes may be using Direct Booking and would like to discover who has what Direct Booking settings enabled. You would then like to selectively choose which mailboxes to pilot for Direct Booking removal before taking action on the majority of found mailboxes.

Here's how you would accomplish this using the Remove-DirectBooking.ps1 script:

  1. Obtain a service account that has Application Impersonation rights for all mailboxes in the org.
  2. Ensure the service account has at least the Exchange View-Only Administrator role (2007) and at least an RBAC role assignment of View-Only Recipients (2010/2013).
  3. On a machine that has the Exchange management tools & the Exchange Web Services API 1.2 or greater installed, preferably an Exchange 2013 server, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

    .\Remove-DirectBooking.ps1 –Identity *

  4. The script will prompt you for the domain credentials of the account you wish to use because no credentials were specified. Enter the service account’s credentials.
  5. The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note that depending on the number of mailboxes in the org, this may take some time to complete.
  6. When the script completes, open the Remove-DirectBooking_<timestamp>.csv in Excel, which will look something like:


    Figure 4: Reviewing the Remove-DirectBooking_<timestamp>.csv in Excel (see larger screenshot)

  7. Filter or sort the table by the Direct Booking Enabled? column. This will provide a list that can be scrutinized to determine which mailboxes are to be piloted with Direct Booking removal, such as those that have conflicts with already having the Resource Booking Attendant’s Automated Processing set to AutoAccept (which you can also filter on using the AutoAccept Enabled? column).
  8. Once the list has been reviewed and the targeted mailboxes isolated, simply copy their email addresses into a text file (one address per line), save the text file, and use it as the input source for the running the script to disable the Direct Booking settings:

    .\Remove-DirectBooking.ps1 –InputFile ‘.\’ -RemoveDirectBooking

  9. As before, the script will prompt you for the domain credentials of the account you wish to use. Enter the service account’s credentials.
  10. The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. It will disable all enabled Direct Booking settings encountered.
  11. Use the same validation steps at the end of the previous example to verify the removal was successful.

Script Options and Caveats

Please see the script’s help section (via “get-help .\remove-DirectBooking.ps1 -full”) for full information on all the available parameters. Here are some additional options that may be useful in certain scenarios:

  1. EWSURL switch parameter By default, the script will attempt to retrieve the EWS URL for each mailbox via AutoDiscover. This is preferred, especially in complex multi-datacenter or hybrid Exchange Online/on-premises environments where different EWS URLs may be in play for any given mailbox depending on where it resides in the org. However, there may be times when you want to supply an EWS URL manually, such as when AutoDiscover is having “issues”, or when the response time for AutoDiscover requests is introducing delays in overall script execution (think a very large number of mailbox identities to churn through) and the EWS URL is the same across the org. In these situations, you can use the EWSURL parameter to feed the script a static EWS URL.
  2. UseDefaultCredentials If the current user is the service account, or simply has both the Impersonation rights and the necessary Exchange admin rights per the script’s requirements, and you don’t wish to be prompted for a credential (scheduling the script to run as a job is another great example), you can use the UseDefaultCredentials switch to run the script under that security context.
  3. RemoveDirectBooking By default, the script runs in read-only mode. In order to make changes and disable Direct Booking settings on the mailbox, you must specify the RemoveDirectBooking switch.

The script does have several prerequisites and caveats to ensure proper operation and meaningful results:

  1. Application Impersonation rights and minimum Exchange Admin rights must be used
  2. Exchange Web Services Managed API 1.2 or later must be installed on the machine running the script
  3. Exchange management tools must be installed on the machine running the script
  4. Script must be executed from within the Exchange Management Shell
  5. The Shell session must have the appropriate execution policy to allow the script to be executed (by default, you can't execute unsigned scripts).
  6. AutoDiscover must be configured correctly (unless the EWS URL is entered manually)
  7. Exchange 2003-based mailboxes cannot be targeted due to lack of EWS capabilities
  8. In an Exchange 2010/2013 environment that also has Exchange 2007 mailboxes present, the script should be executed from a machine running Exchange 2010/2013 management tools due to changes in the cmdlets in those versions

Summary

The discovery and removal of Direct Booking settings can be a tedious and costly process to perform manually, but you can automate it using current functions and features via PowerShell and EWS in Microsoft Exchange Server 2007, 2010, & 2013. With careful use, the Remove-DirectBooking.ps1 script can be a valuable tool to aid Exchange administrators in maintaining automated resource scheduling capabilities in their Microsoft Exchange environments.

Your feedback and comments are welcome.

Thank you to Brian Day and Nino Bilic for their guidance in content review, and to our customers (you know who you are) for piloting the script.

Seth Brandes & Dan Smith

Released: Exchange 2013 Server Role Requirements Calculator


It’s been a long road, but the initial release of the Exchange 2013 Server Role Requirements Calculator is here. No, that isn’t a mistake, the calculator has been rebranded.  Yes, this is no longer a Mailbox server role calculator; this calculator includes recommendations on sizing Client Access servers too! Originally, marketing wanted to brand it as the Microsoft Exchange Server 2013 Client Access and Mailbox Server Roles Theoretical Capacity Planning Calculator, On-Premises Edition.  Wow, that’s a mouthful and reminds me of this branding parody.  Thankfully, I vetoed that name (you’re welcome!).

The calculator supports the architectural changes made possible with Exchange 2013:

Client Access Servers

Like with Exchange 2010, the recommendation in Exchange 2013 is to deploy multi-role servers. There are very few reasons you would need to deploy dedicated Client Access servers (CAS); CPU constraints, use of Windows Network Load Balancing in small deployments (even with our architectural changes in client connectivity, we still do not recommend Windows NLB for any large deployments) and certificate management are a few examples that may justify dedicated CAS.

When deploying multi-role servers, the calculator will take into account the impact that the CAS role has and make recommendations for sizing the entire server’s memory and CPU. So when you see the CPU utilization value, this will include the impact both roles have!

When deploying dedicated server roles, the calculator will recommend the minimum number of Client Access processor cores and memory per server, as well as the minimum number of CAS you should deploy in each datacenter.

Transport

Now that the Mailbox server role includes additional components like transport, it only makes sense to include transport sizing in the calculator. This release does just that and will factor in message queue expiration and Safety Net hold time when calculating the database size. The calculator even makes a recommendation on where to deploy the mail.que database, either the system disk, or on a dedicated disk!

Multiple Databases / JBOD Volume Support

Exchange 2010 introduced the concept of 1 database per JBOD volume when deploying multiple database copies. However, this architecture did not ensure that the drive was utilized effectively across all three dimensions – throughput, IO, and capacity. Typically, the system was balanced from an IO and capacity perspective, but throughput was where we saw an imbalance, because during reseeds only a portion of the target disk’s total capable throughput was utilized. In addition, capacity on the 7.2K disks continues to increase, with 4TB disks now available, thus impacting our ability to remain balanced along that dimension. Exchange 2013 also includes a 33% reduction in IO when compared to Exchange 2010. Naturally, the concept of 1 database / JBOD volume needed to evolve. As a result, Exchange 2013 made several architectural changes in the store process, ESE, and HA architecture to support multiple databases per JBOD volume. If you would like more information, please see Scott’s excellent TechEd session in a few weeks on Exchange 2013 High Availability and Site Resilience or the High Availability and Site Resilience topic on TechNet.

By default, the calculator will recommend multiple databases per JBOD volume. This architecture is supported for single datacenter deployments and multi-datacenter deployments when there is copy and/or server symmetry. The calculator supports highly available database copies and lagged database copies with this volume architecture type. The distribution algorithm will lay out the copies appropriately, as well as generate the deployment scripts correctly to support AutoReseed.

High Availability Architecture Improvements

The calculator has been improved in several ways for high availability architectures:

  • You can now specify the Witness Server location, either primary, secondary, or tertiary datacenter.
  • The calculator allows you to simulate WAN failures, so that you can see how the databases are distributed during the worst failure mode.
  • The calculator allows you to name servers and define a database prefix which are then used in the deployment scripts.
  • The distribution algorithm supports single datacenter HA deployments, Active/Passive deployments, and Active/Active deployments.
  • The calculator includes a PowerShell script to automate DAG creation.
  • In the event you are deploying your high availability architecture with direct attached storage, you can now specify the maximum number of database volumes each server will support. For example, if you are deploying a server architecture that can support 24 disks, you can specify a maximum of 20 database volumes (leaving 2 disks for the system, 1 disk for the Restore Volume, and 1 disk as a spare for AutoReseed).

Additional Mailbox Tiers (sort of!)

Over the years, a few, but vocal, members of the community have requested that I add more mailbox tiers to the calculator. As many of you know, I rarely recommend sizing multiple mailbox tiers, as that simply adds operational complexity and I am all about removing complexity in your messaging environments. While I haven’t specifically added additional mailbox tiers, I have added the ability for you to define a percentage of the mailbox tier population that should have the IO and Megacycle Multiplication Factors applied. In a way, this allows you to define up to eight different mailbox tiers.

Processors

I’ve received a number of questions regarding processor sizing in the calculator.  People are comparing the Exchange 2010 Mailbox Server Role Requirements Calculator output with the Exchange 2013 Server Role Requirements Calculator.  As mentioned in our Exchange 2013 Performance Sizing article, the megacycle guidance in Exchange 2013 leverages a new server baseline, therefore, you cannot directly compare the output from the Exchange 2010 calculator with the Exchange 2013 calculator.

Conclusion

There are many other minor improvements sprinkled throughout the calculator.  We hope you enjoy this initial release.  All of this work wouldn’t have occurred without the efforts of Jeff Mealiffe (for without our sizing guidance there would be no calculator!), David Mosier (VBA scripting guru and the master of crafting the distribution worksheet), and Jon Gollogy (deployment scripting master).

As always we welcome feedback and please report any issues you may encounter while using the calculator by emailing strgcalc AT microsoft DOT com.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience

Released: Exchange Server 2013 Management Pack


The Microsoft Exchange Server 2013 Management Pack (SCOM MP) is now live!

As I discussed in my Managed Availability article, the key difference between this management pack and previous releases is that our health logic is now built into Exchange, as opposed to the management pack. This means updates to Exchange 2013 (like our cumulative updates) will include changes to the probes, monitors, and responders. Any issues that Managed Availability cannot solve are bubbled up to SCOM via an event monitor.

You can download the management pack via Microsoft Download Center at http://www.microsoft.com/en-us/download/details.aspx?id=39039.

You can also view the following documentation:

More information can be found at the SCOM team’s blog - http://blogs.technet.com/b/momteam/archive/2013/05/14/exchange-2013-management-pack-released.aspx.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience

Using Exchange Web Services to Apply a Personal Tag to a Custom Folder


In Exchange 2010, we introduced Retention Tags, a Messaging Records Management (MRM) feature that allows you to manage email lifecycle. You can use retention policies to retain mailbox data for as long as it’s required to meet business or regulatory requirements, and delete items older than the specified period.

One of the design goals for MRM 2.0 was to simplify administration compared to Managed Folders, the MRM feature introduced in Exchange 2007, and allow users more flexibility. By applying a Personal Tag to a folder, users can have different retention settings apply to items in that folder than the default tag applied to the entire mailbox (known as a Default Policy Tag). Similarly, users can apply a different tag to a subfolder than the one applied to the parent folder. Users can also apply a Personal Tag to individual items, allowing them the freedom to organize messages based on their work habits and preference, rather than forcing them to move messages, based on the retention requirement, to an admin-controlled Managed Folder.

You can still use Managed Folders in Exchange 2010, but they’re not available in Exchange 2013.

For a comparison of Retention Tags with Managed Folders and migration details, see Migrate Managed Folders.

If you like the Managed Folders approach of being able to create a folder in the user’s mailbox and configure a retention setting for that folder, you can use Exchange Web Services (EWS) to accomplish something similar, with some caveats mentioned later in this post. You can write your own code or even a PowerShell script to create a folder in the user’s mailbox and apply a Personal Tag to it. There are scripts available on the interwebs, including some code samples on MSDN to accomplish this. For example:

Note: The above scripts are examples for your reference. They’re not written or tested by the Exchange product group.
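
To give you a rough idea of the approach those samples take, here is a minimal, untested EWS Managed API sketch in PowerShell. The DLL path, SMTP address, folder name and retention period are placeholders; the property IDs are the documented PolicyTag (0x3019) and RetentionPeriod (0x301A) MAPI properties, and the tag GUID must be the RetentionId of an existing Personal Tag.

Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"
$svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$svc.UseDefaultCredentials = $true
$svc.AutodiscoverUrl("user@contoso.com", {$true})

# MRM MAPI properties stamped on the folder (RetentionFlags, 0x301D, is typically also stamped; see the referenced samples)
$policyTag       = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x3019, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Binary)
$retentionPeriod = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x301A, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Integer)

$folder = New-Object Microsoft.Exchange.WebServices.Data.Folder($svc)
$folder.DisplayName = "Project Archive"
$tagGuid = [Guid]"00000000-0000-0000-0000-000000000000"   # replace with the tag's RetentionId (Get-RetentionPolicyTag | fl Name,RetentionId)
$folder.SetExtendedProperty($policyTag, $tagGuid.ToByteArray())
$folder.SetExtendedProperty($retentionPeriod, 365)         # must match the tag's retention age in days
$folder.Save([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::MsgFolderRoot)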

But is it supported?

We frequently get questions about whether this is supported by Microsoft. Short answer: Yes. Exchange Web Services (EWS) is a supported and documented API, which allows ISVs and customers to create custom solutions for Exchange.

When using EWS in your code or PowerShell script to apply a Personal Tag to a folder, it’s important to consider the following:

For Developers

  • EWS is meant for developers who can write custom code or scripts to extend Exchange’s functionality. As a developer, you must have a good understanding of the functionality available via the API and what you can do with it using your code/script.
  • Support for EWS API is offered through our Exchange Developer Support channels.

For IT Pros

  • If you’re an IT Pro writing your own code or scripts, you’re a developer too! Above applies to you.
  • If you’re an IT Pro using 3rd-party code or scripts, including the code samples & scripts available on MSDN, TechNet or elsewhere on the interwebs, we recommend that you follow the general best practices for using such code or scripts, including (but not limited to) the following:
    • Do not use code/scripts from untrusted sources in a production environment.
    • Understand what the script or code does. (This is easy for scripts – you can look at the source in a text editor.)
    • Test the script or code thoroughly in a non-production environment, including all command-line options/parameters available in it, before installing or executing it in your production environment.
    • Although it’s easy to change the PowerShell execution policy on your servers to allow unsigned scripts to execute, it’s recommended to allow only signed scripts in production environments. You can easily sign a script if it's unsigned, before running it in a production environment.

So should I do it?

If using EWS to apply a Personal Tag to custom folders helps you meet your business requirements, absolutely! However, do note and consider the following:

  • You’re replicating some of the functionality available via Managed Folders, but it doesn’t turn the folder into a Managed Folder.
  • Remember - it’s a Personal Tag! Users can remove the tag from the folder using Outlook or Outlook Web App.
  • If you have additional Personal Tags available in your environment, users can change the tag on the custom folder.
  • Users can tag individual items with a different Personal Tag. There is no way to enforce inheritance of retention tag if Personal Tags have been provisioned and available to the user.
  • Users can rename or delete custom folders. Unlike Managed Folders, which are protected from changes or deletion by users, custom folders created by users or by admin are just like any other (non-default) folder in the mailbox.

Provisioning custom folders with different retention settings (by applying Personal Tags) may help you meet your organization’s retention requirements. As an IT Pro, make sure you understand the above and follow the best practices.

Bharat Suneja

Log Parser Studio 2.0 is now available


Since the initial release of Log Parser Studio (LPS) there have been over 30,000 downloads and thousands of customers use the tool on a daily basis. In Exchange support many of our engineers use the tool to solve real world issues every day and in turn share with our customers, empowering them to solve the same issues themselves moving forward. LPS is still an active work in progress; based on both engineer and customer feedback many improvements have been made with multiple features added during the last year. Below is a short list of new features:

Improved import/export functionality

For those who create their own queries this is a real time-saver. You can now import from multiple XML files simultaneously, choosing only the queries you wish to import from multiple query libraries or XML files.

Search Query Results

The existing feature allowing searching of queries in the library is now context aware, meaning that if you have a completed query in the query window, the search option searches that query. If you are in the library, it searches the library, and so on. This allows drilling down into existing query results without having to run a new query if all you want to do is narrow down existing result sets.

Input/Output Format Support

All LP 2.2 input and output formats have preliminary support in LPS. Each format has its own property window containing all known LP 2.2 settings, which can be modified to your liking.

Exchange Extensible Logging Support

Custom parser support was added for nearly all Exchange logs. These are covered by the EEL and EELX log formats included in LPS, which cover Exchange logs from Exchange 2003 through Exchange 2013.

Query Logging

I can't tell you how many times I or another engineer spent lots of time creating the perfect query for a particular issue we were troubleshooting, only to forget to save the query in the heat of the moment and lose all that work. No longer! We now have the capability to log every query that is executed to a text file (Query.log). What makes this so valuable is that if you ran it, you can retrieve it.

Queries

There are now over 170 queries in the library including new sample queries for Exchange 2013.


PowerShell Export

You can now export any query as a standalone PowerShell script. The only requirement of course is that Log Parser 2.2 is installed on the machine you run it on but LPS is not required. There are some limitations but you can essentially use LPS as a query editor/test bed for PowerShell scripts that run Log Parser queries for you!
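
The exported scripts drive Log Parser 2.2 through its COM interface. As a rough idea of the shape of such a script (this is not the exact code LPS generates; the query and log path are placeholders), it looks something like this:

$logQuery    = New-Object -ComObject "MSUtil.LogQuery"
$inputFormat = New-Object -ComObject "MSUtil.LogQuery.IISW3CInputFormat"
$query = "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM 'C:\inetpub\logs\LogFiles\W3SVC1\*.log' GROUP BY cs-uri-stem ORDER BY Hits DESC"
$records = $logQuery.Execute($query, $inputFormat)
while (-not $records.atEnd()) {
    $record = $records.getRecord()
    "{0,-60} {1}" -f $record.getValue("cs-uri-stem"), $record.getValue("Hits")
    $records.moveNext()
}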


Query Cancellation

You can now submit a request to cancel a running query, which will stop it in many cases.

Keyboard Shortcuts

There are now 23 keyboard shortcuts. Be sure to check these out as they will save you lots of time. To display the shortcuts use CTRL+K or Help > Keyboard Shortcuts.

There are literally hundreds of improvements and features; far too many to list here, so be sure to check out our blog series with existing and upcoming tutorials, deep dives and more. If you are installing LPS for the first time you'll surely want to review the getting started series:

If you are already familiar with LPS and are installing this latest version, you'll want to check out the upgrade blog post here:

Additional LPS articles can be found here:

http://blogs.technet.com/b/karywa/

LPS doesn't require an install so just extract to the folder of your choice and run LPS.EXE. If you have the previous version of LPS and you have added your own custom queries to the library, be sure to export those queries as a backup before running the newest version. See the "Upgrading to LPS V2" blog post above when upgrading.

Kary Wall

Adventures in querying the EventHistory table


Beginning with Exchange 2007 the Exchange database has had an internal table called EventHistory.  This table has been used to track the events upon which several of the assistants are based and for other short term internal record keeping.  The way to query the table hasn’t been publicized before but it has a number of uses:

  • It may tell you the fate of a deleted item (for situations where Audit logging or store tracing was not in place at the time of the delete)
  • It can list accounts who have recently touched a mailbox
  • It can show you the clients that have touched a mailbox

Events are kept in the EventHistory table for up to 7 days by default.  You can check what your retention period is for all databases by running:

Get-mailboxdatabase | fl name,event*
Name                        : MainDB
EventHistoryRetentionPeriod : 7.00:00:00

There are a number of approaches to querying the table.  Let’s start with a script (please review my caveats before actually running the script) and review the data that is displayed.  The script is:

Add-PSSnapin Microsoft.Exchange.Management.Powershell.Support
$db = (get-mailbox <user alias>).database
$mb=(get-mailbox <user alias>).exchangeguid
Get-DatabaseEvent $db -MailboxGuid $mb -resultsize unlimited | ? {$_.documentid -ne 0 -and $_.CreateTime -ge  “<mm/dd/yyyy>”} | fl > c:\temp\EventHistory.txt

For the CreateTime specify the day of the event you are looking for.  By default a maximum of 7 days are tracked.  Depending on the date range selected and the activity in the mailbox the resulting file size starts at about 5KB and I have seen it rise to nearly 1GB.  You can also replace the “| fl > c:\temp\EventHistory.txt” with “| export-csv c:\temp\EventHistory.csv”.  I am using the FL output because it is easier for illustration purposes.

Inside the EventHistory.txt file will be events like this one (this one is a bulk delete of emails using OWA):

Counter          : 15328155
CreateTime       : 1/28/2013 9:46:16 PM
ItemType         : MAPI_MESSAGE
EventName        : ObjectMoved
Flags            : None
MailboxGuid      : d05f83c1-255c-42ae-b74f-1ac3329b306a
ObjectClass      : IPM.Note
ItemEntryId      : 000000008CFDF3C2BA873648866A1C17D0E3F1AB0700BC9C9BA42124CD4F896E8915C86B2BD00000006027C20000BC9C9BA4
2124CD4F896E8915C86B2BD0000041B6E6570000

ParentEntryId    : 000000008CFDF3C2BA873648866A1C17D0E3F1AB0100BC9C9BA42124CD4F896E8915C86B2BD00000006027C20000
OldItemEntryId   : 000000008CFDF3C2BA873648866A1C17D0E3F1AB0700BC9C9BA42124CD4F896E8915C86B2BD00000006027BF0000BC9C9BA4
2124CD4F896E8915C86B2BD0000041B6D6260000

OldParentEntryId : 000000008CFDF3C2BA873648866A1C17D0E3F1AB0100BC9C9BA42124CD4F896E8915C86B2BD00000006027BF0000
ItemCount        : 0
UnreadItemCount  : 0
ExtendedFlags    : 2147483648
ClientCategory : WebServices
PrincipalName : Contoso\TestUser
PrincipalSid : S-1-5-21-915020002-1829042167-1583638127-1930
Database         : Mailbox Database 1858470524
DocumentId       : 10876

The EventName shows what was done with the object. End user deletes will be listed as moves. When you delete an item it is moved to either Deleted Items or to the Recoverable Items subtree.

I highlighted the ItemEntryID because that ties directly to the Item you need to locate.  The subject and other human readable properties are not included in this table.  The ItemEntryID is the database engine’s way of uniquely identifying each item.  You can use this to search the mailbox in MFCMAPI and get properties like Subject, From, To, etc.

  • The ParentEntryID is the folder in which the item presently resides.
  • The OldItemEntryID is the previous ItemEntryID before the item was deleted.
  • The OldParentEntryID is the folder it used to reside in.

Flags will often show values like SearchFolder.  Many events flagged as being related to search folders or folders are not going to be interesting to your investigations.  If you are researching the fate of a deleted item they can be ignored.

ClientCategory is the type of client that requested the operation.  In this case webservices means that OWA was used to remove the item as part of a bulk operation conducted against a 2010 mailbox.  If it was deleted individually then Exchange 2010 would list OWA here.   The way ClientCategories are tracked in Exchange 2013 is a little different; you should see OWA for all End User deletes through that tool.

PrincipalName and PrincipalSid give you the identity of the account that was passed to the information store when the operation was requested.  At the time of writing these are not displayed by Exchange 2013.

So – we have an output file.  What do we do with it?  The easy uses for the file (once it is imported into your favorite data analysis tool) at this time are:

  • List of all accounts that have caused an event to be logged in the time period you specified
  • Get a summary of operations (deletes, moves, new items, etc.) conducted on the days you specified
  • Get a list of client types that have changed something in the mailbox
  • Search the records returned for a particular ItemEntryID
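
If you exported to CSV, a few Group-Object one-liners cover most of these questions. The column names below assume the Get-DatabaseEvent output shown earlier:

$events = Import-Csv C:\temp\EventHistory.csv
$events | Group-Object EventName      | Sort-Object Count -Descending | Select-Object Count, Name   # summary of operations
$events | Group-Object PrincipalName  | Select-Object Count, Name                                   # accounts that touched the mailbox
$events | Group-Object ClientCategory | Select-Object Count, Name                                   # client types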

In our output the ItemEntryID is not immediately useful.  To find out what the ItemEntryID in each record actually is we need to use MFCMAPI (steps related to MFCMAPI are at the end of this blog).  Once you are in MFCMAPI you can go to the Tools menu, select “Entry ID” and then “Open given entry ID”.  In the dialog that appears paste in the ItemEntryId or the OldItemEntryId that you want to investigate.  When you click OK MFCMAPI will take you to the item you specified (if it is still in the mailbox).  Once MFCMAPI takes you to the mail item you will see the Subject, From, To, Creation date and other meaningful properties.  You will also see there is a property called PR_ENTRYID.  PR_ENTRYID is the MAPI name for ItemEntryID.  This field is our link between the representation of the data in our PowerShell cmdlet and the more human readable presentation in MFCMAPI.

Pulling ItemEntryIDs from the PowerShell output and looking them up one at a time in MFCMAPI may be a little too tedious for most Exchange administrators.  If you have more than a handful of items you want to check (to see if they are useful and meaningful) it will take a long time to locate them all. 

The alternative is to start in MFCMAPI.  If you can find the item you want there by looking at the subject line, date or other properties you can use the content of the PR_ENTRYID field in MFCMAPI to modify the Get-DatabaseEvent query to pull up the history for just that item.  To do this you need access to either a restored copy of the mailbox in a lab or the item of interest must still be in the mailbox (possibly in deleted items or recoverable items).  Here is a sample of how the get-databaseevent cmdlet would be used if you have the PR_ENTRYID:

Get-DatabaseEvent $db -MailboxGuid $mb -resultsize unlimited | ? {$_.ItemEntryID -eq "000000008CFDF3C2BA873648866A1C17D0E3F1AB0700BC9C9BA42124CD4F896E8915C86B2BD00000006027C20000BC9C9BA42124CD4F896E8915C86B2BD0000041B6E6570000" -or $_.OldItemEntryId -eq "000000008CFDF3C2BA873648866A1C17D0E3F1AB0700BC9C9BA42124CD4F896E8915C86B2BD00000006027C20000BC9C9BA42124CD4F896E8915C86B2BD0000041B6E6570000"} | export-csv c:\temp\SingleItemEventHistory.txt

Sometimes I have not been able to locate an item using this technique.  If that happens it is useful to note that the PR_ENTRYID contains the ID of the mailbox, the folder and the item.  For example here is the PR_ENTRYID of an item in the Inbox followed by the PR_ENTRYID of the Inbox itself:

000000006064986ABA58DF40A86C0C67E716264807004885B50069B1D04994374C02417D45A100000000324E00003DEF8F7FFC1E3448B9D276F022E0E42D0000396D1B280000 - item in the Inbox
000000006064986ABA58DF40A86C0C67E716264801004885B50069B1D04994374C02417D45A100000000324E0000 - Inbox folder

For the sake of comparison here are the PR_ENTRYIDs of two more folders in the same mailbox:

000000006064986ABA58DF40A86C0C67E716264801004885B50069B1D04994374C02417D45A10000000032510000 - deleted items folder
000000006064986ABA58DF40A86C0C67E716264801004885B50069B1D04994374C02417D45A100000000324B0000 - ipm_subtree folder

From this you should be able to get an idea of how the field is divided up by looking at where the repeated digits end.  For the purpose of tracking down an individual item that may be in a different folder (because of multiple moves) we want to be able to isolate the portion of the PR_ENTRYID that is specific to the item and modify our PowerShell statement appropriately.  The final statement would look like this:

Get-DatabaseEvent $db -MailboxGuid $mb -resultsize unlimited | ? {$_.ItemEntryID -like "*3DEF8F7FFC1E3448B9D276F022E0E42D0000396D1B280000" -or $_.OldItemEntryId -like "*3DEF8F7FFC1E3448B9D276F022E0E42D0000396D1B280000"} | export-csv c:\temp\SingleItemEventHistory.txt

At this point, if we still can’t find the item we want, our last options are to remove the -MailboxGuid parameter (meaning we will search all mailboxes in the database – a very expensive operation, so please review the caveats below) or to search other databases in the organization (databases containing delegates of the current user would be the ones to start with).  If the data still can’t be found you have either made an error or the records are no longer present.  If the records are present you should see all actions taken on the item recently.

Caveats:

  • At the time of writing Exchange 2013 is not reporting the account information in the EventHistory records.  You can use the technique – you just won’t get any account names or SIDs from it.
  • You can change the length of time items stay in the EventHistory table with Set-MailboxDatabase -EventHistoryRetentionPeriod; a short example follows this list.  You can choose a period from 1 second up to 30 days.  I don’t recommend setting a time that is too short as I have not tested how Event-based assistants would react to that.  For the full syntax of Set-MailboxDatabase please check the TechNet article for your Exchange version.
  • If you choose to direct your output to a variable instead of a text file you should make sure you are running the PowerShell cmdlets from a workstation with the management tools installed.  The variable (and the PowerShell session) are likely to consume a substantial amount of memory. 
  • These queries of the EventHistory table are expensive to run.  Use good judgment in when you choose to run them based on the demands of your environment.  In the labs I use all these queries take a second or two, but on a busy server with large databases  you can easily be looking at 20-30 minutes per query.  There will also be an I/O impact, but I don’t have a way to estimate that for you in advance.
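
As a quick illustration of the retention setting mentioned in the caveats above, here is a minimal sketch (the database name is hypothetical, and the value is a timespan in dd.hh:mm:ss format):

# Check the current EventHistory retention on a database (hypothetical database name)
Get-MailboxDatabase "DB01" | Format-List Name, EventHistoryRetentionPeriod

# Raise it to 10 days - anything from 1 second up to 30 days is accepted
Set-MailboxDatabase "DB01" -EventHistoryRetentionPeriod 10.00:00:00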

You can make the operation less expensive by lowering the number of records returned by Get-DatabaseEvent.  We are already including the database and mailbox to look for.  You can also add the EventNames and the StartCounter.  The latter of these might be a little tricky.  The StartCounter is an internal number that is specific to this table in the current database.  You probably won’t know what counter value to use until you have already run a query and noted the counter values.  This means StartCounter is mostly useful for reducing the impact of your second and subsequent queries of the same table in the same database.

Assuming you know a relevant StartCounter value here is an example of doing this:

Get-DatabaseEvent $db -MailboxGuid $mb -EventNames objectmodified, objectdeleted -StartCounter 15328155 -resultsize unlimited | ? {$_.documentid -ne 0 -and $_.CreateTime -ge "<mm/dd/yyyy>"} | fl > c:\temp\EventHistory.txt

The example above searches a mailbox on a particular database for the event types specified and ignores any rows with a lower counter value than specified.  This smaller dataset is then passed to the PowerShell pipeline for additional filtering and is ultimately saved to a text file that you can review or import into your favorite analysis tool.  If you prefer to conduct your analysis in PowerShell you also have the option of assigning the result of Get-DatabaseEvent to a PowerShell variable (just remember the variable and the PowerShell session will consume memory proportional to the resultset returned).

So how do you find the PR_ENTRYIDs I mentioned above in MFCMAPI?

You can download MFCMAPI from https://mfcmapi.codeplex.com.

1. We need an Outlook profile for the mailbox we are searching.  That profile should NOT be configured for Cached mode.  If you are doing this from your machine make sure you have Full Access to the mailbox of the user.  You can then create a profile for that specific user.

2. Once you have the profile, open MFCMAPI and log on.

image

3. Select the profile you created in Step 1.  You will see a screen like this one:

image

4. Double-click the mailbox which will open a window showing you the mailbox details.

5. If you already know the ItemEntryID you want to open and inspect you can locate it with this menu option:

image

6. If you don’t have the ItemEntryID expand the Root Container, Recoverable Items and Top of Information Store.  If you are trying to locate details on a deleted item look in the Deleted Items folder and the Recoverable Items folder (and its subfolders).

image

7. Double-click Deleted Items to open a window that looks like this one:

image

8. Click the item to fill in the lower half of the window with the properties

9. Locate the PR_EntryID property and double-click it

image

10. The Binary box contains the value of the PR_ENTRYID field that you can use to search the EventHistory table in the Store.  If you locate this value with MFCMAPI first you can use it to limit the search as I described above.  If you don’t have this value you can pull the full history and use the ItemEntryIDs as a basis to search MFCMAPI.

Thanks to Jesse Tedoff for the idea!

Chris Pollitt


Managed Availability and Server Health


Every second on every Exchange 2013 server, Managed Availability polls and analyzes hundreds of health metrics.  If something is found to be wrong, most of the time it will be fixed automatically.  But of course there will always be issues that Managed Availability won’t be able to fix on its own.  In those cases, Managed Availability will escalate the issue to an administrator by means of event logging, and perhaps alerting if System Center Operations Manager is used in tandem with Exchange 2013. When an administrator needs to get involved and investigate the issue, they can begin by using the Get-HealthReport and Get-ServerHealth cmdlets.

Server Health Summary

Start with Get-HealthReport to find out the status of every Health Set on the server:

Get-HealthReport -Identity <ServerName>

This will result in the following output (truncated for brevity):

Server   State          HealthSet        AlertValue  LastTransitionTime  MonitorCount
------   -----          ---------        ----------  ------------------  ------------
Server1  NotApplicable  AD               Healthy     5/21/2013 12:23     14
Server1  NotApplicable  ECP              Unhealthy   5/26/2013 15:40     2
Server1  NotApplicable  EventAssistants  Healthy     5/29/2013 17:51     40
Server1  NotApplicable  Monitoring       Healthy     5/29/2013 17:21     9

In the above example, you can see that the ECP (Exchange Control Panel) Health Set is Unhealthy. And based on the value for MonitorCount, you can also see that the ECP Health Set relies on two Monitors. Let's find out if both of those Monitors are Unhealthy.

Monitor Health

The next step would be to use Get-ServerHealth to determine which of the ECP Health Set Monitors are in an unhealthy state.

Get-ServerHealth -Identity <ServerName> -HealthSet ECP

This results in the following output:

Server   State          Name                TargetResource  HealthSetName  AlertValue  ServerComponent
------   -----          ----                --------------  -------------  ----------  ---------------
Server1  NotApplicable  EacSelfTestMonitor                  ECP            Unhealthy   None
Server1  NotApplicable  EacDeepTestMonitor                  ECP            Unhealthy   None

As you can see above, both Monitors are Unhealthy.  As an aside, if you pipe the above command to Format-List, you can get even more information about these Monitors.
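
For example (the server name is a placeholder):

Get-ServerHealth -Identity <ServerName> -HealthSet ECP | Format-List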

Troubleshooting Monitors

Most Monitors are one of these four types:

image

The EacSelfTestMonitor Probes along the "1" path, while the EacDeepTestMonitor Probes along the "4" path. Since both are unhealthy, it indicates that the problem lies on the Mailbox server in either the protocol stack or the store. It could also be a problem with a dependency, such as Active Directory, which is common when multiple Health Sets are unhealthy. In this case, the Troubleshooting ECP Health Set topic would be the best resource to help diagnose and resolve this issue.

Abram Jackson

Program Manager, Exchange Server

Released: Exchange Server 2013 RTM Cumulative Update 2


Today, we released Exchange Server 2013 RTM Cumulative Update 2 (CU2) to the Microsoft Download Center. In addition to this article, the Exchange 2013 RTM release notes (updated for CU2) are also available.

The final build number for Exchange 2013 RTM CU2 is 15.0.712.22.

Note: Some article links may not be available at the time of this post's publication. Updated Exchange 2013 documentation, including Release Notes, will be available on TechNet soon.

Servicing Model Update

In the new Exchange servicing model customers will continue to receive assistance from Microsoft Support for the lifecycle of the Exchange server product - a customer is not required to be at the most current CU to receive assistance. There are two scenarios that we would like to clarify though:

  1. If during the course of a support incident it is determined that the solution is available in a published CU (e.g., CU2), the customer will be required to install the update that contains the fix. We will not be building a new fix to run on top of a CU published earlier (e.g., CU1).
  2. If during the course of a support incident it is determined that you have discovered a new problem for which we confirm a fix is required, that fix will be published in a future CU that you can then install to correct the problem reported.

An important benefit of the Exchange servicing model is that it provides the ability to receive independent security releases outside of the CU or Service Pack (SP) process. What this means for you is that future security fixes will not require you to install a CU to get the individual fix for a reported vulnerability. This allows you to quickly validate and install a security update with confidence knowing that only the fixes which address a particular security problem will be included as part of that release.

Exchange Server Cumulative Updates are scheduled to be released quarterly. We realize that some customers spend several months validating environments, third-party products, etc., and require more time for testing. Therefore, we will continue to ship a Service Pack which provides all of the updates included in prior cumulative updates in one installation and acts as a logical milestone for updating your servers.

Customers who are using Exchange Server 2013 and Office 365 together in an Exchange Hybrid scenario get a rich set of capabilities to manage and run mailboxes on-premises and in the cloud. Updates come to Office 365 frequently and thus customers in hybrid scenarios are strongly recommended to stay current as Cumulative Updates are released. Keeping current will allow your on-premises Exchange Server to be running the same code as the Office 365 Exchange servers. This helps keep consistency between on-premises and Office 365 users and puts you in the best position to take advantage of new features as they are made available in the service. This always updated approach is available for everyone and is the recommended approach for all customers to obtain fixes and new features as soon as they become available.

Overall, the new Exchange Server servicing strategy provides a predictable pattern for releases and provides customer control options for on-premises customers. Each CU receives extensive validation as the builds released in a CU have been deployed in the Office 365 service – you can deploy a CU knowing it has already had datacenter scale validation in the world’s largest and most demanding Exchange environment.

Upgrading/Deploying Cumulative Update 2

Unlike previous versions, cumulative updates do not use the rollup infrastructure; cumulative updates are actually full builds of the product, meaning that when you want to deploy a new server, you simply use the latest cumulative update build available and do not necessarily need to apply additional Exchange Server updates.

Important: To prevent issues during the installation or upgrade of Exchange 2013 RTM CU2, you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted”. Failure to do so could cause the Exchange 2013 server to be in an unusable state and some downtime could occur. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the Exchange 2013 Server(s). If the policies are NOT set to Unrestricted you should use the resolution steps in KB 981474 to adjust the settings.
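
As a quick check before running Setup, something like the following can be run on each Exchange 2013 server (a minimal sketch; the KB article describes the full resolution steps if a policy is being enforced):

# Effective execution policy for this server - it should report Unrestricted
Get-ExecutionPolicy

# Per-scope view, which shows whether Group Policy (MachinePolicy/UserPolicy) is setting the value
Get-ExecutionPolicy -List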

Active Directory Preparation

Prior to upgrading or deploying the new build onto a server, you will need to update Active Directory. For those of you with a diverse Active Directory permissions model you will want to perform the following steps:

  1. Exchange 2013 RTM CU2 includes schema changes. Therefore, you will need to execute setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms.
  2. Exchange 2013 RTM CU2 includes enterprise Active Directory changes (e.g., RBAC roles have been updated to support new cmdlets and/or properties). Therefore, you will need to execute setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms.

Note: If your environment contains only Exchange 2007, and you upgrade to Exchange 2013, keep in mind you cannot deploy Exchange 2010 in that environment at a later time. If you foresee a need to deploy Exchange 2010 servers into your environment, deploy an Exchange 2010 multi-role server (with all four server roles) prior to executing Exchange 2013 setup.exe /PrepareAD. As long as you retain at least one role of each legacy server, you will continue to be able to install additional servers of that version into your coexistence environment. Once you remove the last server role of a legacy version, you will no longer be able to reintroduce that version into the environment.

Server Deployment

Once the preparatory steps are completed, you can then deploy CU2 and start your coexistence journey. If this is your first Exchange 2013 server deployment, you will need to deploy both an Exchange 2013 Client Access Server and an Exchange 2013 Mailbox Server into the organization. As explained in Exchange 2013 Client Access Server Role, CAS 2013 is simply an authentication and proxy/redirection server; all data processing (including the execution of remote PowerShell cmdlets) occurs on the Mailbox server. You can either deploy a multi-role server or each role separately (just remember if you deploy them separately, you cannot manage the Exchange 2013 environment until you install both roles).

If you already deployed Exchange 2013 RTM code and want to upgrade to CU2, you will run setup.exe /m:upgrade /IAcceptExchangeServerLicenseTerms from a command line after completing the Active Directory preparatory steps or run through the GUI installer. Deploying future cumulative updates will operate in the same manner.

Note: Unlike previous versions, in Exchange 2013, you cannot uninstall a single role from a multi-role server. For example, if you deploy the CAS and MBX roles on a single machine, you cannot later execute setup to remove the CAS role; you can only uninstall all server roles.

Changes in Exchange 2013 RTM CU2

In addition to bug fixes, Exchange 2013 RTM CU2 introduces enhancements in the following areas.

  • Per-server database support
  • OWA Redirection
  • High Availability
  • Managed Availability
  • Cmdlet Help
  • OWA Search Improvements
  • Malware Filter Rules

Per-Server Database Support

As mentioned previously, Exchange 2013 RTM CU2 increases the per-server database support from 50 databases to 100 databases in the Enterprise Edition of the product. Please note that this architectural change may not provide any additional scalability as CPU may be a bottleneck, thereby limiting the number of mailboxes you can deploy per-server.

As promised, the Exchange 2013 Server Role Requirements Calculator has been updated for this architectural change.

OWA Redirection

Depending on your deployment model, Exchange 2013 RTM CU1 supported the following redirection or proxy scenarios:

  1. In environments where Exchange 2013 and Exchange 2010 coexist, Exchange 2013 CAS proxies OWA requests to Exchange 2010 CAS for Exchange 2010 mailboxes.
  2. In environments where Exchange 2013 and Exchange 2007 coexist, Exchange 2013 CAS redirects the request to the Exchange 2007 CAS infrastructure’s ExternalURL. While this redirection is silent, it is not a single sign-on event.
  3. In native Exchange 2013 environments:
    1. Exchange 2013 CAS proxies the OWA request directly to the Exchange 2013 Mailbox server when in a single site.
    2. Exchange 2013 CAS proxies the OWA request directly to the Exchange 2013 Mailbox server when the Mailbox server exists in a different site and the CAS infrastructure in the target site has no ExternalURL defined.
    3. Exchange 2013 CAS proxies the OWA request directly to the Exchange 2013 Mailbox server when the Mailbox server exists in a different site and the CAS infrastructure in the target site has an ExternalURL that matches the source site’s ExternalURL.
    4. Exchange 2013 CAS redirects the OWA request to the CAS infrastructure in the target site when the target site’s ExternalURL does not match the source site’s ExternalURL. While this redirection is silent, it is not a single sign-on event.

Exchange 2013 RTM CU2 changes this behavior by providing a single sign-on experience when Forms-Based Authentication (FBA) is used on the source and destination OWA virtual directories by issuing back to the web browser a hidden FBA form with the fields populated. This hidden form contains the same information as what the user had originally submitted to the source CAS FBA page (username, password, public/private selector) as well as, a redirect to the target Exchange specific path and query string. As soon as this form is loaded it is immediately submitted to the target URL. The result is the user is automatically authenticated and can access the mailbox data.

Many of you may be familiar with this functionality in Exchange 2010 SP2. However, there are differences in the Exchange 2013 RTM CU2 implementation:

  1. Silent redirection is the default behavior in Exchange 2013, meaning that if FBA is enabled on source and target OWA virtual directories, the redirection will also be a single sign-on event.
  2. You can disable silent redirection on the source CAS via the web.config file located at <ExchangeSetupDir>\FrontEnd\HttpProxy\owa by adding the following line in the <appSettings> section:

    <add key="DisableSSORedirects" value="true" />

High Availability

Exchange 2013 RTM CU2 introduces a new service, the DAG Management Service. The DAG Management service contains non-critical code that used to reside in the Replication service. This change does not introduce any additional complexities in event reporting, either – events are written to the Application event log with the source MSExchangeRepl, as well as to the crimson channel.

Managed Availability

In addition to improvements in various probes and monitors, there have been changes to the responder throttling framework. Prior to Exchange 2013 RTM CU2, many responders were only throttled per-server (e.g., RestartService). Now, these responders are throttled per group. For example, originally RestartService was throttled based on the number of occurrences that occurred on a server; in Exchange 2013 RTM CU2, RestartService can execute every 60 minutes DAG-wide, with a maximum of 4 restarts per day DAG-wide.

The table below shows, for each recovery action, whether it is enabled, the per-server throttling values (Minutes Between Actions / Max Allowed Per Hour / Max Allowed Per Day), and the per-group (DAG-wide) throttling values (Minutes Between Actions / Max Allowed Per Day).

RecoveryAction    Enabled  Per Server     Per Group
ForceReboot       True     720 / N/A / 1  600 / 4
SystemFailover    True     60 / N/A / 1   60 / 4
RestartService    True     60 / N/A / 1   60 / 4
ResetIISPool      True     60 / N/A / 1   60 / 4
DatabaseFailover  True     120 / N/A / 1  120 / 4
ComponentOffline  True     60 / N/A / 1   60 / 4
ComponentOnline   True     5 / 12 / 288   5 / Large
MoveClusterGroup  True     240 / N/A / 1  480 / 3
ResumeCatalog     True     5 / 4 / 8      5 / 12
WatsonDump        True     480 / N/A / 1  720 / 4

Cmdlet Help

Exchange 2013 RTM CU2 introduces the capability for administrators to get updates to Exchange Management Shell cmdlets without needing to deploy a new service pack or cumulative update. Administrators can launch the Exchange Management Shell and run the Update-ExchangeHelp cmdlet to update their local Shell help.
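For example, from the Exchange Management Shell:

# Downloads and applies the latest cmdlet help for the installed version; add -Verbose to see progress detail
Update-ExchangeHelp -Verbose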

OWA Search Improvements

Previously searching for keywords within OWA did not give indications of the location of the keyword in the search result set. Exchange 2013 RTM CU2 improves OWA’s search results highlighting in three ways:

  1. Conversation items that have hits in them are auto-expanded.
  2. Whenever you search for a term and select a conversation from the result list, OWA will move the scroll position of the reading pane so that the first item part with that search term is in view.
  3. Hit navigation within a conversation – you can jump between search hits quickly using a control built into the reading pane.

Malware Filter Rules

Exchange 2013 RTM CU2 introduces the -MalwareFilterRule cmdlets. You can use the -MalwareFilterRule cmdlets to apply custom malware filter policies to specific users, groups, or domains in your organization. Custom policies always take precedence over the default company-wide policy, but you can change the priority (that is, the running order) of your custom policies.
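As a hedged sketch of what that can look like (the policy name, rule name, and domain are all hypothetical; check Get-Help New-MalwareFilterRule in your environment for the full parameter set):

# Create a custom malware filter policy and scope it to a single recipient domain
New-MalwareFilterPolicy -Name "Contoso Executives Policy" -Action DeleteMessage
New-MalwareFilterRule -Name "Contoso Executives" -MalwareFilterPolicy "Contoso Executives Policy" -RecipientDomainIs contoso.com

# Custom rules run in priority order; adjust the running order if you create several
Set-MalwareFilterRule "Contoso Executives" -Priority 0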

Looking Ahead

The Exchange Product Group is in the final validation stages to support Windows Azure for Witness Server placement. Specific guidance on using Windows Azure for the Witness Server placement will be available via TechNet at a later date. Support for this scenario will occur once the guidance has been released.

Conclusion

We understand that some features delivered in CU2 were available in Exchange 2010 and haven’t been available until this update. The lack of single sign-on capability in OWA redirection and the reduced per-server database support were due in part to the complete re-write of these components in Exchange 2013. Holding back these features was necessary to meet our code stability and performance criteria for release. It was your feedback which helped prioritize the return of these features. Our new servicing model allows us to add incremental improvements to the product at a faster cadence than the previous model.

As always, we continue to identify ways to better serve your needs through our regular servicing releases. We hope you find these improvements useful. Please keep the feedback coming, we are listening.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience

Updates

  • 7/11/13: Added info about PowerShell Execution Policy and KB981474.
  • 7/11/13: Exchange 2013 Release Notes on TechNet have been refreshed.

Announcing the Jetstress 2013 Field Guide


Due to the success of the Jetstress 2010 field guide, we have decided to continue the tradition by releasing an updated version of the guide for Jetstress 2013. As with the previous version, the aim of the document is as follows:

  • Explain how Jetstress works.
  • Provide storage validation planning and configuration guidance.
  • Provide Jetstress results interpretation guidance.

So, what’s changed? Well, the good news is that Jetstress 2013 is very similar to Jetstress 2010. There are some modifications to accommodate the storage changes within Exchange Server 2013; however, the planning, configuration and results interpretation processes remain largely the same as they were in Jetstress 2010.

Change overview in Jetstress 2013

  • The Event log is captured and logged to the test log. These events show up in the Jetstress UI as the test is progressing.
  • Any errors are logged against the volume on which they occurred. The final report shows the error counts per volume in a new sub-section.
  • A single IO error anywhere will fail the test.
  • In case of CRC errors (JET -1021), Jetstress will simulate the same behaviour as Exchange “page patching”.
  • Detects -1018, -1019, -1021, -1022, -1119, hung IO, DbtimeTooNew, DbtimeTooOld.
  • Threads, which generate IO, are now controlled at a global level. Instead of specifying Threads/DB, you now specify a global thread count, which works against all databases.

Updates in the Jetstress 2013 Field Guide

Not content with simply updating Jetstress, we have also added some more information into the field guide.

  • Updated internals section to reflect changes made in Jetstress 2013 [4]
  • Updated validation process flow charts [5.1]
  • Improved failure mode testing section [5.4]
  • Updated initialisation time table [5.6.1]
  • Updated installation section [6]
  • Updated report data section [9]
  • Updated thread count section [Appendix A]

The Jetstress Field Guide will be the only documentation released for Jetstress 2013, so if you have any feedback please feel free to share it with us here.

Thanks,

Neil Johnson
Senior Consultant, MCS UK

Exchange 2013 RTM CU2 Issue - Public Folder Permissions Loss After PF Mailbox Move


Late yesterday we became aware of a specific issue with Exchange 2013 RTM CU2. This issue only occurs within native Exchange 2013 environments that are leveraging Modern Public Folders. The issue exists when you move public folder mailboxes. The specific issue is that if you move a public folder mailbox, there is the potential for the permission structure on some public folders to be lost. Specifically:

  1. If you move (via New-MoveRequest) a secondary public folder (PF) mailbox, the permissions on any public folder (including well known folders) not stored in the secondary PF mailbox would be lost from the secondary PF mailbox and replaced by the default ACL. The original ACLs can be restored via a full synchronization event by executing Update-PublicFolderMailbox -InvokeSynchronizer <Public Folder Mailbox> -FullSync.
  2. If you move (via New-MoveRequest) the primary PF mailbox, the permissions on any public folder (including well known folders) not stored in that public folder mailbox are lost and replaced by the default ACL.

The default ACL gives Author permissions to Default (authenticated users).

Recommendation

If you have already deployed Exchange 2013 RTM CU2 (712.22) and have Modern Public Folders in your environment, we recommend you do not move public folder mailboxes so that you do not experience this issue. We will be releasing an IU that will address this issue in the near future.

If you are in the midst of a migration to Exchange 2013 and will not be deploying Modern Public Folders for some time, you can proceed with installing Exchange 2013 RTM CU2 (712.22). Once you are ready to deploy Modern Public Folders ensure you have deployed the soon-to-be-released Interim Update or the latest available Cumulative Update.

Questions/Answers

Q: What if I have already deployed CU2 and moved a public folder mailbox?

A: If you have moved a secondary PF mailbox, then you can execute Update-PublicFolderMailbox -InvokeSynchronizer <Public Folder Mailbox> -FullSync to replace the permissions. If you moved the primary PF mailbox, you will need to manually reassign permissions.

Q: What is a Primary Public Folder Mailbox? 

A: The primary Public Folder (PF) mailbox is the mailbox defined as the RootPublicFolderMailbox within the organization. You can look up the RootPublicFolderMailbox GUID via the Get-OrganizationConfig cmdlet.
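For example, a minimal way to find it (matching the GUID against ExchangeGuid is an assumption about how the value is stored, so verify it in your environment):

# The GUID of the primary (hierarchy) public folder mailbox for the organization
$rootGuid = (Get-OrganizationConfig).RootPublicFolderMailbox

# Resolve the GUID to a mailbox name, assuming it corresponds to the mailbox's ExchangeGuid
Get-Mailbox -PublicFolder | Where-Object { $_.ExchangeGuid -eq $rootGuid } | Format-Table Name, Database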

Q: What is a Secondary Public Folder Mailbox? 

A: A secondary PF mailbox is any public folder mailbox that is not defined as the RootPublicFolderMailbox within the organization.

Q: Can you explain the issue in a different way?

A: Let’s say you have the following public folder structure:

Hierarchy

Folder Location

\Root Folder

Primary Public Folder Mailbox

      \Child Folder 1

Secondary PF Mailbox 1

      \Child Folder 2

Secondary PF Mailbox 2

      \Child Folder 3

Secondary PF Mailbox 3

If you were to move the primary PF mailbox from Database1 to Database2, the permission structure, in the primary PF mailbox, for each of the child folders would be replaced with the default ACL. This hierarchy change would replicate to all other secondary PF mailboxes.

If you were to move secondary PF mailbox 1 from Database1 to Database2, then the permission structure for \Root Folder, \Child Folder 2, and \Child Folder 3 hierarchy objects within secondary PF mailbox 1 would be temporarily reset to the default ACL. However, hierarchy replication would overwrite the permissions when you execute a full synchronization event.

Q: Can this issue occur in an environment that is coexisting with a legacy version of Exchange when the public folders are still hosted on the legacy version?

A: No; this issue only affects Modern Public Folders. You can only migrate to Modern Public Folders once you complete your user mailbox migration.

Q: Does this affect normal mailbox moves?

A: No, this only affects public folder mailbox moves.

Q: Does this affect public folders stored in Exchange Online that are moved by the auto-split process described in the Public Folders and Exchange Online article?

A: No, auto-split moves folders from one mailbox to another using New-PublicFolderMoveRequest and this cmdlet preserves the permissions.

Q: When will this fix be included in a cumulative update so that I do not have to deploy an IU?

A: This fix will be included in Exchange 2013 RTM Cumulative Update 3.

Q: When will the IU be available?

A: The IU will be available shortly; we will update this article with the KB article number once it is released.

Q: Why didn’t you test this scenario?

A: The short answer is we thought we did. We didn’t realize we missed validating the permission structure after the public folder mailbox move. The Exchange team has well over 100,000 automated tests that we use to validate our product before we ship it. With the richness and number of scenarios and behaviors that Exchange supports, automated testing is the only scalable solution. We execute these tests in varying scenarios and conditions repeatedly before we release the software to our customers. We also supplement these tests with manual validation where necessary.

Q: What are you doing to prevent similar things from happening in the future?

A: We are conducting an internal review of our processes to determine how to prevent issues such as this in the future.

We deeply regret the impact this has on our customers and as always, we continue to identify ways to better serve your needs through our regular servicing releases.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience

A significant update to Remove-DirectBooking script is now available


A short while ago, we posted an article on how to Use Exchange Web Services and PowerShell to Discover and Remove Direct Booking Settings. We received a lot of constructive feedback with some noting that users can experience an issue when enabling the Resource Booking Attendant on mailboxes that were cleansed of their direct booking settings via the sample script we provided. Specifically, the following error can be encountered when the organizer is scheduling a regular non-recurring meeting against the resource mailbox:

“…declined your meeting because it is recurring. You must book each meeting separately with this resource.”

We have updated the script to account for this scenario, both preventing and correcting the problem, and we have also updated the article to reflect the changes.

In a nutshell, the issue is encountered when we have a divergence of what settings are enabled/disabled between the Schedule+ Free/Busy System (Public) Folder item representing the user’s mailbox and the user’s local mailbox free/busy item. Outlook’s Direct Booking process actually queries the Schedule+ item’s Direct Booking settings when attempting to perform Direct Booking functionality. The Schedule+ folder tree normally contains an item that contains a synced set of Direct Booking settings of that which is stored in the user’s localfreebusy mailbox item. The issue is encountered when the settings between the Schedule+ item and the local mailbox item do not match.

Normally, Outlook triggers a sync of the local mailbox item to the Schedule+ item via deliberate native MAPI code. However, in our case we are using EWS in the sample script, and that syncing trigger does not natively exist. We therefore updated the script to find the Schedule+ item and ensure its settings are congruent with the local item’s settings. The logic for this is actually a bit complicated for two main reasons:

  1. No Schedule+ item exists in the organization – There are valid scenarios where the Schedule+ item may not exist, such as the mailbox was never opened with Outlook and the Direct Booking settings were enabled via another means, such as MFCMAPI and so on.
  2. Co-existent versions of Exchange - EWS is rather particular about how public folder and public folder item bindings can occur. EWS by design will not allow a cross-version public folder (or item) bind operation. Period. This means a session on a mailbox on Exchange 2010, for example, would not be able to bind to a public folder or its items on Exchange 2007; there would need to be a replica of the folder on Exchange 2010 for the bind operation to be successful. Further, continuing our example, even if there is a replica on Exchange 2010, the bind operation would still fail if the user’s mailbox database’s “default public folder database” is set to a non-2010 public folder database (i.e. an Exchange 2007 database). The EWS session would kick back an error stating: ‘There are no public folder servers available’

With these guidelines in mind, we approached the script update to maximize the congruency potential between the local mailbox item and the public folder item. We only disable the direct booking settings in the local mailbox item if one of the following criteria is met regarding the Schedule+ item:

  • We can successfully bind to the user’s Schedule+ item
    • There is a replica we can touch with the EWS session, and we found the item representing the user and we can therefore safely keep congruency between the local and the Schedule+ items.
  • There is no replica present that would potentially contain an item representing the user
    • There is no replica in the org (any version of exchange) that would contain an item for the user so there is no potential for getting into an incongruent state between the local and the Schedule+ items.
  • There is a replica of the Schedule+ folder on the same version of Exchange that the EWS session is connected to, AND the default public folder database of the user is likewise on the same version of Exchange.
    • We could not find a Schedule+ item for the user (if we had, we would have satisfied condition 1 above), but not because there was no replica containing the item (that would have satisfied condition 2 above), and not because we were blocked from binding to the folder by the EWS limitations outlined above. We can therefore state that congruency between the local and the Schedule+ items is not at risk and there is no Schedule+ item representing the user.

It should be noted that we will always take action to disable the Direct Booking settings from the Schedule+ item even if the local mailbox item does not have its Direct Booking settings enabled – this keeps us true to our “congruency” logic.

In closing, please remember that the script is a sample and does not cover every possible scenario out there – we made this update because the aforementioned issue reported is central to having the script produce the desired outcome of fully disabling Direct Booking. We are thankful for and welcome your continued feedback!

Dan Smith & Seth Brandes
Exchange PFEs

Managed Availability Monitors


Monitors are the central component of Managed Availability. They define what data to collect, what constitutes the health of a feature, and what actions to take to restore a feature to good health. Because there are several different aspects to Monitors, it can be hard to figure out how a specific Monitor works.

All of the properties discussed in this article can be found in the Monitor’s definition event in the Microsoft.Exchange.ActiveMonitoring\MonitorDefinition crimson channel of the Windows event log.

See this article for how these definitions can be easily collected.

What Data is Collected?

Nearly all Monitors collect one of three types of data: direct notifications, Probe results, or performance counters. Monitors that change states based on a direct notification only get data from the notification.

Monitors based on Probe results become unhealthy when some Probes fail. There are two main types of these Monitors, those based on a number of consecutive Probe failures, and those based on a number of Probes failing over an interval.

Monitors based on performance counters simply determine if a counter is higher or lower than the built-in defined threshold for the required time.

The TypeName property of a Monitor definition indicates what data it is collecting and the kind of threshold that must be reached before it is considered Unhealthy. Here are the most common types and how they work:

  • OverallPercentSuccessMonitor: Looks at the results of all probes matching the SampleMask property and calculates the aggregate percent success over the past MonitoringIntervalSeconds. Becomes Unhealthy if the calculated percent success is less than the MonitoringThreshold.
  • OverallConsecutiveProbeFailuresMonitor: Looks at the last X probe results, as configured in MonitoringThreshold, that match the SampleMask. Becomes Unhealthy if all of those results are failures.
  • OverallXFailuresMonitor: Looks at the results of all probes matching the SampleMask property over the past MonitoringIntervalSeconds. Becomes Unhealthy if at least X results, as configured in MonitoringThreshold, are failures.
  • OverallConsecutiveSampleValueAboveThresholdMonitor: Looks at the last X performance counter results, as configured in SecondaryMonitoringThreshold, matching SampleMask over the past MonitoringIntervalSeconds. Becomes Unhealthy if at least X performance counters are above the threshold configured in MonitoringThreshold.
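If you want to see these properties for the Monitors on a server, one approach is to read the MonitorDefinition crimson channel directly. Here is a minimal sketch, assuming the Get-WinEvent log name shown below for that channel and the property names discussed above (the monitor name filter is just an example):

# Read every Monitor definition from the crimson channel and expand the event payload
$monitors = Get-WinEvent -LogName "Microsoft-Exchange-ActiveMonitoring/MonitorDefinition" |
    ForEach-Object { ([xml]$_.ToXml()).Event.UserData.EventXML }

# Inspect the data-collection and threshold settings for the ECP monitors
$monitors | Where-Object { $_.Name -like "Eac*TestMonitor" } |
    Select-Object Name, ServiceName, TypeName, SampleMask, MonitoringIntervalSeconds, MonitoringThreshold, RecurrenceIntervalSeconds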

Healthy or Not

One more thing must happen before the Monitor will become Unhealthy. The code for individual Monitors that checks the threshold only runs every X seconds, where X is specified by the RecurrenceIntervalSeconds property. The threshold is checked only when the Monitor runs.

As soon as the Monitor runs while the threshold is met, the Monitor becomes Unhealthy. Get-ServerHealth will report that the Monitor is Degraded for the first 60 seconds, but the functional behavior of the Monitor does not have a concept of being Degraded; it is either Healthy or Unhealthy.

The Health Set that a Monitor is part of is defined by the Monitor’s ServiceName property. If any Monitor is Unhealthy, the entire Health Set will be marked as Unhealthy as viewed from Get-HealthReport or via System Center Operations Manager (SCOM).
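As a companion to the Health Set view, here is a quick way to list every Monitor that is currently dragging a Health Set down on a server (the server name is a placeholder):

Get-ServerHealth -Identity <ServerName> | Where-Object { $_.AlertValue -eq "Unhealthy" } | Format-Table Name, HealthSetName, TargetResource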

Responder Timeline

The StateTransitionXML property of a Monitor definition indicates which Responders execute and when, as each Responder is tied to a transition state of the Monitor. Let’s consider a Monitor that has this value for its StateTransitionXML property:

<StateTransitions>

<Transition ToState="Unhealthy" TimeoutInSeconds="0" />

<Transition ToState="Unhealthy1" TimeoutInSeconds="30" />

<Transition ToState="Unhealthy2" TimeoutInSeconds="330" />

<Transition ToState="Unrecoverable" TimeoutInSeconds="1500" />

</StateTransitions>

As soon as the Monitor runs while its defined threshold is met, it will transition to the “Unhealthy” state. These transition states are only used for internal consumption. Although they share a term, the Monitor can only be Healthy or Unhealthy from an external perspective. Any Responders set to execute when this Monitor is in this transition state will now execute. After 30 more seconds, any Responders set to execute when the Monitor is in the “Unhealthy1” state will now execute. The next Responder will be 300 seconds later (for a total of 330 seconds) when the Monitor is set to the “Unhealthy2” state. The transition state each Responder is tied to is set by the TargetHealthState property on a Responder definition, which is an integer. Here are the transition states that the integer indicates:

0   None
1   Healthy
2   Degraded
3   Unhealthy
4   Unrecoverable
5   Degraded1
6   Degraded2
7   Unhealthy1
8   Unhealthy2
9   Unrecoverable1
10  Unrecoverable2

We call all these Responders that are tied to a Monitor transition states a Responder chain. As a Monitor’s threshold continues to be met, stronger and stronger Responders execute until the Monitor determines it is Healthy or an administrator is notified via event log escalation. If the code for this Monitor runs while it is in the “Unhealthy1” state and the threshold is no longer met, the Monitor will immediately transition to None. No more Responders will execute. Get-ServerHealth would again report this Monitor as Healthy.
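If you want to see the Responder chain for a particular Monitor, the ResponderDefinition crimson channel can be read the same way as the Monitor definitions shown earlier. Here is a minimal sketch, assuming the AlertMask and WaitIntervalSeconds property names (verify them against the definitions in your build):

# Read the Responder definitions and expand the event payload
$responders = Get-WinEvent -LogName "Microsoft-Exchange-ActiveMonitoring/ResponderDefinition" |
    ForEach-Object { ([xml]$_.ToXml()).Event.UserData.EventXML }

# Show the transition state each Responder in a chain fires on (the monitor name is just an example)
$responders | Where-Object { $_.AlertMask -like "*EacSelfTestMonitor*" } |
    Sort-Object TargetHealthState | Select-Object Name, TargetHealthState, WaitIntervalSeconds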

Abram Jackson

Program Manager, Exchange Server

OWA for iPhone and OWA for iPad are now available!


Today, we are excited to announce the availability of OWA for iPhone and OWA for iPad, which provides even more value to organizations on any Office 365 subscription that includes Exchange Online. OWA for iPhone and OWA for iPad are mobile apps that offer the same email, calendar, and contact functionality you get in Outlook Web App on the browser, but with additional capabilities that are only possible through native integration of the app with mobile devices.

Our goal is to help our customers remain productive anytime, anywhere. This includes providing a great email experience on smartphones and tablets. We already offer Exchange ActiveSync (EAS), which is the de facto industry standard for accessing Exchange email on mobile devices. Windows Phone 8 comes with Outlook Mobile, and the Outlook Web App experience in the Windows Phone 8 browser is top-notch. In order to better support many of our customers who use their iPhones and iPads for work, we are introducing OWA for iPhone and OWA for iPad, which bring a native Outlook Web App experience to iOS devices!

OWA for iPhone and OWA for iPad can be installed directly from the Apple App Store. A subscription to Office 365 that includes Exchange Online is required to use the app. If you aren’t already an Office 365 subscriber, you can visit www.office.com to learn more and sign up.

Note: OWA for iPhone/iPad is for Office 365 customers who have the newest update of Exchange Online. Support for Exchange Server 2013 is planned for the future.

For more details, head over to OWA for iPhone and OWA for iPad on the Office 365 technology blog.

The Exchange Team


Life in a Post TMG World – Is It As Scary As You Think?


Let’s start this post about Exchange with a common question: Now that Microsoft has stopped selling TMG, should I rip it out and find something else to publish Exchange with?

I have occasionally tried to answer this question with an analogy. Let’s try it.

My car (let’s call it Threat Management Gateway, or TMG for short) isn’t actively developed or sold any more (like TMG). However, it (TMG) works fine right now, it does what I need (publishes Exchange securely), and I can get parts for it and have it serviced as needed (extended support for TMG ends in 2020), so I’m keeping it. When it eventually either doesn’t meet my requirements (I want to publish something it can’t do) or runs out of life (2020, but it could be later if I am ok to accept the risk of no support) then I’ll replace it.

Now, it might seem odd to offer up a car analogy to explain why Microsoft no longer selling TMG is not a reason for Exchange customers to panic, but I hope you’ll agree, it works, and leads you to conclude that when something stops being sold, like your car, it doesn’t immediately mean you replace it, but instead think about the situation and decide what to do next. You might well decide to go ahead and replace TMG simply based on our decision to stop selling or updating it, that’s fine, but just make sure you are thinking the decision through.

Of course, you might also decide not to buy another car. Your needs have changed. Think about that.

Here are some interesting Exchange-related facts to help further cement the idea I’m eventually going to get to.

  1. We do not require traffic to be authenticated prior to hitting services in front of Exchange Online.
  2. We do not do any form of pre-authentication of services in front of our corporate, on-premises messaging deployments either.
  3. We have spent an awfully large amount of time as a company working on securing our code, writing secure code, testing our code for security, and understanding the threats that exist to our code. This is why we feel confident enough to do #1 and #2.
  4. We have come to learn that adding layers of security often adds little additional security, but certainly lots of complexity.
  5. We have invested in getting our policies right and monitoring our systems.

This basically says we didn’t buy another car when ours didn’t meet our needs any more. We don’t use TMG to protect ourselves any more. Why did we decide that?

To explain that, you have to cast your mind back to the days of Exchange and Windows 2000. The first thing to admit is that our code was less ‘optimal’ (that’s a polite way of putting it), and there were security issues caused by anonymous access. So, how did we (Exchange) tell you to guard against them? By using something called ISA (Internet Security and Acceleration – which is an odd name for what it was, a firewall). ISA, amongst other things, did pre-authentication of connections. It forced users to authenticate to it, so it could then allow only authenticated users access to Exchange. It essentially stopped anonymous users getting to Windows and Exchange. Which was good for Windows and Exchange, because there were all kinds of things that they could do if they got there anonymously.

However once authenticated users got access, they too could still do those bad things if they chose to. And so of course could anyone not coming through ISA, such as internal users. So why would you use ISA? It was so that you would know who these external users were, wouldn’t you?

But do you really think that’s true? Do you think most customers a) noticed something bad was going on and b) trawled logs to find out who it was who did it? No, they didn’t. So it was a bit like an insurance policy. You bought it, you knew you had it, you didn’t really check to see if it covers what you were doing until you needed it, and by then, it was too late, you found out your policy didn’t cover that scenario and you were in the deep doo doo.

Insurance alone is not enough. If you put any security device in front of anything, it doesn’t mean you can or should just walk away and call it secure.

So at around the same time as we were telling customers to use ISA, back in the 2000 days, the whole millennium bug thing was over, and the proliferation of the PC and the Internet was continuing to expand. This is a very nice write up on the Microsoft view of the world.

Those industry changes ultimately resulted in something we called Trustworthy Computing. Which was all about changing the way we develop software – “The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways. Security models should be easy for developers to understand and build into their applications.” There was also the Secure Windows Initiative. And the Security Development Lifecycle. And many other three letter acronyms I’m sure, because whatever it was you did, it needed a good TLA.

We made a lot of progress over those ten years since then. We delivered on the goal that the security of the application can be better managed inside the OS and the application rather than at the network layer.

But of course most people still seem to think of security as being mainly at the network layer, so think for a moment about what your hardware/software/appliance based firewall does today. It allows connections from a source, on some configurable protocol/port, to a configured destination protocol/port.

If you have a load balancer, and you configure it to allow inbound connections to an IP on its external interface, to TCP 443 specifically, telling it to ignore everything else, and it takes those packets and forwards them to your Exchange servers, is that not the same thing as a firewall?

Your load balancer is a packet filtering firewall. Don’t tell your load balancing vendor that, they might want to charge you extra for it, but it is. And when you couple that packet level filtering firewall/load balancer with software behind it that has been hardened for 10 years against attacks, you have a pretty darn secure setup.

And that is the point. If you hang one leg of your load balancer on the Internet, and one leg on your LAN, and you operate a secure and well managed Windows/Exchange Server – you have a more secure environment than you think. Adding pre-authentication and layers of networking complexity in front of that buys you very little extra, if anything.

So let’s apply this directly to Exchange, and try and offer you some advice from all of this. What should YOU do?

The first thing to realize is that you now have a CHOICE. And the real goal of this post is to help you make an INFORMED choice. If you understand the risks, and know what you can and cannot do to mitigate them, you can make better decisions.

Do I think everyone should throw out that TMG box they have today and go firewall commando? No, not at all. I think they should evaluate what it does for them, and if they need it going forward. If they do that, and decide they still want pre-auth, then find something that can do it when the time to replace TMG comes.

You could consider it a sliding scale of choice. Something like this, perhaps:

TMGScale

So this illustrates that there are some options and choices:

  1. Just use a load balancer – as discussed previously, a load balancer only allowing in specified traffic is a packet filtering firewall. You can’t just put it there and leave it though; you need to make sure you keep it up to date, keep your servers up to date, and possibly employ some form of IDS solution to tell you if there’s a problem. This is what Office 365 does.
  2. TMG/UAG – at the other end of the scale are the old school ‘application level’ firewall products. Microsoft has stopped selling TMG, but as I said earlier, that doesn’t mean you can’t use it if you already have it, and it doesn’t stop you using it if you buy an appliance with it embedded.

In the middle of these two extremes (though ARR is further to the left of the spectrum as shown in the diagram) are some other options.

Some load balancing vendors offer pre-authentication modules, if you absolutely must have pre-auth (but again, really… you should question the reason), some use LDAP, some require domain joining the appliance and using Kerberos Constrained Delegation, and Microsoft has two options here too.

The first (and favored by pirates the world over) is Application Request Routing, or ARR! for short. ARR! (the ! is my own addition, marketing didn’t add that to the acronym but if marketing were run by pirates, they would have) “is a proxy based routing module that forwards HTTP requests to application servers based on HTTP headers and server variables, and load balance algorithms” – read about it here, and in the series of blog posts we’ll be posting here in the not too distant future. It is a reverse proxy. It does not do pre-authentication, but it does let you put a non-domain joined machine in front of Exchange to terminate the SSL; so if your 1990’s style security policy absolutely requires that, ARR is an option.

The second is WAP. Another TLA. Recently announced at TechEd 2013 in New Orleans is the upcoming Windows Server 2012 R2 feature – Web Application Proxy. It is focused on browser and device based access, has strong ADFS support, and is the direction the Windows team is investing in these days. It can currently offer pre-authentication for OWA access, but not for Outlook Anywhere or ActiveSync. See a video of the TechEd session here (the US session) and here (the Europe session).

Of course all this does raise some tough questions. So let’s try and answer a few of those;

Q: I hear what you are saying, but Windows is totally insecure, my security guy told me so.

A: Yes, he’s right. Well he was right, in the yesteryear world in which he formed that opinion. But times have changed, and when was the last time he verified that belief? Is it still true? Do things change in this industry?

Q: My security guy says Microsoft keeps releasing security patches and surely that’s a sign that their software is full of holes?

A: Or is the opposite true? All software has the potential for bugs and exploits, and not telling customers about risks, or releasing patches for issues discovered is negligent. Microsoft takes the view that informed customers are safer customers, and making vulnerabilities and mitigations known is the best way of protecting against them.

Q: My security guy says he can’t keep up with the patches and so he wants to make the server ‘secure’ and then leave it alone. Is that a good idea?

A: No. It’s not (I hope) what he does with his routers and hardware based firewalls, is it? Software is a point in time piece of code. Security software guards against exploits and attacks it knows of today. What about tomorrow? None of us are saying Windows, or any other vendor’s solution, is secure forever, which is why a well-managed and secure network keeps machines monitored and patched. If he does not patch other devices in the chain, overall security is compromised. Patches are the reality of life today, and they are the way we keep up with the bad guys.

Q: My security guy says his hardware based firewall appliance is much more secure than any Windows box.

A: Sure. Right up to the point at which that device has a vulnerability exposed. Any security device is only as secure as the code that was written to counter the threats known at that time. After that, then it’s all the same, they can all be exploited.

Q: My security guy says I can’t have traffic going all the way through his 2 layers of DMZ and multitude of devices, because it is policy. It is more secure if it gets terminated and inspected at every level.

A: Policy. I love it when I hear that. Who made the policy? And when? Was it a few years back? Have the business requirements changed since then? Have the risks they saw back then changed any? Sure, they have, but rarely does the policy get updated. It’s very hard to change the entire architecture for Exchange, but I think it’s fair to question the policy. If they must have multiple layers, for whatever perceived benefit that gives (ask them what it really does, and how they know when a layer has been breached), there are ways to do that, but one could argue that more layers doesn’t necessarily make it better, it just makes it harder. Harder to monitor, and to manage.

Q: My security guy says if I don’t allow access from outside except through a VPN, we are more secure.

A: But every client who connects via a VPN adds one more gateway/endpoint to the network, doesn’t it? And those clients have access to everything on the network rather than just a single port/protocol. How is that necessarily more secure? Plus, how many users like VPNs? Does making it harder to connect and get email, so people can do their job, make them more productive? No, it usually means they might do less work because they can’t be bothered to input a little code just so they can check email.

Q: My security guy says if we allow users to authenticate from the Internet to Exchange then we will be exposed to an account lockout Denial of Service (DoS).

A: Yes, he’s right. Well, he’s right only because account lockout policies are being used, something we’ve been advising against for years, as they invite account lockout DoS’s. These days, users typically have their SMTP address set to equal their User Principal Name (UPN) so they can log on with (what they think is) their email address. If you know someone’s email address, you know their account logon name. Is that a problem? Well, only if you use account lockout policies rather than using strong password/phrases and monitoring. That’s what we have been telling people for years. But many security people feel that account lockouts are their first line of defense against dictionary attacks trying to steal passwords. In fact, you could also argue that a bad guy trying out passwords and getting locked out now knows the account he’s trying is valid…

Note the common theme in these questions is obviously “the security guy said…”. It’s not that I have it in for security guys generally speaking, but they are the people who ask these questions, and in my experience some of them think their job is to secure access by preventing access. If you can’t get to it, it must be safe, right? Wrong. Their job is to secure the business requirements. Or put another way, to allow their business to do their work, securely. After all, most businesses are not in the business of security. They make pencils. Or cupcakes. Or do something else. The job of the security folks working at those companies is to help them make pencils, or cupcakes, securely, not to stop them from doing those things.

So there you go, you have choices. What should you choose? I’m fine with you choosing any of them, but only if you choose the one that meets your needs, based on your comfort with risk, based on your operational level of skill, and based on your budget.

Greg Taylor
Principal Program Manager Lead
Exchange Customer Adoption Team

Part 1: Reverse Proxy for Exchange Server 2013 using IIS ARR

For a long time, Forefront TMG (and ISA before it) has been the go-to Microsoft reverse proxy solution for many applications, including Exchange Server. However, with no further development roadmap for TMG 2010, a lot of customers are looking for an alternative solution that works well with Exchange Server 2013.

The Windows team has added an additional component called Application Request Routing (ARR, or as Greg the pirate says, ARR!) 2.5 to the Internet Information Services (IIS) role, which enables IIS to handle reverse proxy requests. By using the URL Rewrite Module and Application Request Routing you can implement complex and flexible load balancing and reverse proxy configurations.

There are three options when implementing this solution and each has its pros and cons, which I'll cover in three posts. In this first post, we'll take a look at:

  1. Installation steps.
  2. Option 1 of implementing ARR as a reverse proxy solution for Exchange 2013 (this option is the simplest of the three configurations).

In the next two posts in the series, we'll cover Options 2 and 3 and some troubleshooting steps. The troubleshooting steps will also help you verify that you have implemented the reverse proxy solution correctly.

Here's a diagram of the environment we'll use when discussing how to implement ARR.

Arr1

Prerequisites

  1. The IIS ARR server need not be domain joined; whether or not to join it to the domain is your choice.
  2. The IIS ARR server should have two NICs, one for the internal network and the other for the external network.

    TIP To make sure you're configuring and using the right network interface, rename the NICs to Internal and External.

  3. If you're not using an internal DNS server, you should update the HOSTS file on the IIS ARR server so that it can perform name resolution for the internal CAS and the published Exchange namespaces.
  4. Make sure you have already set the Internal and External URLs for Outlook Anywhere, OWA, EWS and EAS, have your certificates installed correctly, and that this is all working as expected. If not, get it working first before you start adding ARR into the mix. (A scripted sketch of items 3 and 4 follows this list.)
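
To make prerequisite items 3 and 4 a little more concrete, here is a minimal sketch. The namespaces match the tailspintoys.com example used in this post, and the 192.168.1.10 address is a placeholder; substitute your own CAS addresses. The first half runs in the Exchange Management Shell, the second on the ARR server.

    # Exchange Management Shell: confirm the published namespaces are set consistently
    Get-OutlookAnywhere | Format-List Server, InternalHostname, ExternalHostname
    Get-OwaVirtualDirectory | Format-List Server, InternalUrl, ExternalUrl
    Get-WebServicesVirtualDirectory | Format-List Server, InternalUrl, ExternalUrl
    Get-ActiveSyncVirtualDirectory | Format-List Server, InternalUrl, ExternalUrl

    # IIS ARR server (only if you are not using internal DNS): add HOSTS entries
    # pointing the published names at an internal CAS IP (placeholder address)
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.1.10  mail.tailspintoys.com"
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.1.10  autodiscover.tailspintoys.com"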

Installing ARR

Requirements: IIS ARR is supported on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012. It is also supported on Windows Vista, Windows 7, and Windows 8 with the Web services features installed. Note that IIS ARR does not require IIS 6.0 compatibility mode.

Note: As with all such changes, we recommend that you test this in a non-production environment before deploying in production environment.

To install IIS with the ARR module on the server identified as the Reverse Proxy:

  1. Install IIS, including .NET 3.5.1 and Tracing. You can run this command in PowerShell to add all of the required features.

    Import-Module ServerManager
    Add-WindowsFeature Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-Net-Ext,Web-Http-Logging,Web-Request-Monitor,Web-Http-Tracing,Web-Filtering,Web-Stat-Compression,Web-Mgmt-Console,NET-Framework-Core,NET-Win-CFAC,NET-Non-HTTP-Activ,NET-HTTP-Activation,RSAT-Web-Server

  2. Export the Exchange certificate (from a CAS) and import the certificate to the local machine certificate store on the IIS Reverse Proxy, together with any required root or intermediate certificates. See the following topics on how to export & import certificates:
    1. Export an Exchange Certificate
    2. Import a Server Certificate (IIS 7)
  3. On the Default Web Site, add an HTTPS binding and associate the (imported) Exchange certificate. (A scripted sketch of this step and the previous one follows this list.)

    ARR2

  4. Download and Install the latest version: IIS ARR 2.5.

    If you don’t have internet access on the IIS ARR server, you can use the steps highlighted in How to install Application Request Routing (ARR) 2.5 without Web Platform Installer (WebPI).
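
To illustrate steps 2 and 3 above, here is a minimal scripted sketch. The thumbprint, password and file path are placeholders, and the import half assumes Windows Server 2012 (where Import-PfxCertificate is available); on Windows Server 2008 R2 use certutil or the Certificates MMC snap-in instead.

    # On an Exchange 2013 CAS: export the certificate and private key to a .pfx file
    $pfxPassword = Read-Host "PFX password" -AsSecureString
    $srcThumb    = "<thumbprint of the Exchange certificate>"   # placeholder
    $export      = Export-ExchangeCertificate -Thumbprint $srcThumb -BinaryEncoded -Password $pfxPassword
    Set-Content -Path C:\Temp\mail-tailspintoys.pfx -Value $export.FileData -Encoding Byte

    # On the IIS ARR server: import into the computer store, add an HTTPS binding
    # to the Default Web Site and attach the certificate (assumes a single match)
    Import-PfxCertificate -FilePath C:\Temp\mail-tailspintoys.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
    Import-Module WebAdministration
    New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
    $thumb = (Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*mail.tailspintoys.com*" }).Thumbprint
    Get-Item "Cert:\LocalMachine\My\$thumb" | New-Item "IIS:\SslBindings\0.0.0.0!443"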

OPTION 1

This is the simplest way of implementing IIS ARR as a Reverse Proxy solution for Exchange Server 2013. This implementation requires a minimum number of SAN entries in your certificate and minimum number of DNS entries.

This set up assumes that all protocols (OWA, ECP, EWS etc) have been published with the mail.tailspintoys.com namespace.

  • Certificate: mail.tailspintoys.com, autodiscover.tailspintoys.com
  • DNS: Public IP address for each of the above namespaces

Step 1: Create a Server Farm

  1. Open IIS and click on Server Farm.
  2. Create a new farm and give it a name as shown below.

    ARR3

  3. On the Add Server page, add each of the Client Access servers and click Finish.

    ARR4

  4. Select Yes at the prompt below. (You can verify the farm afterwards from the command line; see the note after this list.)

    ARR5
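
As an optional sanity check (my suggestion, not part of the official steps), you can dump the farm definition the GUI just created from an elevated prompt on the ARR server; the ARR web farm configuration lives in the webFarms section of applicationHost.config.

    # Read-only: list the configured web farms and their members
    & "$env:windir\System32\inetsrv\appcmd.exe" list config -section:webFarms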

Step 2: Server Farm Configuration Changes

On the Server Farm settings node make the configuration changes as detailed below:

  1. Select Caching and choose Disable Disk Cache.
  2. Select Health Test.  This is used to make sure that a particular application is up and running. It is similar to a Load Balancer’s service availability test.

    In Exchange 2013 there is a new component called Managed Availability which uses various checks to make sure that each of the protocols (OA, OWA, EWS, etc.) is up and running. If any protocol fails a check, an appropriate action is taken automatically. (That is a very simple explanation of what Managed Availability is, of course; if you want a more detailed understanding, watch Ross Smith IV’s TechEd 2013 session.) We are going to leverage one of these checks to make sure that the service/protocol is available.

    https://<fqdn>/<protocol>/HealthCheck.htm is a default web page present in Exchange 2013. These URLs are specific to each protocol and do not have to be created by the administrator. (A quick way to test these URLs from the ARR server is sketched just before Step 3.)

    Examples:

    https://autodiscover.tailspintoys.com/Autodiscover/HealthCheck.htm

    https://mail.tailspintoys.com/EWS/HealthCheck.htm

    https://mail.tailspintoys.com/OAB/HealthCheck.htm

    Configure the Health Test with the following settings:

    URL: https://mail.tailspintoys.com/OWA/HealthCheck.htm

    Interval: 5 seconds

    Time-Out: 30 seconds

    Acceptable Status Code: 200

    ARR6

  3. Select Load Balance and choose Least Current Request. There are other options, but for this scenario, we find this to be simple and effective.

    ARR7

  4. Select Monitoring and Management. This shows the current state of the CAS that are part of this Server Farm. The Health Status is based on the output of the Health Test mentioned above.

    ARR8

  5. Select Proxy and change the two values below. The actual values for these settings may need to be tweaked for your deployment, but these usually work well as a starting point.

    Time-Out: 200 seconds

    Response Buffer threshold: 0

  6. Select Routing Rules and uncheck Enable SSL Offloading as it is not supported in Exchange 2013.
  7. Select Server Affinity. Due to major architectural changes in the way CAS works in Exchange 2013, we do not need to maintain session affinity. As long as you can get to a CAS server, you will be able to access your mailbox, so leave this setting as is; no changes are required.
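
Before handing the health test over to ARR, it can be worth confirming from the ARR server itself that the HealthCheck.htm pages answer with a 200. Here is a rough sketch, assuming PowerShell 3.0 or later (for Invoke-WebRequest) and the tailspintoys.com namespaces used in this post; a failing endpoint will surface as an error rather than a 200.

    # Each URL should come back with StatusCode 200 if the protocol is healthy
    $urls = "https://mail.tailspintoys.com/OWA/HealthCheck.htm",
            "https://mail.tailspintoys.com/EWS/HealthCheck.htm",
            "https://autodiscover.tailspintoys.com/Autodiscover/HealthCheck.htm"
    foreach ($url in $urls) {
        $response = Invoke-WebRequest -Uri $url
        "{0}  ->  {1}" -f $url, $response.StatusCode
    }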

Step 3: Create URL Rewrite Rules

  1. At the IIS Root (this is the root and not the properties of the Default Web Site) click on URL Rewrite.

    ARR9

  2. You should see two URL Rewrite rules already created (these were created when you selected “Yes” at the end of Server Farm creation).
  3. Delete the one for HTTP.

    ARR10

  4. Open the properties of the HTTPS rule and make the changes below:
    1. Under Conditions add a condition for {HTTP_HOST} and make sure it looks like this:

      ARR11

    2. Under Action, make sure the options below are set, i.e. choose the appropriate Server Farm from the drop-down menu.

      ARR12

      Note: Make sure the option “Stop processing of subsequent rules” is selected. This is to make sure that the validation process stops once the requested URL finds a match.

    3. Repeat the same steps of creating a Server Farm and URL Rewrite rule for your Autodiscover URL (i.e., autodiscover.tailspintoys.com). The final result is shown below. (If you want to sanity-check the resulting rewrite rules from the command line, see the snippet after this list.)

      ARR13
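
If you would like to double-check the rules without clicking through the GUI, the server-level rules created above are stored in the globalRules section of applicationHost.config, and appcmd can dump them (read-only):

    & "$env:windir\System32\inetsrv\appcmd.exe" list config -section:system.webServer/rewrite/globalRules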

That’s it! You are now all set and have a reverse-proxy-with-load-balancing solution for your Exchange 2013 environment!

Give it a try and see how it works. Make sure DNS for mail.tailspintoys.com resolves to your reverse proxy and try connecting a client. And if it doesn’t work, go back through the steps and see where you went wrong. And if it still doesn’t work, post a comment here, or wait for Part 3, Troubleshooting (so please don’t do all this for the first time in a production environment! Really, we mean it!).

Finally, here are a couple of additional changes we recommend you review and optionally consider making to your IIS ARR configuration.

  1. Implement the changes (Step 3 and Step 4) from Install Application Request Routing Version 2.
  2. To optimize RPC-over-HTTP traffic, click on the root of IIS and open the properties for Request Filtering. Then click on “Edit Feature Settings” and change the “Maximum allowed content length” setting as shown below. (A scripted way to make the same change is sketched after this list.)

    ARR14
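
If you prefer to script that Request Filtering change, something along these lines should do it. The 2 GB value below is just an illustration of a large limit for long-running RPC-over-HTTP sessions, not necessarily the exact value from the screenshot above; use the value that applies to your deployment.

    Import-Module WebAdministration
    # Server-level request filtering limit, in bytes (example value only)
    Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
        -Filter 'system.webServer/security/requestFiltering/requestLimits' `
        -Name 'maxAllowedContentLength' -Value 2147483648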

We've spent time testing this configuration and found it to work as we hoped and expected. Note that support for IIS ARR is provided by the Windows/IIS team, not Exchange. That's no different than support for TMG or UAG (if you use either of those products to publish Exchange).

We would really appreciate any feedback on your implementation and/or any configuration where this doesn’t seem to work.

Keep your eyes peeled for the next set of articles where we’ll talk about slightly complex and interesting implementations of IIS ARR for Exchange 2013.

I would like to thank Greg Taylor (Principal PM Lead) for his help in reviewing this article.

B. Roop Sankar
Premier Field Engineer, UK

Office Configuration Analyzer Tool (OffCAT) version 1.1 is now available

On Friday, July 19, the OffCAT team released OffCAT version 1.1 to the Microsoft Download site, replacing the original version of OffCAT. There are many new features, diagnostic rules, and fixes in OffCAT v1.1, and the v1.1 upgrade is only a few clicks away.

To entice you to install v1.1 right away, here are some of the new features only found in this latest version:

  • Option to scan All Office programs (Access, Excel, InfoPath, OneNote, Outlook, PowerPoint, Publisher, Visio and Word)
  • CalCheck v2.1.1 included
  • Option to update rule files without prompting
  • Alternative rule file download location
  • Additional group policy support
  • New ‘Options’ page
    • Default folder for saving scans
    • Theme color

As always, please follow the OffCAT team on Twitter to receive news of any publicly available OffCAT updates. You can also send email to OffCATsupp@microsoft.com if you encounter problems with OffCAT or you want to submit a feature request.

Updating OffCAT to v1.1

There were two basic ways to install OffCAT v1.0, so there are two ways to install the v1.1 update.

  • OffCAT.msi
  • OffCAT 1.1.zip

OffCAT.msi

Most people install OffCAT using the OffCAT.msi file from the Microsoft Download site. If you installed OffCAT v1.0 this way, here is the quickest way to install OffCAT v1.1:

1. Start OffCAT (v1.0)

2. When you see the following prompt (assuming you have ‘Check for updates on startup’ enabled), select the ‘Update the tool in place’ option.

image

The OffCAT.msi file from the Microsoft Download site will be downloaded and launched, ready for you to complete the installation.

image

The setup process for OffCAT v1.1 automatically removes OffCAT v1.0, so there’s no need to remove OffCAT v1.0 before updating to v1.1.

3. Then, after the update is finished, launch OffCAT from the shortcut on the Start menu/page.

OffCAT 1.1.zip

If you ‘installed’ OffCAT v1.0 by extracting the files included in OffCAT 1.0.zip, you can use the following steps to get OffCAT v1.1 onto your computer.

1. Locate and delete the folder containing the files extracted from OffCAT 1.0.zip

2. Go to the OffCAT download page and then select OffCAT 1.1.zip when prompted to select your download file.

image

3. Extract the files from OffCAT 1.1.zip into a new folder.

4. Launch OffCAT v1.1 using OffCAT.exe in this new folder.

OffCAT v1.1 documentation

The Microsoft Download site also provides a download for a complete user's guide for the OffCAT tool. It is highly recommended that you read this document before installing and using OffCAT:

OffCAT ReadMe (ReadMe_OffCATv1.1.docx)

Note: You do not have to download the ReadMe from the Download site as it is included in the OffCAT.msi installation and the OffCAT 1.1.zip file. However, if you want to read the documentation before installing OffCAT, then you can download it.

New feature details

Several of the new features found in OffCAT v1.1 deserve a little more than a simple bullet point in a summary list. For details on all the new features, please see the OffCAT ReadMe file.

Option to scan All Office programs

OffCAT v1.1 added a new scan option called ‘All’, as shown in the following figure.

image

When you select the ‘All’ option, you are then presented with another screen where you can selectively enable/disable scanning for each of the detected Office programs.

image

When you click Start scanning, OffCAT generates separate scans for each program that is enabled on the above screen. Then, when you click Select a scan to view, you can see the individual scans for each Office program that was scanned using the ‘All’ option.

image

Note: If you use the ‘All’ option on a machine where you have multiple versions of Outlook installed (for example, Office 2013 Click-to-run plus an earlier Office version), Outlook will not be scanned. In this scenario, you will have to scan Outlook separately.

CalCheck v2.1.1 included

The CalCheck tool was recently updated to v2.1.1. This version of CalCheck is included in OffCAT v1.1.

By default, OffCAT displays up to 10 warnings and 10 errors from the issues found by CalCheck. However, if you want to see more than the default 10 items, you can use the new group policy setting for OffCAT v1.1 that allows you to see up to 50 items. Details on this policy setting are provided in ReadMe_OffCATv1.1.docx.

Control the prompt to download updated rule files

By default, OffCAT checks the Microsoft.com Internet site for updated rule files whenever you start OffCAT. If new files are found, you are prompted to download the update(s). OffCAT v1.1 provides a new option that will download the updated rule file(s) without prompting you. To configure this option, click OffCAT Updates in the left panel and then enable “When updated rule files are available, install without prompting”.

image

New ‘Options’ page

The left panel in OffCAT includes an Options link. Click Options to examine and configure the following settings.

  • Default folder in which OffCAT scan files are saved
  • Default OffCAT theme color

image

Default location for scans

Each time OffCAT scans your Office application(s), a new scan file is created. These files are saved by default in the %AppData%\Microsoft\OffCAT folder. If you want these scan files to be saved to a different folder, click ‘Modify’ to the right of ‘Default location for scans’ and then select the new folder.

Default OffCAT theme color

If you do not like the default color theme used by OffCAT, select one of the 10 colors available on the Options page next to ‘Choose a theme’.

Alternative download location for rule files

OffCAT v1.1 provides a group policy setting that allows you to specify an HTTP, UNC, or local file (not FTP) path to a folder containing the OffCAT rule files. If this policy is enabled, OffCAT will not look to the default Internet location on Microsoft.com for the latest rule files.

Please see ReadMe_OffCATv1.1.docx for complete details on configuring this new policy setting.

Additional group policy support

Some of the user-configurable settings in OffCAT can be managed by group policy so administrators can control these settings on behalf of users. To configure OffCAT settings through group policy, download the group policy template (Offcatv11.adm) from the page on the Microsoft Download site from where you downloaded OffCAT.msi (or OffCAT 1.1.zip). Then, import the Offcatv11.adm template into your group policy editor, as demonstrated in the following figure.

image

The following OffCAT settings can be configured using group policy.

  • Default scan folder location
  • Alternate folder location for report files
  • Delete local report file
  • Alternative download location for rule files
  • Always check for OffCAT updates when starting OffCAT
  • Download updated rule files without prompting
  • Hide ‘Fix it for me’ links in rule solutions
  • Show the Welcome screen when OffCAT is launched
  • Number of warnings or errors reported from CalCheck

Please see the ReadMe_OffCATv1.1.docx file for complete details on all of these policy settings.

Greg Mansius

Now Available: Updated Release of Exchange 2013 RTM CU2

On July 12th, we announced that Exchange Server 2013 RTM CU2 contained an issue that could result in the loss of public folder permissions when the public folder mailbox is moved between Exchange 2013 databases. Initially, we indicated that an interim update (IU) would be available via Microsoft Support to resolve this issue. However, after receiving your feedback, we decided to generate a new build of Exchange 2013 RTM CU2 that includes the fix for the issue.

The new build number of Exchange 2013 RTM CU2 is 15.0.712.24. You can download the new build from the Download Center.

Installing/Upgrading to Exchange 2013 RTM CU2 (712.24)

As always, we recommend you test updates in a lab environment that closely mirrors your production environment prior to deploying in your production environment.

If you have not deployed Exchange 2013 RTM CU2, you can follow the steps outlined in the Upgrading/Deploying Cumulative Update 2 section in the Exchange 2013 RTM CU2 announcement article.

If you have already installed Exchange 2013 RTM CU2 (712.22), you can simply execute setup.exe /m:upgrade /IAcceptExchangeServerLicenseTerms from a command line to upgrade your servers to the 712.24 build; alternatively you can upgrade via the setup user interface. Attempting to uninstall the 712.22 build will result in the complete uninstall of the server and is not recommended.
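
If you want to confirm which build a server ended up on after the upgrade, one quick way (my suggestion, not an official verification step) is from the Exchange Management Shell:

    # Servers on the updated CU2 build should report Version 15.0 (Build 712.24)
    Get-ExchangeServer | Format-Table Name, Edition, AdminDisplayVersion -AutoSize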

Important: Regardless of whether you are using modern public folders, we strongly recommend upgrading to this build of Exchange 2013 RTM CU2. Any security updates released for CU2 will be dependent on this build.

We deeply regret the impact this updated release has on our customers and as always, we continue to identify ways to better serve your needs through our regular servicing releases.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience

A reminder on real life performance impact of Windows SNP features

I think we are due for a reminder on best practices related to Windows features collectively known as “Microsoft Scalable Networking Pack (SNP)”, as it seems difficult to counter some of the “tribal knowledge” on the subject. Please also see our previous post on the subject.

Recently we had a customer call in on the subject, and this particular case stressed the importance of the Scalable Networking Pack features. The background: the customer was running Exchange 2010 SP2 RU6 on Windows 2008 R2 SP1, and they had multiple physical sites with a stretched DAG. This customer had followed our guidance from the Windows 2003 era and disabled all of the relevant options on all of their servers, similar to below:

Receive-Side Scaling State : disabled
Chimney Offload State : disabled
NetDMA State : disabled
Direct Cache Access (DCA) : disabled
Receive Window Auto-Tuning Level : disabled
Add-On Congestion Control Provider : ctcp
ECN Capability : disabled
RFC 1323 Timestamps : disabled

The current problem was that the customer was trying to add copies of all their databases to a new physical site, so that they could retire an old disaster recovery site. The majority of these databases were around 1 to 1.5TB in size, with some ranging up to 3TB. The customer stated that the databases took five days to reseed, which was unacceptable in his mind, especially since he had to decommission this site in two weeks. After digging into this case a little bit more and referencing this article, we started by looking at the network drivers. With any latency or transport issues over a WAN or LAN, we should always make sure that the network drivers are updated. Since the majority of the servers in this customer’s environment were virtual machines running the latest version of the virtualization software, we switched our focus over to the physical machines. When we looked at the physical machines, we saw they had a network driver with a publishing date of December 17, 2009.
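
As an aside, one quick way to eyeball network driver dates and versions on a physical server is via WMI; this is just a sketch and the filtering is deliberately simple.

    Get-WmiObject Win32_PnPSignedDriver |
        Where-Object { $_.DeviceClass -eq 'NET' -and $_.DriverDate } |
        Select-Object DeviceName, DriverVersion,
            @{ Name = 'DriverDate'; Expression = { [Management.ManagementDateTimeConverter]::ToDateTime($_.DriverDate) } } |
        Sort-Object DriverDate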

At this point I recommended updating the network driver to a newer version, with a driver date of 2012 or later. We then tested again, and still saw transfer speeds roughly similar to those before the driver update. At that point I asked the customer to change the scalable networking pack items from above to:

Receive-Side Scaling State : enabled
Chimney Offload State : automatic
NetDMA State : enabled

(Here is how you change these items.)
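
For reference, on Windows Server 2008 R2 those three values can be set from an elevated prompt like this (reboot afterwards, as the customer did), and the show command confirms the result:

    netsh int tcp set global rss=enabled
    netsh int tcp set global chimney=automatic
    netsh int tcp set global netdma=enabled
    netsh int tcp show global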

The customer changed the SNP features and then rebooted the machines in question. At around 12pm he started to reseed a 2.2TB database across their WAN. The customer sent me an email later that night stating the database would now take around 12 hours to reseed. The next morning he sent me another email: the database and logs had finished copying before he showed up for work at 7am. This time the reseed took 19 hours to complete, compared to 100+ hours with the SNP features disabled. The customer said he was very happy, and started planning how to upgrade network drivers on all other physical machines in his environment. Once that was done he was going to change RSS, TCP Chimney, and NetDMA to the recommended values on all of his other Windows 2008 R2 SP1 machines.

The following two articles show the current recommendations for the Scalable Networking Pack features:

  1. Here is the document referenced above that shows the correct settings for each version
  2. Even though this article is written for SQL Server, it is still relevant to the operating system that Exchange sits on.

So, what exactly is our point?

Friends don’t let friends run modern OS servers with old network drivers and SNP features turned off! As mentioned in our previous blog post on the subject, please make sure that you update network-level drivers first, as many vendors have made fixes in their driver stacks to ensure that SNP features function correctly. The above is just one illustration of the issues that incorrect settings in this area can bring to your environment.

David Dockter
