Introduction
The CCR cluster in Exchange 2007 allows your environment to
become fault tolerant (up to a point) and is the basis of the 2010 DAG
technology. Whilst it is a bit of a drama queen when it comes to
replication (in my opinion), it’s easier to manage and set up than a 2003 high
availability cluster.
A certain number of requirements apply to a CCR cluster:
· You must have the failover clustering component on your OS (so Enterprise/Datacenter licenses are required)
· You’ll need a heartbeat network interface
· A file share witness is required on a third server
· A CCR can only run on a mailbox-only server, so no sharing of Exchange roles…
· Updating is a bit more complicated
· Management becomes more “difficult” versus a normal installation of Exchange
So for this tutorial we’ll be using the following infrastructure:
· The environment we set up in the transition from 2003 document
· 2 extra Windows Server 2008 Enterprise servers
· Exchange 2007 SP3
Preparation and prerequisites
As mentioned before, you need an Enterprise license at minimum
and a heartbeat NIC on both servers. Once you have everything configured at
the OS level we can start configuring the servers for the CCR cluster! The
first thing we have to do is install & configure the failover clustering
component for Windows 2008:
· Add Features
· Select Failover Clustering
· Next
· Install
Repeat on the second server in your CCR cluster. As soon as
you have configured this on both servers (a reboot might be required, but was not needed in my case), open up the
Failover Cluster Manager and select the “Validate a Configuration” wizard.
Enter the names of the servers and click Next.
Select the radio button in front of “Run only tests I select”
and click Next. Uncheck the storage tests (a
CCR does not use shared storage!) and click Next, Next and wait for
completion. If the results of the tests are all “Passed” you are good to go;
if not, review why they failed and correct where possible.
When all tests come back green, select the “Create
Cluster” wizard. For the time being, enter only one server name in the
selection. Since you already ran the validation wizard you can ignore the “validation
warning” window.
Name your cluster and give it an IP, then click Next, Next,
Finish. Now, in the Failover Cluster Manager, expand the nodes and select the
“Add Node” action. Go through the wizard until finished. Open up the Networks section
in the cluster manager and rename the networks according to their function;
this helps troubleshooting in case something goes wrong in the future… On
the heartbeat network, the “Allow clients to connect through this
network” checkbox needs to be unticked. Double-check this is the case.
Now on each server run the following commands to install the
prerequisites for Exchange 2007.
ServerManagerCmd -i Web-Server
ServerManagerCmd -i Web-ISAPI-Ext
ServerManagerCmd -i Web-Metabase
ServerManagerCmd -i Web-Lgcy-Mgmt-Console
ServerManagerCmd -i Web-Basic-Auth
ServerManagerCmd -i Web-Windows-Auth
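If you prefer to run this as a single batch file, the same six features can be installed in one loop (batch-file syntax; feature names exactly as listed above):

```batch
REM Install the IIS prerequisites for Exchange 2007 in one pass.
REM %%F expands to each feature name in turn (batch-file syntax; use %F interactively).
FOR %%F IN (Web-Server Web-ISAPI-Ext Web-Metabase Web-Lgcy-Mgmt-Console Web-Basic-Auth Web-Windows-Auth) DO (
    ServerManagerCmd -i %%F
)
```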
Now, on the server you elected to use for the File Share
Witness, open a command window (DOS prompt) and perform the following actions:
· MKDIR C:\FSW_CCR_MAIL
· NET SHARE FSW_CCR=C:\FSW_CCR_MAIL /GRANT:CCRCLUSTER$,FULL
· CACLS C:\FSW_CCR_MAIL /G BUILTIN\Administrators:F CCRCLUSTER$:F
Note that CCRCLUSTER$ is the computer account of the name you configured your cluster with; in my case MAIL$.
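You can verify the result on the witness server: running NET SHARE with just the share name prints the share’s path and permissions, and CACLS without switches lists the NTFS ACL (share and folder names as created above):

```batch
REM Show the share details (path, share permissions)
NET SHARE FSW_CCR
REM Show the NTFS permissions on the witness folder
CACLS C:\FSW_CCR_MAIL
```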
Now, on your failover cluster, right-click, More Actions,
Configure Cluster Quorum Settings. Select the “Node and File Share Majority”
option and click Next. Now browse to the shared folder path, click Next, Next
& Finish.
Installation
Time to start the installation! Fire up the Exchange setup
and opt to install Exchange server 2007…
Once setup has started select “custom Exchange server installation” and check
the “active clustered mailbox role” checkbox. If necessary change the
installation path. On the next screen make sure cluster continuous replication
is selected and specify the name for the CCR (needs to be different from your
cluster name!). Use the screen that follows to assign an IP address (again,
this needs to be different from the cluster IP) and let the installation do its
work on the server. Once setup completes reboot the server.
Now that the first node has rebooted and is up and running, turn
your attention to the second server. Launch setup on this server and select the
custom installation option, but this time check the “passive clustered mailbox
role” checkbox. When the prerequisite checking has completed, press the
install button and wait for setup to complete.
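Both installs can also be done unattended from the command line with the Exchange 2007 setup.com switches; a sketch, where CCRMAIL and 192.168.1.50 are placeholders for your own CMS name and IP:

```batch
REM First (active) node: install the mailbox role, then create the
REM clustered mailbox server (CMS). The CMS name and IP below are placeholders.
setup.com /mode:install /roles:Mailbox
setup.com /newCms /CMSName:CCRMAIL /CMSIPAddress:192.168.1.50

REM Second (passive) node: installing the mailbox role on the other
REM cluster node adds it as the passive copy.
setup.com /mode:install /roles:Mailbox
```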
Finishing up
Now that the installation has been completed, open up the
Exchange Management Console and expand Server Configuration > Mailbox.
As you can see, the individual servers you installed Exchange on are not listed;
instead, the cluster name is listed as a mailbox server.
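From the Exchange Management Shell you can also check the state of the clustered mailbox server and which node currently owns it (CCRMAIL being a placeholder for your CMS name):

```powershell
# Shows the CMS state and the node that is currently active
Get-ClusteredMailboxServerStatus -Identity CCRMAIL
```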
At this point you can go ahead and move the databases and
log files to a different drive. The recommended configuration is to put the
database and the log files on different physical spindles (hard drives) for
optimum performance. Before moving the files to a different path, note that the
hard drive configuration needs to be the same on both CCR nodes: the drive
letters need to match! Ideally you would have the same disk space available
on both nodes as well, as you don’t want either server to run out of disk space.
Now, to move the database and log file path follow this
process:
1. Suspend the storage group copy:
Suspend-StorageGroupCopy -Identity <Server\StorageGroupName>
2. Dismount the database:
Dismount-Database -Identity <Server\StorageGroupName\DatabaseName>
3. Move the database files:
Move-DatabasePath -Identity <Server\StorageGroupName\DatabaseName> -EdbFilePath <NewPath> -ConfigurationOnly
4. Move the log files folder path:
Move-StorageGroupPath -Identity:<Server\StorageGroupName> -LogFolderPath:<NewPath> -SystemFolderPath:<NewPath> -ConfigurationOnly
Note that you have to use the -ConfigurationOnly parameter and manually move the database and log files yourself on both the active and passive node!
5. Mount the database:
Mount-Database -Identity <Server\StorageGroupName\DatabaseName>
6. Resume the storage group copy:
Resume-StorageGroupCopy -Identity <Server\StorageGroupName>
7. Check that replication is occurring and the replication status is healthy.
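Put together, the whole move can be run as one Exchange Management Shell sequence. A sketch, assuming a storage group SG1 with database DB1 on a clustered mailbox server named MAIL and a new location on D: (all placeholder names, adjust to your own setup):

```powershell
# Placeholders: adjust server, storage group, database and paths to your setup.
$sg = "MAIL\SG1"
$db = "MAIL\SG1\DB1"

Suspend-StorageGroupCopy -Identity $sg    # 1. stop log shipping to the passive node
Dismount-Database -Identity $db           # 2. take the database offline

# 3 + 4. Update the paths in the configuration only; the files themselves
# must be moved by hand on BOTH the active and the passive node.
Move-DatabasePath -Identity $db -EdbFilePath "D:\DB\DB1.edb" -ConfigurationOnly
Move-StorageGroupPath -Identity $sg -LogFolderPath "D:\Logs" -SystemFolderPath "D:\Logs" -ConfigurationOnly

Mount-Database -Identity $db              # 5. bring the database back online
Resume-StorageGroupCopy -Identity $sg     # 6. restart replication

# 7. Verify replication health (the copy status should report Healthy)
Get-StorageGroupCopyStatus -Identity $sg
```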