Forum.Brain-Cluster.com: Brain Cluster Technical Forum
Ultimate forum for Technical Discussions

Home » Microsoft » Windows Server » Server Clustering » 2003r2 - Moving resources to new hardware and storage
2003r2 - Moving resources to new hardware and storage [message #361640] Thu, 07 January 2010 05:23
Adrien Maugard
Messages: 2
Registered: January 2010
Junior Member
Hello,

I'm looking for help on the following project:
I've got some two-node clusters with disk replication provided by Veritas
Storage Foundation, each one hosting 3 to 6 resources in active/passive mode
(obviously ;) )

I'm looking to migrate the clusters to new storage (a SAN with built-in
replication, so far better than the current solution), and as we don't really
know how Veritas is integrated into the OS, we can't reuse the current
servers in the clusters (without a format...)

So how can I migrate a resource from a node of the old cluster to a node of
the new cluster (2 nodes with a file share witness)?
I planned to do it this way:
1. Copy/move all files manually between storages
2. Destroy the resources & virtual server on the old cluster
3. Rebuild a new resource & virtual server on the new cluster
4. Rinse/repeat until nothing is left on the old cluster
5. Uninstall clustering, remove services & the computer account in AD
6. Format the old cluster servers and reuse the hardware for another
cluster migration.
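For reference, the plan above could be driven from a script. Here is a rough Python sketch that only prints the robocopy and cluster.exe command lines it would use (a dry run, nothing is executed); all group, resource, drive, and cluster names below are placeholders, not from the real setup:

```python
# Sketch: print the commands for one manual resource migration (dry run).
# Group/resource/drive/cluster names are placeholders for illustration.

def migration_commands(group, resource, src_drive, dst_drive,
                       old_cluster, new_cluster):
    """Build the command lines for one resource move (nothing is executed)."""
    return [
        # 1. Copy/move all files between storages (mirror the whole volume)
        f"robocopy {src_drive}\\ {dst_drive}\\ /MIR /COPYALL /R:1 /W:1",
        # 2. Destroy the resource on the old cluster
        f'cluster /cluster:{old_cluster} res "{resource}" /offline',
        f'cluster /cluster:{old_cluster} res "{resource}" /delete',
        # 3. Rebuild the resource on the new cluster
        f'cluster /cluster:{new_cluster} res "{resource}" /create '
        f'/group:"{group}" /type:"Physical Disk"',
        f'cluster /cluster:{new_cluster} res "{resource}" /online',
    ]

for cmd in migration_commands("FileGroup1", "Disk F:", "E:", "F:",
                              "OLDCLUS", "NEWCLUS"):
    print(cmd)
```

Steps 4-6 would just repeat this per resource and then tear down the old cluster by hand; the file-server network name and IP resources would also need recreating, which this sketch leaves out.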

Am I doing something wrong this way?

Thanks
RE: 2003r2 - Moving resources to new hardware and storage [message #362505 is a reply to message #361640] Fri, 08 January 2010 03:26
Gaurav Anand
Messages: 5
Registered: January 2010
Junior Member
Hi Adrien

You can follow that approach, or you can add a new node to the existing
cluster and then use the cluster recovery software to migrate the disks. Once
the disk resources are migrated along with the data, you can get rid of the
old disk resources managed by Veritas, and uninstall the Veritas software
after removing the Veritas-managed physical disk resources from the cluster.
This way you will not have to reconfigure everything from scratch.

http://www.microsoft.com/downloads/details.aspx?familyid=2be7ebf0-a408-4232-9353-64aafd65306d&displaylang=en


The cluster recovery utility allows a new disk, managed by a new physical
disk resource, to be substituted in the resource dependency tree, and the
old disk resource (which now no longer has a disk associated with it) to be
removed.

To replace a failed disk use the following procedure:
· Add a new disk drive to the cluster. In a storage area network
environment, adding a new disk drive may involve creating a new logical unit
and exposing it to the server cluster nodes with appropriate LUN masking,
security and zoning properties.
· Make sure that the new disk is only visible to one node in the cluster.
Until the Cluster service takes control of the new disk and a physical disk
resource is created, there is nothing to stop all nodes that can see the disk
from accessing it. To avoid file system issues, you should try to avoid
exposing a disk to more than one node until it has been added to the cluster.
In some cases (such as with low-end Fibre Channel RAID devices or devices in
a shared SCSI storage cabinet) there is no way to prevent multiple machines
from accessing the same disk. In these cases, a CHKDSK may run when the disk
resource is brought online later in this procedure. Although this
situation is recoverable through CHKDSK, you can avoid it by shutting down
the other cluster nodes, although this may not be appropriate if the cluster
is hosting other, currently functioning applications and services.
· Partition and format the new disk drive as required. Note: For a disk
drive to be considered as a cluster-capable disk drive, it must be an MBR
format disk and must contain at least one NTFS partition. Assign it a drive
letter other than the letter it is replacing for now.
· Create a new physical disk resource for the new disk drive using Cluster
Administrator (or the cluster.exe command line utility).
· Make the disk drive visible to the same set of nodes as the disk drive
that it is replacing (in a typical configuration, a disk drive is visible to
all nodes in the server cluster). In the event that the device does not
appear to the cluster nodes, you may perform a manual rescan for new hardware
using the device manager. At this stage you should try to bring the disk
resource online and then fail it over all nodes of the cluster in turn to
ensure that the new physical disk is correctly configured and can be viewed
from all nodes.
· Use the Server Cluster Recovery Utility to substitute the newly created
physical disk resource for the failed resource. Note: The Server Cluster
Recovery Utility ensures that the old and new disk resources are in the same
resource group. It will take the resource group offline and transfer the
properties of the old resource (such as failover policies and chkdsk
settings) to the new resource. It will also rename the old resource to have
"(lost)" appended to the name and rename the new resource to be the same as
the old resource. Any dependencies on the old resource will be changed to
point to the new resource.
· Change the drive letter of the new physical disk to match that of the
failed disk. Note: The new physical disk resource must be brought online
first and then the drive letter can be changed (on the node hosting the
physical disk resource) using the Disk Management snap-in available via
Computer Management.
· Once you have validated that the new resource is correctly installed, you
should delete the old physical disk resource as it no longer represents a
real resource on the cluster.
· Once the cluster is configured, you should restore the application data
to the new disk drive.
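The failover check in the procedure above can be scripted in the same spirit. The sketch below only prints the cluster.exe commands it would run (a dry run): bring the new disk resource online, move its group to each node in turn, and query its status after each move. Node, group, and resource names are placeholders:

```python
# Sketch: print the cluster.exe commands used to verify that the replacement
# disk fails over to every node (dry run; all names are placeholders).

def failover_check_commands(group, disk_resource, nodes):
    """Build the online/failover/status commands for the new disk (not executed)."""
    cmds = [f'cluster res "{disk_resource}" /online']
    for node in nodes:
        # Move the whole group to each node in turn...
        cmds.append(f'cluster group "{group}" /moveto:{node}')
        # ...and confirm the disk resource came online there.
        cmds.append(f'cluster res "{disk_resource}" /status')
    return cmds

for cmd in failover_check_commands("FileGroup1", "New Disk F:",
                                   ["NODE1", "NODE2"]):
    print(cmd)
```

The substitution step itself is done interactively in the Server Cluster Recovery Utility rather than from the command line, so it is not scripted here.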

GAURAV ANAND

RE: 2003r2 - Moving resources to new hardware and storage [message #362593 is a reply to message #362505] Fri, 08 January 2010 06:39
Adrien Maugard
Messages: 2
Registered: January 2010
Junior Member
Thanks, I will try this solution.
