Mike Manchester wrote:
> Check this out and it works on all systems.
> http://www.cis.upenn.edu/~bcpierce/unison/
Think of Unison as a "two-way rsync". It is a good way of keeping
static directory trees in sync, particularly when there is no write
contention between the two replicas. If the same file is modified on
both sides at the same time, you will need to resolve the resulting
"conflicts" manually.
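For example (the paths and hostname here are placeholders, not anything from a real setup), a two-way sync over ssh looks roughly like:

```shell
# Reconcile a local tree with the same tree on a peer over ssh.
# -batch applies all non-conflicting changes without prompting;
# conflicting updates are skipped and left for manual resolution.
unison /var/www ssh://peer.example.com//var/www -batch
```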
Is this a dynamic system or a static system?
If you have a relational database backend, for example, you must use
some kind of log replication, or an RDBMS capable of doing this natively
(like Backplane - http://www.backplane.com).
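As one concrete sketch of log replication, MySQL's binary-log replication is configured along these lines; the server names, credentials, and log coordinates below are all made-up placeholders:

```shell
# On the master (in my.cnf), enable the binary log -- values are examples:
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
#
# On the replica (server-id = 2), point it at the master's binary log
# and start the replication threads:
mysql -e "CHANGE MASTER TO MASTER_HOST='db1.example.com', \
    MASTER_USER='repl', MASTER_PASSWORD='secret', \
    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; \
  START SLAVE;"
```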
If you have a static system, Unison might be enough, or you might try
something like OpenGFS, InterMezzo (or better, Lustre), OpenAFS (or
Coda), or another network-based replicated filesystem for keeping
directory trees in sync across a number of servers.
>> On Monday 21 June 2004 06:36 pm, Mike Branda wrote:
>>
>>>
>>> In a few days I will be faced with the challenge of having a duplicate
>>> of one of my servers at all times. Not a new concept I know but I need
>>> to get it done in a hurry. I'm not sure of the proper term... server load
>>> balancing... pooling... anyway, is there a standard setup to get fairly
>>> realtime results between the two? What software, hardware,
>>> controllers, etc. are needed? Any direction to a project, docs, or even
>>> an example or two would be great!!
>>
I've found that using dirvish to do rsync backups between two servers is
enough.
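For reference, a dirvish vault for that kind of nightly pull is just a small config file; the bank path, vault name, and client host below are assumptions about a particular setup:

```
# /backup/dirvish/web1/dirvish/default.conf -- a minimal dirvish vault.
# dirvish pulls /var/www from web1 into a dated image under the vault,
# hard-linking unchanged files against the previous night's image.
client: web1.example.com
tree: /var/www
xdev: 1
index: gzip
image-default: %Y%m%d
exclude:
	*.tmp
	core
```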
For my personal hosting cluster, for example, I have 2 identical servers
(Athlon XPs, RAID 5), peered together. On each server, I have 2 virtual
segments using Linux bridging. The server itself runs the netfilter
rules and heartbeat to fail over the IP addresses for all machines behind
it (one server is the primary firewall, the other is a standby). Each
server runs a UML image running failover LVS between the two virtual
bridged segments. Each server runs a farm of 10+ UML images on the
second bridge segment, behind the LVS UML images, which are in turn
behind the server's netfilter rules. The bridged segments are peered via
a crossover cable between the boxes using a tagged VLAN.
The key for recovery here is heartbeat failover of IP traffic, peered
sets of UML servers running on each physical server, virtual bridged
segments between the servers, and LVS balancing of traffic between the
servers. For storage that must span two UML images as a persistent
store, I found HyperSCSI with software RAID1 mirroring inside the UML
kernel to be far more stable than OpenGFS. If one server should die
entirely, the other server should take over half the load
transparently. If I need to rebuild a UML image, I keep nightly dirvish
backups and can roll back to any previous day for recovery.
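The heartbeat side of this is only a few lines of v1-style configuration; the node names, interfaces, and addresses here are placeholders, not my actual values:

```
# /etc/ha.d/ha.cf (identical on both nodes)
keepalive 2          # heartbeat interval, seconds
deadtime 10          # declare the peer dead after 10s of silence
bcast eth1           # send heartbeats over the crossover link
node fw1 fw2

# /etc/ha.d/haresources -- fw1 normally owns the virtual IP;
# on failure, fw2 acquires it and gratuitously ARPs for it.
fw1 IPaddr::192.168.1.10/24/eth0
```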
Honestly, this complexity is a bit of overkill.
>> Have a look here: http://www.linuxvirtualserver.org/
>>
>> You should find what you are looking for ;-)
>
Linux Virtual Server (LVS) is the appropriate tool for what you're
looking to do, with the understanding that data failover and coherency
are just as important as failing over the IP traffic.
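To make that concrete, balancing an HTTP service with LVS comes down to a few ipvsadm commands; the VIP and real-server addresses below are made up:

```shell
# Create a virtual HTTP service on the VIP with round-robin scheduling,
# then add two real servers using direct routing (-g, LVS-DR).
ipvsadm -A -t 10.0.0.100:80 -s rr
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.11:80 -g
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.12:80 -g
```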
If you want all of the above in something "out of the box", take a look
at the OpenSSI project (http://www.openssi.org). Instead of two separate
servers, you get one big virtual server across both physical boxes, with
native networked RAID mirroring between the nodes. It scales well beyond
2 servers as well. Also, it has LVS built-in (one IP address).
If you have any questions, I'd love to help. Linux clustering is
something I've been playing around with myself on the side for quite
some time.
- Ian
-----------------------------------------------------------------------
This list is provided as an unmoderated internet service by Networked
Knowledge Systems (NKS). Views and opinions expressed in messages
posted are those of the author and do not necessarily reflect the
official policy or position of NKS or any of its employees.