Q: Can I migrate my server automatically from Paris to Baltimore, or vice-versa?
A: No; there is no automatic way to migrate a virtual server between the Paris and Baltimore datacenters, due to technical limitations (IP addressing, platform differences, etc.).
Q: Okay, but I really want to migrate my server... how can I do it?
A: If you are really sure, then there are a couple of ways to do it.
First, you will need to create a new server through the admin interface (or the public API). You will then need to transfer the data from the old server to the new one.
For Gandi AI, you will need to save your website's source files and export the database (for example, with phpMyAdmin), then import both on the new server.
If your server is running in expert mode, you can transfer these files directly over SSH. Another method, again in expert mode, involves creating a new server with a data disk in the target datacenter and transferring the system disk of the old server to the newly attached data disk of the new server (for example, with 'dd', but please note that this may take some time).
Finally, change the disk type from data to system via the disk's advanced configuration in the admin interface, shut down the new VM, remove the old system disk, attach this new image as the system disk, then start the VM.
There are a few methods to achieve this:
1. Pipe the disk image directly over SSH:
dd iflag=direct bs=8k if=/dev/xvdYX | gzip -9 | ssh baltimore-server "gzip -d | dd bs=8k of=/dev/xvdYX"
2. Use mbuffer. First start the receiver on the Baltimore server:
mbuffer -q -s 128k -m 8M -I <port> | gzip -d | dd bs=8k of=/dev/xvdYX
Then start the sender on the Paris server:
dd iflag=direct bs=8k if=/dev/xvdYX | gzip -9 | mbuffer -q -s 128k -m 2M -O <baltimore_server>:<port>
3. If you only need the files rather than a full disk image, simply copy them with rsync:
rsync -arvz src baltimore_server:/dst
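Whichever method you use, it is prudent to verify the copy before discarding the old system disk. Here is a minimal sketch of the idea, using throwaway files in place of the real block devices (all paths here are illustrative only; on the real servers you would checksum /dev/xvdYX on each side and compare the digests by hand):

```shell
# Create a small source image and "copy" it locally, standing in for the
# network transfer between the two datacenters.
dd if=/dev/urandom of=/tmp/src.img bs=8k count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=8k 2>/dev/null

# Compare checksums of source and destination.
a=$(sha256sum /tmp/src.img | cut -d' ' -f1)
b=$(sha256sum /tmp/dst.img | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "disks match" || echo "MISMATCH"
```

Note that checksumming a live, mounted system disk will not give a stable result; shut the source VM down (or boot it on a rescue system) before copying and verifying.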
Q: Can I detach the IP address from my Paris server and attach it to my Baltimore server?
A: No; the IP address allocations and the server provisioning systems for the two datacenters are completely independent.
Q: Can I attach a disk in the Paris datacenter onto my server in the Baltimore, for example, to make an off-site backup?
A: No; for the same reasons that an automatic migration between datacenters is not possible, the two platforms are completely independent.
Q: I see that Gandi will be deploying Anycast DNS servers, will this allow me to do geolocalization for my servers?
A: No; the anycast DNS servers simply give a physical presence for the Gandi DNS servers in multiple locations so that DNS lookup requests can be answered by the closest server. The DNS service does not provide any geolocalization capability.
Q: Can I have a server in Paris and another one in Baltimore and use anycast to allow the closest server to serve the content?
A: No, because the hosting platforms are independent of each other and use separate IP allocations.
Q: I have my own ASN and assigned /24 for Anycast purposes, can I use a Gandi VPS to host my anycast service, assuming that I use other providers to expand the geographic and network scope?
A: It is technically possible under certain conditions. We have already tested such a solution, but it is entirely bespoke and has certain limitations concerning how your own IP block can be used on your VM. You will need to discuss your needs with our technical team to determine whether we can accommodate them.
Q: Why, when I traceroute from my Paris server to my Baltimore server, do I suddenly see 92ms of latency?
A: Latency is defined as the one-way path delay between two points on a network. What you see in a traceroute is the round-trip delay for the hop in question (with a special exception for hops contained within an MPLS label-switched path -- see the next question).
The transatlantic crossing plus the physical fiber cable distances incur a one-way path delay of roughly 45ms (plus or minus a little). While it is true that at the speed of light the delay to cross the Atlantic in a straight line would only be on the order of 28ms, you need to add the factors of ocean depth and the land-based cable distances between the two end locations. This translates to a round-trip delay of at least 90ms, so what you see is normal.
(This is a simplified explanation, as there are actually a number of factors that determine the delay observed, and we would be happy to discuss these with those who are really interested in the nitty gritty details...)
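The figures above can be reproduced with a quick back-of-the-envelope calculation. The distances used here are illustrative assumptions (roughly a straight-line Atlantic crossing versus a realistic cable route), and light propagates in fiber at about 200,000 km/s:

```shell
# All distances below are assumptions for illustration, chosen to match the
# ballpark figures quoted in the answer above.
delays=$(awk 'BEGIN {
  v        = 200000   # km/s: speed of light in fiber (~2/3 of c in vacuum)
  straight = 5600     # km: assumed straight-line transatlantic distance
  cable    = 9000     # km: assumed length of the real cable route
  printf "straight line: %.0f ms one-way\n", straight / v * 1000
  printf "cable route:   %.0f ms one-way, %.0f ms round-trip", cable / v * 1000, 2 * cable / v * 1000
}')
echo "$delays"
```

This ignores queueing and serialization delays, which is why real round-trip times come out slightly above the pure propagation figure.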
Q: When I traceroute from my Paris server to my Baltimore server, I reach a router that is still in Paris but shows a 92ms round-trip time, even though the hop just before it is only 1ms. Doesn't this point to a problem on your network?
A: Our network runs MPLS (MultiProtocol Label Switching) in the core, with Traffic Engineering (TE) enabled. When a packet crosses the network and enters a known end-to-end pathway between two endpoints (a Label-Switched Path, or LSP), it remains within that pathway until it egresses at the remote end, provided a TE tunnel exists along that path.
As a result, the intermediate routers within the LSP still respond to the traceroute with ICMP TTL-Exceeded packets as normal, but those replies are forwarded along the full path and back before being returned to the originating source of the traceroute.
In consequence, the round-trip-time that you see for the Paris-side router in this instance is actually the round-trip-time for the full MPLS path, plus the time between the source and the router itself. Again, this is perfectly normal behavior.
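As a toy model of the behavior just described (all numbers here are assumed for illustration): the reply from an LSP-internal hop first travels the whole label-switched path and back, so its apparent round-trip time is the full-path RTT plus the time to reach the hop itself:

```shell
# src_to_router: RTT from the traceroute source to the Paris-side LSP router.
# lsp_rtt: RTT of the full transatlantic label-switched path.
# Both values are assumptions matching the example in the question above.
observed=$(awk 'BEGIN {
  src_to_router = 1    # ms (assumed)
  lsp_rtt       = 91   # ms (assumed)
  printf "%d", src_to_router + lsp_rtt
}')
echo "observed RTT at the Paris-side LSP hop: ${observed} ms"
```

So a 92ms reading on a router that is physically 1ms away is exactly what the model predicts, not a fault.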
Q: Why do you run MPLS on your network?
A: MPLS allows for a number of different solutions, not least of which is providing traffic engineering and flow control based on certain criteria. We can specify certain traffic based on its characteristics to follow a given MPLS path under normal circumstances, while still allowing for alternative paths in case of link failures.
We can also make use of other features of MPLS to enable layer2 and layer3 VPN connectivity across different portions of the network for some of the Gandi services, whilst keeping the size of the routing tables as low as possible. (The DNS servers being deployed in anycast use some of the features provided by MPLS across the core of the network, for example).
Q: I have a server in Baltimore and one in Paris. Can each server have a second interface on a private VLAN so that I can connect the two over a back-end private network?
A: We are working on a private VLAN solution to be deployed later in 2011 and we are looking at ways to provide private VLAN connectivity for customers between their servers in different datacenters.
Q: I'm not convinced! This is a new datacenter, and I am not running any services yet, but even between my server and its next hop router I get weird and inconsistent ping results. Why?
A: This is due to hardware buffer sizes on physical ethernet interfaces. In order to transmit data physically "onto the wire" the interface needs to fill the buffer. If there is little or no traffic, small amounts of data will be "stored" in the buffer until there is enough data to transmit. This will result in false delay/round-trip and variance/jitter readings. Under normal data usage, this phenomenon is much less evident because the buffers will fill more quickly. It is, nevertheless, completely normal behavior for any network device or server NIC.
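To see how much jitter this buffering can introduce, you can compute the mean and standard deviation of a series of ping round-trip times. The RTT samples below are made-up values of the kind an idle link might show:

```shell
# Six sample per-packet RTTs in ms (assumed values), as "ping" might report
# them on an otherwise idle link; awk computes the mean and the (population)
# standard deviation.
stats=$(printf '%s\n' 0.4 1.8 0.3 2.6 0.5 1.1 \
  | awk '{ n++; s += $1; ss += $1 * $1 }
         END { m = s / n; printf "mean %.2f ms, stddev %.2f ms", m, sqrt(ss / n - m * m) }')
echo "$stats"
```

A standard deviation close to the mean, as here, is the kind of "inconsistent" result the question describes; under sustained traffic the spread shrinks considerably.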
We hope these answers address some of the questions you have raised. We will also be adding this FAQ, along with any new questions and answers over time, to Gandi's online knowledge center (http://wiki.gandi.net), so please check back from time to time for updates!