From: Daniel Reurich
Date:
To: Ralph Ronnquist, devuan developers internal list
Old-Topics: Re: [devuan-dev] Status of my action points
Subject: [devuan-dev] Fwd: Re: Status of my action points
Hi Ralph and devuan-devs,

Below is a previous post I made about the ganeti setup.

We only have one active internet facing interface on each server. The
second interface is not active and not connected.

With regard to your question about why I've set up so many bridges,
here is the short explanation:
a) for any VM to get traffic, be it WAN traffic or private LAN
traffic, it will connect to the appropriate bridge interface.

b) we will also need to bridge these networks between the servers,
either via IPsec or OpenVPN. I'm leaning towards IPsec, as it is an
easier and more reliable configuration once you scale past a single
connection, and the same config can be used on all nodes, making what
is essentially a mesh network.
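
To make that concrete, here is a minimal sketch of one leg of such a
mesh using strongSwan in transport mode with a pre-shared key; the
connection name, addresses and key are placeholders, not our actual
config:

# /etc/ipsec.conf (excerpt) - hypothetical; one conn stanza per peer
conn to-node2
    # left resolves to this node's own address, so the stanza layout
    # is the same on every node
    left=%defaultroute
    # peer node's public IP (placeholder)
    right=192.0.2.2
    # transport mode: protect only the node-to-node traffic
    type=transport
    authby=psk
    auto=start

# /etc/ipsec.secrets (excerpt) - placeholder PSK, same on both peers
192.0.2.1 192.0.2.2 : PSK "changeme"

Each node would carry one such stanza per peer, which is what makes
the full set of connections a mesh.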


c) we need 3 propagated networks, in my opinion:

wan bridge: this is how the internet-facing VMs are directly exposed
to the internet. The wan bridge is also needed during migrations:
whilst waiting for the SoYouStart IP address move to occur between the
ganeti nodes, it allows the continued flow of traffic to be
automatically redirected to the instance on its new node.

private lan bridge: for private servers that don't need to connect to
the internet. With a VPN set up it can also provide connectivity from
builders to CI etc.

ganeti bridge: this is for the ganeti management layer as well as
node-to-node data streaming for DRBD and instance migrations. We keep
this separate from the lan as it's specific to ganeti, and it makes it
easier to allocate the bandwidth minimums essential to smooth
operation of the ganeti nodes.

The dmz_bridge is IMHO redundant and can be removed; a sketch of the
three remaining bridge definitions follows below.
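
For reference, here is what those three bridges might look like in
/etc/network/interfaces (bridge-utils style). wanbr is the name used
further down in this thread; lanbr, ganetibr and all the addresses are
placeholders, not a settled design:

# internet-facing bridge, enslaving the single active NIC
auto wanbr
iface wanbr inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# private lan bridge - no physical port, VMs attach via tap interfaces
auto lanbr
iface lanbr inet static
    address 10.0.1.1/24
    bridge_ports none
    bridge_stp off
    bridge_fd 0

# ganeti management/DRBD bridge, kept separate so bandwidth minimums
# can be guaranteed
auto ganetibr
iface ganetibr inet static
    address 10.0.2.1/24
    bridge_ports none
    bridge_stp off
    bridge_fd 0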

d) we need to decide on a route propagation method. Nextime suggested
OSPF and was going to share his scripts for this. I've also looked at
other route discovery mechanisms that wouldn't require a scripted
solution; a minimal OSPF sketch follows below. Would love thoughts on
this, as it's the biggest roadblock to enabling instance migrations.
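
For illustration only, a minimal quagga ospfd.conf along those lines
might look like this; the router-id and networks are placeholders tied
to the bridge sketch above, and the per-instance route advertisement
is exactly the part nextime's scripts would have to handle:

! /etc/quagga/ospfd.conf (excerpt) - hypothetical minimal config
router ospf
 ospf router-id 10.0.1.1
 network 10.0.1.0/24 area 0
 network 10.0.2.0/24 area 0
 redistribute connected

The idea is that each node advertises the networks and routes it
currently hosts, so after a migration its peers learn the new path
automatically.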





-------- Forwarded Message --------
Subject: Re: [devuan-dev] Status of my action points
Date: Wed, 2 Aug 2017 23:29:23 +1200
From: Daniel Reurich <daniel@???>
To: devuan developers internal list <devuan-dev@???>, KatolaZ
<katolaz@???>

On 02/08/17 20:42, KatolaZ wrote:
> On Tue, Aug 01, 2017 at 06:03:03PM +0200, Jaromil wrote:
>> On Mon, 31 Jul 2017, Daniel Reurich wrote:
>>
>>> Sure... but we don't appear to have any on the new server yet....
>>> Better bug jaromil about getting that block of 16
>>
>> there is a block of 16 on the new server and an extra IP on the old,
>> but the extra IP on the old does not ping.
>>
>> do you have any idea what is happening?
>>
>> also, can you or others outline the blockers to proceeding with the
>> installation of the new server, keeping in mind it should be
>> incrementally tooled rather than overcommitting to an effort on which
>> we cannot follow up within the current time constraints.
>>
>
> In this respect, I have already set up the VM for amprolla3 and for the
> mirror server. parazyd has already installed amprolla3 on that VM, and
> we might be ready to test the rsync server soon (I am very busy with a
> thing at work that I must finish by tomorrow morning, but after that I
> am on it).
>
> However, I am stuck again.
>
> The first reason is the problem with the FO IP mentioned by jaromil
> above. I am sure this can be solved soon. I simply tried to assign the
> failover IP to one of the vms (replicating the same configuration we
> have for the gitlab machine, modulo the actual FO IP), but it does not
> ping. There might be something wrong but I can't see it.


I'll check the configuration. Can I get access to the instance?

All you should need for now is to set the IP address and routes in
/etc/network/interfaces like so:

# wan interface
allow-auto ethX
iface ethX inet static
    address <IP Address>/32
    post-up ip route add 91.121.209.254 dev ethX scope link src <IP Addr>
    post-up ip route replace default via 91.121.209.254
    post-down ip route del 91.121.209.254 dev ethX scope link src <IP Addr>
    post-down ip route del default via 91.121.209.254
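
Assuming that stanza, `ifdown ethX; ifup ethX` should apply it, and
`ip route show` should then list both the link-scope route to
91.121.209.254 and the default route via it.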


Also, when the instance was brought up, the network setup should have
included:
    --net -1:add,link=wanbr --hotplug <vm name>


If not, then you will need to do something like:

`gnt-instance modify --net -1:add,link=wanbr --hotplug <vm-name>`

to add the wan link to the VM.
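
You can then confirm the NIC actually got attached with something like
`gnt-instance info <vm-name>`, which lists the instance's NICs along
with the link (bridge) each one uses.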

> The second reason is more fundamental. I had the impression that we
> wanted to have a dmz where to put vms for external services, so that
> all the public IPs could be routed by a VM acting as a
> firewall. However, Centurion_Dan now seems convinced that the physical
> machine should do the routing (and the firewalling).


There is no benefit to a dmz or firewall routing, as the routes will
change depending on which node the VM runs on, and pushing traffic
from one node to the other just to push it through a firewall VM is
plainly a huge waste of resources. It also adds an extra, disconnected
layer of management that a server admin would have to deal with.
Better to just push it over the bridged wan interface into the VM:
less overhead, the server admin is responsible for their own server
config, and after a migration the client will need to automatically
switch to the local router - which is where RIP|OSPF comes into play.

>
> I think this is not a good idea, and I am not sure about how this
> plays with ganeti failover procedures. I guess that having two
> identical fw VMs (one on each ganeti node) would be the best option,
> since upon failover we would simply need to route the external IPs to
> the mac of the new master, and everything will be already working. If
> instead we go for managing routing on the physical machine, we need to
> setup all the fw rules on the new master node, which might be a bit of
> a burden.

As I said the other day, a VM-based firewall will create big problems
for management, and big performance problems due to pushing traffic
across 3 virtual ethernet interfaces for each VM (regardless of public
IP or private LAN IP). It will perform poorly and give us no benefit,
while creating a whole other layer of management to train ganeti to
work with. There is simply no benefit in a separate virtual firewall.

Network configuration, and in particular route discovery, still needs
to be sorted out - probably via a RIP or OSPF client set up on the VMs
so they discover routing changes when migrated. I was promised this
detail from nextime, so I hope to have it soon (for now a static
config will work on the single node).

Anyway, the only firewall rules required on the nodes will be the NAT
rules to allow outbound internet connectivity from the ganeti and lan
networks. The VM servers can do their own firewalling if needed
(really it shouldn't be an issue beyond running ssh-guard and not
running unnecessary services).
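
Something like the following is all I have in mind on each node; the
subnets are the placeholder ranges from the bridge sketch earlier, and
eth0 is the internet-facing interface:

# outbound NAT for the private lan and ganeti networks
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.0.2.0/24 -o eth0 -j MASQUERADE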

I hope that by this weekend I will be on top of my current workload
again, and with nextime's help will get the cluster running with 2
nodes, making migrations possible.
>
> Since we are now again blocked on this point, I think we must clarify
> these issues and proceed as soon as possible.

Agreed.


--
Daniel Reurich
Centurion Computer Technology (2005) Ltd.
021 797 722
