Cloud in Action: Migrate OpenStack from Linux Bridge to Open vSwitch
薛国锋 xueguofeng2011@gmail.com
Open vSwitch supports most of the features you would find on a physical switch, including advanced capabilities such as RSTP, VXLAN, OpenFlow, and multiple VLANs on a single bridge. Today I am going to migrate my OpenStack lab environment from the Linux Bridge agent to the Open vSwitch agent, making future integration with an SDN controller - OpenDaylight - possible. We will make the configuration adjustments on top of the lab environment from last time:
We will just create a minimal POC for the purpose of learning about OpenStack and Open vSwitch, not for a production installation:
1) The controller node runs all the services - Dashboard, Networking, Compute, Image and Identity - while the compute nodes only run Nova-compute and Neutron-OpenvSwitch-Agent.
2) The management and data networks are combined on eth0 in this environment, which means the management traffic and the VXLAN traffic among VMs are mixed.
3) All tenant traffic goes from the compute nodes to the controller node first through VXLAN tunnels, and then on to the DC GW via its vRouter.
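Because the VXLAN traffic shares eth0 with the management network, keep the VXLAN encapsulation overhead in mind when sizing instance MTUs. A quick sanity calculation (a sketch, assuming a standard 1500-byte MTU on eth0 and IPv4 outer headers):

```shell
# VXLAN encapsulation adds roughly 50 bytes on IPv4:
# outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN header (8)
ETH0_MTU=1500
VXLAN_OVERHEAD=$((14 + 20 + 8 + 8))
INSTANCE_MTU=$((ETH0_MTU - VXLAN_OVERHEAD))
echo "instance MTU: $INSTANCE_MTU"   # prints: instance MTU: 1450
```

So the instances should use an MTU of 1450 or less unless eth0 is configured for jumbo frames.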
// Remove all instances, vRouters, floating IPs, self-service and provider networks via the dashboard

// On all nodes (controller, compute1, compute2):

// Stop neutron-linuxbridge-agent
sudo service neutron-linuxbridge-agent stop

// Remove neutron-linuxbridge-agent and its configuration and data files
sudo apt-get remove neutron-linuxbridge-agent
sudo apt-get purge neutron-linuxbridge-agent

// Install neutron-openvswitch-agent
sudo apt-get update
sudo apt-get install neutron-openvswitch-agent

// On the controller node, create the provider bridge and attach eth1:
sudo ovs-vsctl add-br br-provider
sudo ovs-vsctl add-port br-provider eth1

// On compute1 and compute2 the same commands are optional, so they are left commented out:
#sudo ovs-vsctl add-br br-provider
#sudo ovs-vsctl add-port br-provider eth1

If you want to launch VMs on the provider network directly from the compute nodes, br-provider is needed there as well.
// On the controller node:
sudo gedit /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:ipcc2014@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

// On compute1 and compute2:
sudo gedit /etc/neutron/neutron.conf

[DEFAULT]
#core_plugin = ml2
transport_url = rabbit://openstack:ipcc2014@controller
auth_strategy = keystone
// On all nodes:
sudo gedit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
#mechanism_drivers = linuxbridge,l2population
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vlan]
network_vlan_ranges = provider

[ml2_type_vxlan]
vni_ranges = 1:1000
// On the controller node:
sudo gedit /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.11

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = iptables_hybrid

// On compute1:
sudo gedit /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
#bridge_mappings = provider:br-provider
local_ip = 10.0.0.31

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = iptables_hybrid

// On compute2:
sudo gedit /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
#bridge_mappings = provider:br-provider
local_ip = 10.0.0.32

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = iptables_hybrid

// bridge_mappings connects br-int to br-provider; without this setting, you cannot launch VMs on the provider network from the compute nodes.
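The only per-node difference in openvswitch_agent.ini is local_ip (and whether bridge_mappings is commented out), so the edit is easy to script instead of using gedit. A sketch with sed, demonstrated on a scratch copy in /tmp; on a real node you would point sed at /etc/neutron/plugins/ml2/openvswitch_agent.ini instead:

```shell
# Scratch copy standing in for /etc/neutron/plugins/ml2/openvswitch_agent.ini
cat > /tmp/openvswitch_agent.ini <<'EOF'
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.11
EOF

# Re-target the file for compute1 by rewriting the local_ip line in place:
sed -i 's/^local_ip = .*/local_ip = 10.0.0.31/' /tmp/openvswitch_agent.ini
grep '^local_ip' /tmp/openvswitch_agent.ini   # prints: local_ip = 10.0.0.31
```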
// On the controller node:
sudo gedit /etc/neutron/l3_agent.ini

[DEFAULT]
#interface_driver = linuxbridge
interface_driver = openvswitch
external_network_bridge =
// On the controller node:
sudo gedit /etc/neutron/dhcp_agent.ini

[DEFAULT]
#interface_driver = linuxbridge
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
force_metadata = True
// On the controller node:
sudo gedit /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = ipcc2014
// Upgrade the database (on the controller node)
sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

// Reboot all nodes
reboot
Delete the Linux bridge agents in the database:
neutron agent-delete 8c69e233-75d4-4ded-bcce-81c48193f18a
neutron agent-delete 94e62fbc-f6a8-4dc6-8870-11fb362869f1
neutron agent-delete d0b66ca5-aba8-4e81-9c30-dbe79d6d6f94
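The agent UUIDs above are specific to this lab. A sketch for collecting the IDs of all Linux bridge agents without copying them by hand, run here against a sample table shaped like `neutron agent-list` output (the third row's UUID is made up for illustration); on the controller you would pipe the live command instead:

```shell
# Sample of what `neutron agent-list` prints after the migration
sample='+--------------------------------------+--------------------+
| id                                   | agent_type         |
+--------------------------------------+--------------------+
| 8c69e233-75d4-4ded-bcce-81c48193f18a | Linux bridge agent |
| 94e62fbc-f6a8-4dc6-8870-11fb362869f1 | Linux bridge agent |
| 11111111-2222-3333-4444-555555555555 | Open vSwitch agent |
+--------------------------------------+--------------------+'

# Pick the id column of every "Linux bridge agent" row:
ids=$(printf '%s\n' "$sample" | awk -F'|' '/Linux bridge agent/ {gsub(/ /, "", $2); print $2}')
echo "$ids"
# On the live controller, the agents could then be deleted with:
#   for id in $ids; do neutron agent-delete "$id"; done
```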
Create the provider and self-service networks:
. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat xgf_provider
openstack subnet create --network xgf_provider --allocation-pool start=192.168.100.200,end=192.168.100.220 --dns-nameserver 10.0.1.1 --gateway 192.168.100.111 --subnet-range 192.168.100.0/24 xgf_sub_provider
. demo-openrc
openstack network create xgf_selfservice_1
openstack subnet create --network xgf_selfservice_1 --dns-nameserver 10.0.1.1 --gateway 192.168.101.111 --subnet-range 192.168.101.0/24 xgf_sub_selfservice_1
openstack router create demo_router
neutron router-interface-add demo_router xgf_sub_selfservice_1
neutron router-gateway-set demo_router xgf_provider
. admin-openrc
openstack network create xgf_selfservice_2
openstack subnet create --network xgf_selfservice_2 --dns-nameserver 10.0.1.1 --gateway 192.168.102.111 --subnet-range 192.168.102.0/24 xgf_sub_selfservice_2
openstack router create admin_router
neutron router-interface-add admin_router xgf_sub_selfservice_2
neutron router-gateway-set admin_router xgf_provider
Launch 4 VMs and check OVS:
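Once the VMs are up, `sudo ovs-vsctl show` on each node should list br-int and br-tun (plus br-provider on the controller), with one vxlan port on br-tun per remote VTEP. A sketch of what to look for, run here against a sample fragment shaped like the controller's output (the live command replaces the sample on a real node); the agent names each tunnel port after the remote IP in hex:

```shell
# Sample fragment in the shape of `sudo ovs-vsctl show` output on the
# controller (local_ip 10.0.0.11, peers 10.0.0.31 and 10.0.0.32):
sample='Bridge br-tun
    Port "vxlan-0a00001f"
        Interface "vxlan-0a00001f"
            type: vxlan
            options: {local_ip="10.0.0.11", remote_ip="10.0.0.31"}
    Port "vxlan-0a000020"
        Interface "vxlan-0a000020"
            type: vxlan
            options: {local_ip="10.0.0.11", remote_ip="10.0.0.32"}'

# One tunnel interface per remote VTEP -> expect 2 on the controller:
tunnels=$(printf '%s\n' "$sample" | grep -c 'type: vxlan')
echo "vxlan tunnels: $tunnels"   # prints: vxlan tunnels: 2

# The port name encodes the remote IP in hex: 0a00001f -> 10.0.0.31
printf '%d.%d.%d.%d\n' 0x0a 0x00 0x00 0x1f   # prints: 10.0.0.31
```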