Installing
The SHIELD platform consists of a series of components. Its source code is available in several repositories within the shield-h2020 GitHub organisation. Each component provides its own README file, may need some third-party tools to be installed, and may require some prior configuration for the software to work.
The following components are described:
- vNSF Ecosystem
- Trusted Infrastructure
- Big Data Analytics
- Infrastructure
vNSF Ecosystem
vNSF Store
- Download the source code:
git clone http://github.com/shield-h2020/store.git
- Move ("cd") to the downloaded folder and read the README file
- Install the dependencies.
First, the python-pip requirements:
pip install -r requirements-store.txt
Also install Docker:
sudo apt-get install --no-install-recommends apt-transport-https curl software-properties-common python-pip
curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add -
sudo add-apt-repository "deb https://packages.docker.com/1.13/apt/repo/ubuntu-$(lsb_release -cs) main"
sudo apt-get update
sudo apt-get -y install docker-engine
sudo pip install docker-compose
- Start the component:
cd docker && ./run.sh --environment .env.production --verbose
Then run the following to create the needed persistence volume:
docker exec docker_store-persistence_1 bash -c "/usr/share/dev/store/docker/setup-datastore.sh --environment /usr/share/dev/store/docker/.env.production"
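As a quick verification that the Store containers came up (the name filter is an assumption; adjust it to the names printed by run.sh), the following could be used:
docker ps --filter "name=store"
All listed containers should appear with an "Up" status.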
- When you no longer wish to use the component, you may stop its containers:
cd docker && ./run.sh --shutdown
And also prune the system for unused resources:
docker system prune; docker system prune --volumes
vNSF Orchestrator
- Download the source code:
git clone http://github.com/shield-h2020/nfvo.git
- Move ("cd") to the downloaded folder and read the README file
- Install the dependencies.
The following script can be used:
cd bin && ./deploy.sh
This script generates the credentials for the server running the vNSFO API and creates a copy of the sample configuration files.
Also install Docker:
sudo apt-get install python3 python3-pip -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce=17.09.1~ce-0~ubuntu
sudo usermod -aG docker $(whoami)
sudo pip3 install docker-compose==1.17.1
- Adapt each configuration file with your values of choice. The configuration files are available under the "conf" folder:
- api.conf
- Determines the information of the server that runs the vNSFO API.
[general]
host = 0.0.0.0    # to serve it publicly or any other specific IP
port = 8448       # port where the vNSFO API runs
debug = True      # enable or disable debug messages in the vNSFO API logs (these can be checked via "docker logs -f docker_nfvo_1")
[security]
https_enabled = True          # enable or disable the vNSFO API being served over HTTPS
verify_client_cert = False    # enable or disable the enforcement to trust the certificate of the client interacting with the vNSFO API
- attacks.conf
- Provides the mapping between the attacks identified by the DARE and the NSs instantiated by the vNSFO.
[general]
default = l3filter_nsd    # name of the NS package that will be instantiated to remediate an attack by default
# name of the NS package that will be instantiated to remediate an attack identified by the DARE with the name on the left
Worm = l3filter_nsd
Wannacry = l3filter_nsd
DoS = l3filter_nsd
TCP flood = l3filter_nsd
tcp_flood = l3filter_nsd
UDP flood = l3filter_nsd
udp_flood = l3filter_nsd
Slowloris = l3filter_nsd
dns_results = l3filter_nsd
DNS tunneling = l3filter_nsd
Cryptocurrency Mining = l3filter_nsd
- db.conf
- Configuration of the Mongo database.
[general]
host = nfvo-db    # Mongo DB exposed address
port = 27017      # Mongo DB exposed port
[db]
name = shield-nfvo    # name of the db to insert collections
user = user           # user id for db
password = user       # password for user in db
auth_source = admin   # authSource parameter used by db
admin_username = admin        # admin user id for db
admin_password = adminpass    # password for admin user in db
- isolation.conf
- Configuration of the values required for the isolation and termination processes. Note that two KVM-based VIMs and one Docker-based VIM can be configured.
[scripts]
path = src/templates/isolation    # path to the templates that will be used to perform isolation and termination procedures
shutdown = shutdown.sh            # default file to execute a node shutdown
delflow = delflow.sh              # default file to execute a flow removal
ifdown = ifdown.sh                # default file to execute an interface deactivation
[keys]
default_username = ubuntu         # default user name that allows access to the virtual nodes deployed
default_key = keys/default.pem    # relative path (from the repo source) to the key that should be inserted in the nodes deployed (i.e., via the VIM)
[commands]
default_shutdown = sudo poweroff  # default shutdown command to terminate nodes
[kvm_vim_1]
vim_account_id = d4ec6514-4760-47f5-914e-df951ac20dec    # UUID in OSM of a specific KVM-based VIM
identity_endpoint = https://openstack.shield.yourorganisation:5000/v3    # OpenStack endpoint for the identity service
username = shield        # username for the OpenStack identity service
password = shield        # password for the OpenStack identity service
project_name = shield    # project/tenant name to use in OpenStack
domain_name = default    # domain name to access OpenStack
[kvm_vim_2]
vim_account_id = 0cc01f33-f66d-47bd-8124-ca7ff2dbfc85    # UUID in OSM of a specific KVM-based VIM
identity_endpoint = http://10.102.10.48:5000/v3    # OpenStack endpoint for the identity service
username = shield        # username for the OpenStack identity service
password = shield        # password for the OpenStack identity service
project_name = shield    # project/tenant name to use in OpenStack
domain_name = default    # domain name to access OpenStack
[docker_vim]
vim_account_id = 260c6bfc-5b52-4341-96a0-72cbd254c662    # UUID in OSM of a specific Docker-based VIM
identity_endpoint = http://10.102.10.49:6001/v3.0    # OpenStack-like endpoint for the identity service
username = admin         # username for the OpenStack-like identity service
password = admin         # password for the OpenStack-like identity service
project_name = admin     # project/tenant name to use in OpenStack
domain_name = default    # domain name to access OpenStack
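As a hint, the key referenced by "default_key" is a private key whose public counterpart is injected into the deployed nodes; a minimal, assumed way to generate such a pair (RSA, no passphrase, file names of choice) could be:
ssh-keygen -t rsa -b 2048 -f keys/default.pem -N ""
This produces keys/default.pem (the private key read from the path above) and keys/default.pem.pub (the public key to be injected via the VIM).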
- nfvo.conf
- Configuration of the NFVO endpoints (for both OSMr2 and OSMr4/OSMr5), as well as credentials and other information.
# OSMr2 configuration
[general]
host = 10.102.10.50    # OSMr2 endpoint for the SO service
port = 8000            # OSMr2 port for the SO service
default_kvm_datacenter = d4ec6514-4760-47f5-914e-df951ac20dec       # UUID in OSM of a specific KVM-based VIM
default_docker_datacenter = 260c6bfc-5b52-4341-96a0-72cbd254c662    # UUID in OSM of a specific Docker-based VIM
default_kvm_datacenter_net = provider      # name of the management network for the KVM-based VIM (connecting the NFVO to the VNFs)
default_docker_datacenter_net = default    # name of the management network for the Docker-based VIM (connecting the NFVO to the VNFs)
[package]
host = 10.102.10.50    # OSMr2 endpoint for the package operations
port = 443             # OSMr2 port for the package operations
[ro]
host = 10.102.10.50    # OSMr2 endpoint for the RO service
port = 9090            # OSMr2 port for the RO service
# OSMr4/OSMr5 configuration
[nbi]
protocol = https       # protocol under which OSM is served
host = 10.102.10.51    # OSMr5 exposed endpoint
port = 9999            # OSMr5 exposed port
username = admin       # username to access the OSMr5 endpoint
password = admin       # password to access the OSMr5 endpoint
default_kvm_datacenter = d4ec6514-4760-47f5-914e-df951ac20dec       # UUID in OSM of a specific KVM-based VIM
default_docker_datacenter = 260c6bfc-5b52-4341-96a0-72cbd254c662    # UUID in OSM of a specific Docker-based VIM
default_kvm_datacenter_net = provider      # name of the management network for the KVM-based VIM (connecting the NFVO to the VNFs)
default_docker_datacenter_net = default    # name of the management network for the Docker-based VIM (connecting the NFVO to the VNFs)
default_flavor = m1.small    # default flavour to be used in OpenStack
- nfvo.mspl.conf
- Configuration of the MSPL-related operations to be received by the vNSFO API.
[monitoring]
timeout = 3600    # time (in milliseconds) to wait for the outcome of the monitoring process that checks the operational and configuration status of the instantiated VNFs
interval = 5      # time (in seconds) between consecutive invocations of the monitoring process
target_status = running    # target status that a VNF should reach for the instantiation to be considered successful
- sdn.conf
- Data related to the SDN controller and the network device the vNSFO API interacts with. Note that 1 controller (ODL Carbon) and 1 device are allowed.
[general]
push_delay = 10000    # delay between pushing any rule and attesting (in milliseconds)
[controller]
protocol = http        # protocol under which ODL is served
host = 10.102.10.52    # ODL exposed endpoint
port = 8181            # ODL exposed port
username = admin       # username to access ODL
password = admin       # password to access ODL
[infrastructure]
default_device = openflow:112591078470795328    # dpid of the switch
default_table = 0      # default table
- tm.conf
- Configuration to reach the Trust Monitor instance.
[general]
host = 10.102.10.53    # Trust Monitor exposed address
port = 443             # Trust Monitor exposed port
protocol = https       # protocol under which the Trust Monitor is served
default_analysis_type = load-time+cont-check,l_req=l4_ima_all_ok|==,cont-list=    # identifier of the analysis to carry out
default_pcr0 = example            # default PCR0 value
default_distribution = CentOS7    # default Operating System distribution for the attestation
default_driver = linux            # default driver to perform the attestation
default_node = nfvi-node          # name in OSM of the default Docker-based VIM to attest
- tm.sdn.reference.json
- Trusted set of flows that acts as a reference for the SDN attestation of the flows in the switch. It corresponds to the compressed JSON output of the operational endpoint (e.g., "http://10.102.10.52:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:112591078470795328/flow-node-inventory:table/0") in ODL; a way to generate it is sketched below.
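A minimal way to produce this file (assuming the ODL credentials from sdn.conf above and that the output lives next to the other files in the "conf" folder) could be:
curl -s -u admin:admin "http://10.102.10.52:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:112591078470795328/flow-node-inventory:table/0" > conf/tm.sdn.reference.json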
- Start the component:
./setup.sh
Initially this will create both the volumes for the vNSFO API and the DB. In subsequent runs, the DB volume will be retained and re-used from disk. A quick sanity check is sketched below.
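At this point the following checks can be run (the container name is taken from the api.conf comment above; the root path of the API is an assumption):
docker logs docker_nfvo_1          # inspect the vNSFO API logs
curl -k https://127.0.0.1:8448/    # probe the port configured in api.conf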
- When you no longer wish to use the component, you may stop its containers:
./teardown.sh
And also prune the system for unused resources:
docker system prune; docker system prune --volumes
Trusted Infrastructure
Trust Monitor
- Download the source code:
git clone http://github.com/shield-h2020/trust-monitor.git
- Move ("cd") to the downloaded folder and read the README file
- Install the dependencies.
Install Docker:
sudo apt-get remove docker docker-engine docker.io
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Also install Docker Compose:
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
- Adapt the configuration with your values of choice
- trust_monitor_django/settings.py
- Configuration of the Trust Monitor instance. At least the following must be configured:
CASSANDRA_LOCATION = $WHITELIST_DB_IP    # instance running the white-list database
CASSANDRA_PORT = '9160'                  # default port where Cassandra runs
- Start the component:
docker-compose up --build
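Once the build finishes, a simple way to check that the containers are running (from the folder holding the docker-compose.yml) could be:
docker-compose ps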
- When you no longer wish to use the component, you may stop its containers:
docker-compose rm
And also prune the system for unused resources:
docker system prune; docker system prune --volumes
Big Data Analytics
Security Dashboard
- Download the source code:
git clone http://github.com/shield-h2020/dashboard.git
- Move ("cd") to the downloaded folder and read the README file
- Install the dependencies.
First, the python-pip requirements:
pip install -r requirements-store.txt
Also install Docker:
sudo apt-get install --no-install-recommends apt-transport-https curl software-properties-common python-pip
curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add -
sudo add-apt-repository "deb https://packages.docker.com/1.13/apt/repo/ubuntu-$(lsb_release -cs) main"
sudo apt-get update
sudo apt-get -y install docker-engine
sudo pip install docker-compose
- Start the component:
cd docker && ./run.sh --environment .env.production --verbose
Then run the following to create the needed persistence volume:
docker exec -it docker_dashboard-persistence_1 bash -c "/usr/share/dev/dashboard/docker/setup-datastore.sh --environment .env.production"
- When you no longer wish to use the component, you may stop its containers:
cd docker && ./run.sh --shutdown
And also prune the system for unused resources:
docker system prune; docker system prune --volumes
Data acquisition and storage
- Install the dependencies for the component.
Install the python-pip packages:
pip install pyspark==2.2.0.post0
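A quick, assumed way to confirm that the expected PySpark version is available:
python -c "import pyspark; print(pyspark.__version__)"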
DARE Workers
Streaming Worker
- Download the source code:
git clone http://github.com/shield-h2020/dare-workers.git
- Move ("cd") to the downloaded folder and read the README file
- Install the dependencies for the component.
First, the packages from the Operating System:
sudo apt-get install python-pip python-virtualenv zip
Then, install the python-pip packages:
pip install avro
- Adapt the configuration with your values of choice
- .worker.json
- Configuration of a given worker:
{
  "database": "spotdb",            # name of the Hive DB where data is ingested
  "kerberos": {                    # configuration for the network authentication protocol
    "kinit": "/usr/bin/kinit",
    "principal": "spotuser",
    "keytab": "/opt/security/spotuser.keytab"
  },
  "zkQuorum": "cloudera01:2181"    # Zookeeper connection string
}
- Start the component:
./run.sh -t "pipeline_configuration" --topic "my_topic" -p "num_of_partition"
# Examples
./run.sh -t flow --topic SPOT-INGEST-TEST-TOPIC -p 1
./run.sh -t flow --topic SPOT-INGEST-TEST-TOPIC -p 1 2>/tmp/spark2-submit.log
./run.sh -t dns --topic SPOT-INGEST-DNS-TEST-TOPIC -p 2 -a AppIngestDNS -b 10 -g DNS-GROUP
Simple Worker
- Download the source code:
git clone http://github.com/shield-h2020/dare-workers.git
- Move ("cd") to the downloaded folder and read the README file
- First, the packages from the Operating System:
sudo apt-get -y install python-virtualenv
Then, install the python-pip packages:
virtualenv --no-site-packages venv
source venv/bin/activate
pip install -r requirements.txt
- Install the worker:
python setup.py install
- Adapt the configuration with your values of choice
- .worker.json
- Configuration of a given worker:
{
  "consumer": {       # configuration parameters for the KafkaConsumer
    "bootstrap_servers": ["kafka_server:kafka_port"],
    "group_id": ""
  },
  "kerberos": {       # configuration for the network authentication protocol
    "kinit": "/usr/bin/kinit",
    "principal": "user",
    "keytab": "/opt/security/user.keytab"
  }
}
- Start the component:
s-worker --topic "my_topic" -p "num_of_partition" -d "path_to_HDFS"
# Examples
s-worker --topic SPOT-INGEST-TEST-TOPIC -p 0 -d /user/spotuser/flow/stage
s-worker --topic SPOT-INGEST-DNS-TEST-TOPIC -p 0 -d /user/spotuser/dns/stage --parallel-processes 8 -i 30
s-worker --topic SPOT-INGEST-TEST-TOPIC -p 2 -d /user/spotuser/flow/stage -l DEBUG
- When you no longer wish to use the component, you may deactivate and remove its virtual environment:
deactivate
rm -r venv/
Infrastructure
Big Data cluster
- Install the dependencies for the Spot framework:
- Cloudera Express 5.12.0
- Java HotSpot(TM) 64-Bit Server VM, version 1.8.0_151
- The following Cloudera Express services: HDFS, HIVE, IMPALA, SPARK (yarn) (spark 1.6 and spark 2.3), YARN, ZOOKEEPER
- Install the following Spot components:
- spot-setup: scripts that create the required HDFS paths, HIVE tables and configuration for Apache Spot (incubating). Must be located in the Ingest VM
- spot-ingest: binary and log files are captured or transferred into the Hadoop cluster, where they are transformed and loaded into solution data stores. Must be located in the Ingest VM
- spot-ml: machine learning algorithms are used for anomaly detection. Must be located in the ClusterOrchestrator
- spot-oa: data output from the machine learning component is augmented with context and heuristics, and then made available to the user for interaction. Must be located in the ClusterOrchestrator
- The nodes available to the Big Data cluster must each have a defined role in Cloudera:
- Orchestrator: configured as the Master Node, includes all the management services and required gateways. Moreover, the ML and OA components are located in this host
- Worker 1: configured as a Worker Node
- Worker 2: configured as a Worker Node
- Ingest (big-shield): configured as an Ingest Node. Includes the Impala and HIVE components
- Configure the services in Cloudera as follows:
- HDFS
- Role Type | State | Host | Commission State | Role Group
Balancer | N/A | ClusterOrchestrator | Commissioned | Balancer Default Group
DataNode | Started | worker2 | Commissioned | DataNode Default Group
DataNode | Started | worker1 | Commissioned | DataNode Default Group
NameNode (Active) | Started | ClusterOrchestrator | Commissioned | NameNode Default Group
SecondaryNameNode | Started | ClusterOrchestrator | Commissioned | SecondaryNameNode Default Group
- HIVE
- Role Type | State | Host | Commission State | Role Group
Gateway | N/A | worker2 | Commissioned | Gateway Default Group
Gateway | N/A | worker1 | Commissioned | Gateway Default Group
Gateway | N/A | ClusterOrchestrator | Commissioned | Gateway Default Group
Gateway | N/A | big-shield | Commissioned | Gateway Default Group
Hive Metastore Server | Started | ClusterOrchestrator | Commissioned | Hive Metastore Server Default Group
HiveServer2 | Started | ClusterOrchestrator | Commissioned | HiveServer2 Default Group
- Impala
- Role Type | State | Host | Commission State | Role Group
Impala Catalog Server | Started | ClusterOrchestrator | Commissioned | Impala Catalog Server Default Group
Impala Daemon | Started | worker2 | Commissioned | Impala Daemon Default Group
Impala Daemon | Started | worker1 | Commissioned | Impala Daemon Default Group
Impala Daemon | Started | ClusterOrchestrator | Commissioned | Impala Daemon Default Group
Impala Daemon | Stopped | big-shield | Commissioned | Impala Daemon Default Group
Impala StateStore | Started | ClusterOrchestrator | Commissioned | Impala StateStore Default Group
- Kafka
- Role Type | State | Host | Commission State | Role Group
Kafka Broker (Active Controller) | Started | ClusterOrchestrator | Commissioned | Kafka Broker Default Group
Kafka Broker | Started | big-shield | Commissioned | Kafka Broker Default Group
- Spark
- Role Type | State | Host | Commission State | Role Group
Gateway | N/A | worker2 | Commissioned | Gateway Default Group
Gateway | N/A | worker1 | Commissioned | Gateway Default Group
Gateway | N/A | ClusterOrchestrator | Commissioned | Gateway Default Group
Gateway | N/A | big-shield | Commissioned | Gateway Default Group
History Server | Started | ClusterOrchestrator | Commissioned | History Server Default Group
- Spark 2
- Role Type | State | Host | Commission State | Role Group
Gateway | N/A | worker2 | Commissioned | Gateway Default Group
Gateway | N/A | worker1 | Commissioned | Gateway Default Group
Gateway | N/A | ClusterOrchestrator | Commissioned | Gateway Default Group
Gateway | N/A | big-shield | Commissioned | Gateway Default Group
History Server | Started | ClusterOrchestrator | Commissioned | History Server Default Group
- YARN (MR2 included)
- Role Type | State | Host | Commission State | Role Group
Gateway | N/A | big-shield | Commissioned | Gateway Default Group
JobHistory Server | Started | ClusterOrchestrator | Commissioned | JobHistory Server Default Group
NodeManager | Started | worker2 | Commissioned | NodeManager Default Group
NodeManager | Started | worker1 | Commissioned | NodeManager Default Group
NodeManager | Started | ClusterOrchestrator | Commissioned | NodeManager Default Group
ResourceManager (Active) | Started | ClusterOrchestrator | Commissioned | ResourceManager Default Group
- ZooKeeper
- Role Type | State | Host | Commission State | Role Group
Server | Started | worker1 | Commissioned | Server Default Group
Server | Started | ClusterOrchestrator | Commissioned | Server Default Group
Server | Started | big-shield | Commissioned | Server Default Group
VIM for the VNFs
In SHIELD, the VIMs in use are OpenStack (All-in-One, running on top of CentOS 7 - centos-release-7-4.1708.el7.centos.x86_64) and vim-emu (a VIM that supports Docker and offers virtualised APIs compatible with OpenStack).
OpenStack Ocata
The OpenStack instance used in the project is the "Ocata" release, running on top of CentOS 7.5.1804. It uses OpenStack All-in-One (AiO) to automate the installation and configuration procedure by means of Ansible scripts.
- Install the CentOS operating system and configure it.
The partitions for root ("/"), swap, volumes ("/cinder") and other OpenStack data ("/openstack") must be defined first, then their mount points must be set. For instance:
System
/dev/md126: "/" RAID1: /dev/sda1 & /dev/sdb1 (130 GiB / 133120 MiB)
/dev/md127: "swap" RAID1: /dev/sda2 & /dev/sdb2 (9832 MiB)
Data
/dev/md125: "/cinder" RAID1: /dev/sdc1 & /dev/sdd1 (200 GiB / 204800 MiB)
/dev/md124: "/openstack" RAID1: /dev/sdc2 & /dev/sdd2 (695 GiB / 711680 MiB)
# A sensible outcome of the "df -h" command is as follows:
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/md126   130G  14G   117G   11%   /
devtmpfs     63G   0     63G    0%    /dev
tmpfs        63G   0     63G    0%    /dev/shm
tmpfs        63G   2.4G  61G    4%    /run
tmpfs        63G   0     63G    0%    /sys/fs/cgroup
/dev/md124   755M  34M   721M   5%    /openstack
/dev/md125   200G  33M   200G   1%    /cinder
tmpfs        13G   0     13G    0%    /run/user/0
tmpfs        13G   0     13G    0%    /run/user/1000
Install the dependencies:
yum -y install vim net-tools policycoreutils-python ansible atop htop iotop git ntp ntpdate bridge-utils lsof lvm2 openssh-server sudo tcpdump vlan bridge-utils iputils
yum groupinstall "Development Tools"
yum upgrade
reboot
yum install -y git ntp ntpdate openssh-server python-devel sudo '@Development Tools'
Configure the network as you wish to connect to other devices.
Note: "eno1" should provide management and Internet access (i.e., have a reachable IP); whilst "eno2" should be connected to the external OpenStack network ("VLAN" and "FLAT" types)
sudo ip address add 10.10.150.52/23 dev eno1    # gateway: 10.10.150.1
echo "nameserver 8.8.8.8" > /etc/resolv.conf
vim /etc/sysconfig/network-scripts/ifcfg-eno1
# Set eno1 with:
BOOTPROTO=static
PREFIX=23
# Restart network
systemctl restart network
service network restart
Configure the Network Time Protocol (NTP) in /etc/ntp.conf to synchronise with a suitable time source and start the service:
systemctl enable ntpd.service
systemctl start ntpd.service
Secure your NFVI PoP by changing the SSH port (use "semanage" and "firewall-cmd", then restart the "sshd" service), inserting SSH keys, disallowing password-based access, disallowing root access, etc. A possible hardening sketch is shown below.
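A minimal sketch of such hardening (the port number 2222 and the sed-based edits are assumptions; adapt them to your policy):
semanage port -a -t ssh_port_t -p tcp 2222
firewall-cmd --permanent --add-port=2222/tcp && firewall-cmd --reload
# Move SSH to the new port and disable root and password logins
sed -i -e 's/^#\?Port .*/Port 2222/' -e 's/^#\?PermitRootLogin .*/PermitRootLogin no/' -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd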
Manually create the bridges:
brctl addbr br-mgmt
brctl addbr br-storage
# Created by ASIS (should these be added?)
brctl addbr br-vlan
brctl addbr br-vxlan
- Download the OpenStack-Ansible repository for Ocata:
git clone -b stable/ocata https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
- Install the OpenStack-Ansible playbooks:
cd /opt/openstack-ansible/
scripts/bootstrap-ansible.sh
- Configure the OpenStack-Ansible templates according to your needs. Features like VLAN shall be enabled, bridges must be created in the Operating System (manual creation produces the best results), the Swift and Nova loopback disks shall be defined, as well as other data such as the bridge that links the virtual interfaces of the VMs deployed by OpenStack
# Add appropriate kernel modules to the /etc/modules file to enable VLAN and bond ifaces
echo "# Load the bonding kernel module at boot" > /etc/modules-load.d/bonding.conf
echo "bonding" >> /etc/modules-load.d/bonding.conf
echo "# Load the 8021.q/VLAN kernel module at boot" > /etc/modules-load.d/vlan.conf
echo "8021q" >> /etc/modules-load.d/vlan.conf
# Copy and edit generic configuration
cp -Rp /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy/
cd /etc/openstack_deploy
cp -p openstack_user_config.yml.example /etc/openstack_deploy/openstack_user_config.yml
# Define values for network ranges in the following file
vim openstack_user_config.yml
# Update the following configuration file with values
vim /opt/openstack-ansible/tests/roles/bootstrap-host/defaults/main.yml
# Size of the Swift loopback disk in gigabytes (GB).
#bootstrap_host_loopback_swift_size: 1024    ## OLD VALUE
bootstrap_host_loopback_swift_size: 250      ## NEW VALUE
# Size of the Nova loopback disk in gigabytes (GB).
#bootstrap_host_loopback_nova_size: 1024     ## OLD VALUE
bootstrap_host_loopback_nova_size: 250       ## NEW VALUE
# See https://wiki.debian.org/BridgeNetworkConnections for more details.
bootstrap_host_bridge_mgmt_ports: none
bootstrap_host_bridge_vxlan_ports: none
bootstrap_host_bridge_storage_ports: none
#bootstrap_host_bridge_vlan_ports: "br-vlan-veth"    ## OLD VALUE
bootstrap_host_bridge_vlan_ports: "eno2"             ## NEW VALUE
- Configure the system
# Define previous interface (eno2) so that it is attached to the VLAN bridge (br-vlan)
vim /opt/openstack-ansible/tests/roles/bootstrap-host/templates/osa_interfaces.cfg.j2
(...)
auto br-vlan
iface br-vlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    address 172.29.248.100
    netmask 255.255.252.0
    offload-sg off
    # Create veth pair, don't bomb if already exists
    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
    # Set both ends UP
    pre-up ip link set br-vlan-veth up
    pre-up ip link set eth12 up
    # Delete veth pair on DOWN
    post-down ip link del br-vlan-veth || true
    # bridge_ports br-vlan-veth    ## OLD VALUE
    ## "eno2" to be used for VLAN-based bridges
    bridge_ports eno2              ## NEW VALUE
(...)
# If the above does not work, create the following file and make it executable
vim /etc/sysconfig/network-scripts/ifup-post-veth-br-vlan-2-eno2
#!/usr/bin/env bash
if [ "${DEVICE}" == "br-vlan" ]; then
    # Attach eno2 to the bridge
    brctl addif br-vlan eno2 || true
fi
chmod +x /etc/sysconfig/network-scripts/ifup-post-veth-br-vlan-2-eno2
Then, set the "eno2" network interface to be brought UP along with "eno1" (see here: http://xmodulo.com/how-to-run-startup-script-automatically-after-network-interface-is-up-on-centos.html)
# Create the file with the following content and make it executable
vim /etc/sysconfig/network-scripts/ifup-post-eno1
#!/usr/bin/env bash
if [ "${DEVICE}" == "eno1" ]; then
    if [ "True" == "True" ]; then
        ip link set eno2 up
    fi
fi
chmod +x /etc/sysconfig/network-scripts/ifup-post-eno1
# Call it from the automatic network script
vim /etc/sysconfig/network-scripts/ifup-post
(...)
. /etc/sysconfig/network-scripts/ifup-post-eno1    ## NEW VALUE
(... Other similar ifup-post-${eth} calls ...)
exit 0
# Override the wheel's version in the "repo" container (otherwise the Python wheels may fail)
vim /etc/openstack_deploy/user_variables.yml
# Use a newer version of libvirt-python to avoid issues in repo
# Example: https://bugs.launchpad.net/networking-midonet/+bug/1730314
repo_build_upper_constraints_overrides:
  - "libvirt-python==4.0.0"
  - "libvirt==4.0.0"
- Finally, run the OpenStack bootstrap script, verify that the environment matches the configuration defined before running the script, and then run the whole setup via its specific playbook
cd /opt/openstack-ansible/
./scripts/bootstrap-aio.sh
All Ansible tasks should be OK. Verify the configuration of the system at this point. For instance, the bridges should look more or less as follows (though "lxcbr0" may not be there):
brctl show
bridge name   bridge id           STP enabled   interfaces
br-mgmt       8000.000000000000   no
br-storage    8000.000000000000   no
br-vlan       8000.96c558969050   no            br-vlan-veth
br-vxlan      8000.000000000000   no
lxcbr0        8000.000000000000   no
- Get the Keystone password to be able to access (Horizon or RC file):
cat /etc/openstack_deploy/user_secrets.yml | grep "keystone_auth_admin_password"
keystone_auth_admin_password: ***
- Run the whole installation:
cd /opt/openstack-ansible/playbooks/
openstack-ansible setup-everything.yml
Wait for it to end and check the "Play recap" to verify that all the playbooks and tasks ran successfully. A quick way to check access afterwards is sketched below.
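For instance, an access check can be run from the utility container (the paths follow standard OpenStack-Ansible deployments and are assumptions for this setup):
lxc-attach -n $(lxc-ls -1 | grep utility)
source /root/openrc
openstack service list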
The system should present the following status as well:
# Partitions
$ df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/md126   130G  26G   105G   20%   /
devtmpfs     63G   0     63G    0%    /dev
tmpfs        63G   0     63G    0%    /dev/shm
tmpfs        63G   2.5G  61G    4%    /run
tmpfs        63G   0     63G    0%    /sys/fs/cgroup
/dev/md124   755M  754M  212K   100%  /openstack
/dev/md125   200G  33M   200G   1%    /cinder
tmpfs        13G   0     13G    0%    /run/user/0
tmpfs        13G   0     13G    0%    /run/user/1000
/dev/loop1   246G  61M   234G   1%    /var/lib/nova/instances
/dev/loop2   250G  33M   250G   1%    /srv/swift1.img
/dev/loop3   250G  33M   250G   1%    /srv/swift2.img
/dev/loop4   250G  33M   250G   1%    /srv/swift3.img

# Bridges and interfaces
$ brctl show
bridge name   bridge id           STP enabled   interfaces
br-mgmt       8000.fe2e686e7080   no            01c95cb6_eno1 0cfb9ab5_eno1 0eb6e266_eno1 11229f9b_eth1 1ed01d5d_eno1 1ede6a57_eno1 1fa77522_eno1 206b8dea_eno1 296dffbc_eno1 2e82cc87_eno1 4a124a3f_eno1 5e4dd83e_eno1 73b26a6f_eno1 82787a2c_eno1 9653749e_eno1 9c40ab85_eno1 9c90639f_eno1 b78dc1c8_eno1 be28270f_eno1 cbfc9b1f_eno1 df4c9e53_eth1 f9ef960b_eno1 faf74974_eno1
br-storage    8000.fe4b479aa98e   no            11229f9b_eth2 2e82cc87_eno2 b78dc1c8_eno2
br-vlan       8000.96c558969050   no            br-vlan-veth f9ef960b_eno11 f9ef960b_eno12 f9ef960b_eth11 f9ef960b_eth12
br-vxlan      8000.fe94bf4e3e2e   no            f9ef960b_eno10
lxcbr0        8000.fe07f389e9a2   no            01c95cb6_eth0 0cfb9ab5_eth0 0eb6e266_eth0 11229f9b_eth0 1ed01d5d_eth0 1ede6a57_eth0 1fa77522_eth0 206b8dea_eth0 296dffbc_eth0 2e82cc87_eth0 4a124a3f_eth0 5e4dd83e_eth0 73b26a6f_eth0 82787a2c_eth0 9653749e_eth0 9c40ab85_eth0 9c90639f_eth0 b78dc1c8_eth0 be28270f_eth0 cbfc9b1f_eth0 df4c9e53_eth0 f9ef960b_eth0 faf74974_eth0

# Running containers and usage
# lxc-ls
aio1_cinder_api_container-b78dc1c8
aio1_cinder_scheduler_container-5e4dd83e
aio1_designate_container-df4c9e53
aio1_galera_container-73b26a6f
aio1_glance_container-2e82cc87
aio1_heat_apis_container-296dffbc
aio1_heat_engine_container-01c95cb6
aio1_horizon_container-0cfb9ab5
aio1_keystone_container-be28270f
aio1_memcached_container-82787a2c
aio1_neutron_agents_container-f9ef960b
aio1_neutron_server_container-1fa77522
aio1_nova_api_metadata_container-206b8dea
aio1_nova_api_os_compute_container-faf74974
aio1_nova_api_placement_container-cbfc9b1f
aio1_nova_conductor_container-9c90639f
aio1_nova_console_container-4a124a3f
aio1_nova_scheduler_container-1ede6a57
aio1_rabbit_mq_container-9653749e
aio1_repo_container-9c40ab85
aio1_rsyslog_container-1ed01d5d
aio1_swift_proxy_container-11229f9b
aio1_utility_container-0eb6e266

# lxc-top
Container            CPU Used  CPU Sys  CPU User  BlkIO Total  Mem Used
aio1_cinder_api_co   2.97      1.43     1.45      4.00 KiB     25.14 MiB
aio1_cinder_schedu   2.72      1.28     1.34      4.00 KiB     25.16 MiB
aio1_designate_con   2.90      1.34     1.45      4.00 KiB     25.17 MiB
aio1_galera_contai   3.03      1.51     1.44      4.00 KiB     25.07 MiB
aio1_glance_contai   3.38      1.71     1.57      4.00 KiB     25.19 MiB
aio1_heat_apis_con   3.09      1.56     1.45      4.00 KiB     25.07 MiB
aio1_heat_engine_c   2.82      1.34     1.40      4.00 KiB     25.18 MiB
aio1_horizon_conta   3.09      1.52     1.49      4.00 KiB     24.96 MiB
aio1_keystone_cont   2.95      1.44     1.41      4.00 KiB     25.03 MiB
aio1_memcached_con   2.85      1.37     1.39      4.00 KiB     25.18 MiB
aio1_neutron_agent   3.81      1.82     1.89      4.00 KiB     27.04 MiB
aio1_neutron_serve   2.93      1.44     1.40      4.00 KiB     25.08 MiB
aio1_nova_api_meta   2.95      1.46     1.41      4.00 KiB     25.24 MiB
aio1_nova_api_os_c   2.95      1.49     1.37      4.00 KiB     25.33 MiB
aio1_nova_api_plac   3.03      1.51     1.44      0.00         25.06 MiB
aio1_nova_conducto   2.83      1.35     1.41      0.00         27.11 MiB
aio1_nova_console_   2.85      1.39     1.39      4.00 KiB     25.00 MiB
aio1_nova_schedule   2.86      1.37     1.40      4.00 KiB     25.08 MiB
aio1_rabbit_mq_con   3.24      1.68     1.46      4.00 KiB     25.06 MiB
aio1_repo_containe   273.26    51.84    219.80    148.84 MiB   832.54 MiB
aio1_rsyslog_conta   2.92      1.35     1.50      4.00 KiB     25.20 MiB
aio1_swift_proxy_c   3.08      1.46     1.52      4.00 KiB     25.21 MiB
aio1_utility_conta   3.13      1.55     1.50      4.00 KiB     25.07 MiB
TOTAL 23 of 23       339.65    84.21    251.88    148.92 MiB   1.35 GiB
vim-emu
The VIM-emulator instance for testing the container-based Network Services runs on a single host machine equipped with CentOS Linux 7.5.1804 and a Trusted Platform Module (TPM) version 1.2 (the latter being needed for proper attestation by the Trust Monitor).
The host runs kernel 4.4.19 with a modified version of the Integrity Measurement Architecture (IMA) module to support container attestation. It also runs an instance of the OpenAttestation (OAT) HostAgent software for measuring the platform trust along with the containers' individual trust.
- Install the dependencies: Docker Container engine 19.03.0-dev
- Install the VIM-emulator, which is available from the OSM wiki and must be adapted to support the CentOS distribution
NFVO
In SHIELD, the NFVO of choice is OSM. The supported releases are TWO, FOUR and FIVE.
OSM release FIVE
OSM releases FOUR and FIVE introduced several changes w.r.t. release TWO, both on the end-user side and in the northbound APIs.
- Install and configure OSM in your environment by following these installation steps
- Setup the network configuration.
OSM must be running on a server that is connected to the NFVI PoP (that is, where the VNFs will be instantiated and running). The network interconnecting the OSM VM and the NFVI PoP is the network of "management" type and it is defined in the NFVI PoP itself. For instance, you may use a network called "provider" with range 10.102.10.0/24
- Configure OSM by defining the VIM(s) with the following values: "name, type=OpenStack, vim url (endpoint for the OpenStack Identity service), vim username, vim password, tenant/project name, vim network name=provider"; a possible CLI invocation is sketched below
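A possible OSM client invocation for this step (the name, URL and credentials are placeholders taken from the examples above; replace them with your own) could be:
osm vim-create --name openstack-site --account_type openstack --auth_url https://openstack.shield.yourorganisation:5000/v3 --user shield --password shield --tenant shield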
- Onboard the different vNSF and NS packages. Note that in OSM the vNSF package must be onboarded first (as the NS package refers to it in its NSD); example onboarding commands are sketched below
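For instance, with the OSM client (the package file names are placeholders; use the ones produced for your vNSF/NS):
osm vnfd-create l3filter_vnfd.tar.gz
osm nsd-create l3filter_nsd.tar.gz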
SDN controller
In SHIELD, the SDN controller used in conjunction with the vNSF Orchestrator (specifically with the vNSFO API) and the Trust Monitor is OpenDaylight, version "Carbon 0.6.3". The controller enables the control of the SDN switch and allows its network flow rules to be managed dynamically.
- Install Java. It should match the following version:
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b03-0ubuntu1.16.04.1-b03)
OpenJDK 64-Bit Server VM (build 25.212-b03, mixed mode)
- Download the OpenDaylight Carbon 0.6.3 compressed file
- Uncompress the file:
tar zxvf distribution-karaf-0.6.3-Carbon.tar.gz
- Move to the binaries folder and run the Karaf subsystem binary:
cd bin
./start
- Access the Karaf console. Since OpenDaylight does not support the control of layer 2 devices out of the box, multiple additional features are required to enable this functionality. These features are installed in the controller as follows:
ssh karaf@127.0.0.1 -p 8101    # use "karaf" as password
feature:install odl-mdsal-models odl-aaa-shiro odl-akka-scala-2.11 odl-akka-system-2.4 odl-akka-clustering-2.4 odl-akka-leveldb-0.7 odl-akka-persistence-2.4 odl-netty odl-netty-4 odl-guava-18 odl-lmax-3 odl-triemap-0.2 odl-restconf-all odl-restconf odl-restconf-noauth odl-mdsal-apidocs odl-yangtools-yang-data odl-yangtools-common odl-yangtools-yang-parser odl-aaa-api odl-aaa-authn odl-aaa-encryption-service odl-aaa-cert odl-openflowplugin-southbound-he odl-openflowplugin-nsf-model-he odl-openflowplugin-app-lldp-speaker-he odl-mdsal-dom odl-mdsal-common odl-mdsal-dom-api odl-mdsal-dom-broker odl-mdsal-binding-base odl-mdsal-binding-runtime odl-mdsal-binding-api odl-mdsal-binding-dom-adapter odl-mdsal-eos-common odl-mdsal-eos-dom odl-mdsal-eos-binding odl-mdsal-singleton-common odl-mdsal-singleton-dom standard config region package http war kar ssh management odl-openflowplugin-flow-services-ui odl-openflowplugin-flow-services-rest odl-openflowplugin-flow-services odl-openflowplugin-southbound odl-openflowplugin-nsf-model odl-openflowplugin-app-config-pusher odl-openflowplugin-app-topology odl-openflowplugin-app-forwardingrules-manager odl-l2switch-switch odl-l2switch-switch-rest odl-l2switch-switch-ui odl-l2switch-hosttracker odl-l2switch-addresstracker odl-l2switch-arphandler odl-l2switch-loopremover odl-l2switch-packethandler odl-dluxapps-applications odl-dluxapps-nodes odl-dluxapps-topology odl-dluxapps-yangui odl-dluxapps-yangman odl-dluxapps-yangvisualizer pax-jetty pax-http pax-http-whiteboard pax-war odl-config-persister odl-config-startup odl-dlux-core odl-config-netty odl-config-api odl-config-netty-config-api odl-config-core odl-config-manager odl-openflowjava-protocol odl-mdsal-all odl-mdsal-common odl-mdsal-broker-local odl-toaster odl-mdsal-xsql odl-mdsal-clustering-commons odl-mdsal-distributed-datastore odl-mdsal-remoterpc-connector odl-mdsal-broker
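Once the command completes, the installed features can be double-checked from the same console, e.g.:
feature:list -i | grep openflowplugin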
- Setup the network configuration
- The VM where ODL runs must be in the same network as the SDN-enabled switch. For instance, the ODL VM used in the project runs on a server that is connected to the switch via an intermediate switch that is tagged with an appropriate VLAN. Other possible setups would be to physically connect one interface of such server to the switch
- The setup of the interfaces is as follows:
- The server has a bridge that links i) the interface connected to the switch with ii) the interface assigned to the VLAN in use for the switch (as well as external connectivity)
- The ODL VM has one interface connected to a virtual network (e.g., 10.102.20.0/24)
- Enable TLS encryption of the OpenFlow traffic. To do so, a CA must be created.
The intermediate pair must be skipped if the switch does not support chained certificates. In that case, the leaf certificates are signed directly with the root certificate by using the "policy_loose" preset; a minimal sketch of this procedure follows.
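A minimal sketch of that CA procedure (the directory layout and the "v3_ca"/"server_cert"/"policy_loose" sections are assumed to be defined in openssl.cnf, as in the usual OpenSSL CA setup; file names are illustrative):
# Root CA key and self-signed certificate
openssl genrsa -aes256 -out private/ca.key.pem 4096
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 3650 -sha256 -extensions v3_ca -out certs/ca.cert.pem
# Leaf key and CSR, signed directly by the root CA (the "policy_loose" policy is set in openssl.cnf)
openssl genrsa -out private/leaf.key.pem 2048
openssl req -config openssl.cnf -key private/leaf.key.pem -new -sha256 -out csr/leaf.csr.pem
openssl ca -config openssl.cnf -extensions server_cert -days 730 -notext -md sha256 -in csr/leaf.csr.pem -out certs/leaf.cert.pem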
On the switch, commands similar to the following (shown for the HP Aruba 3800) must be issued:
config term
crypto pki ta-profile
This will generate a Certificate Signing Request (CSR) in the terminal. The corresponding certificate, using the "usr_cert" preset, is created and then installed with the following commands:
copy tftp ta-certificate
crypto pki identity-profile SwitchIdentity subject
crypto pki create-csr certificate-name ta-profile usage openflow
crypto pki install-signed-certificate
For ODL, a certificate must first be generated by using the "server_cert" preset and exported to PKCS12:
sudo openssl pkcs12 -export -in <certificate> -inkey <private key> -out ctl.p12 -name odlserver
The export password must be set to "opendaylight" (otherwise the ODL configuration shall be changed according to the password of choice). Note that the following commands use this default password.
Then, the certificate must be imported into the keystore:
keytool -importkeystore -deststorepass opendaylight -destkeypass opendaylight -destkeystore ctl.jks -srckeystore ctl.p12 -srcstoretype PKCS12 -srcstorepass opendaylight -alias odlserver
Also, the switch certificate must be imported into the truststore:
keytool -importcert -file SwitchCert.pem -keystore truststore.jks -storepass opendaylight
Then the Java Key Store (JKS files: ctl.jks and truststore.jks) files are copied into the ODL folder “./configuration/ssl”
Finally, TLS is enabled in the OpenFlow connection via the following configuration (in file "/etc/opendaylight/datastore/initial/config/default-openflow-connection-config.xml"):
<transport-protocol>TLS</transport-protocol>
<tls>
<keystore>configuration/ssl/ctl.jks</keystore>
<keystore-type>JKS</keystore-type>
<keystore-path-type>PATH</keystore-path-type>
<keystore-password>opendaylight</keystore-password>
<truststore>configuration/ssl/truststore.jks</truststore>
<truststore-type>JKS</truststore-type>
<truststore-path-type>PATH</truststore-path-type>
<truststore-password>opendaylight</truststore-password>
<certificate-password>opendaylight</certificate-password>
</tls>
SDN switch
The SDN switch is connected to the SDN controller and must be configured accordingly (for instance, made reachable by sharing the same VLAN). Also, the SDN switch should feature a TPM that allows metrics to be fetched in order to assess whether its state is trustworthy or not.
- Create a VLAN.
The setup of the switch (here, the HP Aruba 3800) requires its connection to the OpenFlow controller. In order to be managed by the OpenDaylight controller, a dedicated VLAN must also be instantiated: the management VLAN. At least one port must be part of it and configured to reach the ODL controller (it is instantiated similarly to the managed network described below). In order to create a managed network, another VLAN is necessary.
To create a VLAN, instantiate an OpenFlow instance and assign ports to it, issue these commands on the switch console:
config term
vlan
At least one port shall be set as untagged for the VLAN, so that it is assigned to the VLAN:
untagged
exit
exit
- On the managed network, a virtual OpenFlow instance is defined and configured to connect to the ODL controller. OpenFlow 1.0 must be enabled. If the ODL instance is secured with TLS, the secure option must be set when configuring the controller ID for the instance:
config term
openflow
controller-id ip controller-interface vlan
instance "odl_ "
version 1.0
member vlan
controller-id [secure]
flow-location hardware-only
enable
exit
enable
exit
exit
Following these commands, the general status of the OpenFlow instance can be monitored with "show openflow", while the details of the instance can be checked with:
show openflow instance
- To allow communication between the switch and the Trust Monitor, SNMP has to be enabled. To enable SNMPv3, type:
config term
snmpv3 enable
Then follow the on-screen instructions. Once finished, type:
snmpv3 user auth sha priv aes
snmpv3 group managerpriv user sec-model ver3
This sets the user passwords and encryption algorithms, and it also assigns the user to the manager/privacy group.