Livro de Minicursos


Copyright © 2015 Sociedade Brasileira de Computação. All rights reserved.

Cover: Carlos Bão, Ateliê Eventos (Vitória/ES). Editorial production: Rodolfo Villaça, UFES.

Additional copies: Sociedade Brasileira de Computação (SBC), Av. Bento Gonçalves, 9500, Setor 4, Prédio 43.412, Sala 219, Bairro Agronomia, CEP 91.509-900, Porto Alegre, RS. Phone: (51) 3308-6835. E-mail: [email protected]

International Cataloguing-in-Publication Data

Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (33. : 2015 : Vitória, ES)
Minicursos / XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos; organized by Magnos Martinello, Moises Renato Nunes Ribeiro, Antônio Augusto Aragão Rocha. Porto Alegre: Sociedade Brasileira de Computação, 2015.

325 p. il. 21 cm.

Several authors. Includes bibliographies. ISSN: 2177-4978

1. Redes de Computadores. 2. Sistemas Distribuídos. I. Martinello, Magnos. II. Ribeiro, Moises Renato Nunes. III. Rocha, Antônio Augusto Aragão. IV. Título.

Minicursos do XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2015)


Sociedade Brasileira de Computação (SBC)

Presidency
Paulo Roberto Freire Cunha (UFPE), President
Lisandro Zambenedetti Granville (UFRGS), Vice-President

Boards
Renata de Matos Galante (UFRGS), Administrative Director
Carlos André Guimarães Ferraz (UFPE), Finance Director
Altigran Soares da Silva (UFAM), Director of Events and Special Committees
Mirella Moura Moro (UFMG), Education Director
José Viterbo Filho (UFF), Publications Director
Claudia Lage da Motta (UFRJ), Director of Planning and Special Programs
Marcelo Duduchi Feitosa (CEETEPS), Director of Regional Offices
Edson Norberto Cáceres (UFMS), Director of Publicity and Marketing

Extraordinary Boards
Roberto da Silva Bigonha (UFMG), Director of Professional Relations
Ricardo de Oliveira Anido (UNICAMP), Director of Scientific Competitions
Raimundo Macêdo (UFBA), Director of Cooperation with Scientific Societies
Avelino Francisco Zorzo (PUC-RS), Director of Liaison with Companies

Contact
Av. Bento Gonçalves, 9500
Setor 4 - Prédio 43.412 - Sala 219
Bairro Agronomia
91.509-900, Porto Alegre, RS
CNPJ: 29.532.264/0001-78
http://www.sbrc.org.br

Laboratório Nacional de Redes de Computadores (LARC)

Board (2014-2016)
Elias P. Duarte Jr. (UFPR), Executive Vice-Director
Ronaldo Alves Ferreira (UFMS), Vice-Director of the Technical-Scientific Council
Rossana Maria de C. Andrade (UFC), Director of the Technical-Scientific Council
Paulo André da Silva Gonçalves (UFPE), Executive Director

Institutional Members
SESU/MEC, INPE/MCT, UFRGS, UFMG, UFPE, UFCG, UFRJ, USP, PUC-Rio, UNICAMP, LNCC, IME, UFSC, UTFPR, UFC, UFF, UFSCar, CEFET-CE, UFRN, UFES, UFBA, UNIFACS, UECE, UFPR, UFPA, UFAM, UFABC, PUCPR, UFMS, UNB, PUC-RS, UNIRIO, UFS, UFU.


SBRC 2015 Organization

General Chairs
Magnos Martinello (UFES)
Moises Renato Nunes Ribeiro (UFES)

Program Committee Chairs
Christian Esteve Rothenberg (UNICAMP)
Jussara Almeida (UFMG)

Keynotes and Tutorials Chair
Marinho Barcellos (UFRGS)

Panels and Debates Chair
Antônio Abelém (UFPA)

Minicursos Chair
Antônio Augusto de Aragão Rocha (UFF)

Workshops Chair
Sidney Lucena (UNIRIO)

Tools Session Chairs
Cesar Marcondes (UFSCAR)
Alfredo Goldman (USP)

Advisory Committee
Dorgival Guedes (UFMG), Joni Fraga (UFSC), Frank Siqueira (UFSC), Luciano Gaspary (UFRGS), Markus Endler (PUC-Rio), Jacir Luiz Bordim (UnB), Rafael Timóteo de Sousa Júnior (UnB), William Ferreira Giozza (UnB), Carlos André Guimarães Ferraz (UFPE), José Augusto Suruagy Monteiro (UFPE), Paulo André da Silva Gonçalves (UFPE)


Message from the General Chairs

Welcome to all participants of the 33rd Brazilian Symposium on Computer Networks and Distributed Systems (SBRC 2015). Over the years, SBRC has established itself as one of the main scientific events in the field of Informatics in Brazil, always bringing innovative concepts and practices in the areas of Computer Networks and Distributed Systems. We are extremely honored by the trust placed in us to hold this important event for the first time in Vitória, Espírito Santo.

SBRC is known for fostering a rich exchange of ideas and experiences among professors, researchers, practitioners and students working in the symposium's areas of interest. It is always a challenge to keep SBRC up to the quality standards that have characterized its successful past editions.

The SBRC 2015 program encompasses a broad and technically high-quality set of activities. There are 18 technical sessions in which 58 full papers will be presented, selected through a rigorous review process and covering a wide variety of relevant and current topics in Computer Networks and Distributed Systems. The program also includes six keynotes given by internationally renowned researchers and three panels addressing highly topical themes. Six minicursos are offered, aimed at training and updating participants on topics selected through a public call. Additionally, eight workshops are held in parallel with SBRC, focused on specific and emerging themes related to the symposium's area of interest. In this edition, we also honor with the Destaque SBRC award a distinguished figure in the areas of Computer Networks and Distributed Systems for their significant contribution to the advancement of research and to the building of a solid scientific community in Brazil.

The excellent activities scheduled for this edition are the product of the efforts of their respective chairs. Special thanks to Jussara Almeida, Christian Esteve Rothenberg, Marinho Barcellos, Antônio Abelém, Antônio Augusto de Aragão, Sidney Lucena, Cesar Augusto Marcondes and Alfredo Goldman for the time and effort devoted to putting together the various tracks of SBRC 2015.

We highlight the intense and extremely competent work carried out by the members of the Local Organizing Committee, among whom we would especially like to thank Rodolfo Villaça, Renato Moraes, Maxwell Monteiro, Maria José Pontes, Sabrina Felix, Marcia Paiva, Roberta Lima Gomes and Celso Alberto Saibel Santos. We also thank the boards of SBC and LARC, promoters of SBRC, for their trust and for the competent organizational support provided by SBC's administrative team. We are also grateful to the members of the SBRC Advisory Committee and to the coordination of SBC's Special Committee on Computer Networks and Distributed Systems for their advice and for the financial support provided to the organization of SBRC 2015.

We would also like to thank the Symposium's sponsors: the Comitê Gestor da Internet no Brasil, the government funding agencies CNPq, CAPES and FAPES, and the sponsoring companies, for recognizing SBRC as an important event for fostering research and innovation in the areas of Computer Networks and Distributed Systems.

Finally, we thank the Informatics and Electrical Engineering Departments of UFES, and the Graduate Programs in Informatics (PPGI) and Electrical Engineering (PPGEE), for providing the indispensable support for holding SBRC.

We wish all participants an excellent stay in Vitória, and hope they take full advantage of all the knowledge that the SBRC 2015 activities have to offer.

Vitória, May 18, 2015.

Magnos Martinello (UFES)
Moisés Renato Nunes Ribeiro (UFES)
General Chairs


Message from the Minicursos Chair

This book presents the selection of Minicursos of the 33rd edition of the Brazilian Symposium on Computer Networks and Distributed Systems (SBRC), held in Vitória-ES from May 18 to 22, 2015. The event's minicurso sessions are an opportunity for academics and practitioners to delve into relevant and current topics in the field that are normally not covered in university curricula. SBRC traditionally includes a series of these minicursos in its technical program. In 2015, 20 proposals were submitted, of which 6 were selected for publication and presentation, an acceptance rate of 30%. These numbers demonstrate the rigor of the process and undoubtedly also reflect the quality of the selected proposals. The minicurso evaluation committee was composed of 17 renowned researchers, who did an excellent job in producing the reviews for the selection of the proposals.

This edition therefore gathers six chapters, produced by the authors of the accepted proposals. In Chapter 1, the authors discuss the problems of applying software-defined networks to cloud computing systems. Chapter 2 addresses distributed power generation, its challenges and perspectives for communication networks. In Chapter 3, the authors discuss platforms for the Internet of Things. Chapter 4 presents the NetFPGA platform for packet processing in hardware. Chapter 5 presents an introduction to software-defined radios with applications in GNU Radio. Finally, Chapter 6 addresses the challenges and opportunities of research on participatory sensing networks.

As Minicursos Chair, I would like to thank everyone involved in producing this book. First, the SBRC 2015 general chairs, Magnos Martinello (UFES) and Moisés R. N. Ribeiro (UFES), for the invitation to chair the minicursos and for all the support needed to hold the event. I also thank all the members of the evaluation committee for accepting my invitation and devoting great effort to producing high-quality reviews for all submitted work. Finally, I thank everyone who submitted proposals; some high-quality proposals could not be accepted due to space limitations at the event. I especially thank the authors whose work was accepted and who dedicated themselves immensely to producing this book and presentations of unmatched quality.

Antonio Augusto de Aragão Rocha
Professor, Universidade Federal Fluminense
Minicursos Chair, SBRC 2015


SBRC 2015 Minicursos Evaluation Committee

Artur Ziviani (LNCC)
Ana Paula Couto da Silva (UFMG)
Igor Moraes (UFF)
Michele Nogueira (UFPR)
Antonio Abelém (UFPA)
Stenio Fernandes (UFPE)
Marinho Pilla Barcellos (UFRGS)
Daniel Figueiredo (UFRJ)
Jose F. de Rezende (UFRJ)
Carlos Alberto Vieira Campos (UNIRIO)
Aldri Santos (UFPR)
Daniel Macêdo Batista (IME-USP)
Gustavo Bittencourt Figueiredo (UFBA)
Jose Augusto Suruagy Monteiro (UFPE)
Miguel Elias Mitre Campista (UFRJ)
Flávia Delicato (UFRJ)
Alex Vieira (UFJF)


Contents

1. Applying Software-defined Networks to Cloud Computing
Tereza Carvalho (Universidade de São Paulo), Marcos Simplicio Jr (Escola Politécnica, USP), Bruno Barros (Universidade de São Paulo), Marco Rojas (Universidade de São Paulo), Fernando Redigolo (Universidade de São Paulo), Dino Magri (Universidade de São Paulo), Gustavo Cavalcanti (UDESC), Ewerton Andrade (Universidade de São Paulo) .......... 1

2. Geração Distribuída de Energia: Desafios e Perspectivas em Redes de Comunicação
Yona Lopes (Universidade Federal Fluminense), Natalia Castro Fernandes (Universidade Federal Fluminense), Debora Muchaluat Saade (UFF) .......... 55

3. Plataformas para a Internet das Coisas
Paulo Pires (UFRJ, Brazil), Flavia Delicato (UFRJ, Brazil), Thais Vasconcelos Batista (UFRN, Brazil), Thomaz Avila (UFRJ, Brazil), Everton Cavalcante (UFRN, Brazil), Marcelo Pitanga (Universidade Federal do Rio de Janeiro, Brazil) .......... 110

4. NetFPGA: Processamento de Pacotes em Hardware
Pablo Goulart (Universidade Federal de Minas Gerais, Brazil), Italo Cunha (Universidade Federal de Minas Gerais, Brazil), Marcos Vieira (UFMG, Brazil), Cesar Marcondes (Universidade Federal de São Carlos, Brazil), Ricardo Menotti (Universidade Federal de São Carlos, Brazil), Dorgival Guedes (UFMG, Brazil) .......... 170

5. Introdução a Rádios Definidos por Software com aplicações em GNU Radio
Wendley Silva (Universidade Federal do Ceará, Brazil), Jefferson Rayneres S. Cordeiro (UFMG, Brazil), Jose-Marcos Nogueira (Universidade Federal de Minas Gerais, Brazil), Daniel Fernandes Macedo (Universidade Federal de Minas Gerais, Brazil), Marcos Vieira (UFMG, Brazil), Luiz Filipe Vieira (UFMG, Brazil) .......... 216

6. Redes de Sensoriamento Participativo: Desafios e Oportunidades
Thiago Silva (UFMG, Brazil), Pedro Olmo Vaz de Melo (Universidade Federal de Minas Gerais, Brazil), Jussara Almeida (DCC-UFMG, Brazil), João Borges (Universidade Federal do Rio Grande do Norte, Brazil), Clayson Celes (UFMG, Brazil), Anna Izabel Tostes (Universidade Federal de Minas Gerais, Brazil), Felipe Domingos da Cunha (UFMG, Brazil), Antonio Alfredo Ferreira Loureiro (UFMG, Brazil) .......... 266


Chapter 1

Applying Software-defined Networks to Cloud Computing

Bruno Medeiros de Barros (USP), Marcos Antonio Simplicio Jr. (USP), Tereza Cristina Melo de Brito Carvalho (USP), Marco Antonio Torrez Rojas (USP), Fernando Frota Redigolo (USP), Ewerton Rodrigues Andrade (USP), Dino Raffael Cristofoleti Magri (USP)

    Abstract

Network virtualization and network management for cloud computing systems have become quite active research areas in the last years. More recently, the advent of Software-Defined Networks (SDNs) introduced new concepts for tackling these issues, fomenting new research initiatives oriented to the development and application of SDNs in the cloud. The goal of this course is to analyze these opportunities, showing how SDN technology can be employed to develop, organize and virtualize cloud networking. Besides discussing the theoretical aspects related to this integration, as well as the ensuing benefits, we present a practical case study based on the integration between the OpenDaylight SDN controller and the OpenStack cloud operating system.

1.1. Introduction

The present section introduces the main topics of the course, providing an evolutionary view of network virtualization in cloud computing and distributed systems. We present the main changes that have occurred in the field in recent years, focusing on the advent of Software-Defined Networks (SDN) and its implications for the current research scenario.

1.1.1. The Role of Networking in Cloud Computing

Cloud computing has ushered the information technology (IT) field and service providers into a new era, redefining how computational resources and services are delivered and consumed. With cloud computing, distinct and distributed physical resources such as computing power and storage space can be acquired and used on an on-demand basis, empowering applications with scalability and elasticity at low cost. This allows the creation of different service models, generally classified as [Mell and Grance 2011]:


Infrastructure-as-a-Service (IaaS), which consists in providing only fundamental computing resources such as processing, storage and networks; Platform-as-a-Service (PaaS), in which a development platform with the required tools (languages, libraries, etc.) is provided to tenants; and Software-as-a-Service (SaaS), in which the consumer simply uses the applications running on the cloud infrastructure.

To actually provide cost reductions, the cloud needs to take advantage of economies of scale, and one key technology for doing so is resource virtualization. After all, virtualization allows the creation of a logical abstraction layer above the pool of physical resources, thereby enabling a programmatic approach to allocate resources wherever needed while hiding the complexities involved in their management. The result is potentially very efficient resource utilization, better manageability, on-demand and programmatic resource instantiation, and resource isolation for better control, accounting and availability.

In any cloud environment, the network is a critical resource that connects various distributed and virtualized components, such as servers, storage elements, appliances and applications. For example, it is the network that allows the aggregation of physical servers, efficient virtual machine (VM) migration, and remote connection to storage systems, effectively creating the perception of a large, monolithic resource pool. Furthermore, it is also the network that enables the delivery of cloud-based applications to end users. Yet, while every component in a cloud is getting virtualized, the physical network connecting these components is not. Without virtualization, the network is one physical common network, shared by all cloud end users and cloud components, and it is likely to become a single complex system as the cloud evolves to provide new services with diverse requirements while trying to sustain its scale.

    1.1.2. The Advent of Software-Defined Networks (SDNs)

The term SDN originally appeared in [Greene 2009], referring to the ability of OpenFlow [McKeown et al. 2008] to support the configuration of flow tables in routers and switches using software. However, the ideas behind SDNs come from the goal of having a programmable network, a research direction that started shortly after the emergence of the Internet, led mainly by the telecom industry. Today, the networking industry has shown enormous interest in the SDN paradigm, given the expectation of reducing both capital and operational costs of service providers and enterprise data centers with programmable, virtualizable and easily partitionable networks. Actually, programmability is also becoming a strategic feature for network hardware vendors, since it allows many devices to be programmed and orchestrated in large network deployments (e.g., data centers). In addition, discussions related to the future Internet have led to the standardization of SDN-related application programming interfaces (APIs), with new communication protocols being successfully deployed in experimental and real scenarios [Kim et al. 2013, Pan et al. 2011].
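To make the match-action abstraction popularized by OpenFlow concrete, the toy Python sketch below models a flow table as an ordered list of (match, action) rules; the field names and action strings are illustrative assumptions, not the actual OpenFlow wire protocol.

```python
# Toy model of an OpenFlow-style flow table: an ordered list of
# (match, action) rules. Field names (in_port, eth_dst, eth_type) and
# action strings are simplified for illustration only.

def lookup(flow_table, packet):
    """Return the action of the first rule whose match fields are all
    satisfied by the packet; unmatched packets go to the controller."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send_to_controller"  # table-miss behavior

flow_table = [
    ({"eth_dst": "aa:bb:cc:dd:ee:01"}, "output:1"),
    ({"in_port": 2, "eth_type": 0x0800}, "output:3"),
    ({}, "drop"),  # empty match: a wildcard rule that matches everything
]

print(lookup(flow_table, {"eth_dst": "aa:bb:cc:dd:ee:01"}))   # output:1
print(lookup(flow_table, {"in_port": 2, "eth_type": 0x0800}))  # output:3
print(lookup(flow_table, {"in_port": 9}))                      # drop
```

An SDN controller "programs" the network precisely by installing, updating and removing rules like these in the switches it manages.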

These features of SDNs make them highly valuable for cloud computing systems, where the network infrastructure is shared by a number of independent entities and, thus, network management becomes a challenge. Indeed, while the first wave of innovation in the cloud focused on server virtualization technologies and on how to abstract computational resources such as processor, memory and storage, SDNs are today promoting a second wave with network virtualization [Lin et al. 2014]. The emergence of large SDN controllers focused on ensuring availability and scalability of virtual networking for cloud computing systems (e.g., OpenDaylight [Medved et al. 2014] and OpenContrail [OpenContrail 2014]) is a clear indication of the synergy between both technologies.

Besides the cloud, SDNs have also been adopted in other computing scenarios, with device vendors following the SDN path and implementing most of the control logic in software over standard processors. This has led to the emergence of software-defined base stations, software-defined optical switches, software-defined radios, and software-defined routers, among others.

1.2. Cloud Computing and Network Virtualization

This section aims to introduce the concepts and technologies related to network virtualization in cloud computing systems. We start by describing the main virtualization technologies used to implement multitenant networks. Then, we present an architectural view of virtual networks in the cloud, discussing the main components involved, their responsibilities and existing interfaces. Finally, we focus on the security, scalability and availability aspects of the presented solutions.

    1.2.1. Cloud Computing and Resource Virtualization

Virtualization is not a new concept in computing, having in fact appeared in the 1970s [Jain and Paul 2013a, Menascé 2005]. The concept of virtualization has evolved over time, however, going from virtual memory to processor virtualization (e.g., Hyper-V, AMD-V, Hyper-Threading) up to the virtualization of network resources (e.g., SDN, Open vSwitch, etc.).

With the advent of cloud computing and the demand for virtualizing entire computing environments, new virtualization techniques were developed, among them [Amazon 2014]:

Full Virtualization (or Hardware VM): all hardware resources are simulated via software. The hardware itself is not directly accessed by VMs, as the hypervisor translates all interrupts and calls between the virtual and physical appliances. Obviously, this technique incurs performance penalties due to I/O limitations, so it is less efficient than its counterparts. However, it offers high flexibility, as systems running on VMs do not need to be altered if there is a change in the underlying physical hardware;

Para-Virtualization: the hardware is not simulated, but divided into different domains so they can be accessed by VMs. Systems running on VMs need to be adapted so that they can directly access the physical machine's hardware resources. Performance here is close to the performance on the physical machine (bare metal), with the drawback of limited flexibility, as hardware upgrades may demand changes on VMs.

Para-virtualized drivers (Para + Full Virtualization): a combination of the previous techniques. As para-virtualized storage and networking devices have much better performance than their fully virtualized counterparts [Amazon 2014], this is the technique applied to these devices, while full virtualization (and the consequent flexibility brought by it) is applied to devices whose performance is not critically affected. This approach requires minimal changes when physical hardware upgrades are needed.

Several studies highlight the benefits of virtualization in a computing environment. Among them, the following can be cited [Menascé 2005, Kotsovinos 2010]:

Resource sharing: when a device has more resources than can be consumed by a single entity, those resources can be shared among different users or processes for better usage efficiency. For example, the different user applications or VMs running on a server can share its multiple processors, storage disks or network links. If properly executed, the savings achieved in consolidating small servers onto VMs, for example, can range from 29% to 64% [Menascé 2005];

Resource aggregation: devices with low resource availability can be combined to create a larger-capacity virtual resource. For example, with an adequate file management system, small magnetic disks can be combined to create the impression of a single large virtual disk.

Ease of management: one of the main advantages of virtualization is that it facilitates the maintenance of virtual hardware resources. One reason is that virtualization usually provides standard software interfaces that abstract the underlying hardware (except for para-virtualization). In addition, legacy applications placed in virtualized environments can keep running even after being migrated to a new infrastructure, as the hypervisor becomes responsible for translating old instructions into those comprehensible by the underlying physical hardware.

Dynamics: with the constant changes in application requirements and workloads, rapid resource reallocation or new resource provisioning becomes essential for fulfilling these new demands. Virtualization is a powerful tool for this task, since virtual resources can be easily expanded, reallocated, moved or removed without concerns about which physical resources will support the new demands. As an example, when a user provisions a dynamic virtual disk, the underlying physical disk does not need to have that capacity available at provisioning time: it just needs to be available when the user actually needs to use it.

Isolation: multi-user environments may contain users that do not trust each other. Therefore, it is essential that all users have their resources isolated from other users, even if this is done logically (i.e., in software). When this happens, malicious users are unable to monitor and/or interfere with other users' activities, preventing a vulnerability or attack on a given machine from affecting other users.

Despite their benefits, there are also disadvantages to virtualized environments, such as [Kotsovinos 2010]:

Performance: even though there is no single method for measuring performance, it is intuitive that the extra software layer of the hypervisor leads to higher processing costs than a comparable system with no virtualization.

Management: virtual environments abstract physical resources into software and files, so they need to be instantiated, monitored, configured and saved in an efficient and auditable manner, which is not always an easy task.

Security: whereas isolation is a mandatory requirement for VMs in many real-world scenarios, completely isolating one virtualized resource from another, or applications running on the physical hardware from virtualized ones, are involved (if not impossible) tasks. Therefore, it is hard to say whether or not a physical server hosting several virtualized applications is monitoring them with the goal of obtaining confidential information, or even whether a VM is somehow attacking or monitoring another VM.

    1.2.2. Mechanisms for Network Virtualization

To understand the mechanisms that can implement network virtualization, we first need to understand which resources can be virtualized in a network. In terms of resources, networks are basically composed of network interface cards (NICs) connected to a layer 2 network through a switch. These layer 2 networks can be connected through routers to form a layer 3 network, which in turn can be connected via routers to compose the Internet. Each of these network components (NIC, L2 network, L2 switch, L3 network, and L3 router) can be virtualized [Jain and Paul 2013b]. However, there are multiple, often competing, mechanisms for virtualizing these resources, as discussed in what follows:

Virtualization of NICs: every networked computer system contains at least one NIC. In virtualized environments with multiple VMs, it thus becomes necessary to provide every VM with its own virtual NIC (vNIC). This need is currently satisfied by the hypervisor software, which is able to provide as many vNICs as the number of VMs under its responsibility. The vNICs are connected to the physical NIC (pNIC) through a virtual switch (vSwitch), just like physical NICs can be connected through a physical switch to compose layer 2 networks. This NIC virtualization strategy has benefits such as transparency and simplicity and, thus, is generally proposed by software vendors. Nevertheless, there is an alternative design proposed by pNIC (chip) vendors, which is to virtualize NIC ports using single-root I/O virtualization (SR-IOV) [PCI-SIG 2010] on the peripheral component interconnect (PCI) bus. This approach directly connects the VMs to the pNICs, potentially providing better performance (as it eliminates intermediary software) and resource isolation (as the traffic does not go through a shared vSwitch). A third design approach, promoted by physical switch vendors, is to provide virtual channels for inter-VM communication using a virtual Ethernet port aggregator (VEPA) [IEEE 2012b], which in turn passes VM frames to an external switch that implements inter-VM communication. This approach not only frees up server resources, but also provides better visibility and control over the traffic between any pair of VMs.
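To make the vSwitch role concrete, the sketch below models the MAC-learning behavior by which a software switch forwards frames between vNIC ports and the pNIC uplink. This is a didactic model under simplified assumptions (port names and short MAC strings are invented), not any hypervisor's actual implementation.

```python
class VSwitch:
    """Minimal MAC-learning switch connecting vNIC ports (didactic model)."""

    def __init__(self, ports):
        self.ports = ports       # e.g., two vNICs plus the pNIC uplink
        self.mac_table = {}      # learned mapping: MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of ports a frame is sent out on."""
        self.mac_table[src_mac] = in_port        # learn the source's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # unicast to the known port
        # unknown destination: flood to every port except the ingress one
        return [p for p in self.ports if p != in_port]

sw = VSwitch(["vnic0", "vnic1", "pnic"])
print(sw.forward("vnic0", "aa:01", "aa:02"))  # unknown dst -> ['vnic1', 'pnic']
print(sw.forward("vnic1", "aa:02", "aa:01"))  # learned dst -> ['vnic0']
```

The SR-IOV and VEPA alternatives mentioned above move exactly this forwarding decision out of the hypervisor, into the pNIC hardware or an external switch, respectively.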

Virtualization of L2 Switches: the number of ports in a typical switch is limited, and possibly lower than the number of physical machines that need to be connected to an L2 network. Therefore, several layers of L2 switches need to be connected to address network scalability requirements. To solve this issue, the IEEE 802.1BR Bridge Port Extension standard [IEEE 2012a] proposes a virtual bridge with a large number of ports using physical or virtual port extenders (like a vSwitch).

Virtualization of L2 Networks: in a multitenant data center, VMs in a single physical machine may belong to different clients and, thus, need to be in different virtual LANs (VLANs) [IEEE 2014]. VLANs implement frame tagging, allowing L2 devices to isolate clients' traffic in different logical L2 networks, so different virtual networks can use the same addressing space for different clients.
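The tagging itself is simple: a 4-byte IEEE 802.1Q tag (TPID 0x8100 followed by priority, drop-eligibility and a 12-bit VLAN ID) is inserted right after the source MAC address. The sketch below builds such a tag with Python's struct module; it is a format illustration under the assumption of a dummy frame, not production packet-crafting code.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that identifies an 802.1Q tag

def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the two MAC addresses (bytes 0..11).
    TCI layout: PCP (3 bits) | DEI (1 bit, zero here) | VID (12 bits)."""
    assert 0 <= vid < 4096, "VLAN ID is a 12-bit field"
    tci = (pcp << 13) | vid
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

# Dummy frame: 6-byte dst MAC, 6-byte src MAC, EtherType 0x0800, payload.
frame = b"\xaa" * 6 + b"\xbb" * 6 + struct.pack("!H", 0x0800) + b"payload"
tagged = add_vlan_tag(frame, vid=100)
assert tagged[12:14] == b"\x81\x00"                            # TPID present
assert struct.unpack("!H", tagged[14:16])[0] & 0x0FFF == 100   # tenant's VID
```

Because the 12-bit VID caps the number of isolated networks at 4094 usable values, large multitenant clouds outgrow plain VLANs, which motivates the L3 overlay encapsulations discussed next.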

Virtualization of L3 Networks: when the multitenant environment is extended to a layer 3 network, there are a number of competing proposals to solve the problem. Examples include: virtual extensible LANs (VXLANs) [Mahalingam et al. 2014b]; network virtualization using generic routing encapsulation (NVGRE) [Sridharan et al. 2011]; and the Stateless Transport Tunneling (STT) protocol [Pan and Wu 2009].
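All three proposals follow the same pattern: wrap the tenant's L2 frame in an outer header carrying a tenant identifier, so it can cross the shared L3 fabric. As an illustration of the idea, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 (flags byte 0x08 plus a 24-bit VXLAN Network Identifier, VNI) and prepends it to an inner frame; real deployments additionally add outer UDP, IP and Ethernet headers, which are omitted here.

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame.
    Byte 0 carries the flags (0x08 = 'VNI present'); bytes 4-6 carry the
    24-bit VNI; all remaining bytes are reserved and set to zero."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    header = struct.pack("!B3xI", 0x08, vni << 8)  # VNI in the top 24 bits
    return header + inner_frame

pkt = vxlan_encap(b"\xaa" * 14, vni=42)
assert len(pkt) == 8 + 14
assert pkt[0] == 0x08                         # 'VNI present' flag set
assert int.from_bytes(pkt[4:7], "big") == 42  # tenant identifier
```

The 24-bit VNI is what lifts the 4094-network VLAN limit: it distinguishes up to roughly 16 million tenant segments over the same physical L3 network.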

Virtualization of L3 Routers: multicore processors allow the design of networking devices using software modules that run on standard processors. By combining many different software-based functional modules, any networking device (L2 switch, L3 router, etc.) can be implemented in a cost-effective manner while providing acceptable performance. Network Function Virtualization (NFV) [Carapinha and Jiménez 2009] provides the conceptual framework for developing and deploying virtual L3 routers and other layer 3 network resources.

    1.2.3. Virtual Network Applications in Cloud Computing

As discussed in Section 1.1.1, the interest surrounding network virtualization has been fueled by cloud computing and its isolation and scalability requirements. All the network virtualization mechanisms presented in Section 1.2.2 can be applied to solve specific network issues in cloud computing, especially in the implementation of multitenant data centers. Specifically, as depicted in Figure 1.1, a data center consists mainly of servers in racks interconnected via a top-of-rack Ethernet switch, which in turn connects to an aggregation switch, also known as an end-of-row switch. The aggregation switches then connect to each other, as well as to the other servers in the data center. A core switch connects the various aggregation switches and provides connectivity to the outside world, typically through layer 3 networks. In multitenant data centers, each client's VMs are commonly placed in different servers, connected through the L2 network composed by this switch-enabled infrastructure. The virtualization of L2 switches via mechanisms such as VLANs enables the abstraction of tenant L2 networks on the distributed cloud data center, allowing traffic isolation of tenant networks with a different logical addressing space. Similarly, the virtualization of L3 routers using technologies such as VXLAN and GRE tunneling enables the abstraction of layer 3 networks, connecting multiple data centers and allowing tenant networks to be distributed across different sites.

    Another inherent characteristic of multitenant data centers is the virtualization of servers, enabling the instantiation of multiple VMs. VMs deployed in the same cloud server commonly belong to different tenants and share the same computing resources, including the network interface card (NIC). Mechanisms to virtualize server NICs, such as virtual switches (i.e., in software) and SR-IOV (i.e., in hardware), are necessary to address multi-tenancy. Besides virtual switches, other software-based virtualization mechanisms are enabled by the NFV approach. NFV consists of the virtualization of network functional classes, such as routers, firewalls, load balancers and WAN accelerators. These appliances take the form of software-based modules that can be deployed in one or more VMs running on top of servers.
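To illustrate the role of a virtual switch in multiplexing a shared NIC among VMs, the toy class below sketches the MAC-learning logic that software switches typically implement. `ToyVirtualSwitch` and its method names are purely illustrative and do not correspond to any real virtual switch API.

```python
class ToyVirtualSwitch:
    """Illustrative L2 learning switch, as used to share one server NIC among VMs."""

    def __init__(self):
        self.mac_table = {}  # learned MAC address -> virtual port

    def forward(self, src_mac, dst_mac, in_port):
        """Learn the source, then return the output port(s) for the frame."""
        self.mac_table[src_mac] = in_port        # learn where src_mac lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known destination: unicast
        # unknown destination: flood to every learned port except the ingress one
        return sorted(set(self.mac_table.values()) - {in_port})
```

Once one VM's MAC is learned on virtual port 1 and another's on port 2, a frame between them is delivered only to the correct port, while frames to unknown destinations are flooded.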

    1: Applying Software-defined Networks to Cloud Computing.

    6 ©2015 SBC Soc. Bras. de Computação

    Figure 1.1. Cloud data center network.

    1.2.4. Security, Scalability and Availability Aspects

    The widespread adoption of cloud computing has raised important security concerns inherited from multi-tenant environments. The need for isolating the server and network resources consumed by different tenants is an example of the security requirements introduced by cloud computing. Virtualization technologies such as those presented in Section 1.2.2 play a crucial role in this scenario as mechanisms to enforce resource isolation. Considering the cloud networking scenario, virtualization technologies should provide secure network domains for the cloud tenants, enabling secure connectivity for the services running inside the cloud. Understanding the nature of security threats is a fundamental component of managing risks and implementing secure network services for cloud tenants. According to [Barros et al. 2015], there are three threat scenarios in cloud computing networking. The scenarios are explained below.

    Tenant-to-Tenant: Threats related to attacks promoted by a legitimate tenant targeting another legitimate tenant by exploiting security vulnerabilities in the cloud networking infrastructure.

    Tenant-to-Provider: Threats related to cloud vulnerabilities that allow a legitimate tenant to disrupt the operation of the cloud infrastructure, preventing the cloud provider from delivering the service in accordance with the service level agreements (SLAs) established with other legitimate tenants.

    Provider-to-Tenant: Threats related to vulnerabilities in the cloud provider infrastructure, which allow malicious insider attacks from employees and other agents with



    direct access to the cloud infrastructure.

    Different scenarios can also originate different groups of security threats in cloud networking. Consequently, different groups of security solutions built upon network virtualization mechanisms should be applied to ensure secure cloud service provision. Also according to [Barros et al. 2015], the sources of security threats in the cloud networking scenario are described as follows.

    Physical isolation: Security threats originating from shared physical resources in the underlying network infrastructure, such as server NICs, switches and routers. Attacks are commonly related to hijacking and analyzing tenant data from shared resources or even causing resource exhaustion on shared physical network elements.

    Logical isolation: Security threats originating from shared virtual resources such as virtual switches, Linux bridges and virtual routers. Security attacks commonly exploit vulnerabilities in software-based virtualization mechanisms to access unauthorized data and to reduce the quality of cloud network services.

    Authentication: Security threats originating from inadequate authentication mechanisms, which allow attackers to mask their real identities. This can be accomplished by exploiting authentication protocols, acquiring credentials and/or key material by capturing data traffic, or via password recovery attacks (e.g., brute force or dictionary attacks).

    Authorization: Security threats originating from authorization vulnerabilities, which allow granting or escalating rights, permissions or credentials to or from an unauthorized user. For example, an attacker can exploit a vulnerability in the cloud platform's authorization modules, or even in the victim's computer, to create or change credentials in order to obtain privileged rights.

    Insecure API: Security threats related to failures, malfunctions and vulnerabilities in the APIs that compose the cloud system. Attacks of this class try to exploit insecure interfaces for accessing or tampering with services running in other tenants or cloud administrative tools.

    Following the principles of cloud computing, cloud networking should be highly scalable. The scalability of cloud networks is directly related to the features provided by the network virtualization mechanisms. Technologies such as VLAN and SR-IOV have intrinsic scalability limitations related to the number of VMs hosted in the same node. The capacity to replicate and migrate virtual domains in cloud computing is fundamental to ensure the availability of cloud services. Redundant links in the underlying infrastructure, as well as eliminating single points of failure in physical and virtual network resources, are good practices for network availability in multi-tenant environments.

    1.3. Software-defined Networks (SDNs)

    The current section introduces the concept of SDNs, as well as its importance in the network virtualization scenario. Introducing the conceptual and practical division between



    control plane and data plane, we explore the opportunities to apply SDN technologies in different network architectures, focusing on the role of the SDN control layer in network virtualization deployments. We also present a reference architecture to implement virtual networks in real scenarios. This section also presents an evolutionary view of the SDN controllers currently available in the market, aiming to support network professionals and decision makers in adopting the right SDN approach for their deployments. We finish by focusing on security, scalability and availability aspects of the presented solutions.

    1.3.1. Creating Programmable Networks: a Historical Perspective

    Recently, there has been considerable excitement surrounding the SDN concept, which is explained by the emergence of new application areas such as network virtualization and cloud networking. However, the basic ideas behind the SDN technology are actually the result of more than 20 years of advances in the network field, in particular the interest in turning computer networks into programmable systems. Aiming to give an overview of this evolution, we can divide the historical advancements that culminated in the SDN concept into three different phases [Feamster et al. 2013], as follows:

    1. Active Networks (from the mid-1990s to the early 2000s): This phase follows the historical advent of the Internet, a period in which the demands for innovation in the computer networks area were met mainly by the development and testing of new protocols in laboratories with limited infrastructure and simulation tools. In this context, the so-called active networks appeared as a first initiative aiming to turn network devices (e.g., switches and routers) into programmable elements and, thus, allow further innovations in the area. This programmability could then allow a separation between the two main functionalities of networking elements: the control plane, which refers to the device's ability to decide how each packet should be dealt with; and the data plane, which is responsible for forwarding packets at high speed following the decisions made by the control plane. Specifically, active networks introduced a new paradigm for dealing with the network's control plane, in which the resources (e.g., processing, storage, and packet queues) provided by the network elements could be accessed through application programming interfaces (APIs). As a result, anyone could develop new functionalities for customizing the treatment given to the packets passing by each node composing the network, promoting innovations in the networking area. However, the criticism received due to the potential complexity it would add to the Internet itself, allied to the fact that the distributed nature of the Internet's control plane was seen as a way to avoid single points of failure, reduced the interest in and diffusion of the active network concept in the industry.

    2. Control- and data-plane separation (from around 2001 to 2007): After the Internet became a much more mature technology in the late 1990s, the continuous growth in the volume of traffic turned the attention of the industry and academic communities to requirements such as reliability, predictability and performance of computer networks. The increasing complexity of network topologies, together with concerns regarding the performance of backbone networks, led different hardware manufacturers to develop embedded protocols for packet forwarding, promoting the high integration between the control and data planes seen in today's Internet. Nevertheless, network operators and Internet Service Providers (ISPs) would still seek new management models to meet the needs of ever larger and more complex network topologies. The importance of a centralized control model became more evident, as well as the need for a separation between the control and data planes. Among the technological innovations arising from this phase, we can cite the creation of open interfaces for communication between the control and data planes, such as ForCES (Forwarding and Control Element Separation) [Yang et al. 2004], whose goal was to enable a locally centralized control over the hardware elements distributed along the network topology [Caesar et al. 2005, Lakshman et al. 2004]. To ensure the efficiency of centralized control mechanisms, the consistent replication of the control logic among the data plane elements would play a key role. The development of such distributed state management techniques is also among the main technological contributions from this phase. There was, however, considerable resistance from equipment suppliers to implementing open communication interfaces, which were seen as a factor that would facilitate the entry of new competitors in the network market. This ended up hindering the widespread adoption of the separation of data and control planes, limiting the number and variety of applications developed for the control plane in spite of the possibility of doing so.

    3. OpenFlow and Network Operating System (from 2007 to 2010): The ever growing demand for open interfaces in the data plane led researchers to explore different clean slate architectures for logically centralized network control [Casado et al. 2007, Greenberg et al. 2005, Chun et al. 2003]. In particular, the Ethane project [Casado et al. 2007] created a centralized control solution for enterprise networks, reducing switch control units to programmable flow tables. The operational deployment of Ethane in the Stanford computer science department, focusing on network experimentation inside the campus, was indeed a huge success, and resulted in the creation of the OpenFlow protocol [McKeown et al. 2008]. OpenFlow enables fully programmable networks by providing a standard data plane API for existing packet switching hardware. The creation of the OpenFlow API, in turn, allowed the emergence of SDN control platforms such as NOX [Gude et al. 2008], thus enabling the creation of a wide range of network applications. OpenFlow provided a unified abstraction of network devices and their functions, defining forwarding behavior through traffic flows based on 13 different instructions. OpenFlow also led to the vision of a network operating system that, different from the node-oriented system advocated by active networks, organizes the network's operation into three layers: (1) a data plane with an open interface; (2) a state management layer that is responsible for maintaining a consistent view of the overall network state; and (3) control logic that performs various operations depending on its view of the network state [Koponen et al. 2010]. The need for integrating and orchestrating multiple controllers for scalability, reliability and performance purposes also led to significant enhancements in distributed state management techniques. Following these advances, solutions such as Onix [Koponen et al. 2010] and its open-source counterpart, ONOS (Open Network Operating System) [Berde et al. 2014], introduced the idea of a network information base that consists of a representation of the network topology and other control state shared by all controller replicas, while incorporating past work on distributed systems to satisfy state consistency and durability requirements.

    Analyzing this historical perspective and the needs recognized in each phase, it becomes easier to see that the SDN concept emerged as a tool for allowing further network innovation, helping researchers and network operators to solve longstanding problems in network management and also to provide new network services. SDN has been successfully explored in many different research fields, including areas such as network virtualization and cloud networking.

    1.3.2. SDNs and the Future Internet

    Today's Internet was designed more than 30 years ago, with specific requirements to connect, in a general and minimalist fashion, the (few) existing networks at the time. After it was proven to be very successful at this task, the TCP/IP model became widely adopted, in particular due to the possibility of running many distinct applications over its infrastructure while keeping the core of the network as simple as possible. However, the increase in the number of applications, users and devices making intense use of the network resources would bring many (usually conflicting) requirements with each new technology developed, turning the Internet into a completely different environment filled with disputes regarding its evolution [Moreira et al. 2009].

    While in the early days of the Internet the simplicity of the TCP/IP model was considered one of its main strengths, enabling the rapid development of applications and the growth of the network itself, it became a weakness because it would imply an unintelligent network.

    Figure 1.2. Ossification of the Internet

    That is the main reason why TCP/IP's simplicity is sometimes accused of being responsible for the ossification of the Internet (see Figure 1.2): without the ability to add intelligence to the core of the network itself, many applications had to take corrective actions on other layers; many patches would be sub-optimal, imposing certain restrictions on the applications that could be deployed with the required levels of security, performance, scalability, mobility, maintainability, etc. Therefore, even though the TCP/IP model displays a reasonably good level of efficiency and is able to meet many of the original requirements of the Internet, many believe it may not be the best solution for the future [Alkmim et al. 2011].

    Many of the factors pointed out as the cause of the Internet's ossification are related to the strong coupling between the control and data planes, so the decision on how to treat the data flow and the execution of this decision are both handled by the same



    device. In such an environment, new network applications or features have to be deployed directly into the network infrastructure, a cumbersome task given the lack of standard interfaces for doing so in a market dominated by proprietary solutions. Actually, even when a vendor does provide interfaces for setting and implementing policies in the network infrastructure, the presence of heterogeneous devices with incompatible interfaces ends up hindering such seemingly trivial tasks.

    This ossification issue has led to the creation of dedicated appliances for tasks seen as essential for the network's correct operation, such as firewalls, intrusion detection systems (IDS), network address translators (NAT), among others [Moreira et al. 2009]. Since such solutions are many times seen as palliative, studies aimed at changing this ossification state became more prominent, focusing especially on two approaches. The first, more radical, involved the proposal of a completely new architecture that could replace the current Internet model, based on past experiences and identified limitations. This clean slate strategy has not received much support, however, not only due to the high costs involved in its deployment, but also because it is quite possible that, after years of effort to build such a specification, it might become outdated after a few decades due to the appearance of new applications with unanticipated requirements. The second approach suggests evolving the current architecture without losing compatibility with current and future devices, thus involving lower costs. By separating the data and control planes, thus adding flexibility to how the network is operated, the SDN paradigm gives support to this second strategy [Feamster et al. 2014].

    According to [Open Networking Foundation 2012], the formal definition of an SDN is: "an emerging architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. This architecture decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services." This definition is quite comprehensive, making it clear that the main advantage of the SDN paradigm is to allow different policies to be dynamically applied to the network by means of a logically centralized controller, which has a global view of the network and, thus, can quickly adapt the network configuration in response to changes [Kim and Feamster 2013]. At the same time, it enables independent innovations in the now decoupled control and data planes, besides facilitating the visualization of the network state and the consolidation of several dedicated network appliances into a single software implementation [Kreutz et al. 2014]. This flexibility is probably among the main reasons why companies from different segments (e.g., device manufacturers, cloud computing providers, among others) are increasingly adopting the SDN paradigm as the main tool for managing their resources in an efficient and cost-effective manner [Kreutz et al. 2014].

    1.3.3. Data and Control Planes

    Given that the separation between data and control planes is at the core of the SDN technology, it is important to discuss them in some detail. Figure 1.3 shows a simplified SDN architecture and its main components, showing that the data and control planes are connected via a well-defined programming interface between the switches and the SDN controller.



    Figure 1.3. SDN architecture overview

    The data plane corresponds to the switching circuitry that interconnects all devices composing the network infrastructure, together with a set of rules that define which actions should be taken as soon as a packet arrives at one of the device's ports. Examples of common actions are forwarding the packet to another port, rewriting (part of) its header, or even discarding the packet.

    The control plane, in turn, is responsible for programming and managing the data plane, controlling how the routing logic should work. This is done by one or more software controllers, whose main task is to set the routing rules to be followed by each forwarding device through standardized interfaces, called the southbound interfaces. These interfaces can be implemented using protocols such as OpenFlow 1.0 and 1.3 [OpenFlow 2009, OpenFlow 2012], OVSDB [Pfaff and Davie 2013] and NETCONF [Enns et al. 2011]. The control plane thus concentrates the intelligence of the network, using information provided by the forwarding elements (e.g., traffic statistics and packet headers) to decide which actions should be taken by them [Kreutz et al. 2014].

    Finally, developers can take advantage of the protocols provided by the control plane through the northbound interfaces, which abstract the low-level operations for controlling the hardware devices, similarly to what is done by operating systems in computing devices such as desktops. These interfaces can be provided by remote procedure calls (RPC), RESTful services and other cross-application interface models. This greatly facilitates the construction of different network applications that, by interacting with the control plane, can control and monitor the underlying network. This allows them to customize the behavior of the forwarding elements, defining policies for implementing functions such as firewalls, load balancers, intrusion detection, among others.
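As an illustration of what a northbound interaction can look like, the snippet below builds a JSON flow rule loosely modeled on Floodlight's Static Flow Pusher REST API; the endpoint path, field names and values shown here are assumptions for illustration and vary between controllers and versions.

```python
import json

# Hypothetical flow-push request, loosely modeled on Floodlight's Static Flow
# Pusher; a real deployment must use the exact schema of its controller.
controller_url = "http://127.0.0.1:8080/wm/staticflowpusher/json"

flow_rule = {
    "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
    "name": "allow-http-to-web-vm",
    "priority": "100",
    "eth_type": "0x0800",                 # match IPv4 traffic
    "ipv4_dst": "10.0.0.10",
    "active": "true",
    "actions": "output=2",                # forward matching packets to port 2
}

payload = json.dumps(flow_rule)
# An application would then POST `payload` to `controller_url` with any HTTP
# client; the controller translates it into OpenFlow messages southbound.
```

The key point is that the application never speaks OpenFlow directly: it states intent at the northbound interface, and the controller programs the switches.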

    Minicursos do XXXIII Simpsio Brasileiro de Redes de Computadores e Sistemas Distribudos SBRC 2015

    13 c2015 SBC Soc. Bras. de Computao

    1.3.4. The OpenFlow Protocol

    The OpenFlow protocol is one of the most commonly used southbound interfaces, being widely supported both in software and hardware, and standardized by the Open Networking Foundation (ONF). It works with the concept of flows, defined as groups of packets matching a specific (albeit non-standard) header [McKeown et al. 2008], which may be treated differently depending on how the network is programmed. OpenFlow's simplicity and flexibility, allied to its high performance at low cost, ability to isolate experimental traffic from production traffic, and ability to cope with vendors' need for closed platforms [McKeown et al. 2008], are probably among the main reasons for this success.

    Whereas other SDN approaches take into account other network elements, such as routers, OpenFlow focuses mainly on switches [Braun and Menth 2014]. Its architecture comprises, then, three main concepts [Braun and Menth 2014]: (1) the network's data plane is composed of OpenFlow-compliant switches; (2) the control plane consists of one or more controllers using the OpenFlow protocol; (3) the connection between the switches and the control plane is made through a secure channel.

    An OpenFlow switch is basically a forwarding device endowed with a Flow Table, whose entries define the packet forwarding rules to be enforced by the device. To accomplish this goal, each entry of the table comprises three elements [McKeown et al. 2008]: match fields, counters, and actions. The match fields refer to pieces of information that identify the input packets, such as fields of their headers or their ingress ports. The counters, in turn, are reserved for collecting statistics about the corresponding flow. They can, for example, be used for keeping track of the number of packets/bytes matching that flow, or of the time since the last packet belonging to that flow was seen (so inactive flows can be easily identified) [Braun and Menth 2014]. Finally, the actions specify how the packets from the flow must be processed, the most basic options being: (1) forward the packet to a given port, so it can be routed through the network; (2) encapsulate the packet and deliver it to a controller so the latter can decide how it should be dealt with (in this case, the communication is done through the secure channel); or (3) drop the packet (e.g., for security reasons).
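The match/counters/actions structure of a flow entry can be sketched as a small data type; the following is a simplified illustration, not the OpenFlow wire format, and the field and function names are our own.

```python
from dataclasses import dataclass


@dataclass
class FlowEntry:
    """Simplified flow-table entry: match fields, counters, and actions."""
    match: dict    # header fields to match, e.g. {"in_port": 1}; absent fields are wildcards
    actions: list  # e.g. ["output:2"] or ["drop"]
    packet_count: int = 0
    byte_count: int = 0

    def matches(self, headers):
        # every specified match field must equal the packet's header value
        return all(headers.get(k) == v for k, v in self.match.items())


def apply_table(flow_table, headers, packet_len):
    """Return the actions of the first matching entry, updating its counters."""
    for entry in flow_table:
        if entry.matches(headers):
            entry.packet_count += 1
            entry.byte_count += packet_len
            return entry.actions
    return None  # table miss: a reactive switch would send this to the controller
```

Note how the counters are updated as a side effect of matching, which is exactly what lets the controller later read per-flow statistics from the switch.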

    There are two models for the implementation of an OpenFlow switch [McKeown et al. 2008]. The first consists of a dedicated OpenFlow switch, which is basically a dumb device that only forwards packets according to the rules defined by a remote controller.

    In this case (see Figure 1.4), the flows can be broadly defined by the applications, so the network capabilities are only limited by how the Flow Table is implemented and which actions are available. The second, which may be preferable for legacy reasons, is a classic switch that supports OpenFlow but also keeps its ability to make its own forwarding decisions. In such a hybrid scenario, it is more complicated to provide a clear isolation between OpenFlow and classical traffic. To be able to do so, there are basically two alternatives: (1) to implement one extra action in the OpenFlow table, which forwards packets to the switch's normal processing pipeline, or (2) to define different VLANs for each type of traffic.

    Whichever the case, the behavior of the switch's OpenFlow-enabled portion may



    Figure 1.4. OpenFlow switch proposed by [McKeown et al. 2008].

    be either reactive or proactive. In the reactive mode, whenever a packet arrives at the switch, it tries to find an entry in its Flow Table matching that packet. If such an entry is found, the corresponding action is executed; otherwise, the flow is redirected to the controller, which will insert a new entry into the switch's Flow Table for handling the flow, and only then is the packet forwarded according to this new rule. In the proactive mode, on the other hand, the switch's Flow Table is pre-configured and, if an arriving flow does not match any of the existing rules, the corresponding packets are simply discarded [Hu et al. 2014a].

    Although operating in the proactive mode may lead to the need of installing a large number of rules beforehand on the switches, one advantage over the reactive mode is that in this case the flow is not delayed by the controller's flow configuration process. Another relevant aspect is that, if the switch is unable to communicate with the controller in the reactive mode, then the switch's operation will remain limited to the existing rules, which may not be enough for dealing with all flows. In comparison, if the network is designed to work in the proactive mode from the beginning, it is more likely that all flows will be handled by the rules already installed on the switches.
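The contrast between the two modes can be condensed into a toy table-miss handler; the function, its arguments and the stand-in controller below are illustrative sketches, not part of any real switch or controller API.

```python
def handle_packet(flow_table, packet, mode, controller):
    """Toy table-miss handling: 'reactive' consults the controller, 'proactive' drops."""
    key = packet["dst"]
    if key in flow_table:
        return flow_table[key]       # a rule matches: apply its action directly
    if mode == "reactive":
        action = controller(packet)  # miss: ask the controller for a decision...
        flow_table[key] = action     # ...install the resulting rule on the switch
        return action
    return "drop"                    # proactive mode: unmatched packets are discarded


def toy_controller(packet):
    # Stand-in control logic: always forward on port 1.
    return "output:1"
```

In the reactive branch, the first packet of a flow pays the round trip to the controller and every later packet hits the installed rule; in the proactive branch, anything not covered by the pre-installed rules is simply dropped.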

    As a last remark, it is interesting to notice that implementing the controller as a centralized entity can provide a global and unique view of the network to all applications, potentially simplifying the management of rules and policies inside the network. However, as any physically centralized server, it also becomes a single point of failure, potentially impairing the network's availability and scalability. This issue can be solved by implementing a physically distributed controller, so if one controller is compromised, only the switches under its responsibility are affected. In this case, however, it would be necessary to implement synchronization protocols for allowing a unique view of the whole network and avoiding inconsistencies. Therefore, to take full advantage of the benefits of a distributed architecture, such protocols must be efficient enough not to impact the overall network's performance.



    1.3.5. SDN Controllers

    An SDN controller, also called a network operating system, is a software platform where all the network control applications are deployed. SDN controllers commonly contain a set of modules that provide different network services for the deployed applications, including routing, multicasting, security, access control, bandwidth management, traffic engineering, quality of service, processor and storage optimization, energy usage, and all forms of policy management, tailored to meet business objectives. The network services provided by the SDN controller consist of network applications running upon the controller platform, and can be classified as follows:

    Basic Network Services: Basic network applications that implement essential protocol, topology and device functions. Examples of basic network services are topology management, ARP handling, host tracking, status management and device monitoring. Basic network services are commonly used by other network services deployed in the controller platform to implement more complex control functionalities.

    Management Services: Management network applications that make use of basic functions to implement business-centric management functionalities. Examples of management services are authentication and authorization services, virtual tenant network coordination, network bandwidth slicing and network policy management.

    Core Services: Core network applications oriented to managing and orchestrating the operation of the control platform, including managing communication between other network services and shared data resources. Examples of core services are messaging, control database management and service registration.

    Custom Application Services: Custom network applications consist of any application developed by the platform users. These applications commonly use other network services deployed in the same SDN control platform to implement different network solutions. Examples of custom application services oriented toward security are DDoS prevention, load balancing and firewalling. Custom application services can also target areas such as QoS implementation, enforcement of policies and integration with cloud computing orchestration systems.

    Open source controllers have been an important vector of innovation in the SDN field. The dynamics of the open source community led to the development of many SDN projects, including software-based switches and SDN controllers [Casado 2015]. To evaluate and compare different open-source controller solutions and their suitability to each deployment scenario, one can employ the following metrics:

    Programming Language: The programming language used to build the controller platform. The controller language will also dictate the programming language used to develop the network services, and can directly influence other metrics such as performance and learning curve. Moreover, some operating systems may not provide full support for all programming languages.

    Performance: The performance of the controller can be determinant when choosing the correct platform for production purposes. The performance of an SDN controller can be influenced by many factors, including the programming language, design patterns adopted and hardware compatibility.

    Learning Curve: The learning curve of the control platform is a fundamental metric to consider when starting a project. It measures the experience necessary to learn the SDN controller platform and build the necessary skills. The learning curve directly influences the time to develop a project and also the availability of skilled developers.

    Features: The set of network functions provided by the SDN controller. In addition to basic network services, control platforms can also provide specialized services related to controlling and managing network infrastructures. Two important groups of features are the set of protocols supported in the southbound API of the controller (e.g., OpenFlow, OVSDB, NETCONF), which will determine the supported devices in the underlying network infrastructure, and the support for integration with cloud computing systems.

    Community Support: The support provided by the open source community is essential to measure how easy it would be to solve development and operating questions, as well as the frequency with which new features are released. Some open source SDN projects are also supported or maintained by private companies, which is likely to accelerate releases and lead to better support for specific business demands.

    To give a concrete example of the usefulness of these metrics, we can apply them to some of the most popular open source SDN controller projects, namely: NOX [Gude et al. 2008, NOXRepo.org 2015], POX [NOXRepo.org 2015], Ryu [Ryu 2015], Floodlight [Floodlight 2015] and OpenDaylight [Medved et al. 2014, Linux Foundation 2015].

    NOX Controller: The NOX controller is part of the first generation of OpenFlow controllers, having been developed by Nicira Networks side by side with the OpenFlow protocol. As the oldest OpenFlow controller, it is considered very stable by the industry and the open source community, and is largely deployed in production and educational environments. The NOX controller has two versions. The first, NOX-Classic, was implemented in C++ and Python, and supports the development of network control applications using both languages. This cross-language design later proved to be less efficient than designs based on a single language, since it ended up leading to some inconsistency in terms of features and interfaces. Possibly due to these issues, NOX-Classic is no longer supported, having been superseded by the second version, called simply NOX or "new NOX". This second version of NOX was implemented in the C++ programming language, supporting network application services developed in the same language using an event-oriented programming model. The NOX code was also reorganized to provide better performance and programmability compared with NOX-Classic, introducing support for both the 1.0 and 1.3 versions of the OpenFlow protocol. A modern NOX SDN controller is recommended when: users know the C++ programming language; users are willing to use low-level facilities and semantics of the OpenFlow protocol; users need production-level performance.
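The event-oriented model used by NOX-style controllers, in which control applications register handlers that the platform invokes on network events, can be sketched generically. The sketch below is in Python for brevity (NOX itself is C++), and the event name, handler signature, and returned action are all illustrative, not NOX's real API:

```python
# Minimal sketch of an event-oriented controller core: applications register
# callbacks for named events, and the controller dispatches events to them.
class EventDispatcher:
    def __init__(self):
        self._handlers = {}  # event name -> list of registered callbacks

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def raise_event(self, event, **data):
        # Invoke every handler registered for this event, collecting results.
        return [handler(**data) for handler in self._handlers.get(event, [])]

dispatcher = EventDispatcher()

# A control application reacts to "packet_in" events by deciding a flow action
# (a real application would consult a MAC table and install a flow rule).
def learning_app(dpid, in_port, src, dst):
    return ("install_flow", dpid, dst)

dispatcher.register("packet_in", learning_app)
actions = dispatcher.raise_event("packet_in", dpid=1, in_port=3,
                                 src="aa:bb", dst="cc:dd")
```

The same registration/dispatch pattern underlies POX and Ryu applications as well, which is why porting control logic between these platforms is mostly a matter of adapting event and API names.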

    POX Controller: POX is a Python implementation of the NOX controller, created to be a platform for rapid development and prototyping of network control software. Taking advantage of Python's flexibility, POX has been used as the basis for many SDN projects, being applied to prototyping and debugging SDN applications, implementing network virtualization and designing new control and programming models. The POX controller also has official support from the NOX community. POX supports only version 1.0 of the OpenFlow protocol and provides better performance than Python applications deployed on NOX-Classic. However, since Python is an interpreted rather than compiled language, POX does not provide production-level performance as the NOX controller does. Therefore, a POX SDN controller is recommended when: users know the Python programming language; users are not much concerned with the controller's performance; users need a rapid SDN platform for prototyping and experimentation, e.g., for research, experimentation, or demonstration purposes; users are looking for an easy way to learn about SDN control platforms (e.g., for educational purposes).

    Minicursos do XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos SBRC 2015

    Ryu Framework: Ryu is a Python component-based SDN framework that provides a large set of network services through a well-defined API, making it easy for developers to create new network management and control applications for multiple network devices. Differently from the NOX and POX SDN controllers, which support only the OpenFlow protocol in their southbound API, Ryu supports various protocols for managing network devices, such as OpenFlow (versions 1.0 and 1.2 to 1.4), NETCONF and OF-Config. Another important feature of the Ryu framework is its integration with the OpenStack cloud orchestration system [OpenStack 2015], enabling large deployments on cloud data centers. Even though Ryu is implemented in Python, its learning curve is moderate, since it provides a large set of service components and interfaces that need to be understood before it can be integrated into new applications. As a result, the Ryu SDN framework is recommended when: users know the Python programming language; users are not much concerned with the controller's performance; the control applications require versions 1.3 or 1.4 of the OpenFlow protocol or some of the other supported protocols; users intend to deploy the SDN controller on a cloud data center that makes use of OpenStack's orchestration system.

    Floodlight Controller: The Floodlight Open SDN Controller is a Java-based OpenFlow controller supported by an open source community of developers that includes a number of engineers from Big Switch Networks. Floodlight is the core of Big Switch Networks' commercial SDN products and is actively tested and improved by the industry and the developer community. Floodlight was created as a fork of the Beacon Java OpenFlow controller [Erickson 2013], the first Java-based controller to implement full multithreading and runtime modularity features. Even though it has quite extensive documentation and official support from both the industry and the open source community, Floodlight has a steep learning curve due to the large set of features implemented. Among those features, we can cite the ability to integrate with the OpenStack orchestration system and the use of RESTful interfaces [Richardson and Ruby 2008] in the northbound API, enabling easy integration with external business applications. The Floodlight controller is recommended when: users know the Java programming language; users need production-level performance and would like to have industry support; applications should interact with the SDN controller through a RESTful API; users intend to deploy the SDN controller on a cloud data center that makes use of OpenStack's orchestration system.
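As an illustration of this RESTful northbound interaction, an external application typically builds a JSON flow entry and POSTs it to the controller over HTTP. The field names and endpoint mentioned below are modeled on Floodlight's Static Entry Pusher, but should be treated as assumptions and checked against the documentation of the deployed controller version:

```python
import json

# Hypothetical static flow entry for a REST northbound API (field names
# modeled on Floodlight's Static Entry Pusher; verify against your version).
flow_entry = {
    "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
    "name": "allow-h1-to-h2",             # unique name for this entry
    "priority": "100",
    "in_port": "1",
    "active": "true",
    "actions": "output=2",                # forward matching packets to port 2
}

# Serialize to the JSON body that would be POSTed to the controller, e.g. to
# http://<controller>:8080/wm/staticentrypusher/json (URL is an assumption).
payload = json.dumps(flow_entry)
```

Because the northbound API is plain HTTP plus JSON, any language or tool (curl, a web dashboard, an OSS/BSS system) can drive the controller, which is precisely what makes the RESTful design attractive for integration with external business applications.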


    OpenDaylight Controller: OpenDaylight is a Java-based SDN controller built to provide a comprehensive network programmability platform for SDN. It was created as a Linux Foundation collaborative project in 2013 and intends to offer a common framework for innovation in SDN environments. The OpenDaylight project is supported by a consortium of network companies such as Cisco, Ericsson, IBM, Brocade and VMware, in addition to the open source community and industry collaborators. OpenDaylight is also based on the Beacon OpenFlow controller and provides production-level performance with support for different southbound protocols, such as OpenFlow 1.0 and 1.3, OVSDB and NETCONF. It also provides integration with OpenStack's cloud orchestration system. The OpenDaylight controller proposes an architectural framework by clearly defining the southbound and northbound APIs and how they interact with external business applications and internal network services. As a drawback, OpenDaylight has a steep learning curve due to its architectural complexity and the large set of services embedded in the controller. It is, nevertheless, recommended when: users know the Java programming language; users need production-level performance and would like to have industry support; users intend to deploy the SDN controller on a cloud data center that makes use of OpenStack's orchestration system; target applications require modularity through an architectural design; applications need to integrate with third-party business applications, as well as with heterogeneous underlying network infrastructures.

    Table 1.1 presents a summary of the main characteristics of the described open source SDN controllers, based on the metrics discussed above.

    Table 1.1. Summary of the main characteristics of open source SDN controllers

                           NOX       POX       Ryu            Floodlight  ODL
    Language               C++       Python    Python         Java        Java
    Performance            High      Low       Low            High        High
    Distributed            No        No        Yes            Yes         Yes
    OpenFlow               1.0       1.0       1.0, 1.2-1.4   1.0, 1.3    1.0, 1.3
    Multi-tenant clouds    No        No        Yes            Yes         Yes
    Learning curve         Moderate  Easy      Moderate       Steep       Steep
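To make the comparison concrete, the characteristics summarized in Table 1.1 can be encoded and filtered against a project's requirements. The sketch below is merely illustrative; the attribute values mirror the table:

```python
# Characteristics from Table 1.1, encoded for programmatic comparison.
controllers = {
    "NOX":        {"language": "C++",    "performance": "High", "cloud": False},
    "POX":        {"language": "Python", "performance": "Low",  "cloud": False},
    "Ryu":        {"language": "Python", "performance": "Low",  "cloud": True},
    "Floodlight": {"language": "Java",   "performance": "High", "cloud": True},
    "ODL":        {"language": "Java",   "performance": "High", "cloud": True},
}

def shortlist(language=None, performance=None, cloud=None):
    """Return the controllers matching every requirement that is not None."""
    out = []
    for name, c in controllers.items():
        if language is not None and c["language"] != language:
            continue
        if performance is not None and c["performance"] != performance:
            continue
        if cloud is not None and c["cloud"] != cloud:
            continue
        out.append(name)
    return out

# Example: production-grade performance plus multi-tenant cloud integration
# leaves only the two Java-based platforms.
candidates = shortlist(performance="High", cloud=True)
```

In practice the softer metrics (learning curve, community support) still require human judgment, but this kind of hard-requirement filtering is a useful first cut.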

    1.3.6. Network Virtualization using SDNs

    Even though network virtualization and SDN are independent concepts, the relationship between these two technologies has become much closer in recent years. Network virtualization creates the abstraction of a network that is decoupled from the underlying physical equipment, allowing multiple virtual networks to run over a shared infrastructure with a topology that differs from the actual underlying physical network.
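This decoupling can be sketched as a mapping from virtual links to embedded physical paths; the node names and the two-tenant topology below are purely illustrative:

```python
# A shared physical substrate: a simple chain of four switches.
physical_links = {("s1", "s2"), ("s2", "s3"), ("s3", "s4")}

# Two tenants each see a direct virtual link that does not exist physically;
# the virtualization layer embeds each one as a path over the substrate.
virtual_links = {
    ("tenantA", "vA1-vA2"): ["s1", "s2", "s3"],
    ("tenantB", "vB1-vB2"): ["s2", "s3", "s4"],
}

def path_is_valid(path):
    """Check that every hop of an embedded path is an existing physical link."""
    return all((a, b) in physical_links or (b, a) in physical_links
               for a, b in zip(path, path[1:]))

# Every tenant's virtual link must map onto real links of the substrate.
assert all(path_is_valid(p) for p in virtual_links.values())
```

The essential point is that both tenants believe they own a direct link, while the infrastructure operator is free to re-embed those paths without the tenants noticing.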

    Even though network virtualization has gained prominence as a use case for SDN, the concept has in fact evolved in parallel with programmable networking. In particular, both technologies are tightly coupled by the programmable networks paradigm, which presumes mechanisms for sharing the infrastructure (across multiple tenants in a data center, administrative groups in a campus, or experiments in an experimental facility) and supporting logical network topologies that differ from the physical network. In what follows, we provide an overview of the state of the art in network virtualization technologies before and after the advent of SDN.

    The creation of virtual networks in the form of VLANs and virtual private networks has been supported by multiple network equipment vendors for many years. These virtual networks could only be created by network administrators and were limited to running the existing protocols, delaying the deployment of new network technologies. As an alternative, researchers started building overlay networks by means of tunneling, forming their own topology on top of a legacy network to be able to run their own control-plane protocols. In addition to the significant success of peer-to-peer applications built upon overlay networks, the networking community reignited research on overlay networks as a way of improving the network infrastructure. Consequently, virtualized experimental infrastructures such as PlanetLab [Chun et al. 2003] were built to allow multiple researchers to run their own overlay networks over a shared and distributed collection of hosts. The success of PlanetLab and other shared experimental network platforms motivated investigations into the creation of virtual topologies that could run custom protocols inside the underlying network [Bavier et al. 2006], thus enabling realistic experiments to run side by side with production traffic. As an evolution of these experimental infrastructures, the GENI project [Berman et al. 2014] took the idea of a virtualized and programmable network infrastructure to a much larger scale, building a national experimental infrastructure for research in networking and distributed systems. These technologies ended up leading some to argue that network virtualization should be the basis of a future Internet, allowing multiple network architectures to coexist and evolve over time to meet needs in continuous evolution [Feamster et al. 2007, Anderson et al. 2005, Turner and Taylor 2005].

    Research on network virtualization evolved independently of the SDN concept. Indeed, the abstraction of the physical network into a logical network does not require any SDN technology, nor does the separation of a logically centralized control plane from the underlying data plane imply some kind of network virtualization. However, a symbiosis between the two technologies has emerged and has begun to catalyze several new research areas, since SDN can be seen as an enabling technology for network virtualization. Cloud computing, for example, introduced the need to allow multiple customers (or tenants) to share the same network infrastructure, leading to the use of overlay networks implemented through software switches (e.g., Open vSwitch [Open vSwitch 2015, Pfaff et al. 2009]) that encapsulate traffic destined for VMs running on other servers. It became natural, thus, to consider using logically centralized SDN controllers to configure these virtual switches with the rules required to control how packets are encapsulated, as well as to update these rules when VMs move to new physical locations.
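The mapping that such a controller maintains, and the update it pushes when a VM migrates, can be sketched as follows (VM and host names are illustrative, and the "outer header" is reduced to a single field standing in for a VXLAN/GRE-style encapsulation):

```python
# The controller's view: which physical server currently hosts each VM.
vm_location = {"vm1": "host-a", "vm2": "host-b"}

def encapsulate(src_vm, dst_vm, payload):
    """Wrap a tenant packet with an outer header toward the destination host,
    as a virtual switch would do based on controller-installed rules."""
    return {"outer_dst": vm_location[dst_vm],
            "inner": (src_vm, dst_vm, payload)}

pkt = encapsulate("vm1", "vm2", "data")
assert pkt["outer_dst"] == "host-b"   # tunneled to vm2's current server

# When vm2 migrates, the controller updates the mapping on every virtual
# switch; subsequent packets are transparently tunneled to the new location.
vm_location["vm2"] = "host-c"
assert encapsulate("vm1", "vm2", "data")["outer_dst"] == "host-c"
```

The tenant's addressing never changes; only the outer header, which the tenant never sees, tracks the VM's physical location.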

    Network virtualization, in turn, can be used for evaluating and testing SDN control applications. Mininet [Handigol et al. 2012a, Lantz et al. 2010], for example, uses process-based network virtualization to emulate a network with hundreds of hosts, virtual switches and SDN controllers on a single machine. This environment enables researchers and network operators to develop control logic applications and easily evaluate, test and debug them on a full-scale emulation of the production data plane, accelerating deployment on real production networks. Another contribution of network virtualization to the development of SDN technologies is the ability to slice the underlying network, allowing it to run simultaneous and isolated SDN experiments. This concept of network slicing, originally introduced by the PlanetLab project [Chun et al. 2003], consists in separating the flow space into different slices, so that each slice has a share of the network resources and can be managed by a different SDN controller. FlowVisor [Sherwood et al. 2010], for example, provides a network slicing system that enables building testbeds on top of the same physical equipment that carries the production traffic.
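Flowspace slicing can be sketched as a partition of traffic among controllers. The VLAN-based criteria below are illustrative only; FlowVisor supports much richer match fields (ports, MAC and IP addresses, and so on):

```python
# Each slice is a name plus a predicate over packet headers; FlowVisor acts
# as a proxy that relays each flow event to the controller owning its slice.
slices = [
    ("research",   lambda pkt: pkt["vlan"] == 100),  # experimental traffic
    ("production", lambda pkt: pkt["vlan"] != 100),  # everything else
]

def controller_for(pkt):
    """Return the slice (and thus the controller) responsible for a packet."""
    for name, predicate in slices:
        if predicate(pkt):
            return name
    return None

assert controller_for({"vlan": 100}) == "research"
assert controller_for({"vlan": 200}) == "production"
```

Because the predicates are disjoint, an experiment's controller can only ever affect its own slice of the flow space, which is what isolates experiments from production traffic on the shared equipment.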

    1.3.7. SDN Applications in Network Virtualization

    SDN facilitates network virtualization and may thus make it easier to implement features such as dynamic network reconfiguration (e.g., in multitenant environments). However, it is important to recognize that the basic capabilities of SDN technologies do not directly provide these benefits. Some SDN features and their main contributions to improving network virtualization are:

    Control plane and data plane separation: The separation between the control and data planes in SDN architectures, as well as the standardization of interfaces for the communication between those layers, makes it possible to conceptually unify network devices from different vendors under the same control mechanisms. For network virtualization purposes, the abstraction provided by the control plane and data plane separation facilitates deploying, configuring, and updating devices across virtualized network infrastructures. The control plane separation also introduces the idea of network operating systems, which consist of a scalable and programmable platform for managing and orchestrating virtualized networks.

    Network programmability: Programmability of network devices is one of the main contributions of SDN to network virtualization. Before the advent of SDN, network virtualization was limited to the static implementation of overlay technologies (such as VLANs), a task delegated to network administrators and logically distributed among the physical infrastructure. The programming capabilities introduced by SDN provide the dynamics necessary to rapidly scale, maintain and configure new virtual networks. Moreover, network programmability also allows the creation of custom network applications oriented toward innovative network virtualization solutions.

    Logically centralized control: The abstraction of data plane devices provided by the SDN architecture gives the network operating system, also known as the SDN orchestration system, a unified view of the network. Therefore, it allows custom control applications to access the entire network topology from a logically centralized control platform, enabling the centralization of configuration and policy management. This way, the deployment and management of network virtualization technologies becomes easier than in early distributed approaches.

    Automated management: The SDN architecture enhances network virtualization platforms by providing support for the automation of administrative tasks. The centralized control and the programming capabilities provided by SDN allow the development of customized network applications for virtual network creation and management. Auto-scaling, traffic control and QoS are examples of automation tools that can be applied to virtual network environments.


    Among the variety of scenarios where SDN can improve network virtualization implementations, we can mention campus network testbeds [Berman et al. 2014], enterprise networks [Casado et al. 2007], multitenant data centers [Koponen et al. 2014] and cloud networking [Jain and Paul 2013b]. Despite the successful application of SDN technologies in such network virtualization use-case scenarios, however, much work is still needed both to improve the existing network infrastructure and to explore SDN's potential for solving problems in network virtualization. Examples include SDN applications in scenarios such as home networks, enterprise networks, Internet exchange points, cellular networks, Wi-Fi radio access networks, and the joint management of end-host applications.

    1.3.8. Security, Scalability and Availability aspects

    Since the SDN concept became a prominent research topic in the area of computer networks, many studies have discussed fundamental aspects such as its scalability, availability and, in particular, security.

    Even though scalability issues apply both to the controller and to the forwarding nodes, the latter are not specifically affected by the SDN technology and, thus, we focus only on the former. Specifically, there are three main challenges to attaining controller scalability [Yeganeh et al. 2013, Sezer et al. 2013], all of which originate in the fact that the network's intelligence is moved from the distributed forwarding nodes to the control plane: (1) the latency incurred by the communications between the forwarding nodes and the controller(s); (2) the size of the controller's flow database; and (3) the communication between controllers in a physically distributed control plane architecture. As previously mentioned in Section 1.3.4, the first challenge may be tackled with a proactive approach, i.e., by installing most flow rules on the SDN-enabled switches in advance, so that they do not need to contact the controllers too frequently. Even though this might sacrifice flexibility, it may be inevitable, especially for large flows.
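The effect of the proactive approach on controller load can be illustrated with a toy model; the Switch class, the match strings, and the rule format below are illustrative, not an OpenFlow API:

```python
# Toy model: a switch that asks the controller (a "packet-in") whenever it
# lacks a matching rule, versus one with rules installed proactively.
class Switch:
    def __init__(self, proactive_rules=()):
        self.flow_table = dict(proactive_rules)  # match -> action
        self.controller_queries = 0

    def forward(self, match):
        if match not in self.flow_table:
            self.controller_queries += 1          # round trip to controller
            self.flow_table[match] = "out:1"      # controller installs rule
        return self.flow_table[match]

reactive = Switch()
proactive = Switch(proactive_rules={"10.0.0.%d" % i: "out:1"
                                    for i in range(100)})

# Same traffic, very different control-plane load.
for i in range(100):
    reactive.forward("10.0.0.%d" % i)
    proactive.forward("10.0.0.%d" % i)
```

The reactive switch pays one controller round trip per new flow, which is exactly the latency cost (1) described above; the proactive switch pays none, at the price of committing flow-table entries (and policy) ahead of time.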

    Another strategy for tackling latency issues in the control plane, as well as the size of the flow databases, consists in using multiple controllers. As a result, they can share the communication burden, reducing delays potentially caused by queuing requests coming from switches, as well as the storage of flow information, since each controller is responsible for a subset of the forwarding elements. However, this also aggravates the third challenge, due to the need for further interactions between controllers to ensure a unified view of the network [Sezer et al. 2013]. Nonetheless, since a distributed controller architecture also improves availability by improving the system's resiliency to failures, there have been many proposals focused on improving the scalability of this approach. One example is HyperFlow [Tootoonchian and Ganjali 2010], a NOX-oriented application that can be installed on all network controllers to create a powerful event propagation system based on a publish/subscribe messaging paradigm: basically, each controller publishes events related to network changes to the other controllers, which in turn replay those events to proactively propagate the information throughout the whole control plane. Another strategy, adopted in Onix [Koponen et al. 2010] and ONOS [Berde et al. 2014], consists in empowering control applications with general APIs that facilitate access to network state information. All things considered, ensuring scalability and state consistency among all controllers, as well as a reasonable level of flexibility, ends up being an important design trade-off in SDNs [Yeganeh et al. 2013].
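The publish/subscribe propagation used by HyperFlow can be sketched with a toy model; the in-memory channel below is an illustrative stand-in for the real distributed messaging substrate, and the class and event names are assumptions:

```python
# Toy publish/subscribe channel connecting all controllers.
class Channel:
    def __init__(self):
        self.subscribers = []

    def publish(self, event):
        for ctrl in self.subscribers:
            ctrl.replay(event)

class Controller:
    def __init__(self, name, channel):
        self.name, self.channel, self.view = name, channel, set()
        channel.subscribers.append(self)

    def local_event(self, event):
        # A change observed locally is recorded and published to the others.
        self.view.add(event)
        self.channel.publish(event)

    def replay(self, event):
        # Remote events are replayed to rebuild the same network view.
        self.view.add(event)

ch = Channel()
c1, c2 = Controller("c1", ch), Controller("c2", ch)
c1.local_event(("link_up", "s1", "s2"))
assert c1.view == c2.view   # both controllers now share the same view
```

Each controller keeps serving its own switches locally (low latency), while the replayed event stream keeps the logically centralized view consistent, which is the trade-off the paragraph above describes.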


    Regarding security, the SDN technology brings both new opportunities and challenges (for a survey of both views, see [Scott-Hayward et al. 2013]). On the positive side, SDN can enhance network security when the control plane is seen as a tool for packet monitoring and analysis that is able to propagate security policies (e.g., access control [Nayak et al. 2009]) along the entire network in response to attacks [Scott-Hayward et al. 2013]. In addition, with the higher control over how packets are routed provided by SDN, one can install security appliances such as firewalls and IDSs in any part of the network, not only at its edges [Gember et al. 2012]: as long as the controllers steer the corresponding traffic to those nodes, the packets can be analyzed and treated accordingly. This flexibility is, for example, at the core of the Software Defined Perimeter (SDP) concept [Bilger et al. 2013], by means of which all devices trying to access a given network infrastructure must be authenticated and authorized before the flow rules that allow their entrance are installed in the network's forwarding elements. It is also crucial for thwarting denial-of-service (DoS) attacks, since then the task of discarding malicious packets is not concentrated on one or a few security devices near the attack's target, but distributed along the network [YuHunag et al. 2010]. Another interesting application of SDNs for thwarting DoS, as well as other threats targeting the same static IP (e.g., port scanning or worm propagation), is to create the illusion of a moving target, i.e., by having the SDN translate the host's persistent address to different IPs over time [Jafarian et al. 2012].
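The moving-target idea can be sketched as a time-varying address translation; the derivation scheme below is purely illustrative (a keyed hash over a time interval) and is not the actual mechanism of [Jafarian et al. 2012]:

```python
import hashlib

# Illustrative moving-target scheme: the controller derives a virtual IP for
# each host and time interval, and installs translation rules accordingly.
def virtual_ip(real_ip, interval, secret="k3y"):
    """Derive a deterministic, time-varying virtual IP for a host.
    (Toy scheme for illustration; not a real MTD algorithm.)"""
    digest = hashlib.sha256(
        ("%s|%d|%s" % (real_ip, interval, secret)).encode()).digest()
    return "10.99.99.%d" % digest[0]   # last octet drawn from the hash

# A scanner probing the network in different intervals sees different
# addresses for the same host, while the controller, knowing the secret,
# keeps translating flows to the host's persistent real address.
v_now = virtual_ip("10.0.0.5", interval=1)
v_later = virtual_ip("10.0.0.5", interval=2)
```

Both controller-side derivation and switch-side rewriting are cheap, which is what makes this kind of defense practical to deploy network-wide rather than at a single middlebox.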

    Whereas the security enhancements resulting from the SDN approach are commonly recognized, it also brings security risks that need to be addressed. In [Kreutz et al. 2013], seven main threat vectors are identified, the first three being SDN-specific: (1) attacks on control plane communications, especially when they are made through insecure channels; (2) attacks on and vulnerabilities in controllers; (3) lack of mechanisms to ensure trust between the controller and management applications; (4) forged traffic flows; (5) attacks exploiting vulnerabilities in switches; (6) attacks on and vulnerabilities in administrative stations that access the SDN controllers; and (7) the lack of trusted resources for forensics and remediation. Such threats usually require holistic solutions providing authentication and authorization mechanisms for handling the different entities configuring the network and detecting anomalies. This need is addressed, for example, by the FortNOX security kernel [Porras et al. 2012], as well as by its successor, Security-Enhanced Floodlight [Porras et al. 2015], which enable automated security services while enforcing consistency of flow policies and role-based authorization; it is also the focus of FRESCO [Shin et al. 2013], an application framework that facilitates the development and deployment of security applications in OpenFlow networks. There are also solutions focused on specific issues, such as identifying conflicts and inconsistencies between the policies defined by multiple applications [Al-Shaer and Al-Haj 2010, Canini et al. 2012, Khurshid et al. 2013] or facilitating auditing and debugging [Handigol et al. 2012b, Khurshid et al. 2013]. Nevertheless, there is much room for innovation in the field, as articles proposing solutions for SDN security issues are still considerably less prevalent in the literature than those focusing on using the SDN paradigm to provide security services [Scott-Hayward et al. 2013].
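The kind of inter-application policy conflict these systems look for can be illustrated with a toy checker; the rule representation below (dicts of match fields with implicit wildcards, plus an action) is an assumption, far simpler than the actual analysis performed by FortNOX or the verification tools cited above:

```python
# Toy detector for conflicting flow policies installed by different apps.
def overlaps(m1, m2):
    """Two matches overlap if, on every field, they agree or one wildcards it
    (a missing field is treated as a wildcard)."""
    return all(m1.get(f, "*") == "*" or m2.get(f, "*") == "*" or m1[f] == m2[f]
               for f in set(m1) | set(m2))

def find_conflicts(rules):
    """Return pairs of rules whose matches overlap but whose actions differ."""
    pairs = []
    for i, (m1, a1) in enumerate(rules):
        for m2, a2 in rules[i + 1:]:
            if overlaps(m1, m2) and a1 != a2:
                pairs.append((m1, m2))
    return pairs

rules = [
    ({"src": "10.0.0.1"}, "allow"),   # installed by a forwarding app
    ({"dst": "10.0.0.9"}, "drop"),    # installed by a firewall app
]
# Both rules can match a packet from 10.0.0.1 to 10.0.0.9, with
# contradictory actions: a conflict a security kernel must arbitrate.
conflicts = find_conflicts(rules)
```

Real systems go further: FortNOX, for instance, also ranks the conflicting applications by role so that security-critical rules win the arbitration rather than merely being flagged.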
