Research Themes


Amazon Elastic Compute Cloud (EC2) offers developers access to a virtually unlimited number of computing resources. Commercial and scientific applications are now developed with this possibility in mind. Designing systems in which several components execute concurrently is still a challenge for both parallel and distributed applications. Google's MapReduce programming model captured the attention of the developer community precisely because it makes developing this kind of application easier. Apache Hadoop is now the de facto standard MapReduce execution framework and is used by thousands of developers and companies.
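As a concrete illustration of the model (a minimal sketch, not tied to Hadoop's actual API), a MapReduce computation can be expressed as a map phase that emits key/value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the input split."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values of each key (here, a sum)."""
    return {key: sum(values) for key, values in groups.items()}

# Word count over two input splits, processed independently by the mappers.
splits = ["to be or not to be", "to see or not to see"]
intermediate = chain.from_iterable(map_phase(s) for s in splits)
counts = reduce_phase(shuffle(intermediate))
```

In a real framework such as Hadoop, the splits would be processed by mappers on different nodes and the shuffle would move data across the network; here the three phases run sequentially only to show the data flow.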

Scalable solutions for executing large MapReduce applications, such as Amazon Elastic MapReduce, are built on Apache Hadoop. However, the architecture of Hadoop may cause scalability problems because of its dependence on centralized components. The Master node is a known single point of failure and can become a bottleneck for MapReduce applications.

The Apache Hadoop development team has recently released a new version of the framework to mitigate this problem. The Hadoop 2.0 (YARN) series introduces architectural changes to alleviate the bottlenecks caused by centralized components. Even so, some decisions are still made by centralized agents.

Apache Hadoop's choice is understandable. On the one hand, centralized entities in distributed systems have the advantage of a global view of the system, which can be used to make decisions that improve overall performance. The centralized components of Hadoop know all the nodes and their current loads, and have a general view of the data distribution on HDFS. With all this information, it is easier to optimize node resource utilization. On the other hand, resource-usage optimization problems (e.g., load balancing) are known to be NP-hard. Furthermore, for a large number of resources, decentralizing the decision-making system can avoid bottlenecks on the centralized entities and lead to better scalability.
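To give a flavor of the optimization problem a centralized scheduler faces: minimum-makespan load balancing is NP-hard, yet a simple greedy heuristic gives a provably bounded approximation. The sketch below (illustrative only; the task costs are made-up numbers) assigns each task, longest first, to the currently least-loaded node:

```python
import heapq

def greedy_balance(task_costs, n_nodes):
    """Greedy list scheduling: assign each task to the currently
    least-loaded node. Processing tasks longest-first (the LPT rule)
    guarantees a makespan within 4/3 of the NP-hard optimum."""
    # Min-heap of (load, node_id); the root is always the least-loaded node.
    loads = [(0, node) for node in range(n_nodes)]
    heapq.heapify(loads)
    assignment = {node: [] for node in range(n_nodes)}
    for cost in sorted(task_costs, reverse=True):
        load, node = heapq.heappop(loads)
        assignment[node].append(cost)
        heapq.heappush(loads, (load + cost, node))
    return assignment

# Five tasks on two nodes; the optimum groups {5, 4} and {3, 3, 3}
# for a makespan of 9, while the heuristic reaches 10.
assignment = greedy_balance([5, 4, 3, 3, 3], 2)
makespan = max(sum(tasks) for tasks in assignment.values())
```

A centralized scheduler can run such heuristics with full knowledge of every node's load; a decentralized design must approximate that global view, which is exactly the trade-off studied here.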

In this work, we will study the trade-offs between centralization and decentralization in different kinds of distributed systems. Our goal is to analyze the cost of the decision-making process in different applications and to characterize the conditions under which a centralized decision can lead to better performance than its distributed counterpart.

Our work begins with the study of the aforementioned Apache Hadoop, first analyzing the impact of the YARN changes on its centralized components. For this step, we will run simulations and analyze real large-scale applications while monitoring the resource usage and response time of those components. After analyzing the behavior of both versions (before and after the YARN changes), we will be able to predict when such components can become a bottleneck in the newer version. Then, we will propose a distributed version capable of handling systems with larger numbers of tasks and clusters by using methods and results from algorithmic game theory.

The work continues by analyzing other MapReduce applications and inputs. Later, problems not related to MapReduce can also be investigated, such as web service orchestrations versus choreographies in large-scale environments. In the end, we will better understand how far a system can scale, and how large the problems it solves can grow, before requiring code changes. Moreover, we will find out which changes and patterns are most suitable for large-scale systems.



  1. Dependency Management and Dynamic Adaptation of Choreographies

    Interdependencies between modules have been an object of study since Parnas [Pa79]. Highly coupled modules are difficult to work with, because they are difficult to understand in isolation and changes performed in them propagate throughout the system [SJSJ05]. The effects of a lack of dependency management become evident in the long term, due to the increasing cost of software evolution. When the software is composed of web services, the problems and difficulties increase: the software components are no longer under the control of a single organization, and the infrastructure that connects the services changes dynamically.

    Robert Martin [Ma06] presents symptoms of an eroded architecture in terms of dependencies between modules. Martin informally calls them bad smells: rigidity, fragility, immobility, and viscosity. Rigidity is the tendency of a change to one element to cause a cascade of problems. Fragility is related to rigidity and refers to the tendency of software to break in many different, conceptually unrelated areas when a single change is made. In a system with high fragility, developers become afraid of performing non-critical changes to the code, since they could cause unpredictable side effects. Immobility is the difficulty of reusing code, which happens when the risk of separating the desired parts is so high that developers prefer to rewrite a module instead of reusing it. Viscosity is the difficulty of implementing changes the "right way", leading to structural hacks that degrade software stability.
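    Martin also proposes metrics to quantify such dependency structures; his instability metric I = Ce / (Ca + Ce), the ratio of efferent (outgoing) to total coupling, is one example. The sketch below computes it over a small, purely hypothetical service dependency graph (the service names are invented for illustration):

```python
def instability(graph):
    """Martin's instability metric I = Ce / (Ca + Ce), where Ce is
    efferent coupling (outgoing dependencies) and Ca is afferent
    coupling (incoming dependencies). I near 1 means the module mostly
    depends on others and is easy to change; I near 0 means many
    modules depend on it, so changes ripple outward."""
    afferent = {module: 0 for module in graph}
    for deps in graph.values():
        for target in deps:
            afferent[target] += 1
    return {module: len(deps) / (len(deps) + afferent[module])
            for module, deps in graph.items()
            if len(deps) + afferent[module] > 0}

# Hypothetical service dependency graph: service -> services it calls.
services = {
    "billing":  ["payments", "users"],
    "payments": ["users"],
    "users":    [],
    "reports":  ["billing", "users"],
}
scores = instability(services)
```

    In this toy graph, "users" scores 0 (everything depends on it, so it is a change hotspot), while "reports" scores 1 (nothing depends on it, so it is safe to modify).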

    In the context of service choreographies, there is a requirement to synthesize abstract plans into concrete choreographies that coordinate the services. This synthesis must take into account the minimization of dependencies between services. One strategy to be investigated is the dynamic creation of adaptors and proxies for groups of services based on dependency analysis, so as to favor substitution and reuse. Some studies in the literature [BTRR04, RBK05, CH06, ZPPNRY07] investigate dependency management through the instrumentation of services and middleware, as well as the definition of relationship rules and a dependency-aware service-oriented architecture (SOA). Architectural models can also be used to enable the dynamic adaptation of interdependencies, taking into account the variability of resources, user mobility, changes in requirements, system failures, connection intermittency, scalability, and location stability [CGSSH02].

    In this research project, we propose novel mechanisms and instruments to manage the interdependencies between services distributed throughout the Internet, in order to identify and prevent architectural degradation and to better orchestrate and choreograph the system based on stability and dependency analysis. In this distributed environment, several aspects must be taken into consideration by the software architecture, such as dynamism, heterogeneity, scalability, and the ability to evolve. In this work, traditional methods for dependency management and software stability analysis [Ma06, FA01] will be extended. We will also investigate mechanisms for the identification of dependencies, stability and coupling analysis algorithms, the definition of architectural rules and restrictions, and supporting tools to assist web service developers.


    [Pa79] Parnas, D.L., "Designing Software for Ease of Extension and Contraction", IEEE Transactions on Software Engineering, SE-5(2), 1979.

    [SJSJ05] Sangal, N., Jordan, E., Sinha, V., and Jackson, D. 2005. Using dependency models to manage complex software architecture. In Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (San Diego, CA, USA, October 16 - 20, 2005). OOPSLA '05. ACM, New York, NY, 167-176.

    [Ma06] Martin, R. “Agile Principles, Patterns and Practices in C#”, Prentice Hall, 2006.

    [BTRR04] N. Badr, A. Taleb-Bendiab, M. Randles, D. Reilly, "A Deliberative Model for Self-Adaptation Middleware Using Architectural Dependency", Proceedings of the 15th International Workshop on Database and Expert Systems Applications (DEXA'04).

    [RBK05] Randic, M., Blaskovic, B., and Knezevic, P., "Modeling Service Dependencies in Ad Hoc Collaborative Systems". EUROCON 2005, 1-4244-0049-X, pp. 1842-1845.

    [CH06] Cervantes, H. and Hall, R.S., "Automating Service Dependency Management in a Service-Oriented Component Model". ICSE CBSE6 Workshop, 2003.

    [ZPPNRY07] Zhou, J., Pakkala, D., Perala, J., Niemela, E., Riekki, J., and Ylianttila, M., "Dependency-aware Service Oriented Architecture and Service Composition", IEEE International Conference on Web Services 2007, pp. 1146-1149.

    [CGSSH02] Cheng, S., Garlan, D., Schmerl, B., Sousa, J.P., Steenkiste, P., and Hu, N., "Software Architecture-Based Adaptation for Pervasive Systems", Trends in Network and Pervasive Computing, Lecture Notes in Computer Science, Volume 2299/2002, pp. 217-233.

    [FA01] Fayad, M. E. and Altman, A. 2001. Thinking objectively: an introduction to software stability. Commun. ACM 44, 9 (Sep. 2001), 95.

  2. Verification and Validation of Choreographies

    Software testing has been studied for over 30 years [My79] as a mechanism for improving software quality. In the last decade, the application of automated testing during the development of complex systems has proved to be a useful mechanism for decreasing the number of defects [WMV03] and increasing software development speed [BE02, ME07, CG09]. Methodologies and test automation tools have become widely used by industry, both in traditional software development processes and in the context of Agile Methods [Co02]. Nowadays there are many good tools that support test automation. In the open source world, for instance, testing frameworks such as xUnit (JUnit, CppUnit, C# Unit, SUnit, etc.), TestNG, EasyMock, Mockito, RSpec, and Selenium provide a solid basis for building test suites. Tools such as Emma and JaBUTi help assess the quality of test suites, assisting developers in improving their tests; finally, tools like Testability-Explorer help developers write more testable code.
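    As a minimal example of the xUnit style mentioned above (using Python's unittest purely for illustration; the function under test is invented), each test_* method is one automated, repeatable check:

```python
import unittest

def word_count(text):
    """Function under test: count occurrences of each word."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

class WordCountTest(unittest.TestCase):
    """xUnit-style test case: setup, exercise, and assert per method."""

    def test_counts_repeated_words(self):
        self.assertEqual(word_count("a b a")["a"], 2)

    def test_empty_input_yields_empty_result(self):
        self.assertEqual(word_count(""), {})

# Run the whole suite programmatically, as a CI server would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    The value of this style is that the suite is executable and reproducible: any change to word_count can be re-validated automatically, which is precisely what is missing for multi-node choreographies.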

    Nevertheless, all these approaches apply to conventional (client/server) systems. There is a lack of similar tools for testing distributed systems composed of multiple nodes, such as complex web service choreographies. Thus, in the majority of SOA projects, V&V activities such as testing a web service composition are either not automated or are performed manually, with high cost and low efficiency and effectiveness. Moreover, manual execution is tedious, error-prone, and not reproducible. Finally, in the few projects where there are automated tests, they are written using ad hoc approaches.

    The dynamic and scalable nature of web service compositions (e.g., orchestrations and choreographies) does not allow the application of traditional strategies and techniques; thus, systematic studies of new techniques, or of the adaptation of existing ones, are needed. This research has to face challenges inherent to large-scale systems, such as (i) variability and dependencies of web services; (ii) unavailability of source code (third-party services); and (iii) governance issues and the dynamism of the environment.

    Since composability is one of the fundamental principles of SOA [ERL07], one of our goals in this project is to develop a TDD (Test-Driven Development) methodology that will help developers and project leaders deal with the key issues involved in testing web service choreographies. To achieve this goal, we first intend to develop an open source environment to support the proposed methodology. This environment will include (1) the definition or adaptation of a simple language for specifying the deployment of a distributed application (and its associated choreographies) across the network in a reproducible way, (2) the construction of a tool for parsing specifications written in this language and deploying the application, and (3) the development of a framework for writing and executing tests in the distributed system at (testing) runtime.
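    The deployment specification language in item (1) is yet to be defined; as a hedged sketch of the idea, a toy format could map each service of a choreography to a target node, with a small parser turning it into a reproducible deployment plan (the format, service names, and node names below are all hypothetical):

```python
# Hypothetical minimal deployment specification: each non-comment line
# maps a service of the choreography to the node it should run on.
SPEC = """
# service -> node
airline   node-1
hotel     node-1
payment   node-2
"""

def parse_spec(text):
    """Parse the toy specification into a {service: node} plan,
    ignoring blank lines and '#' comments."""
    plan = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        service, node = line.split()
        plan[service] = node
    return plan

def deployment_order(plan):
    """Group services by node so each node can be provisioned once."""
    by_node = {}
    for service, node in sorted(plan.items()):
        by_node.setdefault(node, []).append(service)
    return by_node

plan = parse_spec(SPEC)
```

    Because the specification is a plain artifact checked into the project, the same topology can be re-deployed before every test run, which is what makes the test results reproducible.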


    [My79] Glenford J. Myers. The Art of Software Testing. John Wiley and Sons, New York. 1979.

    [WMV03] Laurie A. Williams, E. Michael Maximilien, Mladen A. Vouk. “Test-Driven Development as a Defect-Reduction Practice” in 14th International Symposium on Software Reliability Engineering. IEEE Computer Society, 2003.

    [BE02] Kent Beck. Test-Driven Development: By Example. Addison-Wesley, 2002.

    [ME07] Gerard Meszaros. XUnit Test Patterns: Refactoring Test Code. Addison-Wesley, 2007.

    [CG09] Lisa Crispin and Janet Gregory. Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley, 2009.

    [Co02] Alistair Cockburn. Agile Software Development. Addison-Wesley Longman. 2002.

    [ERL07] Thomas Erl. SOA Principles of Service Design (Prentice Hall Service-Oriented Computing Series). Prentice Hall, 1st edition, August 2007.

  3. Development of a Testbed Environment for the Evaluation of Large-Scale Compositions and Analysis of Choreography Service Allocation to Computing Nodes in a Grid or Cloud Environment

    We are implementing synthetic orchestrations (and choreographies) that will run initially on Amazon's EC2 and later in other cloud environments (such as Open Cirrus). The goal is to have a very easy way to create large-scale orchestrations and choreographies for our experiments. This environment will also be used later as a use case to support the development of methods for V&V of choreographies and for dependency management.

    Although the analysis of scalability and performance has not been the focus of the previous years of the project, the knowledge obtained in that period is invaluable for beginning to understand the impact that the choice of technologies, and the environment in which the choreographies are executed, have on the final solution. One of the expected results of the second research line is a synthetic choreography generator that can produce large-scale choreographies to be studied and analyzed. These large-scale choreographies will be an invaluable resource for investigating the performance implications of service allocation in grid and cloud environments.

    However, to carry out a comprehensive analysis, one of the first concerns that must be taken into consideration is that the allocation of services across the available machines in the cloud (or grid, depending on the context) has a great influence on the end result. In this part of the project, we intend to minimize the total execution time of a choreography. The first important factor to be considered is that the amount of time necessary to communicate varies between each machine pair, as the machines might be in the same or in distinct networks. Although being thousands of miles apart does not necessarily imply higher latency and lower bandwidth, this is usually the case, meaning that a clever and dynamic way to allocate services to the available machines in the grid/cloud must be devised, as the properties of the network are themselves dynamic. To allocate, or schedule, these processes to the available machines in a dynamic fashion, techniques such as migration, duplication, and preemption of the services, or even of the choreography of which they are part, could be used. The criteria for doing so would have to be based on measurable attributes of the whole system, such as the processing, memory, and network loads and capacities of the individual machines.

    On the other hand, in recent years there have also been significant changes in the hardware context. The high demand for performance has led hardware engineers to adopt all sorts of solutions to keep up with Moore's law, including the advent of multi-core processors. Multi-core processors are now commonplace in personal computers and high-performance hardware: processors with 2, 4, or even 8 cores are readily available on the market, and an increasing share of the computers on the Top500 [Top10] list are equipped with multi-core processors. The problem with the use of multi-core processors is two-fold. First, most currently existing software is not capable of profiting from the new hardware (it was not designed to cope with multi-threaded environments), and most people find concurrent programming difficult. The actor model [HBS73], implemented by some programming languages [Erl10, Sca10], and Apple's GCD [Com10] are some of the relevant works in this area. Second, the allocation of processes to the available cores cannot disregard the impact of the memory hierarchy [CCH+06, KCS04, RMC+09] (with effects even more prominent in a NUMA architecture [Kle04]) and the memory bandwidth bottleneck [FJMS07]. In this project, we are interested in the latter.
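    The communication-cost argument above can be made concrete with a toy heuristic (not the project's algorithm; the communication volumes and node capacities are made-up numbers): place the most heavily communicating task pairs on the same node first, falling back to the least-loaded node when a node is full, so that the heaviest traffic stays off the network:

```python
def place_tasks(comm, capacity):
    """comm: {(task_a, task_b): messages exchanged}.
    capacity: {node: max tasks}. Returns {task: node}."""
    placement = {}
    load = {node: 0 for node in capacity}

    def assign(task, preferred):
        # Try the preferred node first, then fall back to the least-loaded.
        candidates = [preferred] if load[preferred] < capacity[preferred] else []
        candidates += sorted(capacity, key=lambda n: load[n])
        for node in candidates:
            if load[node] < capacity[node]:
                placement[task] = node
                load[node] += 1
                return

    # Consider the heaviest-communicating pairs first.
    for (a, b), _ in sorted(comm.items(), key=lambda kv: -kv[1]):
        if a not in placement and b not in placement:
            target = min(capacity, key=lambda n: load[n])
            assign(a, target)
            assign(b, placement[a])
        elif a in placement and b not in placement:
            assign(b, placement[a])
        elif b in placement and a not in placement:
            assign(a, placement[b])
    return placement

comm = {("map1", "reduce1"): 90, ("map2", "reduce1"): 10, ("map2", "map1"): 5}
placement = place_tasks(comm, {"node-a": 2, "node-b": 2})
```

    The heaviest pair (map1, reduce1) ends up co-located, while map2 is pushed to the other node once the first one fills up; a dynamic scheduler would additionally re-evaluate this placement as network properties change.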

    A choreography can be seen as a parallel application composed of several tasks that communicate with each other. So, to use well-known terminology, we will talk not about the allocation of services to computers but about the allocation of tasks to computing nodes. To effectively analyze and optimize the performance of large-scale choreographies, we should also optimize performance locally: the tasks to be executed on each computer have to be elected, and the tasks on individual machines need to be allocated in a convenient way. A bad choice of elected tasks, or an inefficient allocation to the cores, can lead to avoidable delays. Current solutions in the parallel computing context do not fully take into account memory bottlenecks or cache misses. To do so, the scheduler should take into account the profile of each running task or service, that is, whether it is memory-, CPU-, or I/O-intensive. This profiling should be dynamic, as the application behavior can change from execution to execution, or even during its lifecycle. Our aim is to perform these optimizations in such a way that they can be used to improve a system executing on an SMP, NUMA, grid, or cloud architecture.
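    As a minimal sketch of profile-aware local scheduling (the profile labels are hypothetical, e.g. derived from hardware performance counters), memory-intensive tasks can be paired with CPU- or I/O-intensive ones on cores that share memory bandwidth, instead of with each other:

```python
def pair_for_shared_cache(tasks):
    """tasks: {name: profile}, profile in {'memory', 'cpu', 'io'}.
    Returns a list of task pairs to co-schedule on cores sharing
    memory bandwidth, mixing profiles whenever possible so that two
    memory-bound tasks do not contend for the same bandwidth."""
    memory_bound = [t for t, p in tasks.items() if p == "memory"]
    others = [t for t, p in tasks.items() if p != "memory"]
    pairs = []
    # Put each memory-bound task next to a non-memory-bound one first.
    while memory_bound and others:
        pairs.append((memory_bound.pop(), others.pop()))
    # Pair up whatever remains.
    leftover = memory_bound or others
    while len(leftover) >= 2:
        pairs.append((leftover.pop(), leftover.pop()))
    return pairs

# Hypothetical profiles for four tasks on a machine with two
# bandwidth-sharing core pairs.
tasks = {"t1": "memory", "t2": "memory", "t3": "cpu", "t4": "io"}
pairs = pair_for_shared_cache(tasks)
```

    In a real scheduler the profiles would be measured and updated at runtime, since, as noted above, a task's behavior can change during its lifecycle.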


    [CCH+06] Hu Chen, Wenguang Chen, Jian Huang, Bob Robert, and H. Kuhn. MPIPP: an automatic profile-guided parallel process placement toolset for SMP clusters and multiclusters. In ICS '06: Proceedings of the 20th Annual International Conference on Supercomputing, pages 353-360, New York, NY, USA, 2006. ACM.

    [Com10] Apple Computers. Grand Central Dispatch (GCD) reference, Mac OS X reference library, 2010.

    [Erl10] Erlang programming language website, 2010.

    [FJMS07] Dave Field, Deron Johnson, Don Mize, and Robert Stober. Scheduling to overcome the multicore memory bandwidth bottleneck, 2007.

    [HBS73] Carl Hewitt, Peter Bishop, and Richard Steiger. A universal modular actor formalism for artificial intelligence. In Proceedings of the 3rd International Joint Conference on Artificial Intelligence (IJCAI), 1973.

    [KCS04] Seongbeom Kim, Dhruba Chandra, and Yan Solihin. Fair cache sharing and partitioning in a chip multiprocessor architecture. In PACT ’04: Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques, pages 111–122, Washington, DC, USA, 2004. IEEE Computer Society.

    [RMC+09] Christiane Pousa Ribeiro, Jean-Francois Mehaut, Alexandre Carissimi, Marcio Castro, and Luiz Gustavo Fernandes. Memory affinity for hierarchical shared memory multiprocessors. In Symposium on Computer Architecture and High Performance Computing, pages 59-66, 2009.

    [Sca10] The Scala programming language website, 2010.

    [Top10] Top500 supercomputer sites webpage, 2010.