Academic Positions

  • 2015 - Present

    University Professor

    University of Milano-Bicocca, School of Science

  • 2007 - 2021

    Co-founder of University Spin-Off

    University of Camerino, e-Lios

  • 2014 - 2015

    Postdoctoral Researcher

    University of Lugano, Faculty of Informatics

  • 2009 - 2014

    Postdoctoral Researcher

    University of Milano-Bicocca, Computer Science Department

  • 2006 - 2008

    Visiting Researcher

    Stony Brook University, Computer Science Department

Education & Training

  • Ph.D. 2009

    Ph.D. in Information Science and Complex Systems

    University of Camerino

  • M.Sc. 2005

    M.Sc. in Computer Science

    University of Camerino

  • B.Sc. 2003

    B.Sc. in Computer Science

    University of Camerino

Research Summary

Much of my work lies in the field of software engineering, with an emphasis on software testing, program analysis and self-* systems.

Software testing is a common practice adopted by software engineers to validate software systems. Testing modern systems requires substantial developer effort, and testing costs often prevent engineers from exhaustively validating their systems. My research focuses on enhancing software testing practices through the automation of software testing activities. In particular, I focus on the automatic generation of tests and test data that are syntactically and semantically compliant with the domain of the software under test.

Program analysis is a software engineering practice that aims at analyzing the behavior of software systems through the inspection of static and dynamic information, such as source code or execution traces. My research covers both static and dynamic analysis. In particular, I focus on extracting models of program behavior, either statically from the source code or dynamically from execution traces, in order to exploit them for testing purposes.
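
To give a concrete flavor of this line of work, here is a minimal, self-contained sketch (illustrative only, not one of my tools) that mines a simple behavioral model from execution traces and uses it to check whether new traces conform to the mined behavior:

```java
import java.util.*;

// Illustrative sketch: infer a simple model of program behavior from
// execution traces by recording the observed event-to-event transitions,
// then use the model to flag traces that deviate from the mined behavior.
public class TraceModelSketch {
    private final Map<String, Set<String>> transitions = new HashMap<>();

    // Feed one execution trace, i.e., an ordered sequence of observed events.
    public void addTrace(List<String> trace) {
        for (int i = 0; i + 1 < trace.size(); i++) {
            transitions.computeIfAbsent(trace.get(i), k -> new TreeSet<>())
                       .add(trace.get(i + 1));
        }
    }

    // A trace is accepted if it only uses transitions seen while mining.
    public boolean accepts(List<String> trace) {
        for (int i = 0; i + 1 < trace.size(); i++) {
            Set<String> next = transitions.get(trace.get(i));
            if (next == null || !next.contains(trace.get(i + 1))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        TraceModelSketch model = new TraceModelSketch();
        // Hypothetical traces of a resource-usage protocol.
        model.addTrace(List.of("open", "read", "read", "close"));
        model.addTrace(List.of("open", "close"));
        System.out.println(model.accepts(List.of("open", "read", "close"))); // true
        System.out.println(model.accepts(List.of("read", "open")));          // false
    }
}
```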

Self-* systems are systems that automatically adjust their behavior in response to changing conditions of their work environments. The goal of this research is to manage the growing complexity of modern systems and enrich them with capabilities that occur in biological systems, such as self-configuration, self-optimization, self-healing and self-protection. My research focuses on modeling, analyzing and regulating the behavior of complex systems with the primary goal of avoiding or repairing faulty conditions. I am currently developing new techniques for failure prediction and automatic repair of software on an Internet (World) scale.

Interests

  • Software Engineering
  • Software Testing
  • Program Analysis
  • Self-* Systems

This is a selection of my publications which are potentially of broad interest. If you are looking for a specific publication not on this list, or need more detailed information on these topics, please contact me at oliviero.riganelli@unimib.it.

Most of the papers available from this page appear in print, and the corresponding copyright is held by the publisher. While the papers may be downloaded for personal use, redistribution or reprinting for commercial purposes is prohibited.

Predicting failures in multi-tier distributed systems.

L. Mariani and M. Pezzè and O. Riganelli and R. Xin
Journal Paper Journal of Systems and Software, Volume 161, March 2020, 110464

Abstract

Many applications are implemented as multi-tier software systems, and are executed on distributed infrastructures, like cloud infrastructures, to benefit from the cost reduction that derives from dynamically allocating resources on-demand. In these systems, failures are becoming the norm rather than the exception, and predicting their occurrence, as well as locating the responsible faults, are essential enablers of preventive and corrective actions that can mitigate the impact of failures, and significantly improve the dependability of the systems. Current failure prediction approaches suffer either from false positives or limited accuracy, and do not produce enough information to effectively locate the responsible faults. In this paper, we present PreMiSE, a lightweight and precise approach to predict failures and locate the corresponding faults in multi-tier distributed systems. PreMiSE blends anomaly-based and signature-based techniques to identify multi-tier failures that impact on performance indicators, with high precision and low false positive rate. The experimental results that we obtained on a Cloud-based IP Multimedia Subsystem indicate that PreMiSE can indeed predict and locate possible failure occurrences with high precision and low overhead.

Data loss detector: automatically revealing data loss bugs in Android apps

O. Riganelli and S. P. Mottadelli and C. Rota and D. Micucci and L. Mariani
Conference Paper In Proc. of the International Symposium on Software Testing and Analysis, 2020, Pages 141-152

Abstract

Android apps must work correctly even if their execution is interrupted by external events. For instance, an app must work properly even if a phone call is received, or after its layout is redrawn because the smartphone has been rotated. Since these events may require destroying, when the execution is interrupted, and recreating, when the execution is resumed, the foreground activity of the app, the only way to prevent the loss of state information is to save and restore it. This behavior must be explicitly implemented by app developers, who often fail to implement it properly, releasing apps affected by data loss problems, that is, apps that may lose state information when their execution is interrupted. Although several techniques can be used to automatically generate test cases for Android apps, the obtained test cases seldom include the interactions and the checks necessary to exercise and reveal data loss faults. To address this problem, this paper presents Data Loss Detector (DLD), a test case generation technique that integrates an exploration strategy, data-loss-revealing actions, and two customized oracle strategies for the detection of data loss failures. DLD revealed 75% of the faults in a benchmark of 54 Android app releases affected by 110 known data loss faults, and also revealed unknown data loss problems, outperforming competing approaches.
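
For readers unfamiliar with this class of faults, the following minimal Android snippet (all names are invented) shows the save/restore logic whose omission produces a data loss fault:

```java
import android.app.Activity;
import android.os.Bundle;

// Hypothetical example (invented names): the save/restore logic whose
// omission is a typical data loss fault targeted by DLD.
public class ScoreActivity extends Activity {
    private static final String KEY_SCORE = "score"; // hypothetical state key
    private int score;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // Restore the state after the activity is destroyed and recreated,
            // e.g., when the screen is rotated or a phone call is answered.
            score = savedInstanceState.getInt(KEY_SCORE);
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Forgetting this step makes the app lose `score` on interruption.
        outState.putInt(KEY_SCORE, score);
    }
}
```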

From source code to test cases: A comprehensive benchmark for resource leak detection in Android apps.

O. Riganelli and D. Micucci and L. Mariani
Journal Paper Software: Practice and Experience, Volume 49, Issue 3, 2019, Pages 540-548

Abstract

Android apps share resources, such as sensors, cameras, and Global Positioning System, that are subject to specific usage policies whose correct implementation is left to programmers. Failing to satisfy these policies may cause resource leaks, that is, apps may acquire but never release resources. This might have different kinds of consequences, such as apps that are unable to use resources or resources that are unnecessarily active wasting battery. Researchers have proposed several techniques to detect and fix resource leaks. However, the unavailability of public benchmarks of faulty apps makes comparison between techniques difficult, if not impossible, and forces researchers to build their own data set to verify the effectiveness of their techniques (thus, making their work burdensome). The aim of our work is to define a public benchmark of Android apps affected by resource leaks. The resulting benchmark, called AppLeak, is publicly available on GitLab and includes faulty apps, versions with bug fixes (when available), test cases to automatically reproduce the leaks, and additional information that may help researchers in their tasks. Overall, the benchmark includes a body of 40 faults that can be exploited to evaluate and compare both static and dynamic analysis techniques for resource leak detection.
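
As a simplified illustration of the usage policies involved (not an app from the benchmark), an app that acquires the camera must release it when paused; omitting the release below is the kind of leak AppLeak captures:

```java
import android.app.Activity;
import android.hardware.Camera;

// Simplified example of the acquire/release policy whose violation
// produces a resource leak (the camera stays locked for every other app).
public class CameraActivity extends Activity {
    private Camera camera;

    @Override
    protected void onResume() {
        super.onResume();
        camera = Camera.open(); // acquire the shared resource
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Omitting this release when the app goes to the background is the
        // kind of fault collected in the AppLeak benchmark.
        if (camera != null) {
            camera.release();
            camera = null;
        }
    }
}
```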

Controlling Interactions with Libraries in Android Apps Through Runtime Enforcement.

O. Riganelli and D. Micucci and L. Mariani
Journal Paper ACM Transactions on Autonomous and Adaptive Systems, Volume 14, Issue 2, 2019, Pages 8:1-8:29

Abstract

Android applications are executed on smartphones equipped with a variety of resources that must be properly accessed and controlled, otherwise the correctness of the executions and the stability of the entire environment might be negatively affected. For example, apps must properly acquire, use, and release microphones, cameras, and other multimedia devices, otherwise the behavior of the apps that use the same resources might be compromised. Unfortunately, several apps do not use resources correctly, for instance, due to faults and inaccurate design decisions. By interacting with these apps, users may experience unexpected behaviors, which in turn may cause instability and sporadic failures, especially when resources are accessed. In this article, we present an approach that lets users protect their environment from the apps that use resources improperly by enforcing the correct usage protocol. This is achieved by using software enforcers that can observe executions and change them when necessary. For instance, enforcers can detect that a resource has been acquired but not released and automatically perform the release operation, thus making that same resource available to the other apps. The main idea is that software libraries, in particular the ones controlling access to resources, can be augmented with enforcers that can be activated and deactivated on demand by users to protect their environment from unwanted app behaviors. We call the software libraries augmented with one or more enforcers proactive libraries, because the activation of the enforcer decorates the library with proactive behaviors that can guarantee the correctness of the execution despite the invocation of the operations implemented by the library. For example, enforcers can detect that a resource has not been released on time and proactively release it. Our experimental results with 27 possible misuses of resources in real Android apps reveal that proactive libraries are able to effectively correct library misuses with negligible runtime overheads.
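
The following rough sketch conveys the enforcer idea with invented names (it is not the actual API of proactive libraries): a wrapper observes acquire and release operations and proactively performs a missed release:

```java
import android.hardware.Camera;

// Rough sketch of an enforcer, with invented names (not the paper's API):
// it observes acquire/release operations on a library and can proactively
// perform a release that the app failed to issue.
public class CameraEnforcer {
    private Camera camera;
    private boolean held = false;

    public Camera open() {
        camera = Camera.open();
        held = true;
        return camera;
    }

    public void release() {
        if (held) {
            camera.release();
            held = false;
        }
    }

    // Invoked when the usage protocol requires the resource to be free,
    // e.g., when the foreground activity is paused.
    public void onResourceMustBeFree() {
        if (held) {
            release(); // the app missed the release: correct the execution
        }
    }
}
```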

A Benchmark of Data Loss Bugs for Android Apps.

O. Riganelli and M. Mobilio and D. Micucci and L. Mariani
Conference Paper In Proc. of the International Conference on Mining Software Repositories, 2019, Pages 582-586

Abstract

Android apps must be able to deal with both stop events, which require immediately stopping the execution of the app without losing state information, and start events, which require resuming the execution of the app at the same point it was stopped. Support for these kinds of events must be explicitly implemented by developers, who unfortunately often fail to implement the proper logic for saving and restoring the state of an app. As a consequence, apps can lose data when moved to the background and then back to the foreground (e.g., to answer a call) or when the screen is simply rotated. These faults can be the cause of annoying usability issues and unexpected crashes. This paper presents a public benchmark of 110 data loss faults in Android apps that we systematically collected to facilitate research and experimentation with these problems. The benchmark is available on GitLab and includes the faulty apps, the fixed apps (when available), the test cases to automatically reproduce the problems, and additional information that may help researchers in their tasks.

FILO: FIx-LOcus Recommendation for Problems Caused by Android Framework Upgrade

M. Mobilio and O. Riganelli and D. Micucci and L. Mariani
Conference Paper In Proc. of the International Symposium on Software Reliability Engineering (ISSRE), 2019, Pages 358-368

Abstract

Dealing with the evolution of operating systems is challenging for developers of mobile apps, who have to deal with frequent upgrades that often include backward-incompatible changes to the underlying API framework. As a consequence of framework upgrades, apps may show misbehaviours and unexpected crashes once executed within an evolved environment. Identifying the portion of the app that must be modified to correctly execute on a newly released operating system can be challenging. Although incompatibilities are visible at the level of the interactions between the app and its execution environment, the actual methods to be changed are often located in classes that do not directly interact with any external element. To facilitate debugging activities for problems introduced by backward-incompatible upgrades of the operating system, this paper presents FILO, a technique that can recommend the method that must be changed to implement the fix from the analysis of a single failing execution. FILO can also select key symptomatic anomalous events that can help the developer understand the reason for the failure and facilitate the implementation of the fix. Our evaluation with multiple known compatibility problems introduced by Android upgrades shows that FILO can effectively and efficiently identify the faulty methods in the apps.

The Next Generation Platform as A Service: Composition and Deployment of Platforms and Services.

A. Mimidis-Kentis and J. Soler and P. Veitch and A. Broadbent and M. Mobilio and O. Riganelli and S. Van Rossem and W. Tavernier and B. Sayadi
Journal Paper Future Internet, Volume 11, Issue 5, 2019, Pages 119-139

Abstract

The emergence of widespread cloudification and virtualisation promises increased flexibility, scalability, and programmability for the deployment of services by Vertical Service Providers (VSPs). This cloudification also improves service and network management, reducing the Capital and Operational Expenses (CAPEX, OPEX). A truly cloud-native approach is essential, since 5G will provide a diverse range of services - many requiring stringent performance guarantees while maximising flexibility and agility despite the technological diversity. This paper proposes a workflow based on the principles of build-to-order, Build-Ship-Run, and automation; following the Next Generation Platform as a Service (NGPaaS) vision. Through the concept of Reusable Functional Blocks (RFBs), an enhancement to Virtual Network Functions, this methodology allows a VSP to deploy and manage platforms and services, agnostic to the underlying technologies, protocols, and APIs. To validate the proposed workflow, a use case is also presented herein, which illustrates both the deployment of the underlying platform by the Telco operator and of the services that run on top of it. In this use case, the NGPaaS operator facilitates a VSP to provide Virtual Network Function as a Service (VNFaaS) capabilities for its end customers.

Localizing Faults in Cloud Systems.

L. Mariani and C. Monni and M. Pezzè and O. Riganelli and R. Xin
Conference Paper In Proc. of the International Conference on Software Testing, Verification and Validation (ICST), 2018, Pages 262-273

Abstract

By leveraging large clusters of commodity hardware, the Cloud offers great opportunities to optimize the operative costs of software systems, but impacts significantly on the reliability of software applications. The lack of control of applications over Cloud execution environments largely limits the applicability of state-of-the-art approaches that address reliability issues by relying on heavyweight training with injected faults. In this paper, we propose LOUD, a lightweight fault localization approach that relies on positive training only, and can thus operate within the constraints of Cloud systems. LOUD relies on machine learning and graph theory. It trains machine learning models with correct executions only, and compensates the inaccuracy that derives from training with positive samples, by elaborating the outcome of machine learning techniques with graph theory algorithms. The experimental results reported in this paper confirm that LOUD can localize faults with high precision, by relying only on a lightweight positive training.
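
As a toy illustration of positive-only training (not LOUD's actual models, which combine machine learning with graph algorithms), one can learn a per-KPI baseline from correct executions only and flag large deviations as anomalous:

```java
import java.util.*;

// Toy sketch of positive-only training: learn mean/stddev of each KPI from
// correct runs only, then flag observations far from the learned baseline.
public class PositiveOnlyDetector {
    private final Map<String, double[]> stats = new HashMap<>(); // kpi -> {mean, std}

    public void train(Map<String, List<Double>> correctRuns) {
        correctRuns.forEach((kpi, values) -> {
            double mean = values.stream().mapToDouble(v -> v).average().orElse(0);
            double var = values.stream().mapToDouble(v -> (v - mean) * (v - mean))
                               .average().orElse(0);
            stats.put(kpi, new double[] { mean, Math.sqrt(var) });
        });
    }

    // True if the observation deviates more than 3 sigma from the baseline.
    public boolean isAnomalous(String kpi, double value) {
        double[] s = stats.get(kpi);
        return s != null && s[1] > 0 && Math.abs(value - s[0]) / s[1] > 3.0;
    }

    public static void main(String[] args) {
        PositiveOnlyDetector d = new PositiveOnlyDetector();
        d.train(Map.of("cpu", List.of(0.30, 0.35, 0.32, 0.31)));
        System.out.println(d.isAnomalous("cpu", 0.33)); // false
        System.out.println(d.isAnomalous("cpu", 0.95)); // true
    }
}
```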

Static/Dynamic Test Case Generation For Software Upgrades via ARC-B and Deltatest

P. Braione, G. Denaro, O. Riganelli, M. Baluda, A. Muhammad
Book Chapter Springer International Publishing | July 2, 2015 | ISBN-13: 978-3-319-10623-6

Abstract

This chapter presents test generation techniques that address the automatic production of test cases to validate evolving software, aiming to improve the adequacy of testing in the light of a performed upgrade. For human experts it is usually hard to achieve high test case coverage by manually generating test cases. In particular, when a program is upgraded, testers need to adapt the test suite of the base version of the program to the new version, to cover the relevant code according to the kind of upgrade that has been implemented. The test case generation techniques presented in this chapter aim to automatically augment the existing test suites with test cases that exercise the uncovered regions of the code. These test cases represent extremely useful executions to give as complete a view as possible of the behavior of the upgraded program. We will describe ARC-B, a technique for the automatic generation of test cases, and its extension as DeltaTest that we have developed in the context of the European FP7 Project PINCETTE. DeltaTest extends ARC-B to target software changes in a more specific fashion, according to ideas that resulted from the feedback gained while using ARC-B during the project. Specifically, DeltaTest exploits a program slicer to distinguish the code impacted by modifications, and builds on this information to generate test suites that specifically address the testing of software changes. In the next sections, we describe the technology that underlies ARC-B, report our experience of applying ARC-B to industrial software provided as case studies by industrial partners of the project, present the DeltaTest technique, and discuss initial data on the strength of DeltaTest.

G-RankTest: Dynamic Analysis and Testing of Upgrades in LabVIEW Software

L. Mariani, O. Riganelli, M. Santoro, A. Muhammad
Book Chapter Springer International Publishing | July 2, 2015 | ISBN-13: 978-3-319-10623-6

Abstract

In this chapter we present G-RankTest, a technique for the automatic generation, ranking, and execution of regression test cases for controller applications.

Link: Exploiting the Web of Data to Generate Test Inputs.

L. Mariani and M. Pezzè and O. Riganelli and M. Santoro
Conference Paper In Proc. of the International Symposium on Software Testing and Analysis, 2014, Pages 373-384

Abstract

Applications that process complex data, such as maps, personal data, book information, travel data, etc., are becoming extremely common. Testing such applications is hard, because they require realistic and coherent test inputs that are expensive to generate manually and difficult to synthesize automatically. So far, research on test case generation techniques has focused mostly on generating test sequences and synthetic test inputs, and has paid little attention to the generation of complex test inputs. This paper presents Link, a technique to automatically generate test cases for applications that process complex data. The novel idea of Link is to exploit the Web of Data to generate test data that match the semantics of the related fields, and satisfy the semantic constraints that arise among interrelated fields. Link automatically analyzes the GUI of the application under test, generates a model of the required inputs, queries DBpedia to extract the data that can be used in the tests, and uses the extracted data to generate complex system test inputs. The experimental results show that Link can generate realistic and coherent test inputs that can exercise behaviors difficult to exercise with currently available techniques.
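
The core intuition can be sketched in a few lines (illustrative only, not Link's implementation): query a Web of Data endpoint such as DBpedia for values that are semantically valid for a given input field, e.g., city names:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Rough sketch of the idea behind Link (not the tool itself): query the
// Web of Data for realistic test inputs, here city names from DBpedia.
public class WebOfDataInputs {
    public static void main(String[] args) throws Exception {
        String sparql = """
            PREFIX dbo:  <http://dbpedia.org/ontology/>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?name WHERE {
              ?city a dbo:City ; rdfs:label ?name .
              FILTER (lang(?name) = "en")
            } LIMIT 10
            """;
        String url = "https://dbpedia.org/sparql?format=application/sparql-results%2Bjson&query="
                   + URLEncoder.encode(sparql, StandardCharsets.UTF_8);
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        // The JSON bindings would feed semantically valid values into GUI fields.
        System.out.println(response.body());
    }
}
```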

Automatic Testing of GUI-Based Applications.

L. Mariani, M. Pezzè, O. Riganelli and M. Santoro
Journal Paper International Journal of Software Testing, Verification and Reliability, Volume 24, Issue 5, 2014, Pages 341-366

Abstract

Testing GUI-based applications is hard and time consuming because it requires exploring a potentially huge execution space by interacting with the graphical interface of the applications. Manual testing can cover only a small subset of the functionality provided by applications with complex interfaces, and thus, automatic techniques are necessary to extensively validate GUI-based systems. This paper presents AutoBlackTest, a technique to automatically generate test cases at the system level. AutoBlackTest uses reinforcement learning, in particular Q-learning, to learn how to interact with the application under test and stimulate its functionalities. When used to complement the activity of test designers, AutoBlackTest reuses the information in the available test suites to increase its effectiveness. The empirical results show that AutoBlackTest can sample the behaviour of the application under test better than state-of-the-art techniques, and can reveal previously unknown problems by working at the system level and interacting only through the graphical user interface.
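
For intuition, here is a toy tabular Q-learning fragment in the spirit of AutoBlackTest (all identifiers are invented): states abstract the current GUI, actions are widget interactions, and the reward favors actions that change the GUI the most:

```java
import java.util.*;

// Toy tabular Q-learning sketch (illustrative only): states abstract GUI
// configurations, actions are widget interactions.
public class QLearningSketch {
    private final Map<String, Double> q = new HashMap<>(); // (state|action) -> value
    private final double alpha = 0.5, gamma = 0.9;
    private final Random random = new Random(42);

    private String key(String s, String a) { return s + "|" + a; }

    // Standard Q-learning update after executing `action` in `state`.
    public void update(String state, String action, double reward,
                       String nextState, List<String> nextActions) {
        double best = nextActions.stream()
                .mapToDouble(a -> q.getOrDefault(key(nextState, a), 0.0))
                .max().orElse(0.0);
        double old = q.getOrDefault(key(state, action), 0.0);
        q.put(key(state, action), old + alpha * (reward + gamma * best - old));
    }

    // Epsilon-greedy choice between exploring and exploiting learned values.
    public String choose(String state, List<String> actions, double epsilon) {
        if (random.nextDouble() < epsilon) {
            return actions.get(random.nextInt(actions.size()));
        }
        return actions.stream()
                .max(Comparator.comparingDouble(a -> q.getOrDefault(key(state, a), 0.0)))
                .orElseThrow();
    }
}
```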

Extracting Widget Descriptions from GUIs.

G. Becce and L. Mariani and O. Riganelli and M. Santoro
Conference Paper In Proc. of the International Conference on Fundamental Approaches to Software Engineering (FASE), 2012, Pages 347-361

Abstract

Graphical User Interfaces (GUIs) are typically designed to simplify data entering, data processing and visualization of results. However, GUIs can also be exploited for other purposes. For instance, automatic tools can analyze GUIs to retrieve information about the data that can be processed by an application. This information can serve many purposes, such as easing application integration, augmenting test case generation, and supporting reverse engineering techniques. In recent years, the scientific community has paid increasing attention to the automatic extraction of information from interfaces. For instance, in the domain of Web applications, learning techniques have been used to extract information from Web forms. The knowledge about the data that can be processed by an application is not only relevant for the Web, but it is also extremely useful to support the same techniques when applied to desktop applications. In this paper we present a technique for the automatic extraction of descriptive information about the data that can be handled by widgets in GUI-based desktop applications. The technique is grounded on mature standards and best practices about the design of GUIs, and exploits the presence of textual descriptions in the GUIs to automatically obtain descriptive data for data widgets. The early empirical results with three desktop applications show that the presented algorithm can extract data with high precision and recall, and can be used to improve the generation of GUI test cases.
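
A much-simplified version of such a heuristic (illustrative only, not the paper's algorithm) pairs each input widget with the geometrically closest label, following the design convention that labels sit next to the fields they describe:

```java
import java.awt.Component;
import java.util.*;
import javax.swing.*;

// Toy version of a label-to-widget pairing heuristic (illustrative only):
// associate each input widget with the closest textual label, following the
// convention that labels are placed next to the fields they describe.
public class WidgetDescriptionSketch {
    public static Map<JTextField, String> describe(List<JLabel> labels,
                                                   List<JTextField> fields) {
        Map<JTextField, String> result = new HashMap<>();
        for (JTextField field : fields) {
            labels.stream()
                  .min(Comparator.comparingDouble((JLabel l) -> distance(l, field)))
                  .ifPresent(l -> result.put(field, l.getText()));
        }
        return result;
    }

    private static double distance(Component a, Component b) {
        double dx = a.getX() - b.getX(), dy = a.getY() - b.getY();
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```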

AutoBlackTest: Automatic Black-Box Testing of Interactive Applications.

L. Mariani and M. Pezzè and O. Riganelli and M. Santoro
Conference Paper In Proc. of the International Conference on Software Testing, Verification and Validation (ICST), 2012, Pages 81-90

Abstract

Automatic test case generation is a key ingredient of an efficient and cost-effective software verification process. In this paper we focus on testing applications that interact with the users through a GUI, and present AutoBlackTest, a technique to automatically generate test cases at the system level. AutoBlackTest uses reinforcement learning, in particular Q-Learning, to learn how to interact with the application under test and stimulate its functionalities. The empirical results show that AutoBlackTest can execute a relevant portion of the code of the application under test, and can reveal previously unknown problems by working at the system level and interacting only through the GUI.

My teaching experience includes undergraduate and graduate courses. I have taught the following courses:

Current Teaching

  • 2012 - Present

    Software Specification and Design

    The course helps the student create object-oriented projects through the application of a series of heuristics and principles. In particular, the student will be able to analyze a problem, produce a specification of the requirements, carry out the analysis and design of a solution, and produce an implementation of a system that is consistent with the design.

    Bachelor's degree in Computer Science, University of Milano-Bicocca, Italy.

Teaching History

  • 2015 - 2016

    Programming 1

    The course introduces concepts which are the basis of high-level programming languages, with a deeper focus on the imperative paradigm.

    Bachelor's degree in Computer Science, University of Milano-Bicocca, Italy.

  • 2013 - 2014

    Software Quality

    This course introduces the main testing and analysis techniques that can be used to identify failures and verify the quality of software systems. The course shows how to apply them to solve relevant quality problems, illustrates complementarities and differences among the different techniques, and presents the organization of a coherent quality process.

    Master's degree in Computer Science, University of Milano-Bicocca, Italy.

  • 2011 - 2012

    Software Engineering

    The principal aim of this course is to introduce the software development process, concentrating in particular on the object-oriented analysis and design phases using the Unified Process and the UML modeling language.

    Bachelor's degree in Computer Science, University of Milano-Bicocca, Italy.

  • 2010 - 2012

    Concurrent and Distributed Paradigms

    The principal aim of this course is to gain knowledge of the internal structure of operating systems, concurrency models and distributed architectures, and wireless networking technologies.

    Bachelor's degree in Computer Science, University of Milano-Bicocca, Italy.

  • 2010 - 2012

    Algorithms and Programming

    The course aims to teach object-oriented programming and elements of software design. At the end of the course the student is expected to model a problem following the object-oriented paradigm and, then, to translate the model into a corresponding program written in an object-oriented language.

    Bachelor's degree in Mathematics, University of Milano-Bicocca, Italy.

  • 2009 - 2010

    Complex System Design

    This course brings students the core mathematical concepts of process calculi without losing sight of the practical needs of software programmers and analysts. It covers basic techniques to describe the form and meaning of program terms and to reason about them.

    Master's degree in Computer Science, University of Camerino, Italy.

  • 2006 - 2010

    Software Engineering

    The course is designed to present software engineering concepts and principles in parallel with the software development life cycle, concentrating in particular on the object-oriented analysis and design phases using the Unified Process and the UML modeling language.

    Bachelor's degree in Computer Science, University of Camerino, Italy.

  • 2005 - 2006

    Elements of Computer Science

    The course is designed to provide background knowledge of computer science in order to understand how a computer works and is structured (hardware and software), the fundamental concepts of networking (the Internet in particular), and essential notions of programming.

    Bachelor's degree in Mathematics, University of Camerino, Italy.

Current Projects

  • SISMA

    Solutions for Engineering Microservice Architectures.

    Funding: 2017 MiUR-PRIN grant (n.201752ENYB)

    Time frame: 13/11/2019 -

    Microservices (or microservice architecture) are an architectural style where applications are structured as collections of loosely coupled components (microservices), each hosted on a dedicated execution environment. This architectural style fosters the autonomy of components to improve independent scalability and maintainability. Microservices envision polyglot systems, where each component is implemented and operated by dedicated means, with no need for application-wide choices and solutions. Self-contained granules and independent executors enable the continuous deployment of new features or new releases of existing ones. Since dedicated (virtual) machines would be too heavyweight an execution environment, microservices are usually hosted in containers, and nowadays they are often executed through serverless functions, without any need for provisioning or managing servers explicitly. Since changes are isolated in dedicated executors, possible errors generate local failures with limited or no impact on the remaining parts of the system. Microservices do not have large failures: big services fail big, small services fail small. A large number of services can be down at the same time without users even noticing, new service instances can be added easily and quickly to manage increasing workloads, others can be removed, running services can be updated instantly, and new services can be integrated to provide additional functionality. Nevertheless, system-wide management must be adopted to keep systems healthy and let them be safe and reliable. While the architectural style is extremely flexible, versatile and dynamic, our project SISMA (Solutions for Engineering Microservice Architectures) aims to move a step forward and foster the quality development, deployment, and operation of microservice-based applications by proposing novel techniques and tools that cover the whole lifecycle of microservices.

    This project is funded by the Ministry of Education, University and Research (MIUR) under the call Research Projects of National Interest (PRIN).

  • AST

    Automatic System Testing.

    Funding: H2020-EU.1.1.

    Time frame: 01/01/2020 -

    Verifying the correctness of software systems requires extensive and expensive testing sessions. While there are tools and methodologies to efficiently address unit and integration testing, system testing is still largely the result of manual effort.

    Testing software applications at the system level requires executing the applications through their interfaces to verify the correctness of their functionalities and stimulate all their layers and components. Automating even part of this process can dramatically improve the effectiveness of verification activities, significantly reduce development costs, and relieve developers of much of their verification effort.

    This project addresses the development of a pre-commercial tool that has the unique capability of efficiently and automatically generating semantically-relevant system test cases equipped with functional oracles.


Past Projects

  • GAUSS

    Governing Adaptive and Unplanned Systems of Systems.

    Funding: 2015 MiUR-PRIN grant (n.2015KWREMX)

    Time frame: 01/01/2017 - 30/06/2020

    The GAUSS project will deliver the methodological enablers required to identify, integrate, and manage “emergent” Systems of Systems (eSoS). These require dynamic and opportunistic engineering due to their intrinsically variable nature tied to their scale and heterogeneity. GAUSS will release a set of integrated technologies to address these engineering problems of eSoS at runtime, when specific execution contexts may invalidate design-time solutions. GAUSS will govern eSoS by enriching initial lightweight designs with concrete and contextualised aspects obtained from the runtime context.

    This project is funded by the Ministry of Education, University and Research (MIUR) under the call Research Projects of National Interest (PRIN).

  • LEARN

    Learning From Failing and Passing Executions At the Speed of Internet.

    Funding: H2020-EU.1.1.

    Time frame: 01/10/2015 - 30/09/2019

    Modern software systems must be extremely flexible and easily adaptable to different user needs and environments. Such flexibility requirements are so important that it is indeed common practice to develop applications that can be updated, modified and adapted in the field, directly by the end-users. However, this flexibility also introduces relevant quality issues. Almost all computer users have had the unpleasant experience of watching their favorite applications fail and crash frequently after an update. These problems are so common that it is sufficient to browse the Web to find millions of reports about failures observed after updates and incompatibilities caused by the interaction of a newly installed component with the existing components. Even worse, each of these problems affected a population of thousands of users.

    The impact of problems introduced by end-users (e.g., the installation of a new plug-in) can be dramatic, because end-users can easily modify applications, like developers do, but end-users have neither the knowledge nor the skill of developers, and they cannot debug and fix the problems that they unintentionally introduce. It is thus necessary to develop, in a timely manner, novel solutions that can increase the reliability of modern systems, which can be extended and adapted by end-users, with the capability to automatically address problems that are unknown at development time.

    The Learn project aims to produce innovative solutions for the development of systems that can work around the problems introduced by end-users when modifying their applications. The three key elements introduced by Learn to automatically produce a (temporary) fix for the software are: (1) the definition of the InternetLearn infrastructure, which is a network infrastructure that enables communication between every individual instance of the same program running at different end-users' sites, thus augmenting each application with the capability to access a huge amount of information collected in-the-field from other sites; (2) the definition of analysis techniques that can learn the characteristics of successful and failed runs by monitoring executions in the field from a number of instances running at many end-user sites; and (3) the definition of techniques for the automatic generation and actuation of temporary fixes on an Internet (World) scale.

  • NGPaaS

    Next Generation Platform as a Service.

    Funding: EU H2020-ICT-2016-2017

    Time frame: 01/06/2017 - 31/08/2019

    Cloud innovations have had a major impact on the IT industry but not yet on networks. The danger is that 5G will be a niche industry providing basic connectivity for the cloud applications and services boom. The NGPaaS project envisages 5G as: a build-to-order platform, with components, features and performance tailored to a particular use case; developed through a “Dev-for-Operations” model that extends the IT industry’s DevOps approach to support a multi-sided platform between operators, vendors and verticals; and with revised Operational and Business Support Systems (OSS/BSS) to reflect the new parameters and highly dynamic environment. NGPaaS can enable 5G to become central to a cooperative future with cloud developers, by removing the technological silos between the telco and IT industries. NGPaaS builds on 5G-PPP phase 1 projects and lays the foundation for large-scale phase 3 deployments and industrial usage, through a stepped validation of several Telco, IoT/vertical and combined scenarios culminating in a live test in Paris-Saclay campus that can incorporate innovative SMEs selected for showcasing NGPaaS’s operational, service and business benefits.

    This project has received funding from the European Union’s H2020-ICT-2016-2017 Programme under grant agreement n° 761557

  • IDEAS

    Integrated Design and Evolution of Adaptive Systems.

    Funding: 2012 MiUR-PRIN grant (n.2012E47TM2)

    Time frame: 01/05/2015 - 31/10/2015

    This project aims at studying adaptive and self-configurable systems by developing an integrated approach to the design and the evolution of such systems, from a software engineering perspective. Software-intensive systems are increasingly called to cope with highly dynamic environments in which resources (such as energy, computing and storage infrastructures, network bandwidth) change continuously and in unpredictable ways, and might even be unknown at design time, entailing a growing need for adaptation and self-evolution capabilities. Since adaptation is becoming a core aspect of an increasing number of systems, it should emerge in all phases of the life-cycle.

    Software should be designed for adaptability, tested for adaptability, configured and maintained for adaptability. The classic boundary between design approaches and runtime infrastructure fades: models traditionally used at design time shall be made available at runtime to enable late verification, which in turn supports adaptation. Similarly, data collected at runtime may be reflected in changes to the design-time models to avoid undesirable behaviors. The presence of independent adaptation mechanisms at various abstraction levels calls for new coordination mechanisms that shall avoid unstable behaviors and violations of key properties.

    In the last few years researchers have focused on adaptability-related topics, and all the units presenting this proposal have ongoing work in the field. However, we still lack a unified framework leading to a novel life cycle spanning development and runtime for self-adaptive systems. The project proposes a new framework encompassing applications and platforms that guarantees sound adaptability at different abstraction levels, coping both with changes in the environment and with runtime evolution of the execution platforms. The former is the case of virtualized environments like cloud computing and service-based systems, while the latter is the case of cyber-physical systems. The new framework will be organized in two layers: (i) a methodological and linguistic layer, related to how to develop, represent, verify, and validate adaptable software, and (ii) a runtime and evolution layer, related to the environment that supports the execution of self-adaptive software, possibly integrating legacy code. The added value of this project proposal is to bring together the separate, but largely complementary, knowledge of the involved partners to define an integrated process that deals with adaptation within the software life-cycle, using different techniques and providing mechanisms to trace adaptability-related requirements all through the development, deployment, runtime execution and evolution of a system. The proposers are in the position to fully achieve this goal, since they are already studying specific perspectives of this comprehensive problem, and the proposed project will allow them to consolidate the competences acquired so far and fully exploit their synergies.

    This project is funded by the Ministry of Education, University and Research (MIUR) under the call Research Projects of National Interest (PRIN).

  • STARTEL

    Self-healing Technical Research on COTS Based Telecom Cloud.

    Funding: Huawei Technologies Co., Ltd.

    Time frame: 01/05/2014 - 30/04/2015

    In the telecom industry, the deployment of new communications services often requires a long and expensive development process and often relies on new proprietary software running on new purpose-built hardware, making it difficult for Communications Service Providers (CSPs) to keep up with the competitive pressure of the rapidly changing market of communications services.

    Network functions virtualization (NFV) is an initiative driven by several large CSPs that aims to leverage commercial off-the-shelf (COTS) systems, cloud technologies and dynamic service chaining to allow dynamic provisioning, deployment and configuration of communications services, while also enabling service personalization. This in turn would enable the reduction of cost in development and provide faster time to market for new and differentiated services.

    The challenge is to adapt cloud technologies to the telecom industry requirements of high reliability, low latency, and the ability to scale to support millions of users. Cloud systems must be adjusted to meet the specific reliability requirements in the telecom environment, where services cannot be down for more than 5 minutes over the span of an entire year (i.e. 99.999 percent availability). End users naturally expect services offered via cloud technologies to deliver at least the same reliability and availability as traditional communications service implementation models.

    The project focuses on the development of new approaches to failure prediction and self-healing for virtualized Telecom systems in order to meet the high availability and reliability requirements of telecom services, such as VoIP services.

    This research project was entirely funded by Huawei Technologies Co., Ltd. and conducted by the University of Lugano.

  • PINCETTE

    Validating changes and upgrades in networked software.

    Funding: EU FP7-ICT (Grant agreement ID: 257647)

    Time frame: 01/07/2010 - 31/10/2013

    Software for networked systems is usually not written all at once, but is built incrementally, for several reasons, such as maintenance (fixing errors and flaws, hardware changes, etc.) and enhancements (new functionality, improved efficiency, extension, new regulations, etc.). Changes are made frequently during the lifetime of most systems and can introduce software errors that were not present in the old version, or expose errors that were present before but did not get exercised. In addition, upgrades are done gradually, so the old and new versions have to co-exist in the same system. PINCETTE focuses on networked systems that have high reliability requirements. In these systems, the correctness of the system has to be re-validated after any upgrade or change. Currently, error detection relies on the execution of extensive test suites, which is very time consuming, and thus expensive; fault localization is mainly manual and driven by experts' knowledge of the system; and fault fixing often introduces new faults that are hard to detect and remove. Moreover, upgrading one node in a networked system is extremely risky, as it can potentially cause a crash in the whole system. In addition, the cost of this validation dominates the maintenance costs of the software.

    The vision of PINCETTE is to solve the problem of the high cost of changes by introducing an automated framework and methodology, and a mix of technologies to identify the impact of changes that derive from intra-component changes (due to error fixing and functionality enhancement) and from component replacement within a single product and a product family. This methodology improves the reliability of networked software by implementing an innovative solution for the automatic detection, localization, and repair of program bugs.

    This project was funded under the European Commission's FP7 Work Programme.


Contact & Meet Me

I would be happy to talk to you if you need my assistance in your research or if you need professional support for your company.

  • office: 0039-02-6448-7821
  • lab: 0039-02-6448-7853
  • oliviero.riganelli@unimib.it
  • oliviero.riganelli@gmail.com
  • oliviero.riganelli
  • it.linkedin.com/in/origanelli

At My Office

You can find me at my office, located at the Computer Science Department of the University of Milano-Bicocca. The office is on the second floor, room 2015.

I am in my office every day from 9:00 to 18:00, but you may want to call first to arrange an appointment.

At My Lab

If I am not in my office, chances are I am in the Laboratory of Test and Analysis (LTA), located at the Computer Science Department of the University of Milano-Bicocca. The laboratory is on the ground floor, rooms T033-T034.