Trust is a choice based on past experience. It takes time to build, but it can disappear in a second. Trusting cloud services is as complicated as trusting people: you need a way to measure trust and evidence on which to build it.
Adaptive, Trustworthy, Manageable, Orchestrated, Secure, Privacy-assuring, Hybrid Ecosystem for REsilient Cloud Computing (hereinafter "ATMOSPHERE", 2017-2019) is a 24-month Research and Innovation Action funded by the European Commission under the H2020 Programme and by the Secretariat of Informatics Policy (SEPIN) of the Brazilian Ministry of Science, Technology, Innovation and Communication (MCTIC). It aims at designing and developing a framework and a platform to implement trustworthy cloud services on a federated, intercontinental, hybrid resource pool.
Trust in a cloud environment is the reliance of a customer on a cloud service and, consequently, on its provider. Building on this definition, trustworthiness can be defined as the worthiness of a service and its provider to be trusted.
Trust, however, rests on a broad spectrum of properties, such as Security, Privacy, Coherence, Isolation, Stability, Fairness, Transparency and Dependability.
Nowadays, few approaches deal with the quantification of trust in cloud computing. ATMOSPHERE will support the development, building, deployment, measurement and adaptation of trustworthy cloud resources, data management and data processing services, demonstrated on a sensitive scenario of distributed telemedicine.
To achieve cloud computing trust services, ATMOSPHERE focuses on providing four components:
- A dynamically reconfigurable federated infrastructure that provides isolation, high-availability, Quality of Service and flexibility for hybrid resources, including virtual machines and containers.
- Trustworthy Distributed Data Management services that maximise privacy when accessing and processing sensitive data.
- Trustworthy Distributed Data Processing services to build and deploy adaptive applications for Data Analytics, providing high-level trustworthiness metrics that capture fairness and explainability properties.
- An evaluation and monitoring framework, to compute trustworthiness measures from the metrics provided by the different layers, and trigger adaptation measures when needed.
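As a purely illustrative sketch of the evaluation and monitoring component, per-layer metrics could be combined into a single trustworthiness score that triggers adaptation when it falls below a threshold. All metric names, weights and the threshold below are hypothetical assumptions, not ATMOSPHERE's actual model:

```python
# Hypothetical sketch: aggregating per-layer trustworthiness metrics into one
# score (metric names, weights and threshold are illustrative only).

def trustworthiness_score(metrics, weights):
    """Weighted average of normalised metrics, each assumed to lie in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Example readings reported by the different layers (illustrative values).
metrics = {"security": 0.9, "privacy": 0.8, "isolation": 0.7}
weights = {"security": 0.5, "privacy": 0.3, "isolation": 0.2}

score = trustworthiness_score(metrics, weights)

THRESHOLD = 0.85  # illustrative adaptation trigger
if score < THRESHOLD:
    print(f"score {score:.2f} below threshold: trigger adaptation measures")
```

The design choice here is the simplest possible aggregation (a weighted mean); a real framework would likely use per-property thresholds and richer models.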
The different trustworthiness properties identified need to be considered at different layers.
- Security. Services should be resilient to malicious attacks and free of vulnerabilities. Many vulnerability databases can provide a score for applications, infrastructure and services. Security must be assessed at design time and reassessed when new vulnerabilities emerge, preventing access to vulnerable services and resources and migrating services to updated resources.
- Privacy assurance. Privacy is the guarantee that an entity is protected from unauthorized disclosure of sensitive information. Privacy is a crucial property for trust, as it may not be guaranteed even in a secure environment with anonymised data. Computing the privacy risk, as the risk of re-identifying the subjects or inferring sensitive information, is needed to trigger more in-depth anonymisation techniques or to prevent the disclosure of the information.
- Coherence. Distributed environments are highly convenient for high availability, robustness and legal restrictions on data transfers, but consistency is a challenge. Assuming that eventual consistency is the feasible approach, one should be able to quantify the degree of inaccuracy such consistency entails.
- Isolation. Multitenancy on a shared resource pool is a useful approach for enhancing resource utilisation and economy of scale. However, resource sharing inevitably implies isolation risks, especially for the performance of shared resources such as cache memory, network bandwidth or disk I/O. Moreover, container-based computing reduces isolation to the process level. Measuring how applications affect each other is very important to anticipate Quality of Service degradation and unavailability.
- Fairness. Artificial Intelligence systems rely on their training data and many complex parameters. Classification systems can be biased with respect to sex, race, education, etc. It is essential to understand this bias (which could be reasonable if it reflects reality) in order to interpret the results.
- Transparency. Nowadays, we tend to take the results of Artificial Intelligence algorithms as ground truth, but in many cases, and especially in Deep Learning, results are hard to explain and understand. We do not know the critical parameters that drive our models, which could be vital in defining the liability of complex decision-making systems. Understanding those parameters would make it possible to define the boundaries within which a model can be trusted.
- Dependability. Unlike the other trustworthiness properties discussed above, dependability has been well studied in the literature. It comprises multiple sub-dimensions, such as Integrity, Availability, Reliability, Maintainability, Safety and Performance stability over time.
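The privacy-risk idea above can be sketched with k-anonymity, a simplified model in which the worst-case re-identification risk of a dataset is 1/k, where k is the size of the smallest group of records sharing the same quasi-identifiers. The function, field names and records below are illustrative assumptions, not ATMOSPHERE's actual metric:

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Worst-case re-identification risk = 1 / k, where k is the size of the
    smallest equivalence class over the given quasi-identifier fields."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    k = min(groups.values())
    return 1.0 / k

# Illustrative anonymised records from a telemedicine-like scenario.
patients = [
    {"age": 34, "zip": "46022", "diagnosis": "flu"},
    {"age": 34, "zip": "46022", "diagnosis": "asthma"},
    {"age": 51, "zip": "13083", "diagnosis": "diabetes"},
]

risk = reidentification_risk(patients, ["age", "zip"])
# The (51, "13083") group contains a single record, so k = 1 and risk = 1.0:
# that subject is uniquely re-identifiable and needs further anonymisation.
```

A monitoring layer could compare such a risk value against a policy threshold and, as described above, trigger deeper anonymisation or block disclosure.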
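Similarly, the bias mentioned under Fairness can be quantified. One common metric, used here purely as an illustration, is the demographic parity gap: the difference in positive-outcome rates between groups. The data and names below are hypothetical:

```python
def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates per group.
    predictions: list of 0/1 outcomes; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Illustrative outcomes of a classifier for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap of zero means both groups receive positive outcomes at the same rate; as the text notes, a non-zero gap is not automatically unfair, but it must be surfaced so the results can be interpreted.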
More information about ATMOSPHERE can be found on the website (https://www.atmosphere-eubrazil.eu/), on Twitter (@AtmosphereEUBR) and on LinkedIn (https://www.linkedin.com/in/atmosphere/).
Full Position Paper in PDF format: https://www.atmosphere-eubrazil.eu/sites/default/files/webform/Position%20Paper%20ATMOSPHERE%20-%20CLOUDSCAPE%202018.pdf