I-DELTA Information View
T2 Software & the I-DELTA Consortium

This is Part 7 in a series on the I-DELTA project. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8.
Click here for the Turkish version.


I-DELTA aims to create an interoperable DLT-based platform. For this purpose, an independent and sovereign chain provides the ability to communicate with other blockchains. The aim of the Information View is to describe the information model used for the description of the system/platform, including its data points, as well as the methodology for the semantic description of those data points and services. The Information View can be analyzed under the subjects of security, membership, interoperability, consensus and development kit.

Security

I-DELTA will function under a local/global security model. This security model works as shown in the figure below.

Figure 1: I-DELTA Security Diagram

Security will consist of four different parts. Other Chains are the blockchains within the I-DELTA network; they are independent state machines with their own rules. The block producers of the Other Chains are the Collators. Collators collect data from the Other Chains in order to validate transactions and create blocks accordingly. Validators collect blocks from the Collators to validate the changes; if the Validators validate the blocks, the blocks are committed to the Decentralized Web. Since Validators can reject blocks from the Other Chains, Validator shuffling and Master Validators will be used to prevent malicious rejections. Validator shuffling matches Validators with different Collators to minimize malicious rejections of blocks from the Other Chains, and Master Validators continually check the Validators' activities to prevent anomalies. Collators act as children of the parent chain, the Decentralized Web. This arrangement gives the project a shared security model: the Other Chains benefit from the security of the Decentralized Web through the Validators.
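The interaction between Collators, Validators and the Decentralized Web can be pictured with a short sketch. The following Python fragment is a minimal illustration only; the class names, the round-robin shuffling and the rejection log are assumptions made for this example and are not part of any I-DELTA implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Block:
    chain_id: str   # which Other Chain produced the block
    payload: str    # transactions collected by a Collator

@dataclass
class Validator:
    name: str
    def is_valid(self, block: Block) -> bool:
        # placeholder check; a real Validator would re-execute the state transition
        return bool(block.payload)

def shuffle_assignments(validators, other_chains, seed=None):
    """Randomly match Validators to Other Chains so that no Validator can
    repeatedly target the same chain with malicious rejections."""
    rng = random.Random(seed)
    pool = list(validators)
    rng.shuffle(pool)
    return {chain: pool[i % len(pool)] for i, chain in enumerate(other_chains)}

def validate_and_commit(block, validator, decentralized_web, rejection_log):
    """Accepted blocks are committed to the parent chain (Decentralized Web);
    rejections are logged so a Master Validator can audit anomalous behaviour."""
    if validator.is_valid(block):
        decentralized_web.append(block)
    else:
        rejection_log.setdefault(validator.name, []).append(block.chain_id)

validators = [Validator("v1"), Validator("v2"), Validator("v3")]
assignment = shuffle_assignments(validators, ["chain-a", "chain-b"], seed=42)
web, rejections = [], {}
validate_and_commit(Block("chain-a", "tx-batch"), assignment["chain-a"], web, rejections)
```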

Membership

People will be able to build hubs which make inter-chain communication more efficient. Hubs will have their own sovereign blockchains for connecting to other blockchains, and they will be validated by the Decentralized Web from Figure 1. The working principle of hubs is shown in Figure 2.

Figure 2: Hub Diagram

Hubs will increase the efficiency of connecting multiple chains. Each hub will have its own governance and rules.
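As a rough illustration of that idea, the sketch below shows a hub that registers chains and routes messages between them under its own rules; the Hub class and its methods are hypothetical and only serve to make the working principle concrete.

```python
class Hub:
    """Illustrative hub: each hub defines its own governance rules and
    forwards messages between the chains registered with it."""
    def __init__(self, name, governance_rules):
        self.name = name
        self.governance_rules = governance_rules
        self.chains = {}                     # chain_id -> inbox of messages

    def register_chain(self, chain_id):
        self.chains[chain_id] = []

    def route(self, source, target, message):
        if source not in self.chains or target not in self.chains:
            raise ValueError("both chains must be registered with the hub")
        self.chains[target].append({"from": source, "msg": message})

hub = Hub("regional-hub", governance_rules={"approval_quorum": 2 / 3})
hub.register_chain("chain-a")
hub.register_chain("chain-b")
hub.route("chain-a", "chain-b", "asset-transfer request")
```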

Interoperability

Inter-blockchain communication will be done by targeting a message from one Other Chain to another: Chain A will be able to call a smart contract inside Chain B in order to communicate. With the help of the security system, I-DELTA will be able to use two different mechanisms to secure this interoperability. First, the shared security between the Other Chains through the Validators, described in Figure 1, results in trust between the Other Chains: each Other Chain has a uniform level of security, and the Decentralized Web is secure enough to allow communication thanks to that shared security. Second, in order to prevent malicious communication between the Other Chains, Validators are shuffled and assigned to Other Chains randomly, which minimizes malicious communication throughout the system. Additionally, Master Validators check the Validators' and Other Chains' activities to ensure malicious communication is eliminated. By using these safety measures, the Decentralized Web will be able to roll back malicious communication by relying on shared security and the Master Validators.
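A short sketch can make the message flow concrete. Everything below (the CrossChainMessage structure, the 2/3 approval threshold and the rollback helper) is an assumption chosen for illustration, not a specification of the I-DELTA protocol.

```python
from dataclasses import dataclass

@dataclass
class CrossChainMessage:
    source_chain: str
    target_chain: str
    contract: str      # smart contract to call on the target chain
    call_data: str

def relay_message(msg, validator_checks, ledger):
    """A shuffled set of Validators must approve the message before the target
    chain executes the contract call; the ledger keeps the history so the
    Decentralized Web can roll back malicious communication later."""
    approvals = sum(1 for check in validator_checks if check(msg))
    if approvals * 3 >= len(validator_checks) * 2:   # assumed 2/3 threshold
        ledger.append(msg)
        return True
    return False

def roll_back(ledger, is_malicious):
    """Drop messages that Master Validators later judge to be malicious."""
    ledger[:] = [m for m in ledger if not is_malicious(m)]

ledger = []
msg = CrossChainMessage("chain-a", "chain-b", "escrow", "release(42)")
relay_message(msg, [lambda m: True, lambda m: True, lambda m: False], ledger)
roll_back(ledger, lambda m: m.contract == "escrow" and "release" in m.call_data)
```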

Development Kit

The Development Kit of the I-DELTA project will help developers start building their own chains by altering the rules (governance modules, authentication modules etc.) to suit the purpose of their projects.
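To give a feel for what that could look like, here is a hypothetical builder-style configuration in Python; the ChainBuilder API and the module names are invented for this sketch and do not describe the actual kit.

```python
class ChainBuilder:
    """Illustrative sketch of composing a chain from swappable rule modules."""
    def __init__(self, name):
        self.name = name
        self.modules = {}

    def with_module(self, slot, module):
        # attach a governance, authentication or other rule module
        self.modules[slot] = module
        return self

    def build(self):
        missing = {"governance", "authentication"} - self.modules.keys()
        if missing:
            raise ValueError(f"missing required modules: {sorted(missing)}")
        return {"chain": self.name, "rules": self.modules}

chain = (ChainBuilder("supply-chain-demo")
         .with_module("governance", "council-voting")
         .with_module("authentication", "certificate-based")
         .build())
```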

The Dependability and Technical Management Perspective

The underlying requirement to provide a reliable service comes down to the risk of downtime, as downtime may translate into a series of undesired consequences:

▪ Loss of data

▪ Revenue losses

▪ Decreased productivity

▪ Reputation of the service – lost business opportunities

▪ Costs and time needed for the service recovery (annually increasing)

To ensure a system or service runs uninterrupted while providing the needed performance, we utilize our own GPC Framework, developed with management, monitoring and maintenance in mind. The definition and planning of GPC JOBs is the core of system management here; each GPC JOB corresponds to a change request. Planned tasks can be either:

· automatic – service shutdown, offline backup, creating a snapshot and restarting the service, synchronization of passive data storage on standby

· passive – a maintenance window for the administrator of a service

The primary objective is to provide access to dependent applications and eliminate situations such as when one provider is shutting down a database while another is in the process of uploading data.
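A compact sketch of how a GPC JOB could be modelled and checked for conflicts may help here. The GPC Framework itself is proprietary, so the GpcJob structure and the overlap rule below are purely assumptions used to illustrate the scheduling idea.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class GpcJob:
    name: str
    kind: str                 # "automatic" or "passive" (maintenance window)
    service: str
    start: datetime
    duration: timedelta

    @property
    def end(self):
        return self.start + self.duration

def overlaps(a: GpcJob, b: GpcJob) -> bool:
    """True when two jobs touch the same service at the same time."""
    return a.service == b.service and a.start < b.end and b.start < a.end

backup = GpcJob("nightly-backup", "automatic", "ledger-db",
                datetime(2021, 5, 1, 1, 0), timedelta(hours=1))
upload = GpcJob("bulk-upload", "passive", "ledger-db",
                datetime(2021, 5, 1, 1, 30), timedelta(hours=2))
assert overlaps(backup, upload)   # a scheduler would reject or reschedule one of them
```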

Monitoring agents

Monitoring agents are software components that execute autonomous tasks. They can be assigned to any service and define their outputs for higher and lower levels of dependent services. A set of monitoring policies with different interpretations of the data can be defined to further improve system performance. Agents can also communicate remotely, using an API for external applications.
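The sketch below shows one way such an agent could combine pluggable policies and report over a plain HTTP API; the class, the metric names and the endpoint are assumptions for illustration rather than the GPC Framework's actual interface.

```python
import json
import urllib.request

class MonitoringAgent:
    def __init__(self, service, policies):
        self.service = service
        self.policies = policies        # callables interpreting raw metrics

    def collect(self):
        # placeholder metric source; a real agent would query the service itself
        return {"cpu": 0.42, "errors_per_min": 0}

    def evaluate(self):
        metrics = self.collect()
        return {name: policy(metrics) for name, policy in self.policies.items()}

    def push(self, url):
        """Report results to an external application over a plain HTTP API."""
        data = json.dumps({"service": self.service, "status": self.evaluate()}).encode()
        req = urllib.request.Request(url, data=data,
                                     headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(req, timeout=5)

agent = MonitoringAgent("hub-gateway", {
    "cpu_ok": lambda m: m["cpu"] < 0.8,
    "error_free": lambda m: m["errors_per_min"] == 0,
})
print(agent.evaluate())
```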

Active vs passive monitoring

Passive monitoring silently analyzes network traffic through a span port or tap to identify endpoints and traffic patterns. It creates no additional network traffic and, because it does not interact directly with endpoints, carries virtually no risk of disrupting critical processes. However, passive monitoring can take more time to collect asset data, as it must wait for network traffic to be generated to or from each asset to create a complete baseline profile. Also, in some cases span ports are not available in all areas of the network, which can limit the ability to passively monitor traffic across the entire OT environment.

Active monitoring works by sending test traffic into the network and polling endpoints with which it comes into contact. Active monitoring can be very effective in gathering basic profile information such as device name, IP and MAC address, NetFlow or syslog data, as well as more granular configuration data such as make and model, firmware versions, installed software/versions and OS patch levels. By sending packets directly to endpoints, active scanning can be faster in collecting data, but this also increases the risk of endpoint malfunction by pushing incompatible queries to them or saturating smaller networks with traffic. And active scanning typically does not monitor the network 24/7, so it may not detect transient endpoints or devices in listen-only mode.[1]
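For a concrete, if simplified, picture of the active approach, the snippet below polls a list of endpoints directly over TCP; the inventory of hosts and ports is hypothetical, and a real active scanner would of course gather far richer data (device make, firmware, patch levels) than simple reachability.

```python
import socket

def poll_endpoint(host, port, timeout=2.0):
    """Return True if the endpoint accepts a TCP connection (basic liveness)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

inventory = [("10.0.0.5", 443), ("10.0.0.7", 8545)]   # placeholder asset list
for host, port in inventory:
    print(host, port, "reachable" if poll_endpoint(host, port) else "unreachable")
```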

Healing mechanisms

For any service required to run continuously, a so-called healing mechanism should be in place. It consists of a monitoring system that collects logs and validates occurrences of service malfunctions. When a malfunction is discovered, a restore protocol is executed automatically depending on the specific error. Restore protocols might include transferring the service onto another server, a simple restart, or a notification to the responsible administrators; a rule-based example is sketched after the list below.

From a complexity perspective, healing mechanisms can be divided into three levels:

  1. basic – in the form of rules; might require intervention from administrators
  2. medium – showing metrics and visualization of the malfunctions
  3. advanced – the root cause is determined automatically
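Here is the promised sketch of a level-1 (rule-based) mechanism. The rule table and the log format are invented for this example; in a real deployment the chosen restore protocol would actually be executed (restart, failover, notification) and audited rather than just returned.

```python
RESTORE_RULES = {
    "connection refused": "restart ledger-node",         # simple restart
    "replication lag":    "failover to standby server",  # move the service
    "disk full":          "notify administrators",        # needs a human
}

def heal(log_line: str) -> str:
    """Match a malfunction pattern in the log line to its restore protocol.
    A real implementation would execute the protocol (e.g. via systemd or an
    orchestrator) instead of only returning its description."""
    for pattern, action in RESTORE_RULES.items():
        if pattern in log_line.lower():
            return action
    return "no rule matched; escalate to administrators"

print(heal("ERROR 2021-05-01 connection refused by ledger-node"))  # restart ledger-node
```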

High availability

This concept prevents the system from relying on a single point of failure, either by running multiple servers in parallel or by using floating IPs to help redirect traffic. Switching to the backup servers needs to be seamless, and replication of data serves as the recovery path in case of a system failure.
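As a minimal sketch of the failover decision behind a floating IP, the snippet below checks a primary and a backup health endpoint; the addresses and the /health URL are placeholders, and production setups would typically rely on a dedicated tool (keepalived, a load balancer, or a cloud provider's failover) rather than hand-rolled checks.

```python
import urllib.request

def healthy(url, timeout=2.0):
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except OSError:
        return False

PRIMARY = "http://10.0.0.10/health"   # placeholder primary server
BACKUP = "http://10.0.0.11/health"    # placeholder replicated backup

def choose_target():
    """Point the floating IP at the backup only when the primary is down."""
    return "primary" if healthy(PRIMARY) else ("backup" if healthy(BACKUP) else "none")

print(choose_target())
```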

Docker communication

For the I-DELTA framework in particular, Docker networking could prove useful in order to simplify the layers of networking involved between multiple services. Docker containers can communicate with each other and with the outside world via the host machine. Docker supports many different types of networks, each fit for certain use cases.

For example, an application which runs in a single Docker container will have a different network setup than a web application whose database, application servers and load balancers span multiple containers that need to communicate with each other. Additionally, clients from the outside will need to access the web application container.
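A brief sketch with the Docker SDK for Python (the docker package) shows the second case: a database and a web container placed on the same user-defined bridge network, with only the web container published to outside clients. The image names, container names and the port are placeholders.

```python
import docker

client = docker.from_env()

# user-defined bridge: containers attached to it can reach each other by name
net = client.networks.create("idelta-app-net", driver="bridge")

db = client.containers.run("postgres:13", detach=True,
                           name="app-db", network="idelta-app-net",
                           environment={"POSTGRES_PASSWORD": "example"})

# the web container can reach the database at hostname "app-db";
# publishing port 8080 exposes it to outside clients via the host machine
web = client.containers.run("my-web-app:latest", detach=True,
                            name="app-web", network="idelta-app-net",
                            ports={"8080/tcp": 8080})
```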

Deployment

For continual deployment, a plan needs to be in place first, such as a time schedule for adopting new updates, including the eventual retirement of the given project. For this purpose a deployment diagram (UML type) demonstrating the architecture needs to be defined. As manual updates of applications are time-consuming, repetitive and prone to human error, an automated process is advised, especially when we want to avoid bottlenecks in the release process.

Before deploying we have to take into consideration:

· What will be the impact?

· Do we anticipate resistance?

· What will be the impact if deployment fails?

Deployment scenarios supported by the GPC Framework

Note: the GPC Master acts as a central GPC agent. It has access to the main database and contains information from all agents and agent clusters.

Autoscaling and Kubernetes

When considering deployment options for the I-DELTA project, Kubernetes has one big advantage over traditional deployments: the ability to adapt to the amount of traffic and the usage of resources. The adaptation is made through automatic scaling of pods and even nodes of the cluster. To guarantee that the deployment is scaled, Kubernetes uses a Horizontal Pod Autoscaler API object[2], which responds to CPU workload, memory consumption or other metrics.
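The scaling decision itself follows the formula documented for the Horizontal Pod Autoscaler, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric); the tiny worked example below only illustrates that arithmetic and is not a Kubernetes manifest.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Horizontal Pod Autoscaler core formula (see the Kubernetes docs)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% utilization target -> scale out to 6 pods
print(desired_replicas(4, 90, 60))   # 6
```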


References

[1] Source: https://www.securityweek.com/active-vs-passive-monitoring-no-longer-either-or-proposition
