In-memory, parallelization and scheduling optimization
CONWEAVER GmbH strengthens its technology base with the results of a completed research project. The project “Development of an in-memory control for real-time update operations for knowledge networks” was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) within the Central Innovation Programme (ZIM) for SMEs. By combining scheduling algorithms with streaming methods, the project optimized the in-memory execution of analytics workflows, parallelized and decoupled processing pipelines, and reduced bottlenecks in database read/write operations. This enables, for example, fast change impact analysis during the digital engineering of automobile or aircraft variant designs.
An important key to successful digital engineering is the integration of distributed product and engineering data. In automotive and aircraft construction, the interplay of thousands of components has to be orchestrated: each component interacts with further components. During the design and development of a single component, interdisciplinary teams of engineers have to consider multifaceted relationships, for example to requirements, other components, assemblies or configurations. A change to a single engineering artefact can impact many other artefacts.
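The underlying idea can be pictured as a traversal over a dependency graph of artefacts. The following is a minimal sketch, not CONWEAVER's actual data model or algorithm; the artefact names are hypothetical.

```python
# Minimal sketch: change impact analysis as a breadth-first traversal over a
# directed graph of engineering artefacts, where an edge A -> B means
# "B depends on A". Not CONWEAVER's implementation; names are illustrative.
from collections import defaultdict, deque

class ArtefactGraph:
    def __init__(self):
        self.dependents = defaultdict(set)  # artefact -> artefacts depending on it

    def add_dependency(self, artefact, depends_on):
        self.dependents[depends_on].add(artefact)

    def impacted_by(self, changed_artefact):
        """Return every artefact transitively affected by a change."""
        impacted, queue = set(), deque([changed_artefact])
        while queue:
            for dependent in self.dependents[queue.popleft()]:
                if dependent not in impacted:
                    impacted.add(dependent)
                    queue.append(dependent)
        return impacted

# Hypothetical example: a requirement drives a component used in two variants.
g = ArtefactGraph()
g.add_dependency("component:door-latch", "requirement:crash-safety")
g.add_dependency("variant:coupe", "component:door-latch")
g.add_dependency("variant:cabrio", "component:door-latch")
print(g.impacted_by("requirement:crash-safety"))
# -> {'component:door-latch', 'variant:coupe', 'variant:cabrio'} (order may vary)
```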
Semantic knowledge networks, as developed and distributed by CONWEAVER GmbH, support the search for engineering data artefacts such as components, variants and the relationships between them, across company divisions and IT infrastructure boundaries. Because the amount of engineering data is usually very large, a knowledge network is pre-calculated once and then updated iteratively. A development engineer retrieves results from the network very quickly; the network itself, however, cannot be altered nearly as quickly.
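The difference between the initial pre-calculation and the iterative updates can be illustrated with a small sketch. This is an assumed, simplified model, not CONWEAVER's implementation: an index that is built once in an expensive batch step and then patched in memory for each change.

```python
# Minimal sketch (assumption, not CONWEAVER's implementation): a pre-calculated
# lookup structure that is updated incrementally instead of rebuilt on every change.
class KnowledgeIndex:
    def __init__(self, artefacts):
        # Expensive batch step: build the full lookup structure once.
        self.links = {a["id"]: set(a["links"]) for a in artefacts}

    def apply_change(self, artefact_id, added_links=(), removed_links=()):
        # Cheap in-memory step: touch only the entry affected by the change.
        entry = self.links.setdefault(artefact_id, set())
        entry.difference_update(removed_links)
        entry.update(added_links)

# Hypothetical artefact identifiers.
index = KnowledgeIndex([{"id": "component:axle", "links": ["assembly:chassis"]}])
index.apply_change("component:axle", added_links=["requirement:load-spec"])
print(index.links["component:axle"])
```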
As the amount of data grows daily, for example through additional product variants and design and development data, the pre-calculation of change impacts becomes more and more time-consuming. Additional hardware is only part of the solution: with more processor power or more main memory, the effort to internally control the data flows between those hardware components escalates as well.
The other part of the solution is an intelligent, software-defined control of the data flows across all analytics components. This second part was the result of the research project: a specialized in-memory control that optimizes and parallelizes the execution workflows of the CONWEAVER modules responsible for engineering data processing.
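One aspect of such a control, running pipeline stages that do not depend on each other in parallel, can be sketched roughly as follows. The stage names and the thread-pool approach are illustrative assumptions, not details taken from the project.

```python
# Rough sketch of parallelizing independent pipeline stages with a thread pool.
# Stage names are hypothetical placeholders, not actual CONWEAVER modules.
from concurrent.futures import ThreadPoolExecutor

def extract(source):      return f"rows({source})"
def link(rows):           return f"links({rows})"
def rank(rows):           return f"ranks({rows})"
def merge(links, ranks):  return f"network({links}, {ranks})"

with ThreadPoolExecutor() as pool:
    rows = pool.submit(extract, "plm-export").result()
    # 'link' and 'rank' only depend on 'extract', so they can run in parallel.
    links_future = pool.submit(link, rows)
    ranks_future = pool.submit(rank, rows)
    network = merge(links_future.result(), ranks_future.result())

print(network)
```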
“Technically challenging were the removal of hard-to-parallelize dependencies in our analytics workflows and the minimization of throughput bottlenecks along our processing pipeline, from database read/write operations up to the presentation of calculation results in customer frontends. Our goal was to minimize the time required for complex semantic ‘small data’ update operations running alongside ‘big data’ computations that do not fit into main memory. To meet this goal we essentially had to lay a new priority track through our architecture and rewire a lot of railway control centers. These new tracks now enable fast in-memory update operations.”
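The “priority track” in the quote can be read as a scheduling policy in which small update tasks are never queued behind long-running batch jobs. The following is only an illustration of that idea with a priority queue, not the scheduler developed in the project.

```python
# Illustration of a "priority track": small in-memory update tasks are queued
# with higher priority than long-running batch jobs, so they never wait behind
# them. This is an assumed sketch, not the project's actual scheduler.
import heapq, itertools

HIGH, LOW = 0, 1
counter = itertools.count()   # tie-breaker keeps FIFO order within a priority
queue = []

def submit(priority, name):
    heapq.heappush(queue, (priority, next(counter), name))

submit(LOW,  "batch: full network re-calculation")
submit(HIGH, "update: change on component:door-latch")
submit(HIGH, "update: new variant configuration")

while queue:
    _, _, task = heapq.heappop(queue)
    print("running", task)
# Both small updates are taken from the queue before the batch job.
```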
Project Scope
The project “Development of an in-memory control for real-time update operations for knowledge networks” ran from December 2014 to February 2016 and was funded with 140,000 EUR by the German Federal Ministry for Economic Affairs and Energy (BMWi) within the Central Innovation Programme (ZIM) for SMEs under grant number EP141075.
Engineering teams developing a product variant often spend days to weeks calculating and adjusting the impacts of changes to components or configurations. With the help of knowledge networks and the new in-memory capabilities, they are now supported by automated pre-calculation in near real-time. Customers of CONWEAVER GmbH profit enormously from the shortened innovation cycles in the design and development of their products.
Dr. Thomas Kamps (CONWEAVER GmbH) describes the project results with an analogy: “Up to now we have been building a container ship, which reliably ships huge amounts of cargo, say in three weeks, from the US to Europe. This corresponds to ‘big data’ calculations, where the size of the shipload is important, but time does not matter that much. Now we can continually send small supersonic aircraft along the same route. This corresponds to the fast calculations, for example during change impact analysis.”