Deep learning, a subset of artificial intelligence, has recently made great progress. The accuracy of computer vision, automatic speech recognition, image processing and language translation using deep learning has already surpassed human capability on some tasks. However, these technologies require huge amounts of data to train, and neural networks have grown by several orders of magnitude in size. In order to achieve higher-level artificial intelligence, we need to develop algorithms that can be trained with much smaller data sets or with simulated models. In this talk, the author discusses the state of the art in AI algorithms and applications, and how simulation is being used in the pursuit of Artificial General Intelligence.
Professor Simon See is currently the Solution Architecture and Engineering Director and Chief Solution Architect for the Nvidia AI Technology Center. He is also a Professor and Chief Scientific Computing Officer at Shanghai Jiao Tong University, a Professor at Beijing University of Posts and Telecommunications (BUPT), and a Professor at Universitas Indonesia (UI). He was conferred the title of Distinguished Fudan Scholar in September 2018 by Fudan University, Shanghai, China. Previously, Professor See was the Chief Scientific Computing Advisor for BGI (China) and held positions at Nanyang Technological University (Singapore) and King Mongkut's University of Technology (Thailand).
Professor See is currently involved in a number of smart city projects, especially in Singapore and China. His research interests are in the areas of High-Performance Computing, Big Data, Artificial Intelligence, Machine Learning, Computational Science, Applied Mathematics and Simulation Methodology. Professor See is also leading some of the AI initiatives in the Asia Pacific. He has been a Steering Committee member of NSCC's flagship high-performance computing conference, Supercomputing Asia (SCA), since March 2018. He has published over 200 papers in these areas and has won various awards.
Professor See is also a member of SIAM, IEEE, and IET, and has served on the committees of more than 50 conferences.
Dr. See graduated from the University of Salford (UK) with a Ph.D. in electrical engineering and numerical analysis in 1993. Prior to joining NVIDIA, Dr. See worked for SGI, DSO National Laboratories of Singapore, IBM, International Simulation Ltd (UK), Sun Microsystems and Oracle. He also provides consultancy to a number of national research and supercomputing centers.
In the last decade, model-based and simulation-aided design have spread widely across every stage of the product life cycle. This has been facilitated by the extensive growth of powerful, multi-level, multi-physics, multi-engineering-purpose, off-the-shelf commercial simulation environments. However, the authors' experience has highlighted the poor efficiency of simulation-based engineering at the whole-system level, in multi-actor projects that involve many industry and laboratory partners. Based on this feedback, numerous weaknesses have been identified, and best practices have been proposed to take the best of each partner's contribution and make it acknowledged, shared, and reusable. The main objective is to smooth the activity flow during the development and integration cycles, thus saving time, reducing risks and capitalizing on experience. The proposed talk will introduce the best practices that have been applied in recent years. A simple example will be used to illustrate the proposals.
Professor Jean-Charles Maré teaches mechanics and systems at Institut National des Sciences Appliquées de Toulouse. He conducts his research at Institut Clément Ader, where he focuses on architecting and system-level virtual prototyping of embedded actuation systems. He has more than 35 years' experience in simulation for design and virtual validation, where simulation is typically used to support architecting, control design, power sizing and energy management, response to faults, and the definition of test means and test programs. He has been involved in numerous European and national research projects dealing with more-electrical actuation for aerospace. For more than 25 years, his experience in modelling and simulation has also served several industrial projects for commercial aircraft, helicopters and launchers. All these activities have enabled Jean-Charles Maré to identify the main weaknesses in simulation processes and practices, in particular concerning the efficiency of simulation-aided engineering in an industrial context.
Machine learning structures and algorithms were originally inspired by mechanisms studied in neurobiology and behavioral psychology during the seventies and eighties. In neurobiology, the study of the electrical activity and plasticity of neuronal networks grounded the mechanisms involved in data-based learning by artificial neural networks. In behavioral psychology, the trial-and-error behavior of learning animals, whose responses to their environments become more (or less) probable, grounded the principles of reinforcement learning. Nowadays, understanding the psychology and the neural mechanisms of humans and animals is one of the biggest challenges of the 21st century. Many mechanisms remain unknown or undiscovered, and experimentation on brain mechanisms faces current device limitations. However, confronting behavioral and neural activities with each other should allow a better understanding of the dynamics of individual learning. The activity-based modeling and simulation of the dynamics of both the neuronal and behavioral levels automates the explanation of one level by the other. This constitutes a breakthrough for designing new algorithms that capture the dynamics of the learning process.
Alexandre Muzy is a CNRS research fellow at Université Côte d'Azur (I3S computer science laboratory). He is co-director of the NeuroMod institute and in charge of the Modeling, Simulation & Neurocognition (MS&N) research group. He is a specialist in computational modeling and simulation based on systems theory, more specifically discrete event systems, currently applied to neurocognitive systems, with more than 70 international research publications. He created the computational activity paradigm for structuring models and developed, with Bernard P. Zeigler, the computational iterative system paradigm. The latter paradigm has been used as a new foundation of the Theory of Modeling and Simulation (3rd edition). Based on the mapping from in vivo neurocognitive activities to temporal computations, he works on the specification of the computational neurocognitive system (cf. the Computabrain project) at the learning, modeling and simulation levels.
Matrix multiplication implementations on contemporary computer hardware are very carefully designed and optimized for efficiency, owing to the central role of that operation across science and engineering.
As a result, the available implementations are very fast, and it is natural to want to benefit from their efficiency in other problems, both matrix and non-matrix. Such an approach is often called the black-box matrix computation paradigm in the literature. A common goal of this paradigm is to construct an algorithm with the same time complexity as the underlying matrix multiplication algorithm. That is often feasible, but for certain problems we obtain a complexity that is a function of the matrix multiplication complexity, usually with an additional logarithmic or linear factor, which is still a decent achievement.
In this presentation we gather a broad series of algorithms that benefit from the efficiency of fast matrix multiplication algorithms in other mathematical and computer science operations.
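A classic textbook instance of this paradigm (not necessarily one of the algorithms covered in the talk) is triangle counting in a graph: the non-matrix problem reduces entirely to matrix products, so any fast multiplication kernel can be plugged in as a black box. A minimal sketch, assuming NumPy's BLAS-backed `@` operator as the black-box multiplier; the function name `count_triangles` is illustrative:

```python
import numpy as np

def count_triangles(adj):
    """Count triangles in a simple undirected graph via matrix multiplication.

    (A^3)[i][i] counts closed walks of length 3 starting at vertex i; each
    triangle is counted 6 times (3 starting vertices x 2 directions), so
    trace(A^3) / 6 gives the number of triangles.
    """
    A = np.asarray(adj, dtype=np.int64)
    A3 = A @ A @ A  # two black-box matrix products
    return int(np.trace(A3)) // 6

# K4, the complete graph on 4 vertices, contains C(4,3) = 4 triangles.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(count_triangles(K4))  # → 4
```

The running time is dominated by the two multiplications, so the whole algorithm inherits the complexity of whichever matrix multiplication routine is used underneath.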
Jerzy Respondek is a Polish computer scientist and mathematician, and a professor at the Silesian University of Technology, Gliwice. Respondek is best known for his work on special matrices and their applications in control theory.
Prof. Respondek has lectured at numerous universities, including the mathematics department of the University of Pisa (Italy), the computer science departments of the universities of Valencia (Spain), Nuremberg (Germany) and Alcalá (Spain), and the Department of Computer Science of the University of Manchester (UK), Alan Turing's home department.
From 2008 to 2017 he served on the editorial board of the journal "Mathematics and Computers in Simulation", the main journal of the IMACS organization, well recognized in the numerical methods community.
In 2012-13 Respondek belonged to one of the main advisory groups of the Polish Ministry of Science. From 2014 to 2016 he worked in the science-popularization advisory group of that ministry. As a delegate of these groups he participated in the proceedings of the National Parliamentary Commission of Education, Science and Youth in Warsaw.
In 2007-08 he was a member of the Forecast Committee of the Polish Academy of Sciences in Warsaw, a specialized national think tank cooperating with the Club of Rome. His work in that group pertained mainly to the social and economic aspects of computer science.
Since 2016 he has shared his time between Poland and Brussels, where he works at the European Research Executive Agency (REA) as an expert for ERA Chairs, a European program within the H2020 framework designed to support European universities in hiring outstanding scientists.
CyberFactory#1 aims at designing, developing, integrating and demonstrating a set of key enabling capabilities to foster the optimization and resilience of the Factories of the Future (FoF). It will address the needs of pilots from the transportation, textile, electronics and machine manufacturing industries around use cases such as statistical process control, real-time asset tracking, distributed manufacturing and collaborative robotics. It will also propose preventive and reactive capabilities to address security and safety concerns for FoF, such as blended cyber-physical threats, manufacturing data theft and adversarial machine learning. The project outputs form a total of 12 key capabilities arranged in 3 capacity layers: 1) modelling and simulation of the Factory System of Systems (Factory SoS); 2) Factory of the Future optimization; 3) Factory of the Future resilience. For each of the capacities defined above, 4 key capabilities address the cyber-physical, economic, human and societal dimensions that the project holistically embraces. The project results will be validated through a set of 8 pilot experimentations with a mix of simulation and field demonstrations. Of particular relevance to the ESM conference is the modelling and simulation capacity layer, which includes: i) cyber-physical system modelling based on digital twins and CyberRange technology; ii) econometric modelling and factory transactions in the supply chain; iii) human behavior modelling and analysis of psychological factors; iv) Factory System of Systems modelling and simulation of distributed manufacturing architectures. The CyberFactory#1 project started in December 2018 and will end in June 2022. It involves 28 partners from 7 countries and has received public funding from Canada, Finland, Germany, Portugal, Spain and Turkey. It is executed in the frame of the EUREKA-ITEA cluster for software innovation and is coordinated by Airbus Cybersecurity SAS (France).
Adrien Bécue (M) is Head of Innovation at Airbus Cybersecurity. He graduated from Toulouse Business School in 2003 with a Master in Aerospace Management. In 2004, he joined the French defence procurement agency (DGA) as a Program Purchaser for land weapon systems. In this position he managed a portfolio of industrial and research projects driven by the Network-Centric Warfare and Digital Battlefield transformations. He joined the EADS group in 2008 to manage acquisition programs in tactical communication systems for the French Navy and the International Security Assistance Force (ISAF) deployed in Afghanistan. In 2010 he became Research & Technology Project Manager for border security and maritime surveillance projects. In 2013 he joined Airbus DS Cybersecurity as Research & Technology Coordinator for France and launched several projects with a focus on attack detection and the security of industrial control systems. In 2016 he was promoted to Head of R&T and Innovation, taking over responsibility for the whole R&T portfolio of Airbus DS Cybersecurity across the UK, France and Germany. In 2017 he won the Eureka Award of Innovation for the ADAX ITEA project, dealing with advanced detection and simulation-based decision support. In 2018 he won the ITEA Award of Innovation for the FUSE-IT project, dealing with security and energy efficiency for smart buildings. He is working-group leader for the Industry and Transportation vertical markets at ECSO (European Cyber Security Organisation) and a member of ENISA's (European Network & Information Security Agency) expert groups for the security of Industry 4.0 and IoT.
Modelling & Simulation is a core capability for enabling resilient Factories of the Future (FoF). Digital models are used as a starting point for optimizing FoF environments as well as for monitoring and improving their resilience. To define the necessary digital models and simulation capabilities inside the CyberFactory#1 capability framework, an approach based on the definition of Use-Cases and Misuse-Cases was used. The Use-Case descriptions point out necessary developments and were used as a basis to perform risk assessments and create Misuse-Case descriptions. The Use-Case and Misuse-Case descriptions were later used to define requirements and thereby make precise the planned capabilities in modelling and simulation of the Factory System of Systems.
Based on a FoF Use-Case concerning a dynamic and self-organizing fleet of transport robots, the keynote gives a brief introduction to the approach used. It highlights the benefits and lessons learned in applying the described approach on the way to resilient FoF environments, and presents the identified needs and related requirements for modelling and simulation within this Use-Case.
Matthias Glawe graduated from Helmut-Schmidt University in Hamburg in 2010 with a diploma in Computational Engineering. He then led the IT departments of two facilities of the German Armed Forces, responsible for IT and IT-security planning. In 2014 he returned to Helmut-Schmidt University as a research fellow focusing on ICS security, security ontologies and knowledge-based support systems. After two years in the German Armed Forces planning office, working on IT architectures and requirements derivation, he joined Airbus Cybersecurity in April 2019 in the area of OT security. Since the start of the German CyberFactory#1 consortium he has been the national consortium lead and also led the work in the CyberFactory#1 work package focusing on business models, Use-Cases, Misuse-Cases and SoS design.
Originally seen as simulation models that mirror physical systems, Digital Twins (DTs) today embody the vision of a simulation with the ability to continuously monitor real assets, with communication capabilities that provide a near-real-time, comprehensive linkage between the physical and digital worlds. A DT can thus be seen as an evolving digital profile of the historical and current behavior of a physical object or process that may provide important insights into its performance. In CyberFactory#1, to foster a complete manufacturing SoS modelling and simulation capacity, we make new uses of Digital Twins. One of them relates to cybersecurity testing and training, together with cyber ranges, to enable risk anticipation and promote the resilience of Factories of the Future (FoF). Another is the holistic consideration of cyber and physical assets, including humans, in a co-simulation approach that provides an overall view of the shop floor for enhanced decision support, balancing the need for optimization against the resilience of FoF. In this talk, current examples of DTs and our own approach under CyberFactory#1 will be discussed.
Isabel Praça, Advisor to the ISEP Presidency for R&D, Professor at ISEP and Senior Researcher at GECAD - Isabel has a PhD in Electrical and Computer Engineering and a Post-Doc on the application of multi-agent systems and machine learning to intelligent energy services, awarded by the Portuguese National Science Foundation with a scholarship (SFRH/BPD/30111/2006). She has participated in over 25 national and international R&D projects, with relevant responsibilities, and has published over 150 papers, more than 50 of them in international journals and books. She has been working on the application of AI techniques to real problems since 2001. She participates in the technical and scientific committees of several conferences and is an active member of IEEE. She works in the area of Artificial Intelligence (AI), with special interest in machine learning, multi-agent systems, knowledge-based systems, decision support, context-aware methodologies, and modelling and simulation. Her main areas of application are: cyber-security; intelligent services in Industry 4.0, such as predictive maintenance, optimization and data lake exploitation, among others; and multi-agent systems and forecasting in power systems. She is the CyberFactory#1 Technical Coordinator and is responsible for the work package on Optimization and Self-improvement of Factories of the Future.