Monday, June 11
Monday, June 11, 09:00 - 12:00
T1: Tutorial Session 1: Information Quality in Human-Machine Integrated Environment
Abstract: Situation awareness in complex dynamic environments requires contextual understanding and interpretation of the events and behaviors of interest, which can be achieved by building a dynamic situational picture. Building such a picture involves gathering and fusing a large amount of multimedia and multispectral information coming from geographically distributed sources to produce estimates of objects and their characteristics, and to gain knowledge of the entire domain of interest. The information to be processed and made sense of includes, but is not limited to, data obtained from sensors, surveillance reports, human intelligence reports, operational information, and information obtained from open sources (internet, radio, TV, etc.). Successful processing of this information may also demand information sharing and dissemination, and cooperative action by multiple stakeholders. Such complex environments call for an integrated, fusion-based human-machine system, in which some processes are best executed automatically while for others the judgment and guidance of human experts and end-users are critical.
The problem of building such integrated systems is complicated by the fact that data and information obtained from observations and reports, as well as information produced by both human and automatic processes, are of variable quality and may be uncertain, unreliable, of low fidelity or insufficient resolution, contradictory, and/or redundant. The success of decision making in complex fusion-driven human-machine environments depends on being aware of, and compensating for, insufficient information quality at each step of information exchange. Thus quality considerations play an important role each time raw data (sensor readings, open sources, database search results, and intelligence reports) enter the system, as well as whenever information is transferred between automatic processes, between humans, and between automatic processes and humans.
The tutorial will discuss major challenges and some possible approaches to the problem of representing and incorporating information quality into fusion processes. In particular, it will present an ontology of quality of information and identify potential methods for representing and assessing the values of quality attributes and their combination. It will also examine the relation between information quality and context, and suggest possible approaches to quality control that compensate for insufficient information and model quality.
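As a toy illustration of one way quality attributes might be combined (this sketch is not drawn from the tutorial itself; the reliability scores and the aggregation rule are illustrative assumptions), consider reliability-weighted fusion of scalar estimates from several sources:

```python
# Toy sketch (not from the tutorial): fuse scalar estimates from several
# sources, weighting each by a hypothetical 0..1 reliability score, and
# report a crude aggregate quality score alongside the fused value.

def fuse_with_quality(reports):
    """reports: list of (estimate, reliability) pairs, reliability in [0, 1]."""
    total_weight = sum(r for _, r in reports)
    if total_weight == 0:
        return None, 0.0  # no usable information
    fused = sum(est * r for est, r in reports) / total_weight
    # Aggregate quality here is simply the mean reliability of the sources.
    quality = total_weight / len(reports)
    return fused, quality

# Three hypothetical sources reporting a target's speed (knots):
# the unreliable third source barely influences the fused estimate.
reports = [(20.0, 0.9), (24.0, 0.5), (40.0, 0.1)]
fused, quality = fuse_with_quality(reports)
print(fused, quality)
```

A real system would of course combine many quality attributes (fidelity, timeliness, credibility) rather than a single reliability scalar; the point is only that quality must travel with the data through each fusion step.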
Intended Audience: This tutorial is intended for both researchers and practitioners from a wide variety of fields such as communications, intelligence, business processes, personal computing, health care, and databases, who are interested in understanding information quality problems in complex human-machine integrated environments and in building methods for dealing with these problems.
T2: Tutorial Session 2: Experiences in Technology Transition for Human System Capabilities
Abstract: This tutorial focuses on the challenges of effective technology transition for Human Systems capabilities, ranging from platform associates and scheduling systems to distributed collaboration for situational awareness.
The tutorial will be organized into five sections. The first section will provide an overview of the instructor's experience with technology transition and preview the projects that will be covered. The second section will focus on the experiences and lessons learned from a project called Pilot's Associate, an early attempt to apply Artificial Intelligence to create an automated, intelligent back-seater for a single-seat fighter aircraft; the project ran from 1985 to 1990. The third section will cover another project, DART (Dynamic Analysis and Replanning Tool), which ran from September to November 1990, during Operation Desert Shield, and applied AI technology to the problem of developing and analyzing military deployment plans. The fourth section will cover the third and final project, Command Post of the Future (CPOF), which ran from 1997 to 2007. Its goal was to apply the most advanced human systems technology to revolutionize the military command post. The result of this project was deployed to Iraq in 2004 and became the primary command and control system for all forces in Iraq and Afghanistan.
For each of the three projects, the tutorial will include samples of the artifacts used to support both the development and the transition.
The last section of the tutorial will draw on all three project experiences to present guidelines for organizing and executing technology development and transition efforts. In addition, since all three projects involved advanced information technology and AI in support of human decision making, the instructor will offer some thoughts on how to approach the general challenge of creating advanced tools to support human decision making.
Monday, June 11, 12:00 - 13:30
Lunch (On your own)
Monday, June 11, 13:30 - 16:30
T3: Tutorial Session 3: Uncertainty and Indeterminacy in Formal and Informal Inferencing
Abstract: Eleanor Duckworth defines learning as a process of making increasingly subtle mistakes.
This tutorial relates the concepts of uncertainty as employed in three realms: classical physics, quantum physics and human behavior.
Classical physics is the realm of most macroscopic science and engineering, including the disciplines of data fusion, target tracking, and recognition. It assumes a world of discrete and persistent objects, mechanistic causal dynamics, conditionally independent measurements, and uncertainty captured by Fisher likelihoods and Bayesian probabilities. Deductive and inductive logics and abductive modeling are used to understand and predict observations. Observations are assumed to correspond, via causal transforms, to world states.
In contrast, the quantum revolution of the early 20th Century postulates a counter-intuitive world of fields perturbed by four forces, with fundamental indeterminacy and violations of spatio-temporal causality via, e.g., entanglement and time reversal. Under some interpretations, observations actually determine the observed states.
Finally, in their ordinary behavior, people deal with the uncertainties of the world in terms of neither classical nor quantum physics. Cognitive psychology has identified several specific heuristics that people use in comprehending and operating in an uncertain world. Heuristics of similarity, availability, and anchoring lead to characteristic biases and are susceptible to systematic errors. Deceptive techniques that exploit these biases will be discussed, with examples from stage magic, "fake news", and military and cyber deception.
As with quantum physics, the naïve human world-view includes entities and notions that are difficult to reconcile with those of classical physics. The classical notion of discrete, persistent objects is expanded to include ideas of the self. Classical causality is violated by notions of free-will. These notions can include ideas of souls, spirits and divine intercession. The subconscious workings of the mind can be as inaccessible and unfamiliar as the invisible quantum world. Hence the difficulty of defining and of artificially creating entities with self-awareness and free-will. The institutions of human society - of property, responsibility, laws and morality - are premised on the ineffable notions of self, consciousness and free-will.
We will examine the often-unspoken assumptions in each of these realms: the closed-world and causal assumptions of classical inferencing, the epistemological/ontological entanglement of quantum mechanics, and the notions of identity, self-awareness and free-will in human activity.
We will compare, contrast and inter-relate these diverse uncertainty models and discuss methods to reason across these domains. It will be suggested how closed-world, model-based methods used in classical macroscopic inferencing - e.g. in data fusion and robotics - can be extended to open, poorly-modeled domains. Amusing examples, exercises and puzzles will illustrate the surprising, often paradoxical aspects of these three world views and their interactions in human behavior.
T4: Tutorial Session 4: Introduction to Ontologies with Examples and Practical Implications for Situation Management and Decision Making
Abstract: An ontology is a formal structure specifying the types, properties, and interrelationships of the entities that are relevant to a particular domain of discourse. Ontologies are commonly used in a great variety of fields, including artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, and financial systems. Some of the problems that ontologies can address include intelligent search, interoperability, integration, reuse of domain knowledge, clarification of meaning, reduction of unnecessary complexity, and improved agility and flexibility.
In this tutorial, we give an introduction to ontologies with particular attention to formalizing situations. While the term "situation" is commonly used in scientific publications, including those of this conference, there are significant differences in its intended meaning. This can lead to misunderstandings among researchers and a lack of interoperability among programs that address situational awareness.
The notion of situation will be introduced at various abstraction levels. First, the problem of situational awareness formulated by Endsley, as well as the place of situational awareness in the JDL Data Fusion Model, will be presented. Then, the Situation Theory of Barwise, Perry and Devlin will be formalized using an ontology called the Situation Theory Ontology (STO). Finally, it will be shown how computer programs can make use of such a formalization for achieving situational awareness and decision making.
Intended Audience: The intended audience for this tutorial consists of both researchers and practitioners who want to achieve a better understanding of ontologies, situational awareness and situation management. The participants are expected to have some basic knowledge of formal logic. The participants will learn basic ontology engineering techniques using the major ontology representation languages; how situations can be formalized in logical terms; how to use general reasoning/inference engines to monitor or query situations; and how situations can be used within a general decision making process.
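As a minimal sketch of the kind of formalization the tutorial describes (illustrative only; the class and relation names below are invented for this example and do not follow the actual STO vocabulary), situation theory in the style of Barwise, Perry, and Devlin treats a situation as supporting "infons" — tuples of a relation, its arguments, and a polarity:

```python
# Illustrative sketch, not the STO itself: a situation "supports" infons,
# i.e. (relation, arguments, polarity) tuples, and a program can query
# which infons hold in order to reason about the situation.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Infon:
    relation: str
    args: tuple
    polarity: bool = True  # True: the relation holds; False: it does not

@dataclass
class Situation:
    infons: set = field(default_factory=set)

    def supports(self, infon: Infon) -> bool:
        """A situation supports an infon if the infon is among its facts."""
        return infon in self.infons

    def query(self, relation: str):
        """Return all positive infons asserting the given relation."""
        return [i for i in self.infons if i.relation == relation and i.polarity]

# A toy maritime situation: a vessel is inside a restricted zone.
s = Situation({
    Infon("located_in", ("vessel_42", "restricted_zone")),
    Infon("has_speed", ("vessel_42", "18kt")),
})
print(s.supports(Infon("located_in", ("vessel_42", "restricted_zone"))))  # True
```

In practice such formalizations are expressed in an ontology language such as OWL and queried through a reasoner rather than hand-rolled data structures; this sketch only shows the shape of the situation/infon distinction that the tutorial builds on.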
Monday, June 11, 18:00 - 21:00
Welcome Reception
Tuesday, June 12
Tuesday, June 12, 08:00 - 09:00
Breakfast
Tuesday, June 12, 09:00 - 09:10
Conference Opening
Tuesday, June 12, 09:10 - 10:00
K1: Keynote by Prof. Robert Kozma, Department of CS, UMass Amherst, MA, Department of Mathematics, U of Memphis, TN, "Energy Constraints in Cognitive Processing - The Role of Constraint Satisfaction in Emergent Awareness"
Tuesday, June 12, 10:00 - 10:30
Coffee Break
Tuesday, June 12, 10:30 - 12:00
S1: Oral Presentation Session: Decision Support
- Tactical Decision Support for UAV Deployment in MUM-T Helicopter Missions
- Action Selection Under Uncertainty and Risk Involving High Consequence Rare Events
- Interactive Decision Support: A Framework to Improve Diagnostic Processes of Cancerous Diseases Using Bayesian Networks
Tuesday, June 12, 12:00 - 13:30
Lunch (On your own)
Tuesday, June 12, 13:30 - 15:00
S2: Oral Presentation Session: Ontologies & Information Representation
- A Method to Identify Relevant Information Sufficient to Answer Situation Dependent Queries
- Towards an Ontology of Scenes and Situations
- Multimodal Interactions in Multi-Display Semi-Immersive Environments
Tuesday, June 12, 15:00 - 15:30
Coffee Break
Tuesday, June 12, 15:30 - 17:00
S3: Oral Presentation Session: Situation Assessment and Learning
- Automatic Identification of Maritime Incidents from Unstructured Articles
- Situation Granularity
- Handling Ambiguous Object Recognition Situations in a Robotic Environment via Dynamic Information Fusion
Tuesday, June 12, 17:30 - 19:30
CogSIMA 2019 Planning Meeting
Wednesday, June 13
Wednesday, June 13, 08:00 - 09:00
Breakfast
Wednesday, June 13, 09:00 - 10:00
K2: Keynote by Kim A. Mayyasi, CEO, SmartCloud Inc., Boston, MA, USA, "Neil Armstrong, Jack Welch and Bill Belichick Walk into a Bar: Situational Reasoning for Explanatory Artificial Intelligence"
Wednesday, June 13, 10:00 - 10:30
Coffee Break
Wednesday, June 13, 10:30 - 12:00
S4: Oral Presentation Session: Situation Understanding
- Adaptive Situational Leadership Framework
- ACT-R Modeling of Intelligence Analysts' Perception of Information to Improve Situational Understanding
- Development of an Interactive Dashboard to Analyze Cognitive Workload of Surgical Teams During Complex Procedural Care
Wednesday, June 13, 12:00 - 13:30
Lunch (On your own)
Wednesday, June 13, 13:30 - 15:00
Poster and Late Breaking Report Oral Presentations
- Cognitive Support to Promote Shared Mental Models During Safety-Critical Situations in Cardiac Surgery (Late Breaking Report)
- Modelling Complex System-Of-Systems for Creating Situation Awareness (Late Breaking Report)
- Situations, Information, and Evidence (Late Breaking Report)
- Artificial Swarms Find Social Optima (Late Breaking Report)
- A Probabilistic Time Reversal Theorem (Poster)
- Throughput and Channel Occupancy Time Fairness Trade-Off for Downlink LAA-Cat4 and WiFi Coexistence Based on Markov Chain (Poster)
- Human Understanding of Information Represented in Natural Versus Artificial Language (Poster)
- High Clutter, Close Range, Wi-Fi Imaging and Probabilistic, Learning Classifier (Poster)
- Automating Behaviour Tree Generation for Simulating Troop Movements (Poster)
- Towards an Optimal Design of a Data Fusion System for Maritime Domain Awareness (Poster)
- Distributed Detection of Correlated Signal Using Markov Chain Monte Carlo (Poster)
Wednesday, June 13, 15:00 - 15:30
Coffee Break
Wednesday, June 13, 15:30 - 17:00
Poster Session
Wednesday, June 13, 18:00 - 21:00
Conference Banquet Dinner
Awards Ceremony
Thursday, June 14
Thursday, June 14, 08:00 - 09:00
Breakfast
Thursday, June 14, 09:00 - 10:00
K3: Keynote by Dr. Gabriel Jakobson, Chief Scientist, CyberGem Consulting, Inc., "Mission-Centric Resilient Cyber Defense: Inspirations from Collective Cognitive Behavior"
Thursday, June 14, 10:00 - 10:30
Coffee Break
Thursday, June 14, 10:30 - 12:00
S5: Oral Presentation Session: Situations and Simulations
- The Observer Effect
- Situations in Simulations: An Initial Appraisal
- Virtual Leadership in Complex Multiorganizational Research and Development Programs
Thursday, June 14, 12:00 - 13:30
Lunch (On your own)
Thursday, June 14, 13:30 - 15:00
S6: Oral Presentation Session: Human-Machine Interaction
- The Complex Dynamics of Team Situation Awareness in Human-Autonomy Teaming
- Empirically Identified Gaps in a Situation Awareness Model for Human-Machine Coordination
- Operationalized Intent for Communication in Human-Agent Teams