Research behind ASI

The goal of our research is to create systems capable of producing high-level output (deep semantic representations) from any input text, in any domain and for any application.

This ambitious goal naturally drives our research towards the adoption of schemes for Semantic Representation of Text (SRT) and technologies close to Meaning Representation Parsing (MRP): these high-level representations are the only ones that guarantee radical independence from specific tasks, and thus greater flexibility.

At the current state of the art, our technology is both event-centric and entity-centric, in the sense that, depending on the application, it may emphasize either relations between events and the actors in those events, or (possibly atemporal) relations among Named Entities. Note that, in order to improve our ability to capture "hidden" relationships among named entities, we recently started a new project aimed at integrating the Knowledge Graph into our semantic representations.

Current projects at IrradiantLabs:

  • Semarillion: It makes use of cognitive grammars to achieve a logical representation of sentences in terms of <subject, predicate, object> triples (RDF, Resource Description Framework). Central to this project is the solution of long-standing problems such as anaphora resolution, word sense disambiguation, and entity linking.
  • E-ventivity: It aims to produce scenarios of connected events deduced from textual reports. It tackles the problem of information overload with a proprietary algorithm that determines the saliency of each event composing the global scenario. In this project, we make heavy use of graph-embedding techniques.
  • ADA (Automatic Domain Adaptivity): While the basic structures of language stay the same across domains and applications (and, to a certain extent, across languages), lexical semantics varies across domains in terms of semantic features, semantic subcategorization, role assignment, etc. ADA uses transformer-based architectures to achieve lexical adaptation in a completely unsupervised manner.
  • NLU2: The predictions of our systems normally follow a chain of predefined steps of information normalization and abstraction. It is crucial for the acceptance of any AI system that each of these steps can be explained to the user. In this project, we experiment with techniques for making any prediction reasonable and justifiable even to the layman, irrespective of its correctness.
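To make the triple representation targeted by Semarillion concrete, here is a minimal sketch in plain Python. The sentence, the entity names, and the `ex:` namespace are all hypothetical illustrations, not output of our actual pipeline; the sketch only shows how a sentence can be rendered as <subject, predicate, object> triples and serialized in an N-Triples-like form:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    object: str

# Hypothetical triples one could extract from:
#   "Alice, who leads the lab, hired Bob."
triples = [
    Triple("ex:Alice", "ex:leads", "ex:Lab"),
    Triple("ex:Alice", "ex:hired", "ex:Bob"),
]

def to_ntriples(ts, base="http://example.org/"):
    """Expand the 'ex:' prefix and emit one N-Triples-style line per triple."""
    def expand(term):
        return "<" + term.replace("ex:", base) + ">"
    return [f"{expand(t.subject)} {expand(t.predicate)} {expand(t.object)} ."
            for t in ts]

for line in to_ntriples(triples):
    print(line)
```

Note that resolving the relative clause "who leads the lab" to the subject `Alice` is exactly the kind of anaphora-resolution problem the project addresses.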
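As a toy illustration of event saliency in the spirit of E-ventivity (not our proprietary algorithm), one can rank the events of a scenario graph with a PageRank-style centrality score, computed here by a short power iteration over a hypothetical graph of causally linked events:

```python
# Toy scenario graph: an edge u -> v means event u leads to event v.
# Event names are hypothetical.
edges = {
    "protest":      ["road_closure", "arrests"],
    "arrests":      ["trial"],
    "road_closure": ["traffic_jam"],
    "trial":        [],
    "traffic_jam":  [],
}

def saliency(graph, damping=0.85, iters=50):
    """PageRank-style score: events reached from many salient events rank high."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u, outs in graph.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # Dangling event (no outgoing edges): spread its mass uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

scores = saliency(edges)
for event, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {score:.3f}")
```

In a scenario summary, only the top-scoring events would be surfaced, which is one simple way to counter information overload.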

Last but not least, our research is driven by some extra-functional principles which we consider essential for the sustainable development of AI:

Flexibility, Explainability and Inclusiveness.