Welcome to the homepage of the DIG seminar, which is the regular seminar of the DIG team of LTCI at Télécom ParisTech. The seminar features talks by members of the team and guests from other research groups, as well as discussions on topics of relevance to the team. Talks are held at Télécom ParisTech, 46 rue Barrault, Paris, France, métro Corvisart.

Attendance is open to the public, but please register in advance by emailing me at a3nm.seminar<REMOVETHIS>@a3nm.net if you are planning to attend.

You can subscribe to the seminar sessions as an iCalendar feed (e.g., with ICSdroid on Android) using the following URL: https://a3nm.net/work/seminar/calendar.ics
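If you prefer scripting over a calendar app, a minimal Python sketch along these lines fetches the feed and lists the sessions (standard library only, deliberately naive parsing with no line unfolding; a proper iCalendar library is preferable for real use):

    # Minimal sketch: fetch the seminar's iCalendar feed and print event dates and titles.
    # Naive parsing (no line unfolding); for anything serious, prefer a library such as `icalendar`.
    from urllib.request import urlopen

    ICS_URL = "https://a3nm.net/work/seminar/calendar.ics"

    with urlopen(ICS_URL) as response:
        lines = response.read().decode("utf-8").splitlines()

    for line in lines:
        if line.startswith("DTSTART") or line.startswith("SUMMARY"):
            print(line)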

The seminar was formerly called the "DBWeb seminar" and the "IC2 seminar". You may also be interested in the LTCI Data Science Seminar, which is co-organized by DIG and S2A.

10 January 2019, 12:00, C46

Nedeljko Radulović, Télécom ParisTech
Explainable Artificial Intelligence

Abstract: In recent years, machine learning and artificial intelligence systems have been reaching, and sometimes even exceeding, human performance in tasks such as image recognition, speech understanding, or strategic decision making. The main problem with many of these models is their lack of transparency and interpretability: there is no information about how exactly they reach their predictions. This is a major issue in sensitive fields such as healthcare, policing, and finance. To address these issues, explainable artificial intelligence (XAI) has become an important topic of interest in the research community.

Through our research, we want to address this problem with insights from another field that has recently celebrated great advances: that of large knowledge bases (KBs). By contributing the link to the real world, KBs can give a semantic dimension to machine learning (ML) algorithms. While semantic background knowledge has long been used in ML, we believe that the recent explosion of the size of KBs warrants a revisit of this approach. KBs are now much larger, much broader in terms of thematic coverage, and much cleaner at scale. We imagine that a symbiosis between these new KBs and ML could take several forms: semantics can be injected a posteriori into a learned model; semantics can be taken into account as background knowledge during the learning process, or the learning process can feed directly from the semantic data. We aim to systematically explore all of these possibilities and investigate how they can serve to make AI and ML models more interpretable, more explainable, and ultimately more human-intelligible.

Bio: I studied Electrical Engineering and Computer Science at the School of Electrical Engineering, University of Belgrade, Serbia. This year, I obtained an M.Sc. in Computer Science at Télécom ParisTech. I am starting my Ph.D. studies with Professors Albert Bifet and Fabian Suchanek; the research topic of my Ph.D. is Explainable AI.

13 December 2018, 12:00, C47

David Carral, TU Dresden
Reasoning with Description Logics Ontologies and Knowledge Graphs

Abstract: Ontology-based access to knowledge graphs (KGs) has recently gained a lot of attention. One of the research challenges when accessing these large data structures is to enable "the capability of combining diverse reasoning methods and knowledge representations while guaranteeing the required scalability, according to the reasoning task at hand." [1]

In our work, we address this challenge with a focus on reasoning with KGs extended with Description Logics (DL) ontologies. In principle, one could make use of existing DL reasoners to solve these reasoning tasks. However, DL reasoners, which are designed to deal with complex terminological axioms, do not scale well in the presence of large amounts of assertional information. In contrast, existing rule engines such as VLog or RDFox can efficiently reason with data-intensive knowledge bases. To take advantage of these powerful implementations, we propose several data-independent mappings from DL TBoxes into rule sets that preserve the outcomes of conjunctive query (CQ) answering. Our experiments indicate that reasoning with rule engines over the resulting CQ-preserving rewritings can be significantly more efficient than using state-of-the-art DL reasoners over the original DL ontologies.
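As a rough, hedged illustration of the style of rule-based reasoning such engines perform (this is not the translation proposed in the talk), a DL axiom like "every instance of A has an R-successor in B" can be read as an existential rule, and a naive chase step satisfies it by inventing a fresh labelled null:

    # Hedged illustration (not the translation from the talk): one chase step for the
    # existential rule  A(x) -> exists y. R(x, y), B(y),
    # corresponding to the DL axiom stating that every A has an R-successor in B.
    facts = {("A", ("alice",)), ("A", ("bob",)), ("R", ("alice", "n0")), ("B", ("n0",))}

    def chase_step(facts):
        """Apply the rule once: if A(x) holds but x has no R-successor in B,
        introduce a fresh labelled null to satisfy the existential."""
        new_facts = set()
        fresh = 0
        for pred, args in facts:
            if pred != "A":
                continue
            (x,) = args
            satisfied = any(p == "R" and a[0] == x and ("B", (a[1],)) in facts
                            for p, a in facts)
            if not satisfied:
                null = "_:null%d" % fresh
                fresh += 1
                new_facts |= {("R", (x, null)), ("B", (null,))}
        return facts | new_facts

    print(chase_step(facts))  # bob gets a fresh R-successor in B; alice is already satisfied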

[1] This quote is taken from the description of a recent Dagstuhl seminar on Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web.

Bio: Since October 2016, I have been a postdoctoral scholar in the Knowledge-Based Systems group led by Prof. Markus Krötzsch at Technische Universität Dresden. I completed my doctoral degree at Wright State University under the supervision of Prof. Pascal Hitzler. For a couple of months at the beginning of my Ph.D., I was an exchange student at the University of Oxford, working under the supervision of Prof. Bernardo Cuenca Grau.

Broadly speaking, I am interested in the study of logical languages such as Description Logics and existential rules, the implementation of reasoning algorithms for these languages, and the use and application of semantic web technologies in different domains.

11 December 2018, 14:00, C48

LTCI Data Science Seminar session. Speaker: Shai Ben-David.

See the LTCI Data Science Seminar Webpage for details.

29 November 2018, 14:00, C48

LTCI Data Science Seminar session. Speakers: Rodrigo Mello and Olivier Sigaud.

See the LTCI Data Science Seminar Webpage for details.

15 November 2018, 12:00, B551

Arnaud Soulet, Université de Tours
Representativeness of Knowledge Bases with the Generalized Benford’s Law

Abstract: Knowledge bases (KBs) such as DBpedia, Wikidata, and YAGO contain a huge number of entities and facts. Several recent works induce rules or calculate statistics on these KBs. Most of these methods are based on the assumption that the data is a representative sample of the studied universe. Unfortunately, KBs are biased because they are built from crowdsourcing and opportunistic agglomeration of available databases. This work aims at approximating the representativeness of a relation within a knowledge base. For this, we use the Generalized Benford's law, which indicates the distribution expected by the facts of a relation. We then compute the minimum number of facts that have to be added in order to make the KB representative of the real world. Experiments show that our unsupervised method applies to a large number of relations. For numerical relations where ground truths exist, the estimated representativeness proves to be a reliable indicator.
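To give a concrete flavour of the kind of distributional test this builds on, here is a minimal sketch using the classical (not the generalized) Benford's law; it only measures how far a list of numbers deviates from the expected first-digit distribution, and does not estimate missing facts as the talk's method does:

    # Minimal sketch: compare the first-digit distribution of a list of numbers
    # (e.g., fact counts per entity for some relation) with the classical Benford
    # distribution P(d) = log10(1 + 1/d). The talk uses the *Generalized* Benford's
    # law and goes further by estimating how many facts are missing.
    import math
    from collections import Counter

    def first_digit(x):
        s = str(abs(x)).lstrip("0.")
        return int(s[0]) if s else 0

    def benford_deviation(values):
        digits = [first_digit(v) for v in values if first_digit(v) > 0]
        observed = Counter(digits)
        n = len(digits)
        deviation = 0.0
        for d in range(1, 10):
            expected = math.log10(1 + 1 / d)
            deviation += abs(observed[d] / n - expected)
        return deviation  # 0 means a perfect fit

    print(benford_deviation([123, 8, 472, 19, 33, 1507, 2, 64, 911, 13]))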

Bio: Arnaud Soulet is an associate professor at University of Tours. His research interests include databases, data mining and knowledge bases.

8 November 2018, 12:00, C47

Borja Balle, Amazon Research
Privacy-Aware Machine Learning Systems (slides)

Abstract: Privacy-aware machine learning systems allow us to train models on sensitive data without the need to have plain-text access to the data. For example, such systems could enable hospitals in different countries to learn models on their combined datasets without the need to entrust the data held by each hospital to a centralized computing node. In this talk I will describe how several privacy-enhancing technologies like differential privacy and secure multi-party computation come together in this line of work. In particular, I will highlight our current progress in this space and the remaining challenges to obtain scalable and trusted large-scale deployments.
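As a small, hedged illustration of one of the building blocks mentioned above (not code from the talk), here is the Laplace mechanism from differential privacy, releasing a count with noise calibrated to the privacy budget epsilon:

    # Minimal sketch of the Laplace mechanism (one building block of differential
    # privacy, mentioned in the abstract; not code from the talk).
    import math
    import random

    def laplace_noise(scale):
        # Inverse-CDF sampling of a Laplace(0, scale) random variable.
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def private_count(records, predicate, epsilon):
        """Release a count with epsilon-differential privacy.
        A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    patients = [{"age": 34}, {"age": 67}, {"age": 71}, {"age": 45}]
    print(private_count(patients, lambda r: r["age"] > 60, epsilon=0.5))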

Bio: Borja Balle is currently a Machine Learning Scientist at Amazon Research in Cambridge (UK). Before joining Amazon, Borja was a lecturer at Lancaster University (2015-2017), a postdoctoral fellow at McGill University (2013-2015), and a graduate student at Universitat Politecnica de Catalunya where he obtained his PhD in 2013. His main research interest is in privacy-preserving machine learning, including the use of differential privacy and multi-party computation in distributed learning systems, and the mathematical foundations of privacy-aware data science.

25 October 2018, 12:00, C47

Fabian M. Suchanek, Télécom ParisTech
An introduction to deep learning (slides)

Abstract: In this talk, I will present the basics of deep learning. The goal of the presentation is twofold: 1) to share what I have learnt about deep learning with those who would like to know what it is, and 2) to receive feedback from those who already know more about it than I do. I have slides, but the presentation will follow the interaction with the audience.

Biography: Fabian Suchanek is a professor in the group.

4 October 2018, 13:00, C47

Quentin Lobbé, Télécom ParisTech
Where the dead blogs are: a disaggregated exploration of Web archives to reveal extinct online collectives (slides)

Abstract: The Web is an unsteady environment. As Web sites emerge and expand every day, whole communities may fade away over time, leaving too few or incomplete traces on the living Web. Worldwide volumes of Web archives preserve the history of the Web and reduce the loss of this digital heritage. Web archives remain essential to the comprehension of the lifecycles of extinct online collectives. In my talk, I will introduce a framework to follow the internal dynamics of vanished Web communities, based on the exploration of corpora of Web archives. To achieve this goal, I propose the definition of a new unit of analysis called the Web fragment: a semantic and syntactic subset of a given Web page, designed to increase historical accuracy. This contribution has practical value for those who conduct large-scale archive exploration (in terms of time range and volume) or are interested in computational approaches to Web history and social science. By applying this framework to the Moroccan archives of the e-Diasporas Atlas, we will first witness the collapse of an established community of Moroccan migrant blogs and show its progressive mutation towards rising social platforms between 2008 and 2018. Then, we will study the sudden creation of an ephemeral collective of forum members gathered by the wave of the Arab Spring in early 2011. We will finally yield new insights into historical Web studies by suggesting the concept of a pivot moment of the Web.

Biography: Quentin Lobbé is a PhD student in the group; this is a rehearsal talk for the BDA conference.

6 September 2018, 12:00, C47

Rodrigo Mello, University of Sao Paulo
The Statistical Learning Theory in Practical Problems (slides, code)

Abstract: In this 30-minute talk, Prof. Rodrigo Mello will introduce his main research interests in an informal way: Statistical Learning Theory, Data Streams/Time Series modeling using Statistics and Dynamical Systems, and how theoretical aspects can support the design of Deep Learning architectures. Several applications will also be mentioned during this talk.

Biography: Rodrigo Mello is an Associate Professor at the Institute of Mathematics and Computer Sciences, Department of Computer Science, University of São Paulo, São Carlos, Brazil. Prof. Mello is currently spending a sabbatical year as an invited professor at Télécom ParisTech, following an invitation by Prof. Albert Bifet. He completed his PhD at the University of São Paulo, São Carlos, in 2003, and also spent a year as an Invited Professor at St. Francis Xavier University, Antigonish, NS, Canada. His research interests are mostly associated with theoretical aspects of Machine Learning, Data Streams/Time Series modeling and prediction, and Deep Learning.

12 July 2018, 12:00, C48

Joe Raad, Université Paris-Saclay
Towards a solution to the “sameAs problem” (slides)

Abstract: In the absence of a central naming authority on the Semantic Web, it is common for different datasets to refer to the same thing by different IRIs. Whenever multiple names are used to denote the same thing, owl:sameAs statements are needed in order to link the data and foster reuse. However, studies that date back as far as 2009 have observed that the Semantic Web identity predicate is sometimes used incorrectly, leaving multiple incorrect owl:sameAs statements on the Web. This problem is known as the "sameAs problem". In this talk, we show how network metrics, such as the community structure of the owl:sameAs graph, can be used for detecting such possibly erroneous statements. One benefit of the approach presented here is that it can be applied to the network of owl:sameAs links itself, and does not rely on any additional knowledge. In order to illustrate its ability to scale, the approach is evaluated on the largest collection of identity links to date, containing over 558 million owl:sameAs links scraped from the LOD Cloud.
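As a rough sketch of the underlying idea (not the authors' exact algorithm or ranking), one can build the undirected owl:sameAs graph, split each equality cluster into communities, and flag the links that bridge two communities:

    # Rough sketch of the idea (not the authors' exact algorithm): build the
    # owl:sameAs graph, detect communities inside each equality cluster, and flag
    # the links that bridge two communities as candidate erroneous statements.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    sameas_links = [
        ("ex:Paris", "dbpedia:Paris"), ("dbpedia:Paris", "wd:Q90"),
        ("wd:Q90", "ex:Paris"),
        ("ex:Paris_Texas", "dbpedia:Paris,_Texas"),
        ("dbpedia:Paris", "ex:Paris_Texas"),  # suspicious link between the two groups
    ]

    G = nx.Graph(sameas_links)
    for component in nx.connected_components(G):
        cluster = G.subgraph(component)
        communities = list(greedy_modularity_communities(cluster))
        membership = {node: i for i, comm in enumerate(communities) for node in comm}
        for u, v in cluster.edges():
            if membership[u] != membership[v]:
                print("possibly erroneous:", u, "owl:sameAs", v)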

Biography: Joe is a PhD student at the University of Paris-Saclay, and a member of the LINK (AgroParisTech-INRA, Paris) and LAHDAK (LRI, Orsay) teams. His current research comprises knowledge representation using Semantic Web languages, as well as studying the use of identity in the Semantic Web.

5 July 2018, 12:00, C47

Thomas Rebele, Télécom ParisTech, DIG team
Extending the YAGO knowledge base (slides)

Abstract: A knowledge base is a set of facts about the world. YAGO was one of the first large-scale knowledge bases that were constructed automatically. This presentation shows our work on extending the YAGO knowledge base along two axes: extraction and preprocessing.

The first part of the talk presents methods that increase the number of facts about people in YAGO. We have developed algorithms and heuristics for extracting more facts about birth and death dates, about gender, and about the place of residence. We also show how to use these data for studies in Digital Humanities.

The second part discusses two algorithms for repairing a regular expression automatically so that it matches a given set of words. Experiments on various datasets show the effectiveness and generality of these algorithms. Both algorithms improve the recall of the initial regular expression while achieving a similar or better precision.
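The repair algorithms themselves are the subject of the talk; as a small side illustration of the evaluation criterion only, the following sketch measures the recall and precision of a candidate regular expression against made-up positive and negative word lists:

    # Small sketch of the evaluation criterion only (recall/precision of a regex
    # on word lists); the repair algorithms themselves are the subject of the talk.
    import re

    def recall_precision(pattern, positives, negatives):
        regex = re.compile(pattern)
        matched_pos = [w for w in positives if regex.fullmatch(w)]
        matched_neg = [w for w in negatives if regex.fullmatch(w)]
        recall = len(matched_pos) / len(positives)
        matched = len(matched_pos) + len(matched_neg)
        precision = len(matched_pos) / matched if matched else 1.0
        return recall, precision

    positives = ["1905", "1987", "2003"]          # words the regex should match
    negatives = ["19x5", "year", "20-03"]         # words it should reject
    print(recall_precision(r"19\d\d", positives, negatives))       # initial regex
    print(recall_precision(r"(19|20)\d\d", positives, negatives))  # repaired regex: better recall, same precision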

The third part presents a system for translating database queries into Bash scripts. This approach allows preprocessing large tabular datasets and knowledge bases by executing Datalog and SPARQL queries, without installing any software beyond a Unix-like operating system. Experiments show that the performance of our system is comparable with state-of-the-art systems.
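As a hedged illustration of the general idea (this is not the system's actual output), a Datalog join can be compiled into a pipeline of standard Unix tools such as sort and join:

    # Hedged illustration (not the system's actual output): the kind of Unix
    # pipeline a Datalog join such as
    #     answer(X, Z) :- livesIn(X, Y), locatedIn(Y, Z).
    # could be compiled to, using only sort/join from a Unix-like OS.
    import subprocess

    with open("livesIn.tsv", "w") as f:
        f.write("alice\tParis\nbob\tLyon\n")
    with open("locatedIn.tsv", "w") as f:
        f.write("Paris\tFrance\nLyon\tFrance\n")

    pipeline = r"""
    sort -t $'\t' -k2,2 livesIn.tsv > livesIn.sorted
    sort -t $'\t' -k1,1 locatedIn.tsv > locatedIn.sorted
    join -t $'\t' -1 2 -2 1 -o 1.1,2.2 livesIn.sorted locatedIn.sorted
    """
    print(subprocess.run(["bash", "-c", pipeline], capture_output=True, text=True).stdout)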

Biography: Thomas Rebele is a PhD student in our group.

14 June 2018, 12:00, C47

Lucie-Aimée Kaffee, University of Southampton
Multilinguality of Wikidata (slides)

Abstract: The web in general shows a lack of support for non-English languages. One way of overcoming this lack of information is using multilingual linked data. Wikidata supports over 400 languages in theory. In practice, however, not all languages are equally supported. As a first step, we want to explore the language distribution of a collaboratively edited knowledge base such as Wikidata and the label coverage of the web of data in general. Labels are the access point for humans to the web of data, and a lack thereof means limited reusability. Wikipedia is an ideal candidate for reuse of the multilingual data: the project has instances in over 280 languages, but the number of articles differs drastically. For many readers it could be a first starting point to get information. However, with a lack of information the project is unlikely to attract new community members that could create new articles. We investigate the possibility of neural natural language generation for underserved Wikipedia communities, using Wikidata's facts, and evaluate this approach with the help of the Arabic and Esperanto Wikipedia communities. This approach can only be as good as the amount of multilingual data we have at our disposal. Therefore, we discuss future ways of improving the coverage of under-resourced languages' information in Wikidata.
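As a rough illustration of what label coverage means in practice (not code from the talk), the following sketch queries the public Wikidata SPARQL endpoint for the number of languages in which a single item has a label; it assumes the SPARQLWrapper package is installed:

    # Rough illustration of "label coverage" (not code from the talk): count the
    # languages in which a single Wikidata item has a label, via the public
    # SPARQL endpoint. Requires the `SPARQLWrapper` package.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setQuery("""
    SELECT (COUNT(?label) AS ?nbLabels) WHERE {
      wd:Q90 rdfs:label ?label .          # Q90 = Paris
    }
    """)
    sparql.setReturnFormat(JSON)
    result = sparql.query().convert()
    print(result["results"]["bindings"][0]["nbLabels"]["value"])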

Biography: Lucie is a PhD student at the School of Electronics and Computer Science, University of Southampton, as part of the Web and Internet Science (WAIS) research group. Additionally, she is part of the Marie Skłodowska-Curie ITN Aqua. Generally, she is working on how to support underserved languages on the web with the means of linked data. Therefore, her research interests include linked data, multilinguality, Wikidata, underserved languages on the web, and most recently natural language generation and relation extraction. Before getting involved with research, she worked as a software developer at Wikimedia Deutschland in the Wikidata team. There she was already involved in the previously mentioned topics, developing the ArticlePlaceholder extension, which includes Wikidata's structured knowledge on Wikipedias of small languages, a project on which she has continued to do research. She is still involved in Open Source projects, mainly Wikimedia-related, where she is currently part of the Code of Conduct Committee for technical spaces.

23 May 2018, 12:05, C47

Viktor Losing, University of Bielefeld, HONDA Research Institute Europe
Memory Models for Incremental Learning Architectures (slides)

Abstract: More and more products are available with automated functions for human assistance or autonomous services in home or outdoor environments. A common problem is the inadequate match between user expectations, which are highly individual, and the assistant system's function, which is typically rather standardized. Incremental learning methods offer a way to adapt the parameters and behavior of an assistant system according to user needs and preferences. In this talk, I will illustrate the benefits of personalization and incremental learning using the task of driver maneuver prediction at intersections. The study is based on a collection of commuting drivers who recorded their daily routes with a standard smartphone and GPS receiver. A personalized prediction based on at least one experience of a certain intersection already improves the prediction performance over an average prediction model.

A closely related topic is incremental learning in non-stationary data streams which is highly challenging, since the possibly occurring types of drift are fundamentally different and undermine classical assumptions such as data independence or stationary distributions. Here, I will introduce the Self Adjusting Memory (SAM) model for the k Nearest Neighbor (kNN) algorithm. The basic idea is to construct dedicated models for the current and former concepts and apply them according to the demands of the given situation. In an extensive evaluation, SAM-kNN achieves highly competitive results throughout all experiments, underlining its robustness and capability to handle heterogeneous concept drift.
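A minimal sketch of the core idea behind SAM, keeping a short-term and a long-term memory and trusting whichever has been more accurate recently, could look as follows (this is not the actual SAM-kNN implementation, which also cleans and compresses the long-term memory):

    # Minimal sketch of the core SAM idea: keep a short-term memory (recent window)
    # and a long-term memory, and answer with whichever memory has been more
    # accurate recently. Not the actual SAM-kNN implementation.
    from collections import deque

    def knn_predict(memory, x, k=3):
        if not memory:
            return None
        neighbors = sorted(memory, key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], x)))
        labels = [y for _, y in neighbors[:k]]
        return max(set(labels), key=labels.count)

    short_term = deque(maxlen=50)          # recent examples only
    long_term = []                         # everything seen so far
    recent_hits = {"short": deque(maxlen=50), "long": deque(maxlen=50)}

    def update(x, y):
        """Process one labelled example from the stream: predict, then learn."""
        preds = {"short": knn_predict(short_term, x), "long": knn_predict(long_term, x)}
        for name, pred in preds.items():
            recent_hits[name].append(pred == y)
        best = max(recent_hits, key=lambda name: sum(recent_hits[name]))
        prediction = preds[best]           # the memory trusted for this example
        short_term.append((x, y))
        long_term.append((x, y))
        return prediction

    for x, y in [((0.1, 0.2), "a"), ((0.9, 0.8), "b"), ((0.15, 0.25), "a")]:
        print(update(x, y))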

Biography: Viktor Losing received his M.Sc. in Intelligent Systems at the University of Bielefeld in 2014. Since 2015, he has been a PhD student at the CoR-Lab of the University of Bielefeld in cooperation with the HONDA Research Institute Europe. His research interests comprise incremental and online learning, learning under concept drift, as well as corresponding real-world applications.

28 March 2018, 12:05, C47

Romain Giot, IUT Bordeaux and LaBRI
Biometric performance evaluation with novel visualization (slides)

Abstract: Biometric authentication verifies the identity of individuals based on what they are. However, biometric authentication systems are error-prone and can reject genuine individuals or accept impostors. Researchers on biometric authentication quantify the quality of their algorithms by benchmarking them on several databases. However, although the standard evaluation metrics state the performance of a system, they are not able to explain the reasons for these errors.

After presenting the existing evaluation procedures of biometric authentication systems as well as visualisation properties, this talk presents a novel visual evaluation of the results of a biometric authentication system, which helps to find which individuals or samples are sources of errors and could help to fix the algorithms. Two variants are proposed: one where the individuals of the database are modelled as a directed graph, and another one where the biometric database of scores is modelled as a partitioned power-graph where nodes represent biometric samples and power-nodes represent individuals. A novel recursive edge bundling method is also applied to reduce clutter. This proposal has been successfully applied to several biometric databases and has proved its interest.

Biography: I am an associate professor at the IUT de Bordeaux and the LaBRI, and head of the team "Back to Bench and Beyond" of the group "Bench to Knowledge and Beyond". I have research experience in biometric authentication (as a PhD student at the University of Caen, where I worked on template update and multibiometrics for keystroke dynamics), anomaly detection (as a postdoctoral researcher at Orange Labs, where I worked on fraud detection in mobile payment), and large graph visualisation (since becoming an associate professor at Bordeaux).

5 March 2018, 12:05, C46

Fadi Badra, LIMICS
Analogical Transfer: a Form of Similarity-Based Inference? (slides)

Abstract: Making an analogical transfer consists in assuming that if two situations are alike in some ways, they may be alike in others. Such a cognitive process is the inspiration for different machine learning approaches like analogical classification, the k-nearest neighbors algorithm, or case-based reasoning. This talk explores the role of similarity in the transfer phase of analogy, by taking a qualitative reasoning viewpoint. We first show that there exists an intimate link between the qualitative measurement of similarity and computational analogy. Essential notions of formal models of analogy, such as analogical equalities/inequalities, or analogical dissimilarity, and the related inferences (mapping and transfer) can be formulated as operations on ordinal similarity relations. In the light of these observations, we will defend the idea that analogical transfer is a form of similarity-based inference.

Biography: Fadi Badra is an assistant professor at Paris 13 University, and is a member of the Medical Informatics and Knowledge Engineering Research Group (LIMICS) in Paris, France. He completed his PhD in the Orpailleur Research Group at the LORIA Lab in Nancy, France. His current research interests are in the area of computational analogy and case-based reasoning, with a particular focus on its adaptation phase.

22 November 2017, 12:00, C47

Vwani Roychowdhury, UCLA
The Unreasonable Effectiveness of Data: A Scalable Framework for "Understanding" Social Forums and Online Discussions (no slides provided)

Abstract: As humans, we interpret and react to the world around us in terms of narratives. At a basic level, a narrative comprises principal actors and entities, their interactions, and finally the decisions they make to reinforce and protect their interests. The primary question we address in this talk is whether a computer can automatically distill and create such narrative maps from millions of posts and discussions that happen in the online world. How much and which parts of the underlying narratives can be extracted via unsupervised statistical methods, and how much "humanness" needs to be coded into a computer? We provide a framework that uses statistical techniques to generate automated summaries, and show that when augmented with a small-size dictionary that encodes "humanness," the framework can generate effective narratives from a number of domains. We will present several sets of empirical results where millions of posts are processed to generate story graphs and plots of the underlying discussions.

Biography: Vwani Roychowdhury is a Professor of Electrical and Computer Engineering at the University of California, Los Angeles (UCLA). He specializes in interdisciplinary work that deals with the modeling and design of information and computing systems, ranging from physical and biological to engineered systems. He has done pioneering work in Quantum Computing, Nanoelectronics, Peer-to-Peer (P2P), social and complex networks, machine learning, text mining, artificial neural networks, computer vision, and Internet-scale data processing. He has published more than 200 peer-reviewed journal and conference papers, and co-authored several books. He has also cofounded several Silicon Valley startups, including www.netseer.com and www.stieleeye.com.

18 October 2017, 12:00, C47

Yun Sing Koh, University of Auckland
Using Volatility in Concept Drift Detection and Capturing Recurrent Concept Drift in Data Streams (slides)

Abstract: Much of scientific research involves the generation and testing of hypotheses that can facilitate the development of accurate models for a system. In machine learning, the automated building of accurate models is desired. However, traditional machine learning often assumes that the underlying models are static and unchanging over time. In reality, there are many applications that analyse data streams where the underlying model or system changes over time. This may be caused by changes in the conditions of the system, or a fundamental change in how the system behaves. In this talk, I will present a change detector called SEED, and how we capture stream volatility. We coin the term stream volatility to describe the rate of changes in a stream. A stream has a high volatility if changes are detected frequently and a low volatility if changes are detected infrequently. I will also present a drift prediction algorithm to predict the location of future drift points based on historical drift trends, which we model as transitions between stream volatility patterns. Our method uses a probabilistic network to learn drift trends and is independent of the drift detection technique. I will then present a meta-learner, the Concept Profiling Framework (CPF), that uses a concept drift detector and a collection of classification models to perform effective classification on data streams with recurrent concept drifts, by relating models through the similarity of their classifying behaviour.
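As a small illustration of the notion of stream volatility only (not the SEED detector itself), one can summarize how volatile a stream is by the mean interval between the positions where a drift detector fired:

    # Small sketch of the notion of stream volatility (not the SEED detector):
    # given the positions where a drift detector fired, summarize how volatile the
    # stream is by the mean interval between consecutive detections.
    def volatility(drift_positions):
        if len(drift_positions) < 2:
            return float("inf")            # no measurable rate of change yet
        gaps = [b - a for a, b in zip(drift_positions, drift_positions[1:])]
        return sum(gaps) / len(gaps)       # small value = high volatility

    calm_stream = [1000, 4800, 9200]       # drifts detected rarely
    volatile_stream = [100, 230, 310, 450, 520]

    print(volatility(calm_stream))         # large mean gap: low volatility
    print(volatility(volatile_stream))     # small mean gap: high volatility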

Biography: Yun Sing Koh is a Senior Lecturer at the Department of Computer Science, The University of Auckland, New Zealand. She completed her PhD at the Department of Computer Science, University of Otago, New Zealand in 2007. Her current research interest is in the area of data mining and machine learning, specifically data stream mining and pattern mining.

12 September 2017, 12:00, C47

Bob Durrant, University of Waikato
Random Projections for Dimensionality Reduction (slides)

12 July 2017, 12:00, C47

Amin Mantrach, Criteo Research
Deep Character-Level Click-Through Rate Prediction for Sponsored Search (slides)

31 May 2017, 12:00, C48

Quentin Lobbé, Télécom ParisTech
An exploration of web archives beyond the pages: Introducing web fragments (slides)
Mikaël Monet, Télécom ParisTech
Probabilistic query evaluation: towards tractable combined complexity (slides)

26 April 2017, 12:00, C47

Themis Palpanas, LIPADE, Paris Descartes University
Riding the Big IoT Data Wave: Complex Analytics for IoT Data Series (slides)

8 March 2017, 12:00, C47

Thomas Bonald, Télécom ParisTech
Community detection in graphs (slides)

27 February 2017, 12:00, C46

Laurent Decreusefond, Télécom ParisTech
Stochastic geometry, random hypergraphs, random walks (slides)

26 January 2017, 12:00, C47

Nofar Carmeli, Technion
Efficiently Enumerating Tree Decompositions (slides)

11 January 2017, 12:00, C47

Simon Razniewski, Free University of Bozen-Bolzano
Query-driven Data Completeness Assessment (slides)

14 December 2016, 12:00, C47

Fabian M. Suchanek, Télécom ParisTech
A hitchhiker’s guide to Ontology (slides)

23 November 2016, 12:00, C47

Ngurah Agus Sanjaya ER, Télécom ParisTech
Set of T-uples Expansion by Example (slides)
Qing Liu, National University of Singapore
Top-k Queries over Uncertain Scores (slides)

26 October 2016, 12:00, C46

Maria Koutraki, Université Paris-Saclay
Approaches towards unified models for integrating Web knowledge bases (slides)

From November 2013 to September 2016

During this time, the DBWeb seminar was held as part of the IC2 group seminar. These seminars are listed on the IC2 seminar Web page.

10 September 2013, 14:00, C49

Antoine Amarilli
Taxonomy-Based Crowd Mining (slides)
Jean-Louis Dessalles
Relevance (slides)

14 January 2013, 10:00, B549

Vincent Lepage, Cinequant
Cinequant, datamining pour le monde réel
Jean Marc Vanel, Déductions SARL
EulerGUI, un outil libre pour le Web Sémantique et l'inférence

04 December 2012, 10:00, C017

Jean-Louis Dessalles
Why spend (so much) time on the social Web? A model of investment in communication
François Rousseau
Short talk and brainstorming on graph based text representation and mining

20 November 2012, 10:00, C017

Mohamed-Amine Baazizi
Static analysis for optimizing the update of large temporal XML documents
Christos Giatsidis
S-cores and degeneracy based graph clustering

6 November 2012, 10:00, C49

Jonathan Michaux, Télécom ParisTech
Interaction safety in Web service orchestrations (slides)
Georges Gouriten
Brainstorming on knowledge-based content suggestions on the social Web

16 October 2012, 10:00, C49

Clémence Magnien, Université Pierre et Marie Curie
Measuring, studying, and modelling the dynamics of Internet topology
Imen Ben Dhia
Evaluating reachability queries over large social graphs (slides)

2 October 2012, 10:00, C017

Idrissa Sarr, Université Cheikh Anta Diop
Dealing with the disappearance of nodes in social networks (slides)
Damien Munch
“Eating cake during a scientific talk:” Can we reverse-engineer natural language aspectual processing? (slides)

18 September 2012, 10:00, C017

Silviu Maniu
Context-Aware Top-k Processing using Views
Asma Souihli
Optimizing Approximations of DNF Query Lineage in Probabilistic XML (slides)

4 September 2012, 10:00, C017

Antoine Amarilli
Advances in holistic ontology alignment (slides)
Yannis Papakonstantinou, University of California, San Diego
Declarative, optimizable data-driven specifications of web and mobile applications