a3nm's blog

Self-hosted, server-side MathJax

— updated

According to a followup blog post, the method presented here no longer works with modern versions of MathJax. If you have insights about how to make it work with current versions, I'd be interested to know: please email me!

I use MathJax on my website to render math equations to HTML. The standard way to use MathJax is to include its JavaScript so that visitors fetch it from the MathJax CDN. While this is simpler, and also a good idea for caching, it has drawbacks which I did not find acceptable:

  • It means that the MathJax CDN may be pinged whenever a visitor loads a page, which is bad for privacy.
  • It makes your website's security dependent on that of the CDN: if the CDN starts distributing malicious JS (e.g., MathJax turns evil, or the CDN gets hacked), then your visitors will be running it.
  • It renders math in the browser using JavaScript. This is jarring (as the page jumps around while rendering is done), and I find it esthetically unpleasant. All of this website is static and pre-generated on my machine; I don't see why math rendering should be an exception. I find static websites preferable in terms of deployability, security, and elegance.

This post is just to explain how to render MathJax when generating static pages, using MathJax-node. As I wanted to play with Docker, and didn't want to install Node or run the code directly on my real machine, I will also explain how to set up the environment in a Docker container, but there's nothing forcing you to do that. :)

I am now using this setup on this website, so all math should now be served with server-side rendering, without requests to third-party CDNs. (In fact, this removes what was essentially the only use of JavaScript on this site.)

Setting up a mirror of the fonts

While we won't need to serve the MathJax JavaScript code to readers, we will need to serve them the fonts used by MathJax.

Fortunately, on Debian systems, these fonts are already packaged as fonts-mathjax and fonts-mathjax-extras, so you can just rely on the package manager to retrieve them and keep them up to date. The fonts are installed in /usr/share/javascript/mathjax/, so you just have to configure your Web server to serve this folder. I serve it as a3nm.net/mathjax. It's preferable to serve the fonts from the same domain as the pages that use them; otherwise, you will have to jump through additional hoops because of the same-origin policy: see an explanation here.

Installing MathJax-node

I installed MathJax-node in a Docker image, and as I was paranoid I also generated my own base image for the underlying system. Feel free to simplify the instructions if you don't need to do any of this.

I'm using an amd64 Debian system. I installed Docker as docker.io (packaged with Debian), added myself to the docker group, logged out and logged in. I tweaked Docker by editing /etc/default/docker and symlinking /var/lib/docker to move its files to a partition with more disk space.

I created the base system by issuing the following:

mkdir testing
sudo debootstrap testing testing/
sudo tar -C testing/ -c . | docker import - my-debian-testing

Here is the Dockerfile:

The following commands no longer work, because npm is no longer packaged in Debian. You should probably install npm manually instead (I haven't tried it yet). Thanks to Ted for pointing this out!

FROM my-debian-testing:latest
RUN apt-get -y update && apt-get install -y npm nodejs-legacy && apt-get clean
RUN npm install mathjax-node
RUN npm install cssstyle
CMD ["bash"]

As MathJax has reorganized their repositories, to make the following work, you will probably need to manually install mathjax-node-cli, and maybe also mathjax-node and possibly mathjax-node-page. Again, I haven't tried it. Thanks again to Ted for pointing this out!

In the folder containing the Dockerfile, issue:

docker build -t my-mathjax .

You can now use the image by starting a container, let's call it my-container:

docker run -di --name=my-container my-mathjax bash >/dev/null

And you can then apply page2html by piping your HTML code into the following invocation:

docker exec -i my-container node_modules/mathjax-node/bin/page2html \
    --fontURL https://a3nm.net/mathjax/fonts/HTML-CSS

Replace the fontURL parameter by the URL at which you are serving the MathJax fonts.

Another possibility is to use page2svg to render the math to SVG instead of HTML markup. However, this means that text-based browsers will not be able to see it.

The actual code that I use is here. I also increase the size of math by adding the following CSS as indicated here:

.mjx-chtml {font-size: 2.26ex ! important}
.mjx-chtml .mjx-chtml {font-size: inherit ! important}

Marking up MathJax

I use Markdown to write Web pages. I use python-markdown-math to convert math notation from a dollar-based notation in Markdown to HTML spans with the right classes (class="tex" or class="math"). To use python-markdown-math with the server-side setup, simply prevent it from adding the MathJax script boilerplate. I also use the render_to_span config parameter to ensure that no script is being generated.
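
As an illustration, here is a minimal sketch of the Markdown-side setup. It assumes the mdx_math extension name and the option names mentioned above (including render_to_span); these may differ across versions of python-markdown-math, so adjust as needed.

import markdown

# Convert Markdown with dollar-based math notation to HTML spans, without
# the MathJax script boilerplate (option names may vary across versions).
md = markdown.Markdown(
    extensions=["mdx_math"],
    extension_configs={"mdx_math": {
        "enable_dollar_delimiter": True,  # allow $...$ for inline math
        "add_preview": False,             # no client-side preview needed
        "render_to_span": True,           # emit spans, no <script> tags
    }},
)
print(md.convert(r"Euler's identity: $e^{i\pi} + 1 = 0$"))

The resulting HTML, once inserted into the full page template, can then be piped through the Docker invocation above.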

To prepare the MathJax afterwards, be careful that the command of the previous section needs to apply to the entire HTML document, not just to the HTML fragment generated from Markdown before applying a template. Indeed, you will see that it modifies the head element as well.

Performance

On modern hardware with this setup, it takes about one second to process an HTML page (even when there is no math in it). It takes a few seconds to process HTML pages with real markup such as this one.

Here's an example formula to see how it works: it should look the same as any regular MathJax formula, but without the blinking caused by JavaScript rendering: $\pi(I) = \left(\prod_{F \in I} \pi(F)\right) \times \left(\prod_{F \in J \setminus I} (1 - \pi(F))\right)$.

Summary on (hyper)graph tractability parameters

— updated

For my PhD research, I had to investigate various notions of (hyper)graph tractability, such as treewidth, cliquewidth, pathwidth, etc. Here is a very terse summary of what I have found about them. It is mostly my personal summary of reading this survey and some other papers, but it may interest others.

To give the broad context: computer scientists are interested in solving problems on graphs, e.g., given a graph, does it have a specific property (for instance, can it be drawn without edge crossings, can it be colored with 3 colors, etc.). Some of these problems are known to be intractable in the sense of complexity theory, which means that there is probably no fast algorithm to solve them. However, many of those hard problems can be solved efficiently in the specific case of graphs that are forests, i.e., that do not have cycles.

Graph theorists have tried to understand what makes trees tractable, and to extend the notion of acyclicity ("being a forest") in two broad respects. First, by identifying broader notions for graphs, which are parameterized: a parameter k indicates how much the graph deviates from acyclicity, and the notion should collapse to acyclicity for small k. The hope is then that, for fixed k (i.e., for graphs that are not-acyclic only by a small margin), we can still enjoy tractability, for instance in the sense of parameterized complexity. Second, theorists have tried to extend acyclicity from graphs to the more expressive formalism of hypergraphs, i.e., graphs where edges can connect an arbitrary number of vertices.

In all this post except where explicitly stated otherwise, we consider undirected graphs, i.e., a pair (V,E) of a set of vertices and a set of edges which are pairs of vertices.

Roadmap:

  • Section 1 presents treewidth, branchwidth (essentially equivalent), pathwidth (more restrictive, i.e., a graph's pathwidth is at least its treewidth), and treedepth, which is more restrictive than pathwidth in the same sense.
  • Section 2 presents rankwidth and cliquewidth (equivalent, and less restrictive than the previous ones).
  • Section 3 discusses preservation under subgraphs, minors, and other operations.
  • Section 4 discusses extended definitions for hypergraphs.
  • Section 5 discusses the complexity of computing these parameters.
  • Section 6 discusses their applicability for the decidability of logical theories, the tractability of model checking and other such problems for expressive languages.
  • Section 7 studies their use for less expressive languages: the problems of CSP, query evaluation and homomorphisms.
  • Section 8 is an appendix with a remark about partial orders.

All links to scientific works are open-access, i.e., available without a subscription. I give them as direct links, but I also give the DOI of each link at the end.

Related work: one interesting related resource is the Information System on Graph Classes and their Inclusions.

Warning: Please be aware that this summary consists of my personal notes: it may contain errors and should not be taken at face value. Please let me know if you find any bugs.

Acknowledgements: Thanks to Pierre Senellart, Mikaël Monet and Robin Morisset for proofreading and comments.

Treewidth, pathwidth, branchwidth

Treewidth (tw)

Treewidth is the most common measure, and can be defined in multiple different ways. The treewidth of a graph G is the smallest k such that it admits a tree decomposition of width k in one of the following senses:

  • Formally: we can associate a tree T to G (called the tree decomposition), with each node of T (called bag) labeled by a subset of the graph vertices, such that:

    • for each edge of G, its endpoints co-occur in some bag of T
    • for each vertex of G, the nodes of T where it occurs form a connected subtree in T

    The width of the tree decomposition $T$ is $k-1$, where $k$ is the maximal number of vertices in a bag of $T$. (A small code sketch after this list checks these conditions and computes the width.)

    While tree decompositions are not oriented, it is often convenient to pick a root and see them as rooted trees.

  • As a decomposition scheme. Trees can be decomposed by picking a separator formed of two adjacent vertices, removing the edge between them, and decomposing recursively each connected component: in each component, you can pick a separator that includes the vertices of the parent separator which are also in that component.

    As a generalization, graphs of treewidth k can be decomposed by picking a separator of k+1 vertices, removing edges between them, and decomposing recursively each connected component, requiring that each component's separator includes the vertices of the parent separator which are part of the component. The tree of the separators chosen in this process is a tree decomposition of the instance, because the separators have the right size, cover all edges, and whenever a vertex occurs in a separator it will occur in all descendant separators until we reach a component where the vertex no longer occurs.

    Conversely, when looking at any non-root bag $b$ of a tree decomposition $T$, considering the subgraph $G_1$ of the decomposed graph formed of the vertices occurring in $b$ and its descendants (the subtree $T_b$ of $T$ rooted at $b$), and $G_2$ the subgraph of the vertices occurring in the other bags, the common vertices between $G_1$ and $G_2$ are those shared between $b$ and its parent (often called an interface), and removing them from $G$ disconnects $G_1$ and $G_2$ (any other path connecting them could not be legally reflected in the decomposition). So $T_b$ can be recursively understood as a tree decomposition of $G_1$, which has been disconnected from the rest of $G$.

    This explanation with pictures may help you understand what I mean.

  • As a divide and conquer scheme. Many problems on a rooted tree can be solved by a dynamic programming algorithm that solves the problem on any subtree given the solution to its child subtrees; this is usually just thought of as a bottom-up computation. In graphs of treewidth k, following the above scheme (or a rooted tree decomposition) gives you a way to solve problems on the graph with a similar divide-and-conquer dynamic programming scheme.

    Again, this is nicely covered by the explanation linked above.

  • As a pursuit-evasion game, where "cops" use a decomposition of the graph to corner and catch a "robber". For more details, see hlineny2007width, Section 5.7.
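
To make the formal definition concrete, here is a small code sketch (my own illustration, using ad-hoc data structures) that checks the two conditions of a tree decomposition and computes its width:

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the two conditions above; edges are frozensets of two vertices,
    bags maps each tree node to a set of graph vertices."""
    # Condition 1: each edge of G has both endpoints in some bag.
    if not all(any(e <= bag for bag in bags.values()) for e in edges):
        return False
    # Condition 2: for each vertex, the tree nodes containing it are connected
    # (checked here by a simple flood fill on the tree).
    adj = {n: set() for n in bags}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for v in vertices:
        nodes = {n for n, bag in bags.items() if v in bag}
        if not nodes:
            return False
        seen, todo = set(), [next(iter(nodes))]
        while todo:
            n = todo.pop()
            if n not in seen:
                seen.add(n)
                todo.extend(m for m in adj[n] if m in nodes)
        if seen != nodes:
            return False
    return True

def width(bags):
    return max(len(bag) for bag in bags.values()) - 1

# The 4-cycle 1-2-3-4-1 has treewidth 2: two bags of size 3 suffice.
V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
B = {"a": {1, 2, 3}, "b": {1, 3, 4}}
print(is_tree_decomposition(V, E, B, [("a", "b")]), width(B))  # True 2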

Forests have treewidth 1 (so "having treewidth $\leq 1$" means "being acyclic"), cycles have treewidth 2, cliques with $k$ vertices have treewidth $k-1$, but grids illustrate that graphs may be planar and still have a high treewidth: the $k$ by $k$ grid has treewidth $k$.

In terms of structural assumptions, it is easily seen that tree decompositions can be rewritten to ensure that each tree node has degree at most 3, i.e., at most 2 children when we see $T$ as a rooted tree. Further, tree decompositions can be rewritten to have logarithmic height, at the cost of multiplying the treewidth by a constant; see bodlaender1988nc.

A variant of tree decompositions are simplicial decompositions (see diestel1989simplicial), where we additionally require that for every pair of adjacent bags in the tree decomposition $T$, letting $X$ be the set of vertices that occur in both bags, $X$ induces a complete subgraph of the graph $G$.

Pathwidth (pw)

Pathwidth is a variant of treewidth where the underlying tree in the formal definition is required to be a path graph rather than a general tree. It has many alternative characterizations, in particular in terms of overlapping intervals.

I won't be coming back to pathwidth later, but to contrast it with treewidth, for any graph G:

  • $tw(G) \leq pw(G)$: immediate as one notion is more restrictive than the other;
  • $pw(G) = O(tw(G) \cdot \log |G|)$: see bodlaender1998partial, Corollary 24.

So "having bounded pw" is strictly more restrictive than "having bounded tw".

Branchwidth (bw)

Branchwidth can be defined as follows. Let $G = (V, E)$ be a graph. Given a partition of its edges $E = E_1 \sqcup E_2$, call its weight the number of vertices incident both to an edge of $E_1$ and to an edge of $E_2$. A branch decomposition of width $k$ of $G$ is a subcubic tree (all nodes have degree either 1 or 3) whose leaves are labeled bijectively by the edges of $G$, such that for each edge of the tree, considering the partition $E = E_1 \sqcup E_2$ where $E_1$ and $E_2$ are the edge labels found on each side of that tree edge, the weight of this partition is at most $k$.

Branchwidth is less used than treewidth in computer science, but it has the nice property that a variant of the definition yields the notion of rankwidth (see later). Further, it generalizes to hypergraphs (see later), and it generalizes to matroids (I won't give more details).

hlineny2007width (Section 3.1.1) gives a description1 of graphs of bw $\leq 2$. Further, from robertson1991graph10, cited and proved in hlineny2007width, Theorem 3.1, we have, for any graph $G$ with branchwidth $>1$: $bw(G) \leq tw(G) + 1 \leq 1.5\,bw(G)$. Hence, "having bounded tw" and "having bounded bw" are equivalent.

Treedepth

This is an addition to the original post

Treedepth is defined as follows. An elimination forest $F$ of a graph $G = (V, E)$ is a forest (i.e., a collection of rooted trees) on $V$ that ensures that for any $\{x, y\} \in E$, one of the nodes for $x$ and $y$ in $F$ is an ancestor of the other in $F$ (or, equivalently, there is a root-to-leaf path in $F$ that covers both $x$ and $y$). The depth of an elimination forest $F$ is the height of $F$ in the standard sense (i.e., the maximal height of a tree of $F$). The treedepth of $G$ is the minimal depth $d$ such that $G$ has an elimination forest $F$ of depth $d$.

I will not describe the notion of treedepth in more detail, but, to connect it with the previous notions, we know, by Lemma 11 of bodlaender1995approximating, that, for any graph $G$, we have $pw(G) \leq td(G)$. However, path graphs have treedepth logarithmic in their size (with the elimination forest given as the tree of a binary search; see the sketch below), but have constant pathwidth. So "having bounded td" is strictly more restrictive than "having bounded pw", which is strictly more restrictive than "having bounded tw".
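
To illustrate the remark about path graphs, here is a sketch (my own illustration) building the binary-search-style elimination tree of a path, whose depth is logarithmic:

def elimination_tree(lo, hi, parent=None, out=None):
    """Elimination tree (as {vertex: parent}) for the path lo - ... - (hi-1),
    rooted at the middle vertex, binary-search style."""
    if out is None:
        out = {}
    if lo >= hi:
        return out
    mid = (lo + hi) // 2
    out[mid] = parent
    elimination_tree(lo, mid, mid, out)
    elimination_tree(mid + 1, hi, mid, out)
    return out

def depth(tree):
    def d(v):  # number of edges from v up to its root
        return 0 if tree[v] is None else 1 + d(tree[v])
    return 1 + max(d(v) for v in tree)

# A path edge {i, i+1} is only split when one of its endpoints is picked as
# a root, so that endpoint is an ancestor of the other: a valid elimination tree.
print(depth(elimination_tree(0, 1024)))  # 11, about log2(1024) + 1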

Rankwidth and cliquewidth

Rankwidth (rw)

Rankwidth is defined just like branchwidth, except that we consider vertex partitions $V = V_1 \sqcup V_2$, whose weight is the rank, over the two-element field GF(2), of the adjacency matrix from $V_1$ to $V_2$ (taken as a submatrix of the full graph's adjacency matrix); and we consider subcubic trees whose leaves are labeled by graph vertices rather than graph edges.
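
As an illustration, here is a sketch (my own code) computing the weight of a vertex bipartition, i.e., the GF(2) rank of the corresponding cut matrix, by Gaussian elimination on rows encoded as bitmasks:

def cut_rank(v1, v2, edges):
    """GF(2) rank of the v1-by-v2 submatrix of the adjacency matrix,
    with rows encoded as bitmasks over the columns v2."""
    v2 = sorted(v2)
    col = {v: i for i, v in enumerate(v2)}
    rows = [sum(1 << col[w] for w in v2 if frozenset((u, w)) in edges)
            for u in v1]
    rank = 0
    for bit in range(len(v2)):
        pivot = next((r for r in rows if r & (1 << bit)), None)
        if pivot is None:
            continue
        rank += 1
        # Eliminate this bit from the other rows and drop the pivot.
        rows = [r ^ pivot if r & (1 << bit) else r for r in rows if r != pivot]
    return rank

# On a clique, the cut matrix of any bipartition is all-ones, of rank 1:
E = {frozenset((a, b)) for a in range(5) for b in range(a)}
print(cut_rank({0, 1}, {2, 3, 4}, E))  # 1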

Rankwidth is at most branchwidth: $rw(G) \leq \max(1, bw(G))$, as shown in oum2007rank. Hence, "having bounded bw" implies "having bounded rw".

Graphs with rankwidth at most 1 are the distance-hereditary graphs, as shown in oum2005rank.

Cliquewidth (cw)

A clique decomposition of width $k$, or $k$-expression, is an expression over the following operations, $i$ and $j$ denoting any colors in $\{1, \ldots, k\}$:

  • create the graph with one isolated vertex labeled by i
  • change all vertices of color i to color j
  • connect all vertices of color $i$ to those of color $j$ (with $j \neq i$, so no self-loops)
  • take the disjoint union of two graphs constructed by such an expression

A graph has cliquewidth $k$ if $k$ is the smallest integer such that some coloring of the graph can be obtained by a clique decomposition of width $k$.
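
As an illustration, here is a sketch of the four operations (with my own ad-hoc encoding of a graph as a color map plus an edge set), used to build a small cograph with 2 colors:

import itertools

_ids = itertools.count()  # fresh vertex names, so disjoint unions stay disjoint

def vertex(i):
    """Create the graph with one isolated vertex of color i."""
    return ({next(_ids): i}, set())

def recolor(g, i, j):
    """Change all vertices of color i to color j."""
    colors, edges = g
    return ({v: (j if c == i else c) for v, c in colors.items()}, edges)

def connect(g, i, j):
    """Connect all vertices of color i to those of color j (i != j)."""
    colors, edges = g
    new = {frozenset((u, v)) for u in colors for v in colors
           if colors[u] == i and colors[v] == j}
    return (colors, edges | new)

def union(g1, g2):
    """Disjoint union of two graphs built by such expressions."""
    return ({**g1[0], **g2[0]}, g1[1] | g2[1])

# A 2-expression for the 3-vertex path (a cograph): create b and c with
# color 1, a with color 2, then join colors 1 and 2 to get edges b-a and c-a.
b, c, a = vertex(1), vertex(1), vertex(2)
colors, edges = connect(union(union(b, c), a), 1, 2)
print(sorted(tuple(sorted(e)) for e in edges))  # [(0, 2), (1, 2)]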

Graphs with cliquewidth 1 are those with no edges, and graphs with cliquewidth 2 are the cographs. The $k$ by $k$ grid has cliquewidth $k+1$ (golumbic2000clique).

Cliquewidth can be connected to the modular decomposition of graphs. Further, there are variants such as symmetric cliquewidth, with "bounded cw" equivalent to "bounded scw", see hlineny2007width, section 4.1.3.

Cliquewidth is connected to rankwidth (oum2006approximating): $rw(G) \leq cw(G) \leq 2^{1+rw(G)} - 1$. Hence, "having bounded cw" and "having bounded rw" are equivalent, though the bound on cliquewidth in terms of rankwidth can be exponential.

Cliquewidth is connected to treewidth: we have $cw(G) \leq 3 \cdot 2^{tw(G)-1}$ by hlineny2007width2, section 4.2.6. Hence, "having bounded tw" implies "having bounded cw", though the bound may again be exponential. Conversely, $tw(G)$ can be bounded as a function of $cw(G)$ and of the degree of $G$ (kaminski2009recent, Proposition 2 a), so "having bounded cw" and "having bounded tw" are equivalent on graph families of bounded degree. The dependence on degree disappears for graph families that exclude any fixed graph as a minor (Proposition 2 b).

Preservation under graph operations

Subgraphs and induced subgraphs

A subgraph of a graph G is obtained by keeping a subset of the edges and vertices of G. Further, it is an induced subgraph if the edges kept are exactly the restriction of those of G to the kept vertices (i.e., no edge between kept vertices was removed).

A property is S-closed (resp. I-closed) if, when a graph has it, every subgraph (resp. induced subgraph) also does.

Minors and topological minors

A minor of a graph can be obtained from that graph by removing edges and vertices and by contracting edges. Alternatively, a minor $G'$ of $G$ is witnessed by mapping each vertex $v$ of $G'$ to a set $s(v)$ of vertices of $G$ inducing a connected subgraph, the sets $s(v)$ being pairwise disjoint, and mapping each edge $(u,v)$ of $G'$ to an edge between $s(u)$ and $s(v)$ in $G$.

A topological minor $G'$ of $G$ maps each vertex $v$ of $G'$ to a distinct vertex $s(v)$ of $G$, and maps the edges $(u,v)$ of $G'$ to internally vertex-disjoint paths from $s(u)$ to $s(v)$ in $G$: equivalently, some subdivision of $G'$ is isomorphic to a subgraph of $G$. Clearly, if $G'$ is a topological minor of $G$, then it is a minor of $G$.

We define M-closed and T-closed for minors and topological minors as we defined S-closed and I-closed.

In summary, from makowsky2003tree3:

  • M-closed implies T-closed, S-closed, I-closed;
  • T-closed implies S-closed, I-closed;
  • S-closed implies I-closed;
  • No other implication holds.

Monotonicity of our definitions

Standard graph acyclicity is M-closed. Hence, it is natural to wonder whether our other parameters also are.

  • "having treewidth k" is M-closed (as can be seen by applying deletions and merges to the tree decomposition).
  • "having pathwidth k" is M-closed (same argument).
  • "having branchwidth k" is M-closed (hlineny2007width, Prop 3.2).
  • "having cliquewidth k" is I-closed courcelle1990upper but obviously not S-closed (complete graphs have cliquewidth 2 but their subgraphs cover all graphs) nor T-closed.
  • "having rankwidth k" is I-closed but not S-closed for the same reasons.

Rankwidth is, however, preserved by the graph operation of vertex-minor (oum2005rank).

Other operations

Cliquewidth and rankwidth behave well under graph complementation: complementing can at most double the cw, and adds or removes at most 1 to the rw (quoted in hlineny2007width, section 4.2.1). By contrast, no such bounds are possible for treewidth, and hence for branchwidth (consider the graphs with no edges: their complements are cliques, of unbounded treewidth).

Treewidth, branchwidth, pathwidth, cliquewidth, and rankwidth can all be computed on a graph with multiple connected components as the maximum of the same measure over the connected components.

Adding or deleting k vertices or k edges can only make rankwidth and cliquewidth vary by at most k (hlineny2007width). The same holds for treewidth (immediate as each vertex addition in a tree decomposition can only affect the width by at most 1).

For S-closed families, "having bounded cw" and "having bounded tw" are equivalent (makowsky2003tree, Proposition 7, attributed to courcelle1995logical which seems unavailable online).

Hypergraphs

Hypergraphs generalize graphs by allowing hyperedges that include an arbitrary number of vertices.

We will sometimes consider instances, whose hyperedges (called facts) are allowed to have a label (called the predicate or relation, which has a fixed arity) and where the occurrences of vertices in facts may be labeled as well: a fact is written $A(a_1, \ldots, a_n)$, where $A$ is the predicate, $n$ is the arity, and the $a_i$ need not be all different. Hence, an instance in logical terms is a structure over some relational signature. In this context, it is natural to assume that the maximal arity (number of vertices per hyperedge) is bounded by a constant, though this assumption is not usually made when working on vanilla hypergraphs.

From hypergraphs to graphs

The following discussion of how to get from a hypergraph to a graph is from hlineny2007width. The second and third methods can be applied to (non-hyper)graphs rather than hypergraphs, to get a different graph. (A code sketch of the three constructions follows the list.)

  • The primal graph or Gaifman graph P(H) of a hypergraph H is the graph on the vertices of H where two vertices are connected iff they co-occur in a hyperedge. In other words, hyperedges are encoded as cliques. The primal graph of a (non-hyper)graph is the graph itself.

  • The incidence graph I(H) of a hypergraph H is the bipartite graph on the vertices and edges of H where a vertex is connected to an edge if the vertex occurs in that edge. In other words, each hyperedge is encoded as a vertex connected to all the vertices that it contains.

    The incidence graph of a (non-hyper)graph amounts to subdividing each edge once.

  • The line graph L(H) of a hypergraph H is the graph on the hyperedges of H where two hyperedges are connected iff they share a vertex. In other words, the vertices are encoded as cliques on the hyperedges. Alternatively, the line graph is the primal graph of the dual hypergraph.

    This construction also makes sense (i.e., is not the identity) for (non-hyper)graphs, where it is also called the line graph.
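
Here is the promised sketch of the three constructions (my own code, representing a hypergraph as a list of frozensets):

from itertools import combinations

def primal_graph(hyperedges):
    """Graph on the vertices: u-v iff u and v co-occur in some hyperedge."""
    return {frozenset(p) for e in hyperedges for p in combinations(sorted(e), 2)}

def incidence_graph(hyperedges):
    """Bipartite graph: vertex v is adjacent to hyperedge index i iff v is in e_i."""
    return {(v, i) for i, e in enumerate(hyperedges) for v in e}

def line_graph(hyperedges):
    """Graph on the hyperedge indices: i-j iff e_i and e_j share a vertex."""
    return {frozenset((i, j)) for i, j in combinations(range(len(hyperedges)), 2)
            if hyperedges[i] & hyperedges[j]}

H = [frozenset("abc"), frozenset("cd"), frozenset("de")]
print(primal_graph(H))     # triangle on a, b, c, plus edges c-d and d-e
print(incidence_graph(H))  # pairs like ('a', 0), ('c', 1), ...
print(line_graph(H))       # edges 0-1 and 1-2 (sharing c, resp. d)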

We will see in a later section how bounds on various width notions can be obtained through these transformations (for graphs and hypergraphs).

Hypergraph acyclicity

Before trying to come up with parameterized notions, there have been attempts to formalize crisp acyclicity notions for hypergraphs. See Wikipedia or fagin1983degrees for sources to the claims and more details. Each notion in the list is implied by the next notion (so they are given from the least restrictive to the most restrictive) and none of the reverse implications hold (fagin1983degrees, Theorem 6.1). Further, they all collapse to the usual notion of acyclicity (i.e., "being a forest") for hypergraphs that are actually (non-hyper)graphs (i.e., all hyperedges have size 2).

  • α-acyclicity: we can reduce the hypergraph to the empty hypergraph with four operations: remove an isolated vertex; remove a vertex occurring in a single hyperedge; remove a hyperedge contained in another; remove empty hyperedges. (A code sketch of this reduction appears after this list.)

    Equivalently, a hypergraph is α-acyclic iff it has a join forest, which is a forest of join trees, which are a special form of tree decomposition: we require that, for each bag, there is a hyperedge containing exactly the elements of the bag. There are other equivalent characterizations of α-acyclicity which I will not define.

    We can test in linear time whether an input hypergraph is α-acyclic, and if so compute a join forest: see Theorem 5.6 of flum2002query. Counter-intuitively, α-acyclicity can be gained when adding hyperedges (think of a hyperedge covering all vertices), and hence also lost when removing hyperedges.

  • β-acyclicity: all sub-hypergraphs of the hypergraph (including itself) are α-acyclic; this is obviously more restrictive than α-acyclicity, in fact strictly so. It is testable in PTIME: see fagin1983degrees Section 9.3.

  • γ-acyclicity: I won't define it here; see fagin1983degrees. It is testable in PTIME: see fagin1983degrees Section 9.4.

  • Berge-acyclicity: the incidence graph of the hypergraph is acyclic in the standard sense; this is obviously testable in linear time and is strictly more restrictive than γ-acyclicity.
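
Coming back to α-acyclicity, here is a naive sketch of the reduction given above (my own code; it is quadratic, unlike the linear-time algorithm cited above):

def is_alpha_acyclic(hyperedges):
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # Remove an empty hyperedge, or a hyperedge contained in another one.
        for i, e in enumerate(edges):
            if not e or any(j != i and e <= f for j, f in enumerate(edges)):
                del edges[i]
                changed = True
                break
        if changed:
            continue
        # Remove vertices occurring in at most one hyperedge (this covers
        # both isolated vertices and vertices in a single hyperedge, since
        # vertices are only represented here through the hyperedges).
        for e in edges:
            for v in list(e):
                if sum(v in f for f in edges) == 1:
                    e.discard(v)
                    changed = True
    return not edges

triangle = [{1, 2}, {2, 3}, {1, 3}]
covered = [{1, 2}, {2, 3}, {1, 3}, {1, 2, 3}]
print(is_alpha_acyclic(triangle))  # False
print(is_alpha_acyclic(covered))   # True: acyclicity gained by adding a hyperedge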

We now turn to acyclicity definitions with a parameter $k$. Following standard usage, we will talk of the treewidth of a hypergraph $H$ to mean the treewidth of its primal graph $P(H)$. However, by hlineny2007width, Section 5.2, there are families of α-acyclic hypergraphs with unbounded treewidth, i.e., whose primal graphs have unbounded treewidth; and ditto for the notions of incidence graphs and of line graphs. This motivates the search for specific hypergraph definitions of treewidth (and of the other notions presented before).

Querywidth, hypertreewidth, generalized hypertreewidth

A decomposition of a hypergraph, for all three notions of width presented here, is a tree decomposition of its primal graph: however, the vertices of each bag are additionally covered by a set of hyperedges, and the width of the decomposition is redefined to mean the maximal number of hyperedges in a bag (where we do not subtract 1, unlike in the definition of treewidth). The differences are:

  • querywidth: the hyperedges of each bag must exactly cover the vertices of the bag.
  • hypertreewidth: the hyperedges of each bag must cover the vertices of the bag and they cannot include additional vertices if those occur in strict descendants of the bag.
  • generalized hypertreewidth: the hyperedges must cover the vertices, no further restriction is imposed.

Thus $ghtw(H) \leq tw(P(H)) + 1$ (cover each vertex of a bag by one hyperedge) and $ghtw(H) \leq htw(H) \leq qw(H)$. In fact, $qw(H) \leq 1 + tw(I(H))$ (chekuri2000conjunctive, Lemma 2): a tree decomposition of the incidence graph is in particular a query decomposition when restricting the attention to the vertices of the incidence graph that code hyperedges, and the "+1" is because the definition of treewidth subtracts 1 from bag sizes.

Further, all three notions are within a factor 3 of each other (adler2007hypertree, gottlob2007generalized), so that "having bounded qw", "having bounded htw", and "having bounded ghtw" are all equivalent.

A (non-empty) hypergraph H is α-acyclic iff ghtw(H)=htw(H)=qw(H)=1 as mentioned in gottlob2014treewidth.

All three notions are bounded by n/2+1 on hypergraphs with n vertices (gottlob2005computing, Theorem 3.5 and subsequent remarks).

From the join tree definition, it is obvious that a hypergraph is α-acyclic iff it has querywidth equal to 1.

Last, it is obvious by definition that if the arity is bounded by a constant $a$, then we have $tw(P(H))/a \leq ghtw(H)$. Hence, in this case, "having bounded tw" is equivalent to "having bounded htw" (and ditto for the other two hypergraph notions).

Fractional hypertreewidth

This is an addition to the original post

The notion of fractional hypertreewidth is defined in grohe2014constraint (not available online) and in marx2009approximating. It is again defined via tree decompositions of the primal graph of the hypergraph, but this time we annotate them with weighting functions on hyperedges: for each bag $b$ of the tree decomposition, we have a weighting function $w_b$ that maps each hyperedge $e$ of the hypergraph to a nonnegative real number $w_b(e)$. We require that, for every bag $b$, the function $w_b$ covers the set $S$ of the vertices contained in $b$: namely, for each $v \in S$, the sum of $w_b(e)$ over all hyperedges $e$ of the hypergraph that contain $v$ must be at least 1. The width of these augmented tree decompositions is the maximum, over all bags $b$, of the sum of $w_b(e)$ over all hyperedges; and the fractional hypertreewidth of a hypergraph is the smallest possible width of such a decomposition.

Of course, the fractional hypertreewidth is at most the generalized hypertreewidth, as generalized hypertreewidth is obtained by restricting the decompositions to use weight functions with values in $\{0, 1\}$.
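
To make the definition concrete: the optimal weight of a single bag is a minimum fractional edge cover of the bag's vertices, which can be computed by linear programming. Here is a sketch (my own code, assuming scipy is available):

from scipy.optimize import linprog

def fractional_cover(bag, hyperedges):
    """Minimum total weight with w(e) >= 0 and, for each v in the bag,
    the sum of w(e) over hyperedges e containing v being at least 1."""
    c = [1.0] * len(hyperedges)  # minimize the sum of the weights
    A_ub = [[-1.0 if v in e else 0.0 for e in hyperedges] for v in bag]
    b_ub = [-1.0] * len(bag)     # i.e., coverage of each vertex >= 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

# Covering the bag {1,2,3} on the triangle costs 1.5 fractionally
# (weight 1/2 per hyperedge), versus 2 for an integral cover.
T = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
print(fractional_cover({1, 2, 3}, T))  # 1.5 (up to floating point)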

Hyperbranchwidth

Hyperbranchwidth (adler2007hypertree) is defined like branchwidth, with partitions of hyperedges, except that the weight of a hyperedge partition is not its number of incident vertices, but the number of hyperedges required to cover its incident vertices (not necessarily exactly, i.e., we may cover a superset of them). We have (adler2007hypertree, Lemmas 16 and 17):

  • $hbw(H) \leq ghtw(H)$ for any hypergraph $H$
  • $ghtw(H) \leq 2\,hbw(H)$ for any non-trivial hypergraph $H$ (not all hyperedges are pairwise disjoint)

Hence "bounded hyperbranchwidth" and "bounded generalized hypertreewidth" are equivalent.

Apparently hyperbranchwidth is a problematic function because it is not submodular, unlike the functions used for branchwidth and rankwidth.

I do not know whether we can define variants of hyperbranchwidth with partition weight counted as the number of hyperedges required to cover exactly the vertices, and connect this to querywidth.

Cliquewidth

Cliquewidth is usually defined for hypergraphs as the cliquewidth of the incidence graph, in which case we have $qw(H) \leq cw(H)$, and there are hypergraphs with bounded querywidth but unbounded cliquewidth. Hence "having bounded querywidth" (and the other notions) is weaker than "having bounded cliquewidth". (gottlob2004hypergraphs)

Connections via transformations

In some cases, bounds on graphs or hypergraphs relate to bounds on the incidence, line, and primal graph.

The branchwidth of a (non-hyper)graph $G$ is the rankwidth of its incidence graph, up to removing 1: $bw(G) - 1 \leq rw(I(G)) \leq bw(G)$ (oum2007rank).

The treewidth of a (non-hyper)graph can be used to bound the cliquewidth of its line graph: $0.25\,(tw(G) + 1) \leq cw(L(G)) \leq 2\,tw(G) + 2$ (gurski2007line, Corollary 11 1., and Theorem 5).

The treewidth of a (non-hyper)graph is exactly that of its incidence graph: $tw(G) = tw(I(G))$ (kreutzer2009algorithmic, Theorem 3.22). A similar result for cliquewidth is known assuming bounded vertex degree (kaminski2009recent, Proposition 8). Further, for any hypergraph $H$, we have $tw(I(H)) \leq tw(H) + 1$ (gradel2002back), where we recall that the treewidth of a hypergraph was defined as that of its primal graph.

I am not aware of results connecting width measures on a hypergraph to measures on its dual hypergraph.

Directed graphs

I file "directed graphs" in the same category as "hypergraphs", because it is about having a richer incidence relation. Width definitions for directed graphs are interesting because the usual acyclicity notion for directed graphs is different from that of undirected graphs (i.e., "being a forest", which we studied so far). This is an interesting area with lots of recent research that I will not attempt to summarize here. See hlineny2007width for a survey.

I will just note that clique decompositions can extend to directed graphs, where the operation that creates edges is meant to create directed edges (hlineny2007width, section 4.1.1).

Complexity of recognition

Definitions

Having studied all of these definitions, it is natural to ask how, given a (hyper)graph, we can compute its *-width. I will only deal with the parameterized width notions; for the crisp acyclicity definitions, I already gave the complexity of recognition when defining them above.

For the parameterized notions, we must be careful in the problem definition. There are two possible tasks:

  • Just determine the *-width of the input (hyper)graph
  • Determine the *-width and compute a *-decomposition of the input

Further, it is often useful to study a parameterized version of these decision problems. Hence, we consider two possible kinds of inputs; the first is of course harder than the second:

  • We are only given an input (hyper)graph and the algorithm must solve the task.
  • We have fixed $k \in \mathbb{N}$; the algorithm is given a (hyper)graph with the promise that its *-width is at most $k$. In other words, if its width is $>k$, the algorithm may do anything; otherwise it must solve the task.

In the second phrasing, example situations are:

  • the problem gets NP-hard whenever the parameter k is given a sufficiently large value;
  • for every $k$, the problem can be solved in PTIME, i.e., for some function $f : \mathbb{N} \to \mathbb{N}$, in $O(n^{f(k)})$; this is the parameterized complexity class XP if $f$ is computable (in our case it always will be);
  • the problem can be solved in complexity $O(f(k) \cdot n^c)$ for a fixed (computable) function $f$ and a fixed constant $c \in \mathbb{N}$ independent of $k$, in which case we say that the problem is fixed-parameter tractable (FPT) (and we may talk of linear-FPT, quadratic-FPT, etc., to reflect the value of $c$).

Robertson-Seymour

The well-known Robertson-Seymour theorem shows that any M-closed property can be characterized by a finite set of forbidden minors. As testing whether a fixed graph $G'$ is a minor of an input graph $G$ can be performed in quadratic time (kawarabayashi2012disjoint)4, this implies the existence5 of a quadratic-time test for any M-closed property. We will use this general result later.

Results

When k is fixed and a graph G is given as input:

  • For treewidth, branchwidth, pathwidth:
    • Determining the value of the treewidth, branchwidth, or pathwidth is quadratic FPT, because "having treewidth $\leq k$", "having branchwidth $\leq k$", and "having pathwidth $\leq k$" are M-closed, so we can apply the Robertson-Seymour-based reasoning above.
    • For treewidth and pathwidth, there is a linear FPT algorithm to compute the width and a decomposition (bodlaender1996linear, the constant factor being exponential in the width). The algorithm is infeasible in practice due to huge constants. For treewidth, it is preferable in practice to use an algorithm that computes a decomposition of a graph $G$ of treewidth $k$ in time $O(|G|^{k+2})$ (flum2002query, end of Section 3), or various heuristics for approximation (see the end of this section).
    • Branchwidth can be computed, along with an optimal branch decomposition, in linear FPT time (bodlaender1997constructive).
  • For rankwidth, cliquewidth:
    • Rankwidth, and an optimal decomposition, can be computed in cubic FPT time (hlineny2007finding).
    • It is in PTIME to determine whether the cliquewidth is at most $k$ for $k \leq 3$ (corneil2012polynomial, requires a subscription); for larger $k$, this is still open.

For hypergraph acyclicity measures (α-acyclicity, etc.), refer back to the paragraph where they are defined.

For hypergraph measures, when k is fixed and a hypergraph H is given as input:

  • It is NP-complete to decide whether an input hypergraph $H$ has generalized hypertreewidth $\leq 3$ (gottlob2007generalized). For width 2, the complexity was open, but NP-completeness is shown in the recent preprint fischl2016general.
  • It is NP-complete to decide whether an input hypergraph $H$ has querywidth $\leq 4$ (gottlob2002hypertree).
  • For any $k$, we can determine in PTIME whether an input hypergraph $H$ has hypertreewidth $\leq k$, and if so compute a decomposition (gottlob2002hypertree); however, this is unlikely to be FPT (gottlob2005hypertree).
  • We can test in linear time whether it has generalized hypertreewidth 1, querywidth 1, or hypertreewidth 1, because this is equivalent to it being α-acyclic (mentioned in gottlob2014treewidth).
  • It is NP-complete to test whether an input hypergraph has fractional hypertreewidth $\leq 2$, as shown in the recent preprint fischl2016general.

When a graph G is given as input, with no bound on the parameter k:

  • Treewidth is NP-hard to compute on general graphs; it is computable in PTIME on chordal graphs (bodlaender2007combinatorial, after Lemma 11); on planar graphs, the complexity is open.
  • Branchwidth is NP-hard on general graphs, but on planar graphs it can be determined and a decomposition constructed in PTIME (seymour1994call; cubic bound shown in gu2005optimal, unavailable without subscriptions).
  • Cliquewidth is NP-hard to compute (fellows2009clique).
  • Fractional hypertreewidth can be approximated in PTIME: given a hypergraph $H$, we can compute in PTIME a fractional hypertree decomposition of $H$ of width $O(w^3)$, where $w$ is the actual fractional hypertreewidth: see marx2009approximating.

There are practical implementations to compute tree decompositions of graphs, trying to compute the optimal treewidth or an approximation thereof. A fellow student of mine successfully used libtw. See Section 4.2 of bodlaender2007combinatorial for more details about how treewidth can be approximated (including, e.g., the use of metaheuristics). See also bodlaender2009treewidth.

Logical problems

Logics

Propositional logic (Boolean connectives) is the logic allowing variables and Boolean connectives: negation, conjunction, disjunction. Propositional logic formulae are often given in conjunctive normal form (CNF), i.e., a conjunction of clauses which are disjunctions of literals (variables or negated variables); or in disjunctive normal form (a disjunction of conjunctions of literals).

(Function-free) first-order logic (FO) is the logic obtained from propositional logic by allowing existential and universal quantifiers.

Monadic second-order logic (MSO) is the logic obtained from FO by further allowing existential and universal quantification over sets (unary relations).

Monadic second-order logic with edge quantification (MSO2) is the logic on graphs that further allows quantification over sets of edges. Its generalization to hypergraphs (quantifying over relations of arbitrary arity) is called guarded second-order logic (GSO). In GSO, quantifications over relations of arity $>1$ are semantically restricted to always range over guarded tuples, i.e., tuples of values that already co-occur in a hyperedge. Likewise, quantification over sets of edges in MSO2 means quantification over edges actually present in the structure, not quantification over arbitrary pairs. However, this should not be confused with guarded logics, where first-order quantification is restricted as well: MSO2 and GSO without second-order quantification are exactly FO; they are not restricted to the guarded fragment of FO.

It turns out that MSO2 over graphs is equivalent to MSO over the incidence graph of these graphs, where the incidence graph is assumed to be labeled in a way that allows the logic to distinguish between vertices and edges. More generally, GSO over hypergraphs (or labeled hypergraphs, i.e., relational signatures) is equivalent to MSO on the incidence instance (gradel2002back, Proposition 7.1).

An example of a proposition that can be expressed in MSO2 but not MSO is the existence of a Hamiltonian cycle in a graph (see kreutzer2009algorithmic, Section 3.2).

Problems

The satisfiability problem for a logic $L$ and a class of (hyper)graphs $C$ asks, given a formula $\phi \in L$, whether it has a model in $C$.

The satisfiability problem for MSO2 sentences, MSO sentences, and actually FO sentences, is undecidable over general graphs (even over finite graphs, in fact; see Trakhtenbrot's theorem, which is the corresponding result for validity: a formula is valid iff its negation is unsatisfiable). For propositional logic, the problem SAT of deciding whether a propositional formula is satisfiable is NP-complete.

The model-checking problem for a logic $L$ and a class of (hyper)graphs $C$ asks, for a fixed formula $\phi \in L$ with no free variables, given an input (hyper)graph $G \in C$, whether $G$ satisfies $\phi$ in the logical sense. I will not study the version of model checking where both the formula and the (hyper)graph are given as input, which is of course considerably harder.

The model-checking problem for MSO in general is hard for every level of the polynomial hierarchy (ajtai2000closure). A simple example that shows NP-hardness on graphs is asking whether an input graph can be colored with 3 colors, which is MSO-expressible.

A variant of the model-checking problem is the counting problem that asks, for a formula with free variables, given a (hyper)graph, how many assignments of the free variables satisfy the formula; and the enumeration problem, that requests that the satisfying assignments be computed (with complexity measured as a function of the size of the output, or measured as the maximum delay between two enumerated outputs).

Monadic second order

For satisfiability, it is known that, for any $k$, satisfiability for MSO2 over the class of graphs of treewidth $\leq k$ is decidable (seese1991structure); this implies the same for branchwidth (an equivalent claim) and pathwidth (a weaker claim). Conversely, it is known that for any class of graphs of unbounded treewidth, satisfiability for MSO2 is undecidable: this relies on a different result by Robertson and Seymour, the grid minor theorem, to extract large grid minors from graphs of unbounded treewidth, and on the fact that the MSO theory of grids is undecidable. For details see kreutzer2009algorithmic, Section 6.

Further, it is known that, for any $k$, satisfiability for MSO over the class of graphs of cliquewidth $\leq k$ is decidable. The converse result was conjectured in seese1991structure, and a weakening was proven in courcelle2007vertex: it shows the converse result, but for a logic further allowing to test whether a set variable of MSO is interpreted by a set of even cardinality. Also, it is already known that a class of graphs has bounded cliquewidth iff it is interpretable in the class of colored trees (cited as Lemma 4.22 in kreutzer2009algorithmic).

For model checking, it is known that the problem for MSO2 on bounded treewidth structures is FPT linear (with the formula and the width bound as parameter), see kreutzer2009algorithmic, Section 3.4; this is what is known as Courcelle's theorem. The model checking problem for MSO on bounded cliquewidth structures is FPT cubic, the bottleneck being the computation of the rank decomposition: see kreutzer2009algorithmic, Section 4.3; this is also due to Courcelle. The FPT tractability (not linearity) of MSO2 on bounded treewidth reduces to the same result for MSO and bounded cliquewidth by going through the incidence graph (see these slides, slide 8).

The tractability of MSO on bounded cliquewidth does not extend to MSO2. Indeed, some problems definable in MSO2 but not in MSO are known to be W[1]-hard when parameterized by cliquewidth, e.g., Hamiltonian cycle; see fomin2009clique and fomin2010algorithmic. Further, it is known that there is an MSO2 formula such that model checking on the class of all unlabeled cliques (which has bounded cliquewidth) is not in PTIME, assuming $P_1 \neq NP_1$ (the analogues of P and NP for unary languages): see Theorem 6 of courcelle2000linear.

For model-checking, the constant factor that depends on the fixed formula ϕ is nonelementary even for a formula in FO (frick2004complexity).

The tractability results for model checking on bounded treewidth structures are usually shown using interpretability results to translate the logic on the instances to a logic on the (suitably annotated) tree decompositions (see kreutzer2009algorithmic), and using a connection from MSO on trees to tree automata that originated in thatcher1968generalized (subscription required). See flum2002query for a summary. A connection to automata for bounded rankwidth structures was more recently investigated in ganian2010parse.

The necessity of bounded treewidth for the tractability of model-checking under closure assumptions (i.e., S-closed, M-closed, etc.) was studied in makowsky2003tree. The S-closed case was refined in kreutzer2011lower (for MSO2), and in ganian2012lower (for MSO but additionally assuming closure under vertex labellings).

The problem of counting the number of assignments to an MSO formula is also known to be linear on bounded treewidth (hyper)graphs (arnborg1991easy), and computing the assignments (the enumeration problem) can be done in time linear in the input and in the output (flum2002query). For results on constant-delay enumeration, see, e.g., segoufin2012enumerating.

First-order

I am not aware of conditions that try to ensure decidability of FO satisfiability (in the spirit of those for MSO and MSO2 above), so I focus on model checking.

The model-checking problem for FO is in PTIME, in fact in AC0, but this does not imply that it is FPT (where the query is the parameter): the degree of the polynomial in the (hyper)graph depends in general on the formula.

It is more complicated for FO than for MSO to find criteria that guarantee that FO model checking is FPT, because of locality properties of FO. In particular, there are notions of locally tree-decomposable structures (frick2001deciding), which ensure FPT linearity of FO model checking in more general situations.

For more about FO, see Section 7 of kreutzer2009algorithmic, noting that Open Problem 7.23 has now been solved in the affirmative (dvorak2013testing).

The problem of enumeration for first-order is also studied in segoufin2012enumerating.

Propositional logic

The model-checking problem for propositional formulae is obviously always tractable. However, it is more interesting, given a propositional formula in normal form, encoded as an instance, to determine whether it is satisfiable (SAT), and count the number of satisfying assignments (#SAT).

The standard way to represent a conjunctive normal form formula is as a bipartite graph between clauses and variables with colored edges indicating whether a variable occurs positively or negatively in a clause (see ganian2010better, Definition 2.7). It is then known that SAT and #SAT are FPT if this graph has bounded treewidth or if it has bounded rankwidth (see ganian2010better and references therein).
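
As an illustration, here is a sketch of this standard representation (my own encoding; clauses are given DIMACS-style, as lists of nonzero integers):

def cnf_incidence_graph(clauses):
    """Edges (clause index, variable, polarity) of the signed bipartite graph;
    a literal -3 means variable 3 occurring negatively."""
    return {(i, abs(lit), lit > 0) for i, clause in enumerate(clauses) for lit in clause}

# (x1 or not x2) and (x2 or x3):
print(sorted(cnf_incidence_graph([[1, -2], [2, 3]])))
# [(0, 1, True), (0, 2, False), (1, 2, True), (1, 3, True)]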

Tractability for #SAT is known for other cases, e.g., when the standard representation of the formula as a hypergraph is β-acyclic (brault2014understanding).

Query evaluation and constraint satisfaction problems

Problems

This section studies the constraint satisfaction problem, formalized as the problem CSP of deciding, given two relational instances I1 and I2, whether there is a homomorphism from I1 to I2, i.e., a function mapping the elements of I1 to those of I2 such that the image of each fact (hyperedge) of I1 under the homomorphism is a fact of I2; as well as a counting variant that asks how many such homomorphisms exist.

The CSP problem is also studied in database theory, seen as Boolean conjunctive query (CQ) evaluation: a Boolean CQ is an FO sentence consisting of an existentially quantified conjunction of atoms. It is clear that a CQ $Q$ is satisfied in an instance $I$ iff there is a homomorphism from $I_Q$ to $I$, where $I_Q$ is the canonical instance associated to $Q$ (one element per variable, one fact per atom).
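
As an illustration, here is a brute-force homomorphism test (my own code; it is exponential in the size of I1, matching the NP upper bound discussed below, not the tractable algorithms that follow):

from itertools import product

def has_homomorphism(i1, i2):
    """Instances are sets of facts (relation name, tuple of elements)."""
    dom1 = sorted({x for _, args in i1 for x in args})
    dom2 = sorted({x for _, args in i2 for x in args})
    for image in product(dom2, repeat=len(dom1)):   # try every mapping
        h = dict(zip(dom1, image))
        if all((rel, tuple(h[x] for x in args)) in i2 for rel, args in i1):
            return True
    return False

# The 4-cycle (as a symmetric edge relation) maps onto a single undirected
# edge, since it is 2-colorable:
edge = {("E", (0, 1)), ("E", (1, 0))}
cycle = {("E", (a, b)) for a, b in [(1, 2), (2, 3), (3, 4), (4, 1),
                                    (2, 1), (3, 2), (4, 3), (1, 4)]}
print(has_homomorphism(cycle, edge))  # True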

One problem phrasing is when both I1 and I2 are given as input, which I will call the combined complexity approach. In this case, it is sensible to restrict I1 and I2 to live in fixed (generally infinite) classes C1 and C2, and study how the choice of class influences the complexity.

Another problem phrasing, common in database theory, is when one of C1 and C2 is a singleton class, so we are in one of two settings:

  • data complexity: I1 is a fixed structure and I2 is the input
  • query complexity: I2 is a fixed structure and I1 is the input

The question then becomes how the complexity of the CSP problem evolves depending on the fixed structure, and on the class in which the input structures may be taken; so this is really the combined complexity approach but with one of the classes being a singleton (so the corresponding structure is effectively fixed rather than given as input).

Results

Without any restriction, the CSP problem is clearly in NP in combined complexity (guess the homomorphism and check it) and its counting variant is in #P (count nondeterministically over all mappings that are checked to be homomorphisms). The CSP problem is easily seen to be NP-hard in query complexity for very simple I2, even when C1 is the class of graphs: fixing I2 to be a triangle, an input I1 has a homomorphism to I2 iff it is 3-colorable. For any fixed I1, the data complexity of the CSP problem when C2 is all structures is in AC0, by FO model checking; however, parameterized by the size of I1, the problem is W[1]-complete (papadimitriou1999complexity, Theorem 1) and hence unlikely to be FPT.

For arbitrary I1 (i.e., parameterizing by I1), the problem becomes FPT when the input instances I2 are restricted to have bounded treewidth or bounded cliquewidth, by the results on FO.

Combined complexity becomes LOGCFL-complete (and hence in PTIME) when C2 is a class of bounded querywidth structures, or structures of bounded degree of cyclicity (yet another definition), and they are given with a suitable decomposition (gottlob2001complexity). This generalizes an earlier analogous result for α-acyclic queries (yannakakis1981algorithms, seems unavailable online). For bounded treewidth or bounded hypertreewidth, the same result holds but the decomposition can even be computed in LOGCFL, so it doesn't need to be given. The PTIME membership still holds when C2 is a class of bounded fractional hypertreewidth structures (grohe2014constraint, unavailable online).

For hypergraphs of fixed arity, assuming that $FPT \neq W[1]$, it is known that the combined complexity where C2 is all structures is in PTIME iff C1 has bounded treewidth modulo homomorphic equivalence. This result generalizes to the problem of counting the number of homomorphisms (dalmau2004counting). In cases where C1 satisfies this condition, the CSP problem is FPT with the parameter being the size of I1. A trichotomy result for the complexity in such cases is given in chen2013fine, including for counting.

When restricting C1 and C2 to graphs (not hypergraphs), an analogous result is known for combined complexity when C1 is all structures: tractability holds iff the graphs of C2 are bipartite (grohe2007complexity); otherwise NP-completeness holds. The Feder-Vardi conjecture claims that there is an analogous dichotomy even when C1 and C2 can contain hypergraphs rather than graphs (i.e., the query complexity of CSP never falls in intermediate classes such as NPI). Solutions to the problem have been proposed: see here.

Partial order theory

As bonus material, because I love partial order (poset) theory: an unconventional way to look for tractability parameters on graphs is to look at tractability parameters on posets, and at the (directed) graphs defined by the transitive reduction of these posets, namely their Hasse diagrams; and at the cover graph, which is the undirected version of the Hasse diagram.

There has been recent work connecting the tractability measure of order dimension on posets to graph measures on the cover graph. Here are results, for any poset P with cover graph G:

Last, a note about something that confused me: imposing that a poset is series-parallel is not the same as imposing that its cover graph is a series-parallel graph. The N-shaped poset is the standard example of a non-series-parallel poset, but its cover graph is just a path graph. Hence series-parallel posets intrinsically rely on directionality.

List of DOIs

To ensure that the references in this article can still be followed even if some preprints are removed from the Web, here is the DOI of the citations, in the order in which they appear in the text.


  1. However, in point (iii) in the list of hlineny2007width, the definition of "series-parallel graph" can be misleading. I would rephrase point (iii) as: $G$ has bw $\leq 2$ iff $G$ does not contain $K_4$ as a minor (bodlaender1998partial, Theorem 10, (iii)), iff $G$ has tw $\leq 2$ (bodlaender1998partial, remark after Theorem 10), iff every biconnected component of $G$ is a series-parallel graph (bodlaender1998partial, Theorem 42). 

  2. The original paper is corneil2005relationship: D. G. Corneil, U. Rotics, On the Relationship Between Clique-Width and Treewidth, 2005, but it is not available without subscriptions. 

  3. Observation 1 of makowsky2003tree does not include the fact that T-closed implies S-closed (hence I-closed), but I believe this to be an oversight. 

  4. If $G'$ is not fixed, it is NP-hard, given two graphs $G$ and $G'$, to decide whether $G'$ is a minor of $G$ (see Wikipedia). 

  5. While this indeed shows the existence of a quadratic test, to be able to construct this algorithm, we need to know the actual finite set of forbidden minors. However, the Robertson-Seymour proof is not constructive, and undecidability results are known about computing these families in general; see kreutzer2009algorithmic, section 5.4. 

A list of open research questions

Researchers publish papers about the problems that they manage to solve, but they say much less about the other problems, those that remain open even after a long fight. These problems are sometimes briefly mentioned in papers, others are folklore, and only the most famous end up on Wikipedia. Still, I didn't like not having a central public list of the open problems I had heard about or struggled with.

So I compiled open problems from my own papers, problems that seemed interesting and were left open by other papers, some of my unanswered questions on CStheory, and other sources.

Here is the list, which I will try to keep up-to-date over time to add new problems and update those on which progress is made. Comments and feedback welcome!

Summarizing my PhD research

Oddly enough, I don't think I ever mentioned my PhD research on this blog. Part of the reason is that I easily get fascinated by technical questions, but I find it less natural to explain what I do in non-technical terms. However, I have to write the introduction to my PhD thesis, so I thought it was a nice excuse to start by summarizing here what I am doing.

For readers who don't know, I'm doing my PhD at Télécom ParisTech, in the DBWeb team. Before this, I stayed for some months in Tel Aviv to work with Tova Milo, and in Oxford to work with Michael Benedikt, which explains some of my current collaborations.

The general context: Uncertain data

The great success story of database research is the relational model. It is a rare example of something that is both extremely relevant in practice1 and nice from a theoretical perspective, with close connections to logics and finite model theory. The relational model gives an abstract model of how data should be represented and queried, which allows people to use databases as a black box, leaving fine but domain-agnostic optimisations to database engines. Relational databases, like compilers, filesystems, or virtual memory, have become an abstraction that everyone relies on.

One thing that the relational model does not handle is uncertainty and incompleteness in the data. In the relational semantics, the database is perfectly accurate, and it contains all facts that may be of interest. In practice, however, this is rarely satisfactory:

  • Some values may be missing: for instance, the user did not fill in some fields when registering, or some fields were added since they registered.
  • The data may be incomplete: for instance, you are constructing your database by harvesting data from the Web, and you haven't harvested everything yet; but you still wonder whether the data that you have allows you to give a definitive answer to your query.
  • Data may be outdated: for instance, the user moved but they did not update their address.
  • Data may simply be wrong.

Indeed, nowadays, people manage data that may have been inferred by machine learning algorithms, generated by data mining techniques, or automatically extracted from natural language text. Unlike a hand-curated database, you can't just assume mistakes will not happen, or that you will fix all of them when you find them. The data is incomplete and contains errors, and you have to deal with that.

These important problems are already being addressed, but in unsatisfactory ways. The general domain-agnostic solutions are fairly naive: e.g., set a confidence threshold, discard everything below the threshold, consider everything above the threshold as certain. In domains where uncertainty is paramount and where simple solutions do not suffice, e.g., in character recognition (OCR) or speech recognition (ASR), people use more elaborate solutions, e.g., weighted automata. The problem is that these solutions are often domain-specific, i.e., they are tailored to the domain at hand. The vision of uncertain data management is that there should be principled and generic tools to manage such noisy and incomplete data in all domains where it occurs; in the same way that relational databases today are both highly optimized and domain-agnostic tools.

For now, the most visible aspect of uncertainty management in concrete databases is NULLs, which are part of the SQL standard. NULL is a special value that indicates that a record does not have a value for a specific attribute: for instance, the attribute did not exist when the record was created. Of course, there are many problems that NULLs do not address:

  • NULL can only represent missing values in a record, not missing records ("There may be more entries that I don't know about"), or uncertainty about whether the record exists altogether ("I think I know this, but it may be wrong.")
  • NULL just means that we know nothing about the value, it doesn't allow us to say that there are multiple possible values, or to model them using a probability distribution, or that two unknown values must be the same.
  • The semantics of NULLs (as defined by the SQL standard) are very counter-intuitive. For instance, trying to select all values which are either equal to 42 or different from 42 will not return the values marked NULL; see the sketch after this list. This is because NULLs are interpreted under a three-valued logic, but this still does not make sense in terms of possible worlds: NULL represents an unknown value, and no matter the value, it must be equal to 42 or different from 42...
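To see this concretely, here is a minimal sketch using Python's built-in sqlite3 module (the table and values are made up for illustration):

import sqlite3

# In-memory toy table with one nullable column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(41,), (42,), (None,)])

# Intuitively, every row should match: any value is either 42 or not 42.
rows = con.execute("SELECT x FROM t WHERE x = 42 OR x <> 42").fetchall()
print(rows)  # [(41,), (42,)] -- the NULL row is silently excluded

# Under three-valued logic, NULL = 42 and NULL <> 42 both evaluate to
# "unknown", and WHERE only keeps rows where the condition is true.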

The semantics of NULLs for SQL databases are now set in stone by the standard, even though people have complained about their imperfections as early as 1977², and even though they still confuse people today. There is still much debate about what the semantics of NULLs could have been instead; see, e.g., the keynote by Libkin, which contains more counter-intuitive examples with NULLs, a review of other theoretical models of NULLs to address some of these problems, as well as proposed fixes.

Researchers are also studying uncertainty management approaches that have nothing to do with NULLs. For instance, incompleteness can be managed by evaluating queries under a different semantics, the open-world assumption, which posits that missing facts are unknown rather than false. To represent uncertainty about the existence of tuples, one can use, e.g., tuple-independent databases: they are standard relational databases, but with records carrying an independent "probability of existence". This formalism cannot represent correlations between tuples (if this tuple is here, then that tuple must be here, or cannot be here), and the result of queries on such databases may not be representable in the same formalism, but more expressive formalisms exist, such as pc-tables. There are also formalisms that can represent uncertainty on other structures, such as XML documents³.

The goal would be to develop a system where relations can be marked as incomplete, and data can simply be marked as uncertain, or inserted directly with probabilities or confidence scores; the system would maintain these values during query evaluation, to be able to tell how certain its answers are. For now, though, real-world uncertain database systems are quite experimental. There are many reasons for that:

  • It's hard to define principled semantics on uncertain data.
  • It's hard to find ways to represent uncertainty about any kind of data.
  • Somewhat surprisingly, probabilistic calculations on data are often computationally expensive, much harder than without probabilities!

This is why we are still trying to understand the basic principles of how to represent uncertainty and query it in practice, which is the general area of my PhD. The main way in which my work differs from existing work is that I focus on restricting the structure of uncertain data, so that it can be queried in a decidable and tractable fashion. More specifically, my PhD research has studied three main directions:

  1. Connecting approaches for incomplete data reasoning that originated in various communities, to query incomplete data whose structure is constrained by expressive but decidable rule languages.
  2. Representing uncertainty on new structures, namely, on order relations, which had not been investigated before.
  3. Achieving computational efficiency for query evaluation, when we add quantitative uncertainty in the form of probability distributions, by restricting the structure of data, e.g., imposing bounded treewidth.

Connecting different approaches to reason on incomplete data

The first direction deals with incompleteness in databases, and reasoning about an incomplete database in an open-world setting: What is possible? What is certain? The challenge is that this problem has been studied by various communities. On the one hand, database researchers have studied how to reason about databases. On the other hand, people who study reasoning and artificial intelligence are interested in drawing inferences from data, and have tried to make their approaches work on large data collections.

An important such community is that of description logics (DLs). Description logics are designed for reasoning, with the following goals:

  • Separate the data and the rules used to reason about the data.
  • Remain computationally tractable when the data is large, as long as the rules remain manageable.
  • Study precisely how the computational cost varies depending on which kind of reasoning operators⁴ you allow in the rules.

The data managed in the DL context differs from relational databases, however. The semantics is not exactly the same, but, most importantly, the data representation is different:

  • DLs represent data as a graph, where relations connect at most two objects. For instance, "John lives in Paris."
  • Relational databases, on the other hand, are a tabular formalism, where each row has a fixed schema that relates multiple objects at once. For instance, "John was born in Paris in 1942".

A different formalism, that works with relational databases, is that of existential rules (or tuple-generating dependencies in database parlance). Such rules allow you to deduce more facts from existing facts: "If someone was born in France and they have a parent born in France, then they are French." However, they cannot express other constraints such as negation: "It is not possible that someone is their own child." Further, existential rules can assert the existence of new objects: "If someone won a literary prize, then they wrote some book."
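For instance, the literary prize rule could be written as the following existential rule (this is the standard notation for such rules, not a formula taken from our papers, and the predicate names are made up):

$$\forall x \, \big( \mathrm{WonLiteraryPrize}(x) \rightarrow \exists y \; \mathrm{Wrote}(x, y) \wedge \mathrm{Book}(y) \big)$$

The existential quantifier on $y$ is what asserts the existence of a new object: the book is not required to be already known.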

For people from the database, DL, and existential rule communities alike, the goal is to design languages to reason about incomplete data with the following properties:

  • principled, though the languages may make different semantic assumptions;
  • expressive, so they can accurately model the rules of the real world;
  • decidable, so that they are not so expressive that it would be fundamentally impossible to reason about them;
  • efficient, so that machines can reason with them with reasonable performance.

Description logics and existential rules

A first part of my PhD research is about bridging the gap between these communities. The goal is to design languages to reason about incomplete data that combine rules from these various communities. My work with Michael Benedikt (Oxford), Combining Existential Rules and Description Logics, connects the two formalisms I presented.

Say that you have a database, which you know is incomplete, and you have rules to reason about the implicit consequences of the data. Our work studies hybrid languages where you can express both existential rules and DL constraints. More precisely, we determine when such languages are decidable, and when they are not, i.e., when it is fundamentally impossible to reason about them. The goal is to have the "best of both worlds": on parts of the data that can be represented as a graph, we would use the expressive DL rules; on parts of the data which are more naturally described as a relational database, we would have less expressive constraints inspired by existential rules.

In our work, we pinpointed which features of these two languages are dangerous and give an undecidable language if you mix them. The big problematic feature of DLs is functionality assertions: being able to say that, e.g., a person has only one place of birth. If we want to express such constraints, we must do away with two problematic features of existential rules. The first is exporting two variables: "If someone was born in some country, then that person has lived in that country." What we deduce should only depend on one element from the hypothesis: "If someone won a literary prize, then they wrote some book." The second is a restriction on the facts which the rules can deduce, which shouldn't form complex cyclic patterns in a certain technical sense.

Apart from these problematic features, however, we show that we can combine existential rules and expressive DL rules. This gives us hope: it could be possible to reconcile the two approaches and design very expressive languages that capture both kinds of constraints.

Open-world query answering and finiteness

I have explained how we connected two non-database approaches to reason about incomplete data. I also worked with Michael on reasoning about relational databases, in our work Finite Open-World Query Answering with Number Restrictions.

The general context is the same: we have incomplete data and we want to reason about its consequences, using logical rules. This time, however, our rule language is just defined using classical integrity constraints from relational database theory. Indeed, people have designed many ways to constrain the structure of databases to detect errors in the data: "Any customer in the Orders table must also appear in the Customers table". We use these as rules to reason about the complete state of the data, even though our database is incomplete and may violate them.

The language of constraints that we use includes functional dependencies, which are like⁵ the functionality assertions in DLs that I already mentioned. We also study unary inclusion dependencies, which are existential rules⁶ but of a very restricted kind.

The main problem comes from the semantics. In database theory, people usually assume databases to be finite. In particular, when using the rules to derive consequences, the set of consequences should be finite as well. This is justified by common sense ("the world is finite") but this problem is usually neglected in works about DLs and existential rules. So we studied finite reasoning in this database context, for the open-world query answering task of reasoning with our rules and incomplete information.

It is not hard to see that assuming finiteness makes a difference in some cases. Consider the following information about an organization:

  • Jane advises John (the database)
  • Each advisee is also the advisor of someone (an inclusion dependency)
  • Each advisee has a single advisor (a functional dependency)

Is it true, then, that someone advises Jane? The inclusion dependency allows us to deduce that John, as he is advised by Jane, advises someone else, say Janice; Janice herself advises someone else, say Jack. In general, this could go on indefinitely, and we cannot deduce that someone advises Jane. However, if we also assume that the organization is finite, then it has to stop somewhere: someone along this chain (Jennifer, say) must advise someone that we already know about. And by the rule that no one is advised by two different people, we deduce that Jennifer must be advising Jane.
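To illustrate this (as a toy check, not the proof technique of our paper), the following Python sketch enumerates all relations over a small fixed domain (the extra people A and B are made up) that contain the database fact and satisfy both dependencies, and verifies that every such finite model has someone advising Jane:

from itertools import chain, combinations

domain = ["Jane", "John", "A", "B"]
pairs = [(x, y) for x in domain for y in domain]

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

certain = True
for rel in map(set, subsets(pairs)):  # rel contains pairs (advisor, advisee)
    if ("Jane", "John") not in rel:  # keep only models with the database fact
        continue
    advisees = {b for (_, b) in rel}
    # inclusion dependency: each advisee is also the advisor of someone
    if any(not any((b, c) in rel for c in domain) for b in advisees):
        continue
    # functional dependency: each advisee has a single advisor
    if any(len({a for (a, b2) in rel if b2 == b}) > 1 for b in advisees):
        continue
    certain = certain and any((x, "Jane") in rel for x in domain)
print(certain)  # True: in every finite model, someone advises Jane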

As we show in our work, the only difference that finiteness makes is that it causes more rules of the same kind to be added⁷. Once this is done, it turns out that we can essentially forget about the finiteness hypothesis, because we can no longer distinguish between finite and infinite consequences. This is surprisingly difficult to show, though; and to establish this correspondence, we need much more complex tools than those used in the infinite case to actually do the reasoning!

Representing uncertainty on ordered data

The second part of my PhD research deals with the representation of uncertainty on new kinds of data, namely, ordered data. Order can appear at different levels: on values ("retrieve all pairs of people where one is born before the other") or on records ("retrieve all page views that occurred yesterday"). In SQL databases, order is supported with the standard comparison operators on numbers ("WHERE length > 4200"), with sorting operators ("ORDER BY date"), and with the LIMIT operator to select the first results.

Order on facts

Why is it important to have uncertain ordered data? Well, an interesting phenomenon is that combining certain ordered data may cause uncertainty to appear. Consider a travel website where you can search for hotels and which ranks them by quality. You are a party of four, and you want either two twin rooms, or one room with four beds; but the website doesn't allow you to search for both possibilities at once⁸. So you do one search, and then the other, and you get two ordered lists of hotels. In real life you would then browse through both lists, but this is inefficient! What you would really want, in database parlance, is the union of these two lists, i.e., the list of hotels that have either two twin rooms or one 4-bed room. But how should this list be ordered? This depends on the website's notion of quality, which you don't know. Hence, the order on the union is uncertain. It is not fully unspecified, though: if two hotels occur together in only one of the lists, their relative order there should probably be reflected in the union. How can you formalize this?

To solve this kind of problem, people have studied rank aggregation: techniques to reconcile ordered lists of results. However, these methods are quantitative: if something appears near the top in most lists, then it should appear near the top in the result. What we set out to do instead is to represent what we know for sure, i.e., the set of all possible consistent ways to reconcile the order, without trying to make a choice between them.

Couldn't we just use existing uncertainty representation frameworks? If you try, you will realize that they do not work well for ordered data. You don't want tuple-level uncertainty, because you know what the results are. You could say that the position of each result is uncertain: "this result is either the first or the third". Yet, when you know that hotel a must come before hotel b, you must introduce some complicated dependency between the possible ranks of a and b, and this gets very tedious.

With M. Lamine Ba (a former fellow PhD student at Télécom, now in Qatar), Daniel Deutch (from Israel) and my advisor Pierre Senellart, we have studied this question in our work, Possible and Certain Answers for Queries over Order-Incomplete Data. We use the mathematical notion of a partial order as a way to represent uncertain ordered data. More precisely, we explain how we can apply to partial orders the usual operators of the relational algebra (select, project, union, product). We extend this to accumulation, to answer queries such as "is it certain that all hotels in this district are better than all hotels in that district?"

This poses several interesting challenges. The most technically interesting one is that of efficiency: how hard is it to compute the result of such queries? Somewhat surprisingly, we show that it is intractable (NP-hard). In fact, it is already hard to determine, given such a partially ordered database, whether some totally ordered database is possible⁹. In other words, it is intractable already to determine whether some list is a consistent way to reorder the data! To mitigate this, we study "simplicity measures" of the original ordered relations (are they totally ordered, totally unordered, or not too ordered, not too unordered?), and we see when queries become tractable in this case, intuitively because the result of integrating the relations must itself be simple.
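As a toy illustration of the representation (the hotel names and lists are invented), here is how the union of two ranked lists induces order constraints, and how one would check that a candidate ranking is a possible world. With distinct hotel names the check is easy; the hardness comes from duplicates, as explained in footnote 9:

list1 = ["Ritz", "Luxor", "Grand"]  # hotels with two twin rooms, best first
list2 = ["Ritz", "Plaza", "Grand"]  # hotels with one 4-bed room, best first

# Order constraints of the union: within each list, earlier beats later.
edges = {(a, b) for lst in (list1, list2)
         for i, a in enumerate(lst) for b in lst[i + 1:]}

def is_possible_world(ranking):
    # The ranking is possible iff it is a linear extension of the constraints.
    pos = {hotel: i for i, hotel in enumerate(ranking)}
    return all(pos[a] < pos[b] for (a, b) in edges)

print(is_possible_world(["Ritz", "Plaza", "Luxor", "Grand"]))  # True
print(is_possible_world(["Plaza", "Ritz", "Luxor", "Grand"]))  # False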

Order on values

I also work on uncertainty on orders with Tova Milo and Yael Amsterdamer (from Tel Aviv). This time, however, the partial order itself is known, and we want to represent uncertainty on numerical values that satisfy the order.

Our work, Top-k Queries on Unknown Values under Order Constraints, is inspired by crowdsourcing scenarios. When working on data that you collect from the crowd, every query that you make introduces latency (you need to wait for the answers to arrive) and monetary costs (you have to pay the workers). Hence, you cannot simply acquire data about everything that you are interested in: instead, you must extrapolate from what you can afford. Following our earlier work on similar questions, we are interested in classifying items into a taxonomy of products. In this example application, the order is induced by the monotonicity property: shirts are more specific than clothing, so if we classify an item as being a shirt, it must also be a piece of clothing.

The underlying technical problem is to interpolate missing values under total order constraints, which I think is an interesting question in its own right. We study a principled way to define this formally, based on geometry in polytopes, and again study the computational complexity of this task: why is it computationally intractable in general, and under what circumstances can it reasonably be done? For instance, we can tractably perform the task when the taxonomy of products is a tree.
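To give a rough idea of the geometric intuition (the numbers are made up, and the actual definitions and algorithms in the paper are more subtle), one can estimate an unknown value as its average over the polytope of value assignments that satisfy the order constraints:

import random

random.seed(0)
# Chain constraint x1 <= x2 <= x3, with x1 = 0.2 and x3 = 0.8 known:
# the feasible region for x2 is the segment [0.2, 0.8].
samples = [x for x in (random.random() for _ in range(100000))
           if 0.2 <= x <= 0.8]  # rejection sampling in the feasible region
print(sum(samples) / len(samples))  # about 0.5, the centroid of the segment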

Reasoning about order

More recently, I also started working with Michael Benedikt on open-world query answering tasks that involve order relations. Again, this is useful in practice, because order often shows up in queries and rules; however it is challenging because it is hard to talk about orders in rules. Essentially, to write a rule that says that something is an order, you have to express transitivity: if a < b and b < c then a < c. Unfortunately, such rules are prohibited by many existing rule languages, because they could make the rules too expressive to be decidable. Hence, we study in which cases the specific axioms of order relations can be supported, without causing undecidability. We are still working on it and the results are not ready yet.

Tractability of probabilistic query evaluation

The last direction of my PhD research, and indeed the only one I really started during the PhD, is about the tractability of query evaluation on probabilistic database formalisms.

In fact, apart from the work with Tova, Yael, and Pierre that deals with numerical values, everything that I described so far was about uncertainty in a logical sense, not in a quantitative sense. Things are either possible or impossible, either certain or not, but there is no numerical value that quantifies how likely one possibility is compared to another.

There are multiple reasons to focus on logical uncertainty. The first reason is that defining a set of possibilities is easier than additionally defining a probability distribution on them. It is hard to define meaningful and principled probabilities over the space of all possible consequences of an incomplete database, or over the linear extensions of a partial order.¹⁰ This is why people usually study probability distributions on comparatively simple models, e.g., tuple-independent databases (TID). With TID, the set of possible worlds has a clear structure (i.e., all subsets of the database) and a simple probability distribution: the probability of a subset is the probability of keeping exactly these facts and dropping the others, assuming independence.
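Concretely, for a TID database $D$ where each fact $t$ carries probability $p_t$, the probability of a possible world $W \subseteq D$ is:

$$\Pr(W) = \prod_{t \in W} p_t \times \prod_{t \in D \setminus W} (1 - p_t)$$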

The second reason is that, even on the TID model, computing probabilities is computationally intractable. Imagine that a dating website has a table indicating who sent a message to whom, in addition to a table indicating in which city users live. We pose a simple query asking for pairs of users that live in the same city, such that one messaged the other. On a standard relational database, this is easy to write in SQL, and can be efficiently evaluated. Now, imagine that we give a probability to each fact. Say, e.g., we are not sure about the possible cities a user may be in, and we only look at messages expressing a positive sentiment: in both cases we use machine learning to obtain probability values for each fact. Now, we want to find the probability that two people live in the same city and one sent the other a positive message.

As it turns out, this is much harder than plain query evaluation!¹¹ The problem is that correlations between facts can make the probability very intricate to compute: intuitively, we cannot hope to do much better than enumerating the exponential number of possible subsets of the data and counting over them. An important dichotomy result by Dalvi and Suciu classifies the queries (such as this example) for which this task is intractable, and the ones for which it is tractable.

With my advisor and Pierre Bourhis from Lille (whom I met at Oxford), we wondered whether everything was lost for hard queries. We had hope, because the result by Dalvi and Suciu allows arbitrary input databases, which is quite pessimistic: real data is usually simpler than the worst possible case. We thought of this in connection with a well-known result by Courcelle: some queries which are generally hard to evaluate become tractable when the data has bounded treewidth, intuitively meaning that it is close to a tree. In the probabilistic XML context, it had already been noted¹² that probabilistic evaluation is tractable on trees, e.g., on probabilistic XML data, even for very expressive queries. We thought that this could generalize to bounded treewidth data.

We show such a result in our work Provenance Circuits for Trees and Treelike Instances, which studies the more general question of computing provenance information. The field of provenance studies how to represent the dependency of query results on the initial data; it relates to probability evaluation because determining the probability of a query amounts to determining the probability of its provenance information, or lineage. Intuitively, the lineage of my example query would say "There is a match if: Jean messaged Jamie and Jean lives in Paris and Jamie lives in Paris, or..." To determine the probability that the query has a match, it suffices to evaluate the probability of this statement (which may still be intractable).
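To make the connection concrete, here is a brute-force sketch (the facts and probabilities are invented) that computes the probability of a lineage in disjunctive normal form by enumerating all possible worlds, i.e., exactly the exponential enumeration that tractability results aim to avoid:

from itertools import product

probs = {  # independent facts with made-up probabilities
    "msg(Jean, Jamie)": 0.9,
    "lives(Jean, Paris)": 0.7,
    "lives(Jamie, Paris)": 0.6,
}
# Lineage: a match if Jean messaged Jamie and both live in Paris, or ...
lineage = [["msg(Jean, Jamie)", "lives(Jean, Paris)", "lives(Jamie, Paris)"]]

facts = list(probs)
total = 0.0
for kept in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, kept))
    if any(all(world[f] for f in clause) for clause in lineage):
        p = 1.0
        for f in facts:  # probability of this world under independence
            p *= probs[f] if world[f] else 1 - probs[f]
        total += p
print(total)  # ~0.378 = 0.9 * 0.7 * 0.6 for this single-clause lineage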

We rephrase probabilistic query evaluation in terms of provenance because there is a very neat presentation of provenance as semiring annotations manipulated through relational algebra operators. Our work shows how an analogous notion can be defined in the context of trees and automata, and can be "transported" through Courcelle's correspondence between bounded treewidth graphs and trees. As it turns out, the lineages that we can construct are circuits¹³ and themselves have bounded treewidth, so we incidentally obtain that probability evaluation is tractable on bounded treewidth instances. We extended this to show the tractability of many existing database formalisms, and the existence of tractable lineage representations.

In a more recent work, Tractable Lineages on Treelike Instances: Limits and Extensions, we show that bounded-treewidth tractability essentially cannot be improved: there are queries which are hard to evaluate under any restriction on input instances which does not imply that the treewidth is bounded (or that the instances are non-constructible in a strong sense). This led us to investigate questions about the extraction of topological minors, using a recent breakthrough in this area. We developed a similar dichotomy result for other tasks, including a meta-dichotomy of the queries that are intractable in a certain sense¹⁴ on all unbounded-treewidth constructible instance families, a result reminiscent of the dichotomy of queries by Dalvi and Suciu, but with a very different landscape.

The hope would be to develop query evaluation methods on probabilistic data which use the structure of both the instance and query to achieve tractability, or to make well-principled approximations. We hope that such techniques could lead to realistic ways to evaluate queries on probabilistic databases.

Other things

This post only presents my past or current work that fits well with the general story that I plan to tell about my PhD research. I'm sorry to my past and current collaborators who are not featured in the list. In particular, some of my other collaborators are:


  1. It is true that NoSQL databases have become popular as well, but it says something about the ubiquity of SQL that we refer to other databases as being "not SQL". Besides, the new trend of NewSQL databases further suggests that the relational model is here to stay. To give a practical example, Android and Firefox contain SQL database implementations, e.g., with SQLite. 

  2. An example from 1977 is John Grant, Null values in a relational data base, which complains about the semantics. (Sadly, this paper seems paywalled, but I just give it for the reference and the date, not for its specific contents.) 

  3. If you want to know more about uncertainty management frameworks on relational databases, a good reference is the Probabilistic Databases book by Suciu, Olteanu, Ré, and Koch (but it doesn't seem available online). For probabilistic models, a good reference is this article by Benny Kimelfeld and my advisor Pierre Senellart. If you're curious about the connections between the two, I wrote an article with Pierre about precisely that. 

  4. An example of an expensive feature is disjunction: if you can infer from the data that something or something else is true, it becomes harder to reason about the possible consequences of your data. 

  5. Functional dependencies are slightly more complicated than DL functionality assertions. This is because databases are tabular, so we can write functional dependencies like "the pair (first name, last name) determines the customer ID in the Customers table". This means that the Customers table cannot have two rows with the same first and last name but different customer IDs. Such rules do not make sense in DLs, because the "rows" have at most two columns. 

  6. In particular, they cannot export two variables, because, as I pointed out, this causes undecidability with functional dependencies. The other restrictions are that they can only deduce a single fact (so they respect the second condition about non-cyclic heads), they only use a single fact as hypothesis, and they do not let any variable appear at multiple positions of an atom. 

  7. In the example, we would deduce that each advisor is an advisee, and also that no advisor advises two different people. 

  8. Yes, I speak from experience, and I have a specific website in mind, but I won't do them the favor of offering them free advertising as a reward for bad usability. 

  9. Formally, we want to determine whether a labeled partial order has a compatible total order whose label sequence falls in a certain language. This interesting direction was not explored by standard poset theory, as far as we know, probably because it is too closely related to computational questions.
    Intuitively, hardness is caused by the ambiguity introduced by duplicate values. In other words, if two different hotels have the same name in the input lists, it is hard to determine how they should be paired with the elements of the candidate combined list that we are checking. 

  10. These are hard questions, but it doesn't mean that they wouldn't be interesting to study — quite the opposite in fact. :) For probabilities on linear extensions, see internship offer 2. For probabilities on infinite instance collections, as it turns out, we seriously tried to define such things via recursive Markov chains before we got sidetracked by simpler questions that were also unsolved. 

  11. Formally, evaluating this query is #P-complete in the input database, whereas it is in AC0 if the input database is not probabilistic. 

  12. Cohen, Kimelfeld, and Sagiv, Running Tree Automata on Probabilistic XML

  13. Circuit representations of provenance are a recent idea by Daniel Deutch, Tova Milo, Sudeepa Roy, and Val Tannen. The fact that we use their work, however, is not connected to my collaborations with Daniel or Tova. It's a small world. 

  14. Formally, they do not have polynomial-width OBDD representations of their provenance, which is a common reason for queries to be tractable; they may still be tractable for other reasons, though we do not show it. 

Open letter to my MP about the vote extending the state of emergency

To my English-speaking readers: France is currently busy destroying public freedoms in the wake of the November attacks. Both the lower and upper legislative chambers approved a three-month state of emergency by a near-unanimous vote. This seems sufficiently dire to me to write to my representative and post the letter on this blog. (The original letter was in French; what follows is an English rendering.)

I don't talk much about politics here, but what is happening right now seems serious enough to deserve a mention. This Thursday, the Assemblée nationale voted for three months of state of emergency, with unanimity minus six votes, followed on Friday by the Sénat with a unanimous vote. My MP, Julie Sommaruga, was not among the six who had the courage to oppose it, so I wrote to her to express my disapproval. (The links are additions relative to the version that was sent.)

Subject: Open letter about your vote extending the state of emergency

Madam Representative,

You are the one who represents me in the Assemblée nationale, and I wished to react to your vote on the bill extending the state of emergency.

On November 16, on your website, your reaction to the attacks of November 13 concluded as follows: "Our firmness, our unity, our composure, and our faithfulness to our republican values will triumph over barbarity and defeat terrorism."

On November 19, you allowed the Assembly to ratify, for three months, a state of emergency which represents a serious attack on fundamental freedoms. You allowed the executive to place people under house arrest on the basis of their behavior, to carry out searches, and to ban demonstrations and gatherings; all of this in the name of the terrorist threat, that perpetual emergency of the last few years and of those to come.

Madam Representative, the values that you claim to stand for are precisely those that lead me to disapprove of your vote. It is not acting with firmness to yield to the weakness of those who want to be reassured, those who are comforted by a spectacle of ineffective security measures, the very same people who applaud the vengeful and blind military actions in which we are already getting bogged down. It is not showing composure to adopt such measures so soon, while we are all still in shock from these atrocities and from the sensationalist din of the media.

We should be wary of this unity that we are asked to believe in, as if political debate had to be suspended under the violence of these crimes, and a common opinion had to impose itself immediately on everyone. Is it not a little suspicious, after all, that we all agreed so quickly? Do you not think, as I do, that we would hear dissenting voices rise, if they were allowed to demonstrate, if they could oppose the prevailing alarmism without having the respect owed to the victims raised as an objection against them?

Like you, I believe in our republican values; like you, I am convinced that they are what must guide us in these difficult times. Yet many of them seem to me gravely threatened by the text that you defended. Does the legislative power, which you exercise by representing me, not flout the principle of the separation of powers, when it grants the demands of an executive that wants to bypass the judiciary? Can we still speak of the presumption of innocence, when the executive arbitrarily deprives its citizens of liberty, in the name of an emergency that it wants to stretch over three months? Can the rule of law come to terms with this permanent state of emergency with which we are told it must be reconciled? In short, is it not letting terrorism triumph to chip away at our cherished values like this, the better to protect them?

Madam Representative, I wanted to write to you to tell you that your vote does not represent my convictions. Like others among your constituents, perhaps, I bitterly regret seeing you approve the grave mistakes that we are making today, and I am afraid of the France that is being prepared for us for tomorrow.

Yours sincerely,

Antoine Amarilli (Montrouge), PhD student

I also strongly recommend reading J'ai peur ("I am afraid"), an article written by Pablo, which closely matches my view of what is currently happening.